Biobutanol from Renewable Agricultural and Lignocellulose Resources and Its Perspectives as an Alternative to Liquid Fuels

Introduction

Biobutanol (n-C₄H₉OH, available as a fermentation product of various carbohydrate derivatives obtained from different agricultural resources such as crops and wastes) is one of the most promising biofuels of the near future. It can be produced by the so-called ABE (acetone-butanol-ethanol) type anaerobic fermentation discovered by Pasteur [1, 2] and industrialized by Weizmann [3]. The main problems associated with the industrial production of biobutanol are the high energy demand for processing the dilute ferment liquors and the high volume of wastewater. A bioreactor with a volume of 100 m³ at a 90% filling ratio produces 1053 kg of butanol, 526 kg of acetone and 175 kg of ethanol, together with 2900 kg of carbon dioxide, 117 kg of hydrogen and 84,150 kg of wastewater (i.e., only about 12 g of butanol and roughly 20 g of total solvents per liter of broth). Efforts to increase productivity and decrease production costs have resulted in many new methods. This chapter summarizes some selected results on methods of biobutanol production.

History of industrial biobutanol production

During investigations aimed at discovering cheaper sources of acetone and butanol for the chemical industry, Weizmann [3] isolated an organism which could ferment a fairly concentrated corn mash with good yields of acetone and butanol. In 1915 the British Admiralty took over the research and carried out large-scale tests in an improvised apparatus, but without the final consumption of butyric acid, because a high glucose flux is required to generate enough ATP to supply the energy demand of the butyric acid-butanol transformation [9].

Hartmanis et al. studied the pathway of acid uptake during the solvent formation phase of ABE fermentation by C. acetobutylicum using 13C NMR [10]. Actively metabolizing cells showed that butyrate can be taken up from the medium and quantitatively converted to butanol without accumulation of intermediates. The activities of phosphotransacetylase, acetate kinase, and phosphate butyryltransferase decreased rapidly to very low levels when the organism began to form solvents, indicating that acid uptake does not occur via a reversal of these acid-forming enzymes. No short-chain acyl-CoA synthetase activity could be detected. Apparently, an acetoacetyl-CoA:acetate (butyrate) CoA transferase is solely responsible for the uptake and activation of acetate and butyrate in C. acetobutylicum. The transferase exhibits broad carboxylic acid specificity.
The key enzyme in the uptake is acetoacetate decarboxylase, which is induced late in the fermentation and pulls the transfer reaction towards the formation of acetoacetate. The major implication is that it is not feasible to obtain a batch-wise BuOH fermentation without acetone formation while retaining a good yield of BuOH [10].

Ferredoxin enzymes also play an important role in the ABE process, so the presence of iron in the appropriate form and concentration is an essential factor in solvent production. When Clostridium acetobutylicum was grown in batch culture under Fe limitation (0.2 mg/L) at pH 4.8, glucose was fermented to BuOH as the major fermentation end product, and small quantities of HOAc were produced. The final conversion yield of glucose into BuOH could be increased from 20% to 30% by Fe limitation, and the BuOH:acetone ratio changed from 3.7 (control) to 11.8. Hydrogenase specific activity decreased by 40% and acetoacetate decarboxylase specific activity by 25% under Fe limitation. Thus, Fe limitation affects carbon and electron flow in addition to hydrogenase [11].

Terracciano and Kashket investigated the intracellular physiological conditions associated with the induction of butanol-producing enzymes in Clostridium acetobutylicum. During the acidogenic phase of growth, the internal pH decreased in parallel with the external pH, but the internal pH did not fall below 5.5 throughout batch growth. Butanol was found to dissipate the proton motive force of fermenting C. acetobutylicum cells by decreasing the transmembrane pH gradient, whereas the membrane potential was affected only slightly. In growing cells, the switch from acid to solvent production occurred when the internal undissociated butyric acid concentration reached 13 mM and the total intracellular undissociated acid concentration (acetic plus butyric acids) was at least 40 to 45 mM [12].

C. acetobutylicum ATCC 824 cells harvested from a phosphate-limited chemostat culture maintained at pH 4.5 had intracellular concentrations of acetate, butyrate and butanol which were 13-, 7- and 1.3-fold higher, respectively, than the corresponding extracellular concentrations. Cells from a culture grown at pH 6.5 had intracellular concentrations of acetate and butyrate only 2.2-fold higher than the respective external concentrations. The highest intracellular concentrations of these acids were attained at pH 5.5. When cells were suspended in anaerobic citrate-phosphate buffer at pH 4.5, exogenous acetate and butyrate caused a concentration-dependent decrease in the intracellular pH, while butanol had relatively little effect until the external concentration reached 150 mM. Acetone had no effect at concentrations ≤200 mM. These data demonstrate that acetate and butyrate are concentrated within the cell under acidic conditions and thus tend to lower the intracellular pH. The high intracellular butyrate concentration presumably leads to induction of solvent production, thereby circumventing a decrease in intracellular pH great enough to be deleterious to the cell [13].

Harris et al. suggested [14] that butyryl phosphate (BuP) is a regulator of solventogenesis in Clostridium acetobutylicum. Determination of BuP and acetyl phosphate (AcP) levels in various C. acetobutylicum strains (wild type (WT), M5, a butyrate kinase (buk) mutant and a phosphotransacetylase (pta) mutant) showed that the buk mutant had higher levels of BuP and AcP than the wild strain; the BuP levels were high during the early exponential phase, and there was a peak corresponding to solvent production [15]. Consistently with this, solvent formation was initiated significantly earlier and was much stronger in the buk mutant than in all other strains. For all strains, initiation of butanol formation corresponded to a BuP peak concentration of more than 60 to 70 pmol/g (dry wt.), and higher, sustained levels corresponded to higher butanol formation fluxes. The BuP levels never exceeded 40 to 50 pmol/g (dry wt.) in strain M5, which produces no solvents. The BuP profiles were bimodal, with a second peak midway through solventogenesis corresponding to carboxylic acid reutilization. AcP showed a delayed single peak during late solventogenesis corresponding to acetate reutilization. As expected, AcP levels in the pta mutant were very low, yet this strain exhibited strong butanol production. These data suggest that BuP is a regulatory molecule that may act as a phosphodonor of transcriptional factors. DNA array-based transcriptional analysis of the buk and M5 mutants demonstrated that high BuP levels corresponded to downregulation of flagellar genes and upregulation of solvent formation and stress genes [15].

Basic reasons for autoinhibition: toxicity of butanol and the intermediate acids

The toxicity of accumulated butanol and of the intermediate acids is a very important feature of ABE fermentation. Costa studied [16] the growth rates of Clostridium acetobutylicum in the presence of BuOH, EtOH, Me₂CO, acetate and butyrate. Acetate and butyrate were the most toxic compounds: concentrations of 5 and 8.5 g/L, respectively, stopped cell growth. An EtOH concentration of 51 g/L or 11 g BuOH/L reduced cell growth by 50%. Acetone did not inhibit cell growth at 29 g/L; thus ethanol and acetone are nontoxic at the concentrations of a normal fermentation. Some mutant strains, however, are more tolerant towards butanol. For example, Lin and Blaschek [17] obtained a derivative of C. acetobutylicum ATCC 824 which, at concentrations of BuOH that prevented growth of the wild-type strain, still grew at 66% of the rate of the uninhibited control. This strain consistently produced higher concentrations of BuOH (by 5-14%) and lower concentrations of acetone (by 12.5-40%) than the wild-type strain in 4-20% extruded corn broth. Characterization of the wild-type and the mutant strain demonstrated the superiority of the latter in terms of growth rate, time of onset of BuOH production, carbohydrate utilization, pH resistance, and final BuOH concentration in the fermentation broth [17].

Moreira et al. [18] initiated a fundamental study attempting to elucidate the mechanism of BuOH toxicity in the acetone-BuOH fermentation by Clostridium acetobutylicum. Butanol, as a hydrophobic compound, inserts into the membrane and increases the passive proton flux, forming a proton-permeable "hole" in the membrane that dissipates the transmembrane pH gradient. Strains which are able to decrease their membrane fluidity are more resistant towards butanol. Cells possessing a deacidifying mechanism that keeps the intracellular pH near 6 while the pH of the ferment liquor lies between 4 and 5 can reduce acids to alcohols, which increases their butanol-producing ability.
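The undissociated acid concentrations that recur throughout these studies follow directly from weak-acid equilibria: only the protonated form crosses the membrane freely, and its fraction at a given pH is set by the Henderson-Hasselbalch relation. The following minimal sketch (Python) illustrates the arithmetic; the pKa values for acetic and butyric acid (about 4.76 and 4.82) are standard literature figures, not values taken from this chapter.

```python
def undissociated_fraction(ph: float, pka: float) -> float:
    """Fraction of a weak acid in the protonated, membrane-permeant form."""
    return 1.0 / (1.0 + 10.0 ** (ph - pka))

PKA = {"acetic acid": 4.76, "butyric acid": 4.82}  # standard literature values

for ph in (4.5, 5.5, 6.5):
    parts = ", ".join(f"{acid} {undissociated_fraction(ph, pka):.0%}"
                      for acid, pka in PKA.items())
    print(f"pH {ph}: {parts} undissociated")
```

At pH 4.5 roughly two thirds of each acid is in the undissociated, membrane-permeant form, while at pH 6.5 only a few percent is; this is consistent with the strong intracellular accumulation of acetate and butyrate observed at pH 4.5 but not at pH 6.5 [13].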
Lepage et al. [19] studied the changes in membrane lipid composition of C. acetobutylicum during ABE fermentation. Large changes were found in the phospholipid and fatty acid compositions, the latter characterized mainly by a decrease in the unsaturated/saturated fatty acid (U/S) ratio. The effects of adding alcohols (EtOH, BuOH, hexanol, and octanol) and acetone were also studied. In all cases, large changes were observed in the U/S ratio, but with differences related to the chain length of the alcohol. The effect of solvents appears to account for a large part of the changes in lipid composition observed during fermentation. The pH was also important: a decrease in pH resulted in a decrease in the U/S ratio and an increase in cyclopropane fatty acids. The main effect of increasing temperature was to increase fatty acid chain lengths [19].

Fermenting microorganisms

Depending on the composition and properties of the raw materials, the selection and conditioning of the appropriate bacterial strain are essential. To improve the economic efficiency of ABE fermentation, the butanol ratio has to be increased by eliminating the production of byproducts such as acetone, and specific mutants have to be developed which show high butanol tolerance, high productivity or other advantageous properties. Harada [28,29] isolated a new strain of Clostridium (Cl. madisonii) which produced BuOH amounting to 28.7% of the initial total sugar; the fermented broth contained 1.38% BuOH. The age of the culture also plays an important role in productivity: with older inoculated bacteria, the production of acetone increased and the ratio of BuOH to Me₂CO decreased from 2.24 to 1.88 [30]. Harada [31] concluded that a seed culture at the last stage of the acid-decreasing phase gave the best yield as inoculum in the main fermentation.

Hermann isolated butanol-resistant mutants from soil which produced significantly higher solvent concentrations (by about 30%) than the wild-type strain [32]. The sporulation-deficient (spo) early-sporulation Clostridium acetobutylicum P262 mutants produced higher solvent yields than the late-sporulation spoB mutant. In conventional batch fermentation, the wild-type strain produced 15.44 g L⁻¹ of solvents after 50 h at a productivity of 7.41 g L⁻¹ d⁻¹. The spoA2 mutant produced 15.42 g L⁻¹ of solvents at a productivity of 72.4 g L⁻¹ d⁻¹ with a retention time of 2.4 h in a continuous immobilized-cell system employing a fluidized bed reactor [33].

Using two different types of Clostridia to improve the productivity of each (acidogenic and solventogenic) phase is also known. Bergstroem and Foutch [34] improved BuOH production from sugars by combining two cultures of Clostridium: one that produces butyric acid, and another that converts butyrate to BuOH. Thus, C. butylicum NRRL B592 and C. pasteurianum NRRL B598 were cultured together in thioglycolate medium containing 2.5% added glucose and a CaCO₃ chip to maintain the pH, at 37 °C under anaerobic conditions. The yield of BuOH was 20% higher than when C. butylicum was cultured alone.

Inducing genetic changes by destructive agents such as irradiation or chemicals, followed by selection, is a well-known method for producing highly effective Cl. acetobutylicum strains.
Yasuda [35] heated ABE-producing microorganisms at 100 °C to destroy all vegetative forms except spores, which were kept at -10 °C and then treated with an electric discharge in vacuum (50,000 V and 0.002 A DC) for stimulation. A high-yield butanol-producing Clostridium strain was prepared by irradiating the wild strain with ⁶⁰Co γ-rays at a dose of 100-1,000 Gy and a dose rate of 3-5 Gy/min [36]. Chemical mutation with N-methyl-N'-nitrosoguanidine is one of the most frequently used methods to produce excellent ABE-fermenting strains. Hermann et al. [37] prepared a strain of C. acetobutylicum that hyperproduces acetone and BuOH by mutation of C. acetobutylicum IFP903. A new mutant (CA101) of C. pasteurianum prepared in this way could produce 2.1 g BuOH/L in 2 days; with the parental strain, BuOH production was only 0.6 g/L [38]. The C. acetobutylicum strain 77 was isolated from the parent strain ATCC 824 with the above-mentioned method in the presence of butanol. The mutant grew more rapidly (μ = 0.69 h⁻¹) than the parent strain (μ = 0.27 h⁻¹) and, at the stationary phase, the cell dry weight of the mutant strain was about 50% higher than that of the parent strain. Strain 77 metabolized glucose faster than the wild strain, and solvent production started earlier, with higher specific production rates, than in the parent strain. From 65 g of glucose, 20 g L⁻¹ of solvents (butanol, 14.5 g; acetone, 3.5 g; ethanol, 2 g) were formed by the wild strain in 53 h, whereas the mutant used 75 g of glucose and excreted nearly 24 g L⁻¹ of solvents (butanol, 15.6 g; acetone, 4.5 g; ethanol, 3.7 g) in 44 h [39] (see the short calculation below). A frequently used chemical to initiate mutation in C. acetobutylicum strains is methanesulfonic acid ethyl ester (EMS). EMS is effective in inducing mutants resistant to ampicillin, erythromycin, and butanol (15 g/L). Optimal mutagenesis occurs at 85-90% kill, corresponding to a 15-minute exposure to 1.0% (v/v) EMS at 35 °C. Under optimal conditions, the frequency of resistant mutant CFU per total CFU plated increases 100-200 fold [40].

Genetic engineering has opened unlimited perspectives in the preparation of ABE-fermenting microorganisms. Genetically modified C. acetobutylicum, E. coli, S. cerevisiae and other microorganisms will play an important role in the future production of ABE solvents under more convenient conditions than in classical ABE fermentation. When the acetoacetate decarboxylase gene (adc) in the hyperbutanol-producing industrial strain Clostridium acetobutylicum EA 2018 was disrupted, the butanol ratio increased from 70 to 80.05%, while acetone production decreased to approximately 0.21 g/L in the adc-disrupted mutant (2018adc). Regulation of the electron flow by addition of methylviologen shifted the carbon flux from acetic acid production to butanol production in strain 2018adc, which resulted in an increased butanol ratio of 82% and a corresponding improvement in the overall yield of butanol from 57 to 70.8% [41]. By screening transposon random-insertion mutants, Larossa and Smulski found genes belonging to a three-component proton motive force-dependent multidrug efflux complex to be involved in the E. coli cell response to butanol; reduced production of the AcrA and/or AcrB proteins of the complex confers increased butanol tolerance [42].
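The strain 77 comparison above reduces to two standard quantities: the doubling time t_d = ln 2/μ and the solvent yield in grams of solvent per gram of glucose consumed. A minimal sketch of the arithmetic, using only the figures quoted above (Python):

```python
import math

def doubling_time_h(mu_per_h: float) -> float:
    """Doubling time from the specific growth rate: t_d = ln(2) / mu."""
    return math.log(2) / mu_per_h

# Figures quoted above for the parent strain ATCC 824 and mutant strain 77 [39]:
#                     mu (1/h), glucose (g/L), solvents (g/L), time (h)
strains = {
    "parent ATCC 824": (0.27, 65.0, 20.0, 53.0),
    "strain 77":       (0.69, 75.0, 24.0, 44.0),
}

for name, (mu, glucose, solvents, hours) in strains.items():
    print(f"{name}: doubling time {doubling_time_h(mu):.1f} h, "
          f"yield {solvents / glucose:.2f} g/g, "
          f"productivity {solvents / hours:.2f} g/(L*h)")
```

The numbers make the nature of the improvement concrete: the yield barely changes (0.31 vs. 0.32 g/g), so the mutant's advantage is kinetic, i.e. faster growth (doubling time 1.0 h vs. 2.6 h) and roughly 45% higher volumetric productivity.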
Green and Bennett subcloned the genes coding for enzymes involved in butanol or butyrate formation into a novel Escherichia coli-Clostridium acetobutylicum shuttle vector constructed from pIMP1 and a chloramphenicol acetyltransferase gene [43]. The resulting replicative plasmids, referred to as pTHAAD (aldehyde/alcohol dehydrogenase) and pTHBUT (butyrate operon), were used to complement C. acetobutylicum mutant strains in which the genes encoding aldehyde/alcohol dehydrogenase (aad) or butyrate kinase (buk) had been inactivated by recombination with erythromycin-resistance (Emr) constructs. Complementation of strain PJC4BK (buk mutant) with pTHBUT restored butyrate kinase activity and butyrate production during exponential growth. Complementation of strain PJC4AAD (aad mutant) with pTHAAD restored NAD(H)-dependent butanol dehydrogenase activity, NAD(H)-dependent butyraldehyde dehydrogenase activity and butanol production during solventogenic growth [43].

Shen and Liao constructed an Escherichia coli strain that produces 1-butanol and 1-propanol from glucose [44]. First, the strain converts glucose to 2-ketobutyrate, a common keto-acid intermediate of isoleucine biosynthesis. Then, 2-ketobutyrate is converted to 1-butanol via intermediates of the synthesis of the unnatural amino acid norvaline. The synthesis of 1-butanol was improved through deregulation of amino acid biosynthesis and elimination of competing pathways. The final strain demonstrated a production titre of 2 g/L at a nearly 1:1 ratio of butanol and propanol [44]. Green et al. [45] made recombinant thermophilic bacteria of the family Bacillaceae, preferably of the genus Geobacillus or Ureibacillus, engineered to produce butanol and/or butyrate [45]. Young et al. described a method of modifying prokaryotic and eukaryotic hosts for the fermentative production of aliphatic alcohols: elements of the gene for a CAAX proteinase (prenylated protein-processing C-terminal proteinase) are used to increase alcohol tolerance, and this can be combined with other changes that increase alcohol tolerance [46]. Fermentation with modified eukaryotic cells in a suitable fermentation broth, wherein butanol and ethanol are produced at a ratio between 1:2 and 1:100, is described by Dijk et al. [47]. Since fermentations with yeasts do not require a sterile environment, genetically modified yeasts are very promising microorganisms for ABE fermentation. Yeast cells capable of producing butanol, comprising a nucleotide sequence encoding a butyryl-CoA dehydrogenase and at least one nucleotide sequence encoding an electron transfer flavoprotein, were described by Mueller et al. [48].

Effect of medium composition, temperature and nitrogen sources

The temperature for optimal fermentation by C. acetobutylicum strains depends strongly not only on the strain but also on the composition of the medium and raw materials, and is influenced by a series of factors such as the presence or absence of additives, sugar concentration, pH, and others. McNeil and Kristiansen studied the effect of temperature on solvent production by C. acetobutylicum in the range 25 to 40 °C [49]. The solvent yield decreased with increasing temperature; considering total solvent yield and productivity only, the optimum fermentation temperature was found to be 35 °C [49]. Comparison of solvent production from whey by strains of C. acetobutylicum and C. butylicum showed that higher solvent yields were obtained at 37 °C and 30 °C, respectively [50]. Oda and Yamaguchi [51] concluded that temperature control plays an important role in the solvent yield and that the optimal temperatures are not the same during different stages of the process. Harada [52] concluded that the yield of BuOH increased from 18.4-18.7% to 19.1-21.2% on lowering the temperature from 30 °C to 28 °C when the growth of the bacteria reached its maximal rate.

Fouad et al. [53] studied fourteen different media for the fermentative production of acetone and butanol. The highest total yields were achieved in media containing potato starch and soluble starch as carbon sources. The composition and pH of the medium have an important influence on ABE fermentation, and contaminants in the media can have a decisive effect. For example, hydrolysates obtained by enzymatic saccharification of wheat straw or corn stover pretreated by steam explosion under classical or acidic conditions were found to be non-fermentable to acetone-butanol. A simple treatment, heating the hydrolysates in the presence of calcium or magnesium compounds such as Ca(OH)₂ or MgCO₃ at neutral pH, restored the normal fermentability of these hydrolysates [54].

The sugar concentration of the medium also influences ABE fermentation. Fond et al. [55] grew C. acetobutylicum in fed-batch cultures at different glucose feeding rates. The sugar conversion to BuOH and Me₂CO increased with increasing glucose flow whereas, on the contrary, conversion to butyric acid was highest at a slow glucose feeding rate. The AcOH concentration was constant at the different glucose flows, and solventogenesis was not inhibited at high sugar flow [55].

The amount and chemical form of inorganic and organic nitrogen sources fundamentally affect the ABE process, and their influence also depends strongly on the presence or absence of other important additives. Among the inorganic nitrogen compounds studied, ammonium nitrate and urea stopped the fermentation midway; NH₄HSO₄, NH₄Cl, and (NH₄)₂HPO₄ resulted in acetone-rich fermentation, while (NH₄)₂CO₃ and NH₄OH gave BuOH-rich fermentation [56]. Baghlaf et al. [57] studied the effect of different concentrations of corn steep liquor, fodder yeast, soybean meal, corn bran, rice bran, and KH₂PO₄ on ABE fermentation; the organism preferred to utilize the natural organic sources. The KH₂PO₄ concentration most favourable for ABE production was found to be 2 g/L. Oda [58,59] observed little effect of adding (NH₄)₂SO₄ to EtOH-extracted soybean meal on the yield of solvents; cane molasses and dried yeasts, however, were good supplements to the same soybean meal. Addition of asparagine retarded the fermentation. When used as the sole N source, soybean press cake and egg white were good; the others tested were, in order of decreasing suitability, EtOH-extracted soybean meal, casein, fish protein, zein, gluten, yeast protein, and gelatine. With peanut cake as the N source, Ca salts were not desirable. The stimulants tested were mostly effective; in order of decreasing effect they were liver (best), rice bran-clay, α-alanine, α-methylphenethylamine·H₂SO₄, β-alanine, p-aminobenzoic acid, naphthaleneacetic acid, and cane molasses-clay (the last two were slightly worse than the control without stimulant). Doi et al. [60] found that the growth-promoting amino acids in casein acid-hydrolyzate can be divided into three groups: isoleucine, valine, and glutamic acid were required by the bacteria; asparagine, serine, threonine, alanine, and glycine accelerated fermentation; leucine, phenylalanine, methionine, tryptophan, proline, lysine, histidine, and arginine were not required for growth. Cystine and tyrosine inhibited fermentation.

Acetate and butyrate additives

Since both acetic and butyric acid are intermediate products of ABE fermentation, and butyrate is almost completely consumed during the solventogenic phase, the addition of these intermediates to increase the yield of butanol has been studied in detail. Beneficial effects were observed with the addition of AcOH (completely converted to Me₂CO), butyric acid (50-80% recovery as BuOH), and sodium acetate (60% recovery as Me₂CO), while poor results were obtained with the addition of formic acid and calcium acetate [61,62]. Nakhmanovich and Shcheblikina [63] used a 4% glucose medium with corn gluten or flour mash: the addition of 0.1 N Ca(OAc)₂ raised the acetone yield by 20-24%, and 0.1 N calcium butyrate raised the BuOH yield by 45-60% in C. acetobutylicum fermentations. Though Ca(OAc)₂ accelerated the fermentation, it was itself only 40-50% fermented. Utilization of calcium butyrate goes further (above 70%), mostly by conversion to Ca(OAc)₂. Tang concluded [64] that the addition of 1.5 g/L acetic acid increased cell growth and enhanced acetone production in ABE fermentation: the final acetone concentration was 21.05% higher, while butanol production was not improved. Similarly, the addition of 1.0 g/L butyric acid increased cell growth and enhanced butanol production: the final butanol concentration was 24.32% higher, while acetone production was not improved. Addition of acetic and butyric acid together (10 mM each) to C. acetobutylicum grown on glucose (2%) in a pH-controlled minimal medium caused rapid induction of acetone and butanol synthesis (within 2 h) [65]. The specific growth rate of the culture and the rate of H₂ production decreased gradually from the onset of the experiment, whereas the rate of CO₂ production remained unchanged. No correlation was found between solvent production and sporulation of the culture [65]. A 32% conversion of glucose into solvents took place when the same fermentation was carried out on a synthetic medium (BuOH:acetone:EtOH = 0.6:1.9:6). This changed to 34 and 35% (BuOH:acetone:EtOH = 5:3:6 or 0.8:2.4:6) on adding HOAc or butyric acid, respectively [66].

Fond et al. [67] studied the effect of HOAc and butyric acid addition on the fermentation of various kinds of carbohydrates using fed-batch fermentations. Different specific rates of carbohydrate utilization were obtained by varying the sugar feeding rates. At low catabolic rates of sugar, the addition of acetic acid or butyric acid, alone or together, increased the rate of the metabolic transition by a factor of 10 to 20, the amount of solvents by a factor of 6 and the percentage of glucose fermented to solvents by a factor of 3. The same results were obtained with both glucose and xylose fermentations. Depending on the growth rate, butanol production began at acid levels of 3-4 g L⁻¹ for fast metabolism and 8-10 g L⁻¹ for slow metabolism. With slow metabolism, reassimilation of the acids required concentrations as high as 6.5 g L⁻¹ of acetic acid and 7.5 g L⁻¹ of butyric acid.
At a high rate of metabolism, acetic and butyric acid were reassimilated at concentrations of 4.5 g L⁻¹ [67]. Yu and Saddler [68] observed significant increases in acetone and BuOH production on growing C. acetobutylicum on xylose in the presence of added HOAc or butyric acid. The increased yields could not be accounted for by conversion of the small amounts of acetic or butyric acid added. The effect was greater when the acid was added before, rather than during, the fermentation; a pH change alone is therefore probably not responsible, and enzyme induction may be involved.

The addition of acetate or butyrate also enables fermentation under neutral pH conditions. Holt et al. used C. acetobutylicum NCIB 8052 (ATCC 824) and monitored a batch culture at 35 °C in a glucose (2%) minimal medium. At pH 5, good solvent production was obtained in the unsupplemented medium, although the addition of acetate plus butyrate (10 mM each) caused solvent production to be initiated at a lower biomass concentration. At pH 7, although a purely acidogenic fermentation was maintained in the unsupplemented medium, low concentrations of acetone and n-butanol were produced when the glucose content of the medium was increased (to 4% [wt/vol]). Substantial solvent concentrations were obtained at pH 7 in a 2% glucose medium supplemented with high concentrations of acetate plus butyrate (100 mM each, supplied as their potassium salts). Thus C. acetobutylicum NCIB 8052, like C. beijerinckii VPI 13436, is able to produce solvents at neutral pH, although good yields are obtained only when sufficiently high concentrations of acetate and butyrate are supplied. Supplementation of the glucose minimal medium with propionate (20 mM) at pH 5 led to the production of some n-propanol as well as acetone and n-butanol; the final culture medium was virtually acid-free. At pH 7, supplementation with propionate (150 mM) again led to the formation of n-propanol and also provoked production of some acetone and n-butanol, although in considerably smaller amounts than when the same basal medium was fortified with acetate and butyrate at pH 7 [69].

Effect of carbon monoxide, carbon dioxide and hydrogen

From a technological viewpoint, the fermentation can be divided into two well-separable phases: an acid formation phase and, after an autoinhibitory acid level is reached, a solvent formation phase. These steps can also be performed in separate technological environments [70]. Hydrogen formation takes place in the acidogenic phase, so the composition of the gases (CO₂, H₂) changes during the fermentation process. The larger part of the carbon dioxide is formed in the acetone-forming pathway. The presence of hydrogen and carbon dioxide strongly influences each metabolic step. The effect of H₂ and CO₂ as product gases on solvent production was studied in continuous culture of alginate-immobilized C. acetobutylicum, with fermentations carried out at various dilution rates. With 10% H₂ and 10% CO₂ in the sparging gas, a dilution rate of 0.07 h⁻¹ was found to maximize the volumetric productivity (0.58 g L⁻¹ h⁻¹), while the maximal specific productivity of 0.27 g g⁻¹ h⁻¹ occurred at 0.12 h⁻¹. Continuous cultures with vigorous N₂ sparging produced only acids. It was concluded that in continuous fermentation H₂ is essential for good solvent production, although in batch fermentations good solvent production is possible in an H₂-free environment.
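The two productivity measures quoted above are linked through the biomass concentration: at steady state, the volumetric productivity is the dilution rate times the product concentration (Q = D·P), and the specific productivity divides this by the cell density (q = Q/X). A minimal sketch of the bookkeeping follows (Python); the steady-state relations are standard chemostat algebra, and the biomass density used below is an illustrative assumption, not a figure from the cited study.

```python
def volumetric_productivity(dilution_rate: float, product_conc: float) -> float:
    """Steady-state volumetric productivity Q = D * P, in g/(L*h)."""
    return dilution_rate * product_conc

def specific_productivity(q_volumetric: float, biomass_conc: float) -> float:
    """Specific productivity q = Q / X, in g solvent per g cells per hour."""
    return q_volumetric / biomass_conc

# Volumetric optimum quoted above: D = 0.07 1/h, Q = 0.58 g/(L*h).
# The implied steady-state solvent concentration is P = Q / D.
D, Q = 0.07, 0.58
print(f"P = {Q / D:.1f} g/L")   # about 8.3 g/L of solvents leaving the reactor
X = 5.0                         # hypothetical biomass density (g/L), for illustration only
print(f"q = {specific_productivity(Q, X):.2f} g/(g*h)")
```

Because the cell density X itself varies with the dilution rate, the volumetric and specific optima need not coincide, which is exactly what the 0.07 h⁻¹ versus 0.12 h⁻¹ figures above show.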
When the fermentation was carried out at atmospheric pressure under H₂-enriched conditions, the presence of CO₂ in the sparging gas did not slow down glucose metabolism; rather, it changed the direction of the phosphoroclastic reaction and thereby increased the butanol/acetone ratio [71]. Klei et al. [72] studied the effect of pure CO₂ on the second phase of ABE fermentation. CO₂ pressures up to 100 psig were applied in a batch fermentor with glucose as substrate. Maximal solvent production occurred near 25 psig CO₂, at the expense of cell growth. In addition, the BuOH:Me₂CO ratio changed sharply at 40 psig from 5:1 to 20:1, and EtOH production was eliminated above 50 psig. As the pressure increased, both the conversion rate of organic acids to solvents and the utilization rate of the substrate glucose decreased.

Pressurization of the fermentation vessel with H₂ appeared to decrease, rather than increase, the formation of neutral solvents in batch fermentations [73]. However, increasing the H₂ partial pressure increased the BuOH and EtOH yields from glucose by an average of 18% and 13%, respectively, while the yields of acetone and of endogenous H₂ decreased by an average of 40% and 30%, respectively, with almost no effect on the growth of the culture. The BuOH-to-acetone ratio and the fraction of BuOH in the total solvents also increased with H₂ partial pressure. There were no major differences in the observed pattern of change with pressurization at either t = 0 or t = 18 h [74].

Redox-active additives such as carbon monoxide have an important influence on ABE fermentation processes. Addition of CO inhibited the hydrogenase activity of cell extracts and of viable metabolizing cells. Increasing the partial pressure of CO (2 to 10%) in unshaken anaerobic culture-tube headspaces significantly inhibited both growth and H₂ production (90% inhibition at 10% CO). Growth was not sensitive to low partial pressures of CO (~15%) in pH-controlled fermentors (pH 4.5). CO addition dramatically altered the glucose fermentation balance of C. acetobutylicum by diverting carbon and electrons away from H₂, CO₂, acetate and butyrate production and towards the production of EtOH and BuOH. The BuOH concentration increased from 65 to 106 mM, and the BuOH selectivity (the ratio of BuOH produced to total acids and solvents produced) increased by 31% when glucose fermentation was maintained at pH 4.5 in the presence of 85% N₂-15% CO rather than N₂ alone [75]. Carbon monoxide sparged into batch fermentations of C. acetobutylicum inhibited the production of H₂ and enhanced the production of solvents by making larger amounts of NAD(P)H₂ available to the cells. CO also inhibited biomass growth and acid formation; its effect was most pronounced under fermentation conditions with an excess supply of carbon and nitrogen sources [76]. When continuous, steady-state, glucose-limited cultures of Clostridium acetobutylicum were sparged with CO, completely or almost completely acidogenic fermentations became solventogenic. Alcohol (butanol and ethanol) and lactate production at very high specific production rates was initiated and sustained without acetone formation and with little or no acetate and butyrate formation. In one fermentation, strong butyrate uptake without acetone formation was observed. Growth could be sustained even with 100% inhibition of H₂ formation. Although CO gassing inhibited growth by up to 50% and H₂ formation by up to 100%, it enhanced the rate of glucose uptake by up to 300%.
These results support the hypothesis that solvent formation is triggered by an altered electron flow [77]. The metabolic modulation by CO was particularly effective when organic acids such as acetic and butyric acid were added to the fermentation as electron sinks. The uptake of organic acids was enhanced, with increases in butyric acid uptake of 50-200% over the control. H₂ production could be reduced by 50%, and the solvent ratio could be controlled by CO modulation and organic acid addition. Acetone production could be eliminated if desired. The BuOH yield could be increased by 10-15%, the total solvent yield by 1-3%, and the electron efficiency to acetone-BuOH-EtOH solvents from 73% for the controls to 80-85% for the CO- and organic acid-modulated fermentations. The dynamic nature of electron flow in this fermentation was elucidated and mechanisms for metabolic control were hypothesized [78].

Other factors

Wyne [79] studied the inhibition of the ABE fermentation of maize mash by C. acetobutylicum under the influence of 30 representative inorganic and organic acids. Several acids caused complete inhibition when the initial reaction was between pH 3.90 and 3.65, among them HCl, HNO₃, H₂SO₄, H₃PO₄ and succinic, maleic, malonic, levulinic, crotonic, glycolic, p-hydroxybutyric, formic, acetic, propionic, butyric and isobutyric acid. The toxic effects are probably associated with a critical hydrogen-ion concentration in the cell interior, closely approximating the observed extracellular hydrogen-ion concentration associated with an inhibitory effect. All three chloroacetic acids are much more toxic than acetic acid, but the hydroxy derivatives of the lower fatty acids are not more toxic than the corresponding normal acids. Pyruvic, lactic and glyceric acid are tolerated at higher hydrogen-ion concentrations. For the lower fatty acids, the inhibitory hydrogen-ion concentration was appreciably lower for each successive higher homolog. On the basis of molar concentration, the order of inhibitory effectiveness was: nonylic > caprylic > heptylic > formic > caproic = isocaproic > valeric = isovaleric > isobutyric = butyric ≥ propionic = acetic. Capillary activity had relatively little effect for formic, acetic, propionic and butyric acid, but was very marked for the higher homologs [79]. The inhibitory effect of these acids can easily be removed by neutralization [80]. When the ABE fermentation is over, the culture medium may be treated by blowing in NH₃ to neutralize most of the organic acids; after the solvents are distilled off, the residue can be supplemented with non-N-containing nutrients, e.g. dried sweet potatoes, and the fermentation repeated in the same way, saving nutrients and increasing the yield [81].

The effect of agitation speed and pressure was studied by Doremus et al. [82]. Batch fermentations were run at varying agitation rates, either pressurized to 1 bar or non-pressurized. Agitation and pressure both affect the level of dissolved H₂ in the medium which, in turn, influences solvent production. In non-pressurized fermentations, the volumetric productivity of BuOH increased as the agitation rate decreased. While agitation had no significant effect on BuOH productivity under pressurized conditions, the overall BuOH productivity increased over that obtained in non-pressurized runs. Maximal butyric acid productivity, however, occurred earlier and increased as agitation increased. Peak H₂ productivity occurred simultaneously with peak butyric acid productivity.
The proportion of reducing equivalents used in forming the above products was determined using a redox balance based on the fermentation stoichiometry. An inverse relationship between the final concentrations of acetone and acetoin was found in all fermentations studied [82].

Using shear activation of C. acetobutylicum, by pumping the cells through capillaries, the cell growth, glucose consumption and product formation rates are considerably increased. A shear-activated continuous cell culture can be used as an inoculum with well-defined fermentation activity for batch cultures. Different runs of such batch cultivations give well-reproducible results which could not be obtained with inocula from other cultures or even from heat-shocked spores. The cells can attain a growth rate higher than 1.6 h⁻¹. Growth of the shear-activated continuous culture is already affected at butanol concentrations below 1.6 g L⁻¹ (Afschar et al., 1986) [80,83].

The effect of viscosity on ABE fermentation was studied by Korneeva et al. [84]. The viscosity of the medium was a limiting factor in ABE production by C. acetobutylicum during fermentation of starch and grains such as wheat and rye flour. Addition of various concentrations of agar-agar (0.1, 0.5 and 0.8%) to the medium showed that elevated viscosity reduces saccharification, increases the concentration of unfermented sugars, and decreases the yield of solvents. Prior treatment of the substrate with α-amylase reduced the viscosity of the medium and improved fermentation and solvent yields [84].

Although ABE fermentation is a strictly anaerobic process [2], Nakhmanovich and Kochkina [85] could increase the BuOH yield by 3.4-9.1% by short periodic aeration of the medium. The redox potential, measured before and after bubbling, decreased sharply on aeration. In batch and continuous cultivations of C. acetobutylicum ATCC 824 on lactose, a strong relationship was observed between the redox potential of the broth and cellular metabolism [86]. The specific productivities of BuOH and butyric acid were maximal at a redox potential of -250 mV. The specific production rate of butyric acid decreased rapidly at both higher and lower redox potentials; that of BuOH, however, settled at a lower but stable value. This was true for both dynamic and steady states. Continuous fermentations on lactose exhibited sustained oscillations at low dilution rates; such oscillation appears to be related to BuOH toxicity to the growth of the cells. At higher dilution rates, where BuOH concentrations were relatively low, no such oscillation was observed. The broth redox potential is apparently an excellent indicator of the resulting fermentation product partitioning [86]. Some selected examples are given in Tables 2 and 3.

Immobilization

Immobilization of C. acetobutylicum strains keeps the bacteria out of the ferment mash and is an essential facility in a variety of integrated solvent recovery methods. Haeggstroem and Molin [87] concluded that immobilized vegetative cells of C. acetobutylicum show a product formation pattern similar to that of ordinarily growing cells when incubated in a simple glucose-salts solution. If vegetative cells of the organism are immobilized in the solvent production phase, solvents are continuously produced on extended incubation. By immobilizing spores of the organism, disturbance of the cells' metabolic activity during the immobilization procedure was avoided.
After the outgrowth of viable cells within the gel, the washed gel preparation retained a high production capacity in the non-growth stage, and the results indicate that continuous production might be fully possible. The butanol productivity was also found to be higher with immobilized cells than in a normal batch process. Haeggstroem [88] used immobilized spores of Clostridium acetobutylicum in a calcium alginate gel. The productivity of the system was 67 g BuOH L⁻¹ d⁻¹, and with the immobilized cells it was possible to achieve continuous BuOH production for 1000 h.

Foerberg et al. [89] developed a technique for maintaining constant activity during continuous production with immobilized, non-growing cells. A single-stage continuous system with alginate-immobilized C. acetobutylicum was fed mainly with a glucose medium that supported acetone-BuOH fermentation but did not permit microbial growth. The inactivation that occurred under these conditions was prevented by pulse-wise addition of nutrients to the reactor. Using this technique, the ratio of biomass to BuOH was reduced to 2%, compared to 34% in a traditional batch culture. At steady state, BuOH was the major end product, with a yield coefficient of 0.20 g/g glucose, and the BuOH productivity was 16.8 g L⁻¹ d⁻¹. In a corresponding system with immobilized growing cells, the ratio of biomass to BuOH was 52-76% and the formation of butyric and acetic acid increased, reducing the yield coefficient for BuOH to 0.11 g g⁻¹. With the intermittent nutrient dosing technique, constant activity of immobilized non-growing cells was maintained for 8 weeks.

Table: Characteristics of the process (yield, content, productivity, reference); the first entry is a complex medium with yeast extract and glucose.

Several carriers have been tested for the production of ABE solvents by an immobilized local strain of C. acetobutylicum: both batch and continuous fermentations were performed using sodium alginate, polyacrylamide, activated carbon, and silica gel carriers. Calcium alginate was found to be the most suitable with batch culture techniques, the total solvent production being 19.55 g L⁻¹ after 4 days. In continuous fermentation, on the other hand, the highest solvent yield was observed with silica gel G-60 (0.063-0.2 mm), with 13.06 g L⁻¹ solvent production. In all cases, the tested solid supports were inferior for solvent production under the experimental conditions used compared with Ca-alginate [90]. High-strength carriers were also tested with C. acetobutylicum ATCC 824 in batch fermentation. Coke, kaolinite and montmorillonite clay appeared to have a beneficial effect on the fermentation, although the effectiveness appeared to depend on the medium used. One of the least expensive materials, coke, was suitable for use in continuous culture; steady-state conditions could be maintained for more than 30 days, with a total solvent concentration, productivity and yield of 12 g L⁻¹, 1.12 g L⁻¹ h⁻¹ and 0.3 g total solvent/g glucose, respectively [91]. Entrapment of C. acetobutylicum AS 1.70 with PVA as the base, combined with absorption on corncob as the carrier, has been recommended; experiments were carried out to produce acetone and butanol both statically in batches and with a circulating corn medium [92]. The vegetative cells of C. acetobutylicum AS 1.70 were also immobilized onto ceramic ring (CR) carriers by adsorption.
The continuous production of acetone-BuOH from an 8% corn mash was carried out for 90 days in a three-stage packed-column reactor system (total volume 5.18 L). The maximal concentration of solvents (acetone, BuOH, and EtOH) was 21.9 g L⁻¹ and the productivity of the column was 24.73 g L⁻¹ d⁻¹; the residual starch concentration was 0.43% and the conversion efficiency of starch was 40.5% [93].

ABE solvent production was also carried out with C. acetobutylicum DSM 792 (ATCC 824) in a two-stage stirred-tank cascade using free and immobilized cells. The cells were immobilized in alginate, κ-carrageenan or chitosan, and the cell-containing pellets were dried or chemically treated to improve their long-term stability. Dried calcium alginate yielded the best matrix system; it remained stable over a fermentation time of 727 h in stirred-tank reactors. The solvent (sum of acetone, butanol and ethanol) productivity of 1.93 g L⁻¹ h⁻¹ at a solvent concentration of 15.4 g L⁻¹ with free cells was increased to 4.02 g L⁻¹ h⁻¹ at a solvent concentration of 4.0 g L⁻¹ with the calcium alginate-immobilized cells (25% cell loading, 12 g L⁻¹ pellet concentration, 3 g L⁻¹ wet cell mass concentration). With a pellet diameter of 0.5 mm, the biocatalyst efficiency was <50% [94].

Immobilized cells of C. saccharoperbutylacetonicum N1-4 (ATCC 13564) were tested in an anaerobic batch culture system. Two different immobilization methods were studied: active immobilization in alginate, and passive immobilization on stainless steel scrubber, nylon scrubber, polyurethane with uniform pore size, polyurethane with varied pore size, and palm oil empty fruit bunch fiber. For immobilization in alginate, the effects of cell age, initial culture pH and temperature on ABE production were examined. Immobilized solventogenic cells (18 h) produced the highest total solvent concentration compared with cells from other phases, with a productivity of 0.325 g L⁻¹ h⁻¹. The highest solvent production with actively immobilized cells was obtained at pH 6.0 and 30 °C, with a productivity of 0.336 g L⁻¹ h⁻¹. Polyurethane with varied pore size was significantly better than the other materials tested, with solvent productivity and YP/S 3.2 times and 1.9 times those of free cells, respectively, after 24 h of fermentation. The authors concluded that the passive immobilization technique increases the productivity (by 215.12%) and YP/S (by 88.37%) of solvents produced by C. saccharoperbutylacetonicum N1-4 [95].

C. beijerinckii was immobilized in calcium alginate to produce BuOH continuously from glucose. Two different alginate geometries (beads and coated wire netting) were used for the continuous experiments and two mathematical models (sphere and flat plate) were developed. The calculations revealed no glucose limitation in either case; furthermore, the biomass build-up in the alginate was probably a surface process [96].

Cells of C. acetobutylicum immobilized on bonechar were used for the production of ABE solvents from whey permeate. When the process was performed in packed-bed reactors operated in vertical or inclined mode, solvent productivities up to 6 kg m⁻³ h⁻¹ were obtained; however, the systems suffered from blockage due to excess biomass production and gas hold-up. These problems were less apparent when a partially packed bed reactor was operated in horizontal mode. A fluidized-bed reactor was the most stable of the systems investigated, and a productivity of 4.8 kg m⁻³ h⁻¹ (numerically equal to g L⁻¹ h⁻¹) was maintained for 2000 h of operation.
The results demonstrate that this type of reactor may have a useful future role in ABE fermentation [97]. Schoutens determined the optimal conditions for continuous BuOH production from whey permeate with C. beijerinckii LMD 27.6 immobilized in calcium alginate beads. The influence of three parameters on BuOH production was investigated: fermentation temperature, dilution rate (during start-up and at steady state) and the Ca²⁺ concentration in the fermentation broth. Both a fermentation temperature of 30 °C and a dilution rate of ≤0.1 h⁻¹ during the start-up phase are required to achieve continuous BuOH production from whey permeate. BuOH can be produced continuously from whey permeate at reactor productivities 16-fold higher than those found in batch cultures with free C. beijerinckii cells on whey media [98].

Fermentation of cane sugar molasses by immobilized C. acetobutylicum cells was greatly affected by the inoculum size, the calcium alginate concentration and the molar ammonium nitrogen to molasses ratio; the pH of the medium and the incubation temperature also influenced ABE production. The maximum total solvent content reached 22.54 g L⁻¹ at an inoculum size of 6% (w/w), a molasses concentration of 140 g/L, a sodium alginate amount of 3%, a molar ammonium nitrogen to molasses ratio of 0.48 and pH 5.5. Attempts to recycle the fermentation process using immobilized spores of C. acetobutylicum afforded total solvent contents of 22.54, 20.64 and 19.31 g L⁻¹ in the first three runs, respectively [99].

Continuous fermentation

Continuous fermentation is a preferred operational mode for decreasing production costs and increasing efficiency. It can easily be performed using cascade reactors while keeping the butanol concentration below the inhibition limit. The butanol concentration can be suppressed by dilution or by various recovery methods (adsorption, extraction, stripping, membrane techniques, or combinations of these). Increasing the amount of active biomass in the mash by cell recycling also plays a key role in continuous ABE fermentation processes. Dyr et al. [100] observed the formation of neutral solvents in a continuous ABE fermentation by C. acetobutylicum without morphological adaptation due to the altered way of cultivation; the results obtained leave no doubt as to the possibility of employing the continuous method for acetone-butanol fermentation [101]. A cascade-type continuous ABE fermentation method for soluble starch was developed with an apparatus consisting of a battery of 11 fermenting tanks [102]. The first tank is used as an incubator and activator for the culture; the actual fermentation is carried out in the remaining tanks, with the feed liquor supplied continuously. The continuous fermentation process for Me₂CO-BuOH production follows first-order kinetics. A continuous ABE fermentation process was developed and adopted in plants using starch raw materials by Yarovenko [103]. The basis for a continuous process is knowledge of the laws of continuous mixing of liquids in batteries of connected vessels, which are discussed by Yarovenko [104]. The length of fermentation considerably influences the acidity of the fermented mixture at the end of the process; owing to differences in mash composition and process duration, the acid level is mostly higher in continuous fermentation than in a discontinuous one. With the continuous acetone-butanol process, the fermentation speed could be raised 1.58-fold compared with the semicontinuous method.
In continuous fermentation it is useful to operate 2-5 parallel batteries and to cultivate the bacteria in separate vessels. The carbohydrates produced by saccharification under different conditions were studied, as they are of great importance for the length and course of fermentation. Operation of the battery's head fermentor has a great influence on the whole process, the amount of inoculum, acid production and fermentation speed. To provide an adequate microorganism concentration and to reduce the risk of infection in the battery's head fermentor, mash from the second vessel is recycled. The acidity increase was evident primarily in the last tank. The optimum concentration of cells inoculated at the start of the fermentation is 7×10⁹/mL for C. acetobutylicum, and physiologically mature cells should comprise about 80% of the total inoculum. The flow rate into the main fermentor should be harmonized with the carbohydrate utilization rate in the battery, and the bacteria in the main vessel must be maintained in their stationary phase of growth. Continuous ABE fermentation increased the production efficiency by 20% and improved carbohydrate utilization by 2.4%, along with the characteristics of the beer [103,104]. The Japanese K.F. Engineering [105] described an apparatus for the production of Me₂CO and BuOH by immobilized ABE-producing microorganisms, in which the immobilized microorganisms are first run in a batch process until active gas formation is observed and a continuous production process is then started.

The availability of and demand for biosynthetic energy (ATP) is an important factor in the regulation of solvent production in steady-state continuous cultures of C. acetobutylicum. The effect of biomass recycle at a variety of dilution rates and recycle ratios on product yields and selectivities has been determined. Under non-glucose-limited conditions, when the ATP supply is not growth-limiting, the lower growth rate imposed by biomass recycle leads to a reduced demand for ATP and substantially higher acetone and butanol yields. When the culture is glucose-limited, however, biomass recycle results in lower solvent and higher acid yields [106]. Wijjeswarapu et al. studied continuous BuOH fermentation by C. acetobutylicum in a stirred-tank reactor. Glucose fermentation with cell recycling gave small amounts of EtOH, moderate amounts of Me₂CO and BuOH, and large amounts of AcOH and butyric acid; without cell recycling, the overall BuOH production decreased by a factor of 3.5 [107]. Afschar et al. used a cascade system with cell recycling. At a dry cell mass concentration of 8 g/L and a dilution rate of D = 0.64 h⁻¹, a solvent productivity of 5.4 g L⁻¹ h⁻¹ could be attained. To avoid the degeneration of the culture which occurs at high ABE solvent concentrations, a two-stage cascade with cell recycling and turbidostatic cell-concentration control, the first stage of which was kept at relatively low cell and product concentrations, was used as the optimal solution. Solvent productivities of 3 and 2.3 g L⁻¹ h⁻¹ were achieved at solvent concentrations of 12 and 15 g L⁻¹, respectively [108]. Huang and Ramey [109] determined the influence of dilution rate and pH on continuous cultures of Clostridium acetobutylicum in a fibrous-bed bioreactor with high cell density and high butyrate concentration at pH 5.4 and 35 °C.
By feeding glucose and butyrate as co-substrates, the fermentation was maintained in the solventogenic phase, and an optimal butanol productivity of 4.6 g L⁻¹ h⁻¹ with a yield of 0.42 g g⁻¹ was obtained at a dilution rate of 0.9 h⁻¹ and pH 4.3.

Eight Clostridium acetobutylicum strains were examined for α-amylase activity; strains B-591, B-594 and P-262 had the highest activities. Defibered sweet potato slurry containing starch, supplemented with potassium phosphate, cysteine-HCl and polypropylene glycol, was used as a continuous feedstock for a multistage bioreactor system. The system consisted of four columns (three vertical and one nearly horizontal) packed with beads containing immobilized cells of C. acetobutylicum P-262. The effluent contained 7.73 g solvents L⁻¹ (1.56 g acetone, 0.65 g ethanol, 5.52 g butanol) and no starch. The productivity of total solvents during continuous operation was 1.0 g L⁻¹ h⁻¹ at a 19.5% yield, compared with 0.12 g L⁻¹ h⁻¹ at a 29% yield in the batch system [110].

Pierrot et al. introduced hollow-fiber ultrafiltration to separate and recycle the cells in continuous ABE fermentation. Under partial cell recycling at a dilution rate of 0.5 h⁻¹, a cell concentration of 20 g L⁻¹ and a solvent productivity of 6.5 g L⁻¹ h⁻¹ were maintained for several days at a total solvent concentration of 13 g L⁻¹ [111]. The device developed was steam-sterilizable and permitted thorough cleaning of the ultrafiltration membrane without interrupting the continuous fermentation. With total recycle of the biomass, a dry weight concentration of 125 g L⁻¹ was attained, which greatly enhanced the volumetric solvent productivity, averaging 4.5 g L⁻¹ h⁻¹ over significant periods of time (>70 h) while maintaining the solvent concentration and yield at acceptable levels [112].

A stable continuous production system with non-growing cells of C. acetobutylicum adsorbed onto beechwood shavings, using different adsorption procedures, was obtained for the production of ABE solvents by Foerberg and Haeggstroem [113]. The system was started with a continuous flow of a complete nutrient medium. A thick cell layer formed on the wood shavings during the first day but disappeared rapidly. Under glucose limitation, a new cell layer developed during the following period (2-5 days). After this phase, a continuous flow of non-growth medium with nutrient dosing (8 h dosing interval) was started. This led to a washout of most adsorbed cells and ~85% of the suspended cells. Another cell layer formed during this period and the system was controlled by the nutrient dosing technique. The system was stable, with no cell leakage, for weeks. The maximal combined productivity of butanol, acetone and EtOH was 36 g L⁻¹ d⁻¹ with a product ratio of 6:3:1 [113].

A continuous ABE production system with high cell density, obtained by cell recycling of Clostridium saccharoperbutylacetonicum N1-4, has also been studied. In a conventional continuous ABE culture without cell recycling, the cell concentration remained below 5.2 g L⁻¹ and the maximal ABE productivity was only 1.85 g L⁻¹ h⁻¹ at a dilution rate of 0.20 h⁻¹. To reach a high cell density faster, the solventogenic cells of the broth were concentrated 10-fold by membrane filtration, giving ~20 g L⁻¹ of active cells after only 12 h of cultivation. Continuous culture with cell recycling was then started, and the cell concentration increased gradually to a value greater than 100 g L⁻¹.
The maximum ABE productivity of 11.0 g L -1 h -1 was obtained at a dilution rate of 0.85 h -1 . However, a cell concentration >100 g L -1 resulted in heavy bubbling and broth outflow, which made it impossible to continue the continuous culture. Therefore, to maintain a stable cell concentration, cell bleeding and cell recycling were performed. At bleeding dilution rates of 0.11 h -1 and above, continuous culture with cell recycling could be operated for more than 200 h without strain degeneration, and an overall volumetric ABE productivity of 7.55 g L -1 h -1 was achieved at an ABE concentration of 8.58 g L -1 [114].

[Table: characteristics of the continuous processes, giving yield, solvent content, productivity, and references.]

Removal methods of ABE solvents from ferment liquors

The complexity and chemical interactions of the aqueous mixture of BuOH, 2-PrOH, acetone, and EtOH produced by the bacterial fermentation of various carbohydrates launched the development of numerous innovative separation processes. Belafi-Bako et al. [115] reviewed, with 246 references, results and developments on simultaneous product removal in ethanol and acetone/butanol/ethanol fermentation regarding thermal processes (e.g., evaporation, distillation), physical and chemical methods (e.g., extraction, adsorption, as well as catalytic reactions), and different membrane separation techniques (e.g., perstraction, reversed osmosis, pervaporation, dialysis).

Removal by vacuum

The simplest recovery method is removal of the ABE solvents during fermentation under vacuum, because the relative volatility of ethanol, acetone, and butanol is much higher than that of water. I.G. Farbenindustrie [116] used this method periodically: the butanol was removed in vacuum before completion of the fermentation, fresh wort was added, and the fermentation was continued. Dreyfus [117] removed the ABE solvents during fermentation by vaporization with the aid of cyclohexane, forming an azeotropic mixture therewith, and passing an oxygen-free gas through the liquor. The cyclohexane may be added continuously or intermittently and may be carried as vapor by the gas stream. Removal of ABE solvents by vacuum-assisted in-situ pervaporation techniques at the temperature of fermentation is discussed in chapter 6.6.

Removal by gas stripping

Qureshi reviewed the ABE fermentation in various types of reactor systems and recovery by gas stripping with 13 references. Gas stripping is a simple technique which does not require expensive apparatus, does not harm the culture, does not remove nutrients and reaction intermediates, and reduces butanol toxicity (inhibition). As a result of butanol removal by gas stripping, concentrated sugar solutions can be used to produce ABE solvents. Compared to sugar utilization of 30 g L -1 in a control batch reactor, sugar utilization of 199 g L -1 has been reported with 69.7 g L -1 solvent production. In fed-batch reactors, concentrated sugar solutions (350 g L -1 ) have been used. Additionally, the process of ABE production results in concentrated product streams containing 9.1-120 g L -1 ABE solvent. In integrated ABE production and recovery systems, selectivity figures of 4-30.5 have been reported [118]. The effects of factors such as gas recycle rate, bubble size, and the presence of acetone and ethanol in the solutions or broth were investigated in order to remove butanol from model solutions or fermentation broth. The butanol stripping rate was found to be proportional to the gas recycle rate.
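This proportionality is what a simple first-order stripping model predicts: neglecting ongoing solvent production, dC/dt = -k·C, where the stripping coefficient k scales with the gas recycle rate. A minimal sketch under that assumption follows; the k values are illustrative, not taken from the cited studies.

```python
import math

def butanol_conc(c0: float, k: float, t: float) -> float:
    """Butanol concentration (g L^-1) after t hours of gas stripping,
    assuming first-order removal dC/dt = -k*C and no ongoing production."""
    return c0 * math.exp(-k * t)

# Hypothetical stripping coefficient k = 0.1 h^-1 at a given gas recycle rate;
# doubling the recycle rate roughly doubles k (rate proportional to gas flow).
for k in (0.1, 0.2):
    print(k, round(butanol_conc(8.0, k, 10.0), 2))  # start at 8 g L^-1, 10 h
```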
In the bubble size range tested (<0.5 and 0.5-5.0 mm), bubble size did not have any effect on the butanol removal rate. In C. beijerinckii fermentation, ABE productivity was reduced from 0.47 g L -1 h -1 to 0.25 g L -1 h -1 when the smaller (<0.5 mm) bubble size and an excessive amount of antifoam (to inhibit the foam caused by the smaller bubbles) were used, suggesting that the fermentation was negatively affected by the antifoam [119]. Gas stripping can be performed using the fermentation gases (H 2 and CO 2 ) formed during fermentation. Concentrated sugar solutions (250-500 g L -1 ) were used in continuous fermentation of Clostridium beijerinckii BA101, which operated for 21 d (505 h), producing 460 g acetone-BuOH/L [120]. In the integrated fed-batch fermentation and product recovery system, solvent productivities were improved to 400% of the control batch fermentation productivities. In a control batch reactor, the culture used 45.4 g glucose L -1 and produced 17.6 g total solvents L -1 (yield 0.39 g g -1 , productivity 0.29 g L -1 h -1 ). Using an integrated fermentation-gas stripping product recovery system with CO 2 and H 2 as carrier gases, the fed-batch reactor was operated for 201 h. At the end of fermentation, an unusually high concentration of total acids (8.5 g L -1 ) was observed. A total of 500 g glucose was used to produce 232.8 g solvents (77.7 g acetone, 151.7 g butanol, 3.4 g ethanol) in 1 L culture broth. The average solvent yield and productivity were 0.47 g g -1 and 1.16 g L -1 h -1 , respectively [121]. Using a potential industrial substrate (liquefied corn starch, 60 g L -1 ) in a batch process integrated with gas stripping resulted in the production of 18.4 g L -1 ABE solvents, with 92% utilization of the sugars present in the feed. In a fed-batch reactor fed with saccharified liquefied corn starch, 81.3 g L -1 ABE was produced, compared to 18.6 g L -1 in the control. In this integrated system, 225.8 g L -1 corn starch sugar (487% of the control) was consumed. In the absence of product removal, C. beijerinckii BA101 cannot utilize more than 46 g L -1 glucose [122].

Removal by adsorption

Tomota and Fujiki [123] observed that the presence of a small amount of activated carbon promotes the BuOH fermentation of corn. Oda [124] compared various activated carbons for the removal of BuOH in order to avoid its toxicity; the commercial supernorite proved to be the most effective with intermittent additions. Oda [125] also studied the effect of pre-treatment of the carbons on their BuOH-removing capacity, but acid- or alkali-treated activated carbons gave results similar to those obtained by simply adding commercial activated carbon to the mash. Yamazaki et al. [126-128] packed activated carbon into a column; after saturation with ABE solvents, the column was heated to 150 °C and steamed to recover the solvents, whereby 98% of the BuOH and 99% of the Me 2 CO could be recovered. The activated carbon could be used repeatedly without replacement; the efficiency of soft carbon was slightly reduced by repeated sorption, whereas that of hard carbon was hardly affected. The Freundlich adsorption isotherms of some commercial carbons at 37 °C were x/m = 0.151·C^0.52 for Me 2 CO and x/m = 0.275·C^0.57 for BuOH, where x/m is the amount of solvent adsorbed (mmol per mg of adsorbent) and C the concentration (mmol L -1 ) of solvent remaining at equilibrium.
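Evaluating these two isotherms side by side makes the preferential butanol uptake explicit; a minimal sketch using the constants quoted above:

```python
def freundlich(c_mmol_per_L: float, k: float, n: float) -> float:
    """Freundlich loading x/m at equilibrium concentration C: x/m = k * C^n."""
    return k * c_mmol_per_L ** n

# Constants reported at 37 degC for commercial carbons [126-128]
for c in (10.0, 50.0, 100.0):  # equilibrium concentration, mmol L^-1
    acetone = freundlich(c, 0.151, 0.52)
    butanol = freundlich(c, 0.275, 0.57)
    print(c, round(acetone, 3), round(butanol, 3), round(butanol / acetone, 2))
# The butanol/acetone loading ratio grows with concentration (about 2.0 at
# 10 mmol L^-1, 2.3 at 100 mmol L^-1), consistent with the selective sorption
# becoming more marked at higher solvent concentrations, as noted below.
```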
The amount of BuOH adsorbed by carbon was ≥4 times that of Me 2 CO, and this selective sorption was more marked with increasing concentration of the solvents. The sorption of BuOH was slower than that of Me 2 CO, and >48 h was necessary to reach equilibrium. Smaller carbon granules were more effective, and carbon packed in a bag suspended in the fermenting mash was convenient. Fermentation experiments showed 12-18 g sugar and 5-6 g carbon per 100 ml to be optimal. Carbon granules of 2-4 mm 3 size were most adequate. Addition of carbon after the growth phase or the maximal acidity phase gave the best results. A sugar mash (12 g/100 ml) was fermented with 6 g/100 ml activated carbon in 3 days by C. acetobutylicum to give a solvent yield of 36% (based on added sugar). The ratio of Me 2 CO to BuOH produced was 1:2. Urbas [129] developed a method for adsorption of ABE components from ferment mash produced by C. acetobutylicum on activated carbon, with elution by a volatile solvent. Elution was carried out by feeding the solvent vapor to the carbon bed, maintained at or slightly below the solvent condensation temperature, at a rate of ~0.5 bed volume h -1 until the volatile solvent was detected in the eluate, and continuing until ~0.5 additional bed volume of eluate was collected. The 1st fraction was mainly water (up to ~96% of the initial amount), the 2nd a concentrated aqueous solution of the organic compounds in the volatile solvent. The solvent was then distilled off. The concentration of the final aqueous solution was ~30%. The volatile solvents were Me 2 CO, 2-butanone, EtOAc, i-PrOH, MeOH, and Et 2 O. A series of adsorbents, such as bone charcoal, activated charcoal, silicalite, polymeric resins (XAD series), Bonopore, and polyvinylpyridine, were tested for the separation of butanol from aqueous solutions and/or fermentation broth by adsorption. Silicalite appeared to be the most attractive, as it could be used to concentrate butanol from dilute solutions (from 5 g L -1 up to 790-810 g L -1 ) and allowed complete desorption of the ABE solvents. In addition, silicalite could be regenerated by heat treatment. The energy requirement for butanol recovery by adsorption-desorption processes was 1,948 kcal kg -1 butanol, as compared to 5,789 kcal kg -1 butanol for steam stripping distillation. Other techniques such as gas stripping and pervaporation required 5,220 and 3,295 kcal kg -1 butanol, respectively [130]. Milestone and Bibby [131] studied the usability of silicalite, which provided a possible economic route for the separation of alcohols from dilute solutions. Thus, EtOH was concentrated from a 2% (wt/vol) solution to 35%, and BuOH from 0.5 to 98% (wt/vol), by adsorption on silicalite and subsequent thermal desorption. Maddox [132] found that 85 mg BuOH/g silicalite could be adsorbed from ferment liquors. Polymeric resins with high n-butanol adsorption affinities were identified from a candidate pool of commercially available materials representing a wide array of physical and chemical properties. Resin hydrophobicity, which was dictated by the chemical structure of the constituent monomer units, most greatly influenced the resin-aqueous equilibrium partitioning of n-butanol, whereas ionic functionalization appeared to have no effect. In general, materials derived from poly(styrene-co-divinylbenzene) possessed the greatest n-butanol affinity, while the adsorption potential of these resins was limited by their specific surface area.
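To put the recovery-energy figures of [130] into perspective, they can be expressed as a fraction of the energy content of the recovered butanol; the short sketch below assumes a lower heating value of about 33 MJ kg -1 for butanol, a typical literature value not taken from the cited study.

```python
KCAL_TO_MJ = 4.184e-3
BUTANOL_LHV_MJ_PER_KG = 33.0  # assumed typical lower heating value

# Recovery energies reported in [130], kcal per kg butanol
recovery_energy_kcal_per_kg = {
    "adsorption-desorption": 1948,
    "pervaporation": 3295,
    "gas stripping": 5220,
    "steam stripping distillation": 5789,
}

for method, kcal in recovery_energy_kcal_per_kg.items():
    mj = kcal * KCAL_TO_MJ
    share = mj / BUTANOL_LHV_MJ_PER_KG
    print(f"{method}: {mj:.1f} MJ/kg = {share:.0%} of butanol's heating value")
# Adsorption-desorption costs roughly a quarter of the fuel's energy content,
# whereas steam stripping distillation consumes nearly three quarters of it.
```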
Resins were tested for their ability to serve as effective in situ product recovery devices in the n-butanol fermentation by C. acetobutylicum ATCC 824 [133]. In small-scale batch fermentation, addition of 0.05 kg L -1 Dowex Optipore SD-2 facilitated achievement of effective n-butanol titers as high as 2.22% (w/v), well above the inhibitory threshold of C. acetobutylicum ATCC 824 and nearly twice that of traditional single-phase fermentation. Retrieval of n-butanol from the resins via thermal treatment was demonstrated with high efficiency and predicted to be economically favorable [133]. Testing performed on four different polymeric resins in the fermentation by C. acetobutylicum showed that increasing the pH could prevent adsorption of intermediates such as acetic and butyric acids; Bonopore was the polymer giving the best adsorption pattern for butanol with no undesirable effects. The adsorption characteristics of butanol from aqueous fermentation broth were also determined on RA, GDX-105, and PVP resins. The adsorption order was GDX-105 > RA > PVP, and the isotherms could be represented by the Langmuir equation. The adsorption increased with increasing temperature, except at very low butanol concentrations. The ΔG°, ΔH°, and ΔS° values for butanol adsorption from aqueous solutions on GDX-105 showed that the enthalpy decreased and the entropy increased [134]. In butanol/isopropanol batch fermentation, adsorption of the alcohols can increase the substrate conversion. The fouling of adsorbents by cell and medium components is severe, but this had no measurable effect on the butanol adsorption capacity in at least three successive fermentations. With the addition of some adsorbents, the fermentation was drawn towards production of butyric and acetic acids [135].

Basic considerations of solvent extraction

Solvent extraction techniques have the potential for tremendous energy savings in the recovery of fermentation products such as ABE solvents. Such savings have a direct impact on the economics of the entire fermentation. In order to find the optimal conditions of extractive butanol recovery, however, numerous conditions and factors have to be taken into consideration. A special case of extractive recovery is the so-called in-situ extractive fermentation (see chapter 6.5), where extraction is performed during, and together with, fermentation. In this case, not only the separation and recovery characteristics play a key role in the process; the toxicity of the non-miscible solvents also basically determines the applicability of the method. In the presence of an extractant solvent, moreover, due to distribution equilibria, the concentration of each component of the fermentation broth (acids used in decreasing the pH to initiate the solventogenic stage, substrates or intermediates such as glucose, acetate, butyrate) changes. Awareness of these concentration relationships is essential for controlling the process. Mass and energy balances of side-stream and countercurrent extraction were compared with the corresponding parameters of a classic distillation procedure in the recovery of ABE solvents from fermentation broth [136]. A general mathematical model for performance evaluation of an acetone-butanol continuous flash extractive fermentation system was formulated in terms of productivity, energy requirement (energy utilization efficiency), and product purity.
Simulation results based on experimental data showed that the most pronounced performance improvement could be achieved by using a highly concentrated substrate as feed, and that an increase in solvent dilution rate could only improve the total productivity at the expense of energy utilization efficiency. A two-vessel partial flash system, with the first vessel having two to three plates and the second vessel operating as a complete flash vessel, is required to ensure high product purity [137]. Extraction with solvents having distribution coefficients above one appears to have a more favourable energy balance than distillation [136]. The distribution coefficients of the ABE solvents between water and the selected extractant, as well as the biocompatibility of the extractant, are crucial parameters. A solvent screening criterion was developed, based on the maximum product concentration attainable, for the assessment of batch and semicontinuous multicomponent extractive fermentations [138]. Dadgar and Foutsch evaluated 47 solvents for their ability to recover Clostridium fermentation products. Equilibrium distribution coefficients and separation factors from water for ethanol, butanol, and acetone were determined [139]. Griffith et al. [140] measured the organic/aqueous distribution coefficients of numerous potential BuOH extractants and simultaneously tested several in bacterial culture. The most effective appeared to be polyoxyalkylene ethers, which had distribution coefficients in the range of 1.5-3 and showed little or no toxicity toward the fermentation. The esters and alcohols tested generally had better distribution coefficients but higher biotoxicity. Barton and Daugulis performed biocompatibility tests on 63 organic solvents, including alkanes, alcohols, aldehydes, acids, and esters. Thirty-one of these solvents were further tested to determine their partition coefficients for butanol in fermentation medium of C. acetobutylicum. The biocompatible solvent with the highest partition coefficient for BuOH (4.8) was poly(propylene glycol) 1200, which was selected for fermentation experiments [141]. Thirty-six chemicals were tested for their distribution coefficients for BuOH, the selectivity of alcohol/water separation, and their toxicity towards Clostridia. Convenient extractants were found in the group of esters with high molar mass. Liquid-liquid extraction was carried out in a stirred fermentor and in a spray column; formation of emulsions and fouling of the solvent in fermentation broth cause problems with the operation of this type of equipment [142]. Based on the solvent screening criterion and practical experience, one of the best solvents proved to be oleyl alcohol [143]. Oleyl alcohol was used at 40% of the culture medium volume to extract BuOH and acetone from the fermentation broth produced from glucose by C. acetobutylicum, and fermentation of the raffinate was continued after the extraction [144]. Given the known biocompatibility of extractants such as oleyl alcohol, 1-decanol, 1-octanol, 1-heptanol, and ethyl acetate, and considering economic viewpoints as well, a mixed extractant of oleyl alcohol and decanol was chosen for extraction at a phase ratio of 1:5 [145]. Both butyric acid and butanol could readily be extracted from microbial fermentation broth with vinyl bromide. The vinyl bromide fraction was separated from the aqueous broth and evaporated to give substantially pure butyric acid and/or BuOH.
Three passes of the broth through separation columns of vinyl bromide at 4 °C enabled isolation of ~65% of the total butyric acid and ~60% of the BuOH in the broth [146]. The methyl, ethyl, propyl, and butyl esters of vegetable oils are effective extractants for butanol from aqueous solutions. The effects of four salts, three alcohols, and a ketone that could be expected to affect the extraction of BuOH from industrial fermentation systems were evaluated. Variations in NaCl, Na 2 SO 4 , Na 2 SO 3 , and KH 2 PO 4 from 0 to 0.15 M gave small changes in the distribution coefficients for extraction of 0.1-4.1% BuOH from aqueous solutions at 25, 40, and 55 °C. Mild increases occurred with increasing temperature and increasing NaCl, Na 2 SO 4 , and KH 2 PO 4 ; mild decreases in BuOH extraction occurred with increasing Na 2 SO 3 . Variations in acetone, EtOH, and 2-PrOH concentrations between 0 and 4% at 25, 40, and 55 °C gave small changes in the distribution coefficients at BuOH concentrations of 0.1-4.1%. A slight increase in BuOH extraction was observed with increasing 1-pentanol under similar conditions [147]. Extraction of ABE solvents with long-chain fatty acid esters, and use of the extracts without separation as diesel fuel, is discussed in chapter 8. Ionic liquids are novel green solvents that have the potential to be employed as extraction agents to remove butanol from aqueous fermentation media. An extraction procedure using 1-butyl-3-methylimidazolium bis(trifluoromethylsulfonyl)imide or 1-butyl-3-methylimidazolium hexafluorophosphate ionic liquids was developed by Eom et al. [148]. Knowledge of the phase behaviour of ionic liquid-butanol-water systems is essential in selecting the appropriate solvent [149,150]. 1-Hexyl-3-methylimidazolium bis(trifluoromethylsulfonyl)imide exhibits Type 2 liquid-liquid equilibrium behavior toward the butanol-water system, thus this ionic liquid can easily separate 1-butanol from water [150].

In situ extractive fermentation

End-product inhibition can be reduced by in situ removal of the inhibitory fermentation products as they form. The first experiments were performed by Bekhtereva, who studied the effect of BuOH on the ABE fermenting process and on the development of Clostridium acetobutylicum in concentrated mash. Experimental removal of neutral products from the substrate during fermentation was tested by continuous extraction with castor oil. This oil could extract 13-60% of the acetone, 5-20% of the EtOH, and 50-88% of the BuOH from the wort. By adding the oil to the medium in varying amounts depending on the carbohydrate content, it was possible to ferment corn mash of 3-5 times the usual concentration. The yield of acetone was 20-37 g L -1 of wort, that of all neutral products 60-100 g L -1 . Their concentrations in the wort under the oil layer were usually lower than in control vessels, e.g., total products 1.4-2.3% and BuOH 0.4%, against 1.2-1.3% in usual fermentation. The extraction was beneficial to the development of the bacteria [151]. Other starch-containing materials could also be fermented in the usual manner, with BuOH and Me 2 CO continuously removed by means of a solvent immiscible with H 2 O, e.g., castor oil [152]. Extraction processes were coupled to batch, fed-batch, and continuous BuOH fermentation to confirm the applicability of the recovery techniques in the actual process. In batch and fed-batch fermentation, a 3-fold increase in substrate consumption could be achieved, and in continuous fermentation an increase of ~30% [142].
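The leverage that a given distribution (partition) coefficient provides can be read from a single-stage equilibrium balance: with K = C(organic)/C(aqueous) and an organic-to-aqueous phase ratio R, the fraction of butanol transferred to the extractant is KR/(1 + KR). A minimal sketch follows; the phase ratios are illustrative, while the K values are those quoted in the screening studies above.

```python
def fraction_extracted(k: float, phase_ratio: float) -> float:
    """Single equilibrium stage: K = C_org/C_aq, phase_ratio = V_org/V_aq."""
    return k * phase_ratio / (1.0 + k * phase_ratio)

# K ~ 2 for polyoxyalkylene ethers [140], K = 4.8 for PPG 1200 [141]
for name, k in [("polyoxyalkylene ether", 2.0), ("PPG 1200", 4.8)]:
    for r in (0.4, 1.0):  # e.g. 40% extractant, as with the castor oil runs
        print(f"{name}, V_org/V_aq = {r}: {fraction_extracted(k, r):.0%} extracted")
# Even a modest K of 2 removes ~44% of the butanol at a 0.4 phase ratio,
# which is why K > 1 extractants compare favorably with distillation [136].
```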
Toxicity and selectivity of 13 organic compounds were tested in extractive batch fermentations performed with C. acetobutylicum. Among them, oleyl alcohol and a mixed alcohol (a mixture of oleyl alcohol and C 18 alcohol) were the best for acetone-BuOH fermentation. An orthogonal cross-test method with 3 factors and 3 levels was used to evaluate the effects of fermentation temperature, initial glucose concentration, and solvent/water ratio on extractive batch ABE fermentation. Extractive batch ABE fermentation in a stirred fermentor was studied at different initial glucose concentrations at 41/35 °C and a solvent/water ratio of 1:2. When the initial glucose concentration was 110 g L -1 , at the end of the extractive fermentation the BuOH concentration in the broth and in the solvent was 5.12 and 22.3 g L -1 , respectively. The total BuOH and ABE concentrations based on the broth volume were ~16.27 and 33.63 g L -1 , the conversion of glucose was 98%, and the total ABE yield was 0.312. In situ extractive fermentation could eliminate the inhibition of BuOH on microbial growth, increase the initial glucose concentration, and reduce the wastewater amount, thus reducing the energy consumed for separation and purification of the products [153]. Roefler et al. [154] studied the effect of six extractants in batch extractive fermentation: kerosene, 30 wt.% tetradecanol in kerosene, 50 wt.% dodecanol in kerosene, oleyl alcohol, 50 wt.% oleyl alcohol in a decane fraction, and 50 wt.% oleyl alcohol in benzyl benzoate. Best results were obtained with oleyl alcohol or a mixture of oleyl alcohol and benzyl benzoate. In normal batch fermentation of C. acetobutylicum, glucose consumption is limited to ~80 kg m -3 due to accumulation of BuOH in the broth. In extractive fermentation using oleyl alcohol or a mixture of oleyl alcohol and benzyl benzoate, >100 kg m -3 of glucose can be fermented. Maximal volumetric BuOH productivity was increased by ~60% in extractive fermentation compared to batch fermentation. BuOH productivities obtained in extractive fermentation compare favorably with other in situ product removal fermentations [154]. A medium for ABE fermentation by C. acetobutylicum was mixed with 0.2-5.0% 1-octanol or 2-ethylhexanol and various parameters of the fermentation were studied. Glucose consumption, cell growth, ABE formation, and acetate and butyrate formation were inhibited, especially at higher solvent concentrations. Octanol was more toxic than 2-ethylhexanol [155]. A mathematical model for simultaneous fermentation and extraction of the products was derived for ABE production by immobilized C. acetobutylicum cells in a microporous hollow-fiber-based tubular fermentor-extractor. The solvent, 2-ethyl-1-hexanol, was used for in situ dispersion-free extraction of the products. Both predicted and experimental data followed the same trend. The experimentally observed total solvent productivity increased by >40% as a result of in situ solvent extraction [156]. Unfortunately, good extractants for BuOH, such as decanol, are toxic to C. acetobutylicum. The use of mixed extractants, namely mixtures of toxic and nontoxic coextractants, was tested to circumvent this toxicity. Decanol appeared to inhibit BuOH formation by C.
acetobutylicum when present in a mixed extractant that also contained oleyl alcohol; however, maintenance of the pH at 4.5 alleviated the inhibition of BuOH production and of butyrate consumption during solventogenesis. A mixed extractant containing 20% decanol in oleyl alcohol enhanced BuOH formation by 72% under pH-controlled conditions. A mechanism for the effects of decanol on product formation was proposed [157]. The same mixed extractant, 20% decanol in oleyl alcohol, was used by Wang et al. for in-situ extractive acetone-butanol fermentation, resulting in a butanol concentration of 19.21 g L -1 . Butanol productivity was 62.8% higher than that of the control, while total organic solvent productivity increased by 42.3% compared to the control [145]. BuOH fermentation was also carried out in contact with a solvent containing C 10-14 alcohols. A seed culture of C. acetobutylicum IAM19013 was inoculated and mixed with tridecanol. The broth was anaerobically fermented, with stirring, at 37 °C for 60 h. The solvent layer at the top of the fermentor was circulated to the bottom. The concentrations of BuOH in the solvent and vapor were 41.6 g L -1 and 66%, respectively [158]. Higher alcohols, e.g., C 16-18 unsaturated alcohols and C 16-20 branched alcohols, were also tested for continuous extraction of BuOH from the medium during the fermentation period. Extraction of the BuOH from the medium using unsaturated or branched alcohols innocuous to the microorganism markedly increased the BuOH yield. Thus, C. acetobutylicum was anaerobically cultivated at 37 °C on a medium containing 10% glucose and, after 30 h, 40% oleyl alcohol was added to the broth to remove the BuOH from the aqueous phase and thereby reactivate the fermentation. This increased the total BuOH concentration 2.5-fold over an additional 70 h [159]. Oleyl alcohol was found to be one of the best solvents for in-situ extractive ABE fermentation. Its butanol partition coefficient varied between 3.0 and 3.7 depending on the composition of the broth; it is nontoxic and immiscible with the broth, and its boiling point is high compared to those of the ABE solvents. Batch and fed-batch extractive fermentation by C. acetobutylicum was studied with oleyl alcohol as extractant. Extractive fermentation could reduce product inhibition, increase the initial glucose concentration, and increase the fermentation rate. A mathematical model was suggested to describe the batch fermentation processes; the proposed model simulated the experimental data fairly well [160]. In situ removal of inhibitory products from C. acetobutylicum resulted in increased reactor productivity; volumetric butanol productivity increased from 0.58 kg m -3 h -1 in batch fermentation to 1.5 kg m -3 h -1 in fed-batch extractive fermentation using oleyl alcohol as the extraction solvent. The use of fed-batch operation allowed glucose solutions of up to 500 kg m -3 to be fermented, resulting in a 3.5-5-fold decrease in wastewater volume. Butanol reached a concentration of 30-35 kg m -3 in the oleyl alcohol extractant at the end of fermentation, a concentration 2-3 times higher than is possible in regular batch or fed-batch fermentation. Butanol productivity and glucose conversion in fed-batch extractive fermentation were compared with continuous fermentation and in situ product removal fermentation [161]. In ABE fermentation using C.
acetobutylicum IAM 19012, it was necessary not only to keep the BuOH concentration below the toxic level (2 g L -1 ), but also to control the glucose concentration at <80 g L -1 and the pH between 4.5 and 5.5. The amount of glucose consumed could be estimated approximately as 4 times the volume of gas evolved, and BuOH was produced from glucose with an average yield of 0.173. It was thus possible to estimate the concentrations of glucose and BuOH at any fermentation time using the volume of gas evolved as an indicator. As oleyl alcohol was an excellent extracting solvent for BuOH, a fed-batch culture system for the microorganism was developed in which withdrawal and feeding of the solvent were performed automatically based on gas evolution [162]. Ohno combined the fermentation by C. beijerinckii ATCC 25752, which was completely inhibited at a BuOH concentration of 12 kg m -3 , with extraction by oleyl alcohol; the butanol was removed from its mixture with oleyl alcohol by pervaporation with a hollow fiber membrane. When the BuOH concentration in oleyl alcohol was 22 kg m -3 , the BuOH flux was 3.6 × 10 -4 kg m -2 h -1 at 35 °C [163]. Extraction with the nontoxic immiscible solvent oleyl alcohol was combined with fermentation by immobilized C. acetobutylicum to ferment glucose to ABE solvents in a fluidized-bed bioreactor. The extracting solvent had a distribution coefficient of near 3 for butanol. Nonfermenting system tests indicated that equilibrium between the phases could be reached in one pass through the column. Steady-state results were presented for the fermentation with and without extractive solvent addition. One run, with a continuous aqueous feedstream containing 40 g L -1 glucose, was operated for 23 d. Steady state was established with just the aqueous feedstream. About half of the glucose was consumed, and the pH fell from 6.5 to 4.5. Then, during multiple intervals, flow of the organic extractive solvent (oleyl alcohol) into the fermenting columnar reactor was started. A new apparent steady state was reached in about 4 h. The final aqueous butanol concentration was lowered by more than half. The total butanol production rate increased by 50-90% during the solvent extraction as the organic-to-aqueous ratio increased from 1 to 4, respectively. A maximal volumetric productivity of 1.8 g butanol h -1 L -1 was observed in this nonoptimized system. The butanol yield apparently improved because of the removal of the inhibition: more substrate went to the desired product, butanol, and less to maintenance or acid production, resulting in a 10-20% increase in the ratio of butanol relative to all products [164]. Whole broth containing viable cells of C. acetobutylicum was cycled to a Karr reciprocating-plate extraction column in which acetone and butanol were extracted into oleyl alcohol flowing counter-currently through the column. A concentrated solution containing 300 g L -1 glucose was fermented at an overall butanol productivity of 1.0 g L -1 h -1 , 70% higher than the productivity of normal batch fermentations. The continuous extraction process provides flexible operation and lends itself to process scale-up [165]. A new type of bioreactor containing a porous permeable wall to recover the biobutanol produced in anaerobic ABE fermentation processes was developed [166,176].
The ferment liquor is contacted with a nontoxic organic solvent such as oleyl alcohol, and the butanol in the fermentation liquor distributes between the organic phase and the ferment liquor. The butanol-containing solvent located on one side of the permeable wall is in diffusion equilibrium with an auxiliary solvent of the same kind, with lower butanol concentration, located on the other side of the permeable wall. Due to the concentration difference, butanol diffuses from one side of the wall to the other. The concentration difference is kept constant by continuous removal of the butanol from the auxiliary solvent phase, in which the butanol concentration is always lower than in the extractant phase but much higher than in the ferment liquor phase. In this way, the primary extractant solvent contacting the ferment liquor is only a transmitting medium between the ferment liquor and a small volume of the auxiliary solvent separated by the permeable wall. The energy demand of the distillation to remove the butanol from the auxiliary solvent is less than that of direct butanol recovery from the ferment liquor or from the extractant phase [166]. The porous composite membranes used as permeable walls for ABE production can be prepared by the method of Tamics et al. [167]. Not only simple alcohols but also polyols can be used in extractive fermentation systems for ABE production. Mattiasson et al. [168] produced acetone and BuOH by C. acetobutylicum in an aqueous two-phase system using 25% polyethylene glycol 8000. The bacteria remained in the lower phase, and the partition coefficients of acetone and BuOH, favoring the upper phase, were 2.0 and 1.9, respectively. Mean productivity was estimated at 0.24 g BuOH L -1 h -1 , producing 13 g BuOH L -1 in 50 h. Poly(propylene glycol) 1200 has the highest partition coefficient reported to date for a biocompatible ABE extraction solvent. Extractive fermentations using concentrated feeds produced ~58.6 g L -1 acetone and BuOH in 202 h, the equivalent of 3 control fermentations in a single run. Product yields (based on total solvent products and glucose consumed) of 0.234-0.311 g g -1 and within-run solvent productivities of 0.174-0.290 g L -1 h -1 were consistent with conventional fermentations reported in the literature. The extended duration of fermentation resulted in an overall improvement in productivity by reducing the fraction of between-run downtime for fermentor cleaning and sterilization [141]. Two aqueous two-phase systems involving polyol-type extractants were investigated to determine their ability to reduce product inhibition in the acetone-BuOH-EtOH fermentation. An industrial-grade dextran (DEX) and a hydroxypropyl starch polymer (Aquaphase PPT, APPT) were tested as copolymers with polyethylene glycol (PEG) to form a two-phase fermentation broth. Fermentation performances in the DEX-PEG and APPT-PEG two-phase systems were compared to a single-phase conventional fermentation through a series of batch runs. The effects of the phase-forming polymers on C. acetobutylicum were also investigated. With a BuOH partition coefficient of 1.3, the BuOH yield with the two-phase system was increased by 27% over conventional fermentation [169]. Dibutyl phthalate is one of the ester-type extractants used in extractive fermentation of glucose, glucose-xylose mixtures, and hydrolyzates of lignocellulosics to acetone-butanol solvents.
Dibutyl phthalate has satisfactory physical properties, is nontoxic, and mildly stimulates the growth of the organism used, C. acetobutylicum P262. Sugar concentrations mainly in the range of 80-100 g L -1 resulted in solvent concentrations of 28-30 g L -1 in 24 h extractive fermentation, compared to 18-20 g L -1 for nonextractive control fermentation. Conversion factors of 0.33-0.37 g solvents g -1 sugar consumed were obtained. Rapid fermentation was achieved by high cell concentrations and by recycling cells from each 24 h fermentation to the succeeding one. Somewhat higher nutrient levels were also helpful. By this means, 255 L of acetone-butanol solvents were obtained per ton of aspen wood, 298 L per ton of pine, and 283 L per ton of corn stover. Such high product yields from inexpensive substrates offer the prospect of economic viability for the process [170]. Induction of flocculation of Clostridia led to a reduction of the specific solvent production rate. Cells adhering to sintered glass are better than flocculating cells for continuous BuOH-acetone fermentation. Due to their low toxicity, in-situ application of paraffin, oleyl alcohol, or stearic acid butyl ester with the cells in the fermenter is possible. Solvent production by Clostridia can be considerably enhanced by the extractive process, and extraction may be directly integrated into a continuous fermentation. Separation of BuOH from oleic acid is easy because the boiling point of the extractant (260 °C) is far above that of BuOH (117 °C); thus, BuOH can be obtained by normal distillation and the extractant can be recycled [171]. BuOH could be manufactured by cultivating BuOH-producing microorganisms such as C. acetobutylicum in a medium containing a fluorocarbon extractant. The generation time, the mean BuOH production rate, and the mean final BuOH concentration in C. acetobutylicum culture medium containing Freon-11 (1 g L -1 ) were increased by 29, 19, and 12%, respectively. Production of acetone and EtOH was not affected [172]. Continuous fermentation of a carbohydrate substrate with continuous extraction of the product by CFCl 3 took place in a cylindrical fermentor, with an inlet at the center and a filter membrane concentric with the outer wall, allowing the medium to diffuse outward while retaining the microorganisms. The collected medium is pumped to an extractor, where it contacts CFCl 3 or another material with a high solvency for BuOH and a low solvency for H 2 O, and is then separated into two phases. The extracted medium is recycled to a feed tank. The solvent is removed from the BuOH in an evaporator, where the BuOH is collected and the solvent pumped to a compressor and re-utilized [173]. Organic solvents having relatively high distribution coefficients for BuOH against water, often higher alcohols, esters, and organic acids, are very toxic to the microorganisms used for BuOH fermentation. Most fermentation inhibition caused by solvent toxicity was eliminated by re-extracting with paraffin the primary extractant remaining in the residual phase recycled from the product extraction column to the fermentor, i.e., an extractive fermentation process applied externally to the product extraction. After selecting 2-octanol as the extractant from the standpoint of energy consumption in BuOH recovery, a two-stage-extraction BuOH extractive fermentation process with the potential to reduce the production cost of BuOH was proposed [174]. Heptanal shows a strong toxic effect towards the C.
acetobutylicum R1 and T5 strains [175], but it has an extremely high distribution coefficient (11.5) for butanol [175,176]. Ex-situ extraction with heptanal and recycling of the residual broth into a new fermentation cycle proved to be unsuccessful because the broth contained approximately twice the toxic limit of heptanal. Diluting the recycled broth, or extracting it with a secondary nontoxic apolar solvent such as hexane to remove the residual dissolved heptanal, and inoculating the recycled broth with fresh bacteria in each cycle allowed 4-5 cycles of fermentation without significant decrease in ABE yields and productivity [175]. A multiple-solvent extraction was described by Shi et al. [177]. A mathematical formulation was made for the performance evaluation, considering two solvent-supplying strategies: adding the multiple solvents simultaneously and removing the product at one time, or adding them one by one consecutively. Computer simulations were made for batch, fed-batch, and repeated fed-batch operation of acetone-BuOH fermentation to show the power of the approach. Significant improvement in productivity and product concentration is expected when two extractants, such as oleyl alcohol and benzyl benzoate, are used, as compared to using only one solvent [177]. A two-stage-extraction butanol extractive fermentation process was developed and studied using a bench-scale extractive fermentation plant with a butanol production capacity of ~10 g h -1 . The production rate equation for extractive fermentation was simply expressed by a previously reported equation multiplied by a term for the extraction raffinate recycling effect. A butanol production-cost calculation program for the two-stage-extraction process determined the optimum operational conditions to be a butanol concentration, residual sugar concentration, and recycling ratio of 6 kg m -3 , 15 kg m -3 , and 3, respectively. These optimal conditions were achieved in the bench-scale plant when it was operated with a total sugar concentration, dilution rate, and recycling ratio of 113 kg m -3 , 0.158 h -1 , and 3, respectively [178]. A special kind of in-situ extractive fermentation is the so-called perstraction, where a selective membrane is located between the broth and the extractant phase. Each side of the membrane contacts one phase, providing a medium through which the two immiscible phases can exchange butanol. Because there is no direct contact between the two phases, toxicity and other problems can be eliminated and a dispersion-free extraction is possible, leading to easy operation of the equipment, but mass transfer in the membrane becomes important. These extraction processes were coupled to batch, fed-batch, and continuous BuOH fermentation to confirm the applicability of the recovery techniques in the actual process. In batch and fed-batch fermentation, a 3-fold increase in substrate consumption could be achieved, while in continuous fermentation it increased by ~30% [142]. Jeon and Lee [179] described a fed-batch operation for enhanced separation with a semipermeable silicone membrane which showed high specific permeability to BuOH and acetone. Among the various solvents examined, oleyl alcohol and polypropylene glycol were the most suitable extractants. In fed-batch operation of the membrane-assisted extractive BuOH fermentation system, significant improvements were found in comparison to a straight batch fermentation.
The total glucose uptake per run was raised to 10 times the value normally found in batch fermentation. The solvent productivity increased by a factor of 2. The total solvent yield increased by 23% due to reduced acid production and reuse of cells in the fed-batch operation [179]. A continuously operated membrane bioreactor was connected to a 4-stage mixer-settler cascade, and Clostridium acetobutylicum was cultivated in this reactor. BuOH was selectively extracted with butyric acid-saturated decanol from the cell-free cultivation medium, and the BuOH-free medium was refed into the reactor. Due to the high boiling point of decanol, recovery of BuOH from the decanol solution is easy. Both the partition coefficient and the selectivity of BuOH in the cultivation medium-decanol system are sufficiently high for removing it from the medium. Direct contact of cells with the decanol phase causes cell damage. However, decanol is practically insoluble in the fermentation medium, thus contact of the cell-free medium with the solvent phase influences neither cell growth nor product formation. At a dilution rate of D=0.1 h -1 , BuOH productivity was increased by a factor of 4 by removing BuOH from the medium [180].

Membrane techniques and other methods

Pervaporation is an energy-efficient alternative to distillation for removing volatile organic compounds from water, especially ABE solvents from their dilute solutions in a fermentation broth. Pervaporation is able to enrich acetone, BuOH, and EtOH with respect to water. The selectivity of this process is based mainly on the superposition of the thermodynamic liquid-vapor selectivity, the chemical affinity selectivity, and the kinetic diffusional selectivity of the materials used. The liquids to be separated are not stressed in any chemical, thermal, or mechanical way. Gudernatsch et al. demonstrated the technical feasibility of the pervaporation process in continuous fermentation runs. Composite hollow fiber membranes with transmembrane fluxes in the range of 2 kg m -2 h -1 and sufficient selectivity were prepared and characterized [182]. El-Zanati et al. designed a special cell to separate butanol from butanol/water solutions at butanol concentrations between 6 and 50 g L -1 . The temperature of the mixture fed to the cell was 33 °C, while the pressure on the permeate side was ~0 bar. Results revealed that the butanol concentration changes non-linearly during the first 3 h and then proceeds linearly. The percentage of butanol removal increases with increasing feed concentration [183]. A new type of pervaporation apparatus was designed and tested by Vrana et al. to develop an integrated fermentation and product recovery process for ABE fermentation. A cross-flow membrane module able to accommodate flat-sheet hydrophobic membranes was used for the experiments. Permeate vapors were collected under vacuum and condensed in a dry ice/ethanol cold trap. The apparatus, containing polytetrafluoroethylene membranes, was tested using butanol-water and model solutions of ABE products. Parameters such as product concentration, component effects, temperature, and permeate-side pressure were examined [184]. Various kinds of polymeric, ceramic, and liquid membranes can be used for selective separation of solvent vapors at the temperature of fermentation. Polymeric and ceramic membranes have rather poor solvent selectivity compared to liquid membranes, even though they achieve reasonable solvent mass fluxes.
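The membrane figures of merit used throughout these studies are the component flux and the separation factor α = [y/(1-y)]/[x/(1-x)], where x and y are the butanol mass fractions in the feed and permeate, respectively. A short sketch of what a given α implies for permeate enrichment follows; the 2 wt.% feed is an illustrative titer, not from a specific cited run.

```python
def permeate_fraction(x_feed: float, alpha: float) -> float:
    """Butanol mass fraction in the permeate for separation factor alpha."""
    ratio = alpha * x_feed / (1.0 - x_feed)
    return ratio / (1.0 + ratio)

x = 0.02  # 2 wt.% butanol in the broth, a typical fermentation titer
for alpha in (10, 30, 100):
    print(alpha, f"{permeate_fraction(x, alpha):.1%}")
# alpha = 10 already lifts a 2 wt.% feed to ~17 wt.% permeate; alpha = 100
# gives ~67 wt.%, well above butanol's solubility in water (~8 wt.%), so the
# condensate splits into two phases and can largely be finished by decanting.
```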
Liquid membranes have stability problems due to various losses. Groot et al. used silicone tubing membrane technology in BuOH/iso-PrOH batch fermentation, and the substrate conversion could be increased by simultaneous product recovery [185,186]. Geng and Park carried out fermentation using a low-acid-producing C. acetobutylicum B18 and a pervaporation module with 0.17 m 2 of surface area made of silicone membrane of 240 µm thickness. During batch and fed-batch fermentation, pervaporation at an air flow rate of 8 L min -1 removed butanol and acetone efficiently. The butanol concentration was maintained below 4.5 g L -1 even though C. acetobutylicum B18 produced butanol steadily. With pervaporation, the glucose consumption rate increased compared to that without pervaporation, and up to 160 g L -1 of glucose was consumed during 80 h [187]. Experiments using make-up solutions showed that the BuOH and acetone fluxes increased linearly with their concentrations in the aqueous phase. Fickian diffusion coefficients were constant for fixed air flow rates and increased at higher sweep air flow rates. Cell growth was not inhibited by possible salt accumulation or O 2 diffusion through the silicone tubing. The culture volume was maintained relatively constant during fed-batch operation because of the offsetting effects of water and product removal by pervaporation and the addition of nutrient supplements [188]. Fadeev et al. evaluated poly[1-(trimethylsilyl)-1-propyne] (PTMSP) dense films for n-butanol recovery from ABE fermentation broth. The flux decline of a PTMSP film during pervaporation of a 20 g L -1 BuOH/water mixture was linear. PTMSP films change their geometry when exposed to alcohol and alcohol/water mixtures and then dried; as a result of this relaxation process, the polymer film becomes thicker and denser, affecting membrane performance. PTMSP films treated with 70% iso-propanol/water show linear flux decline vs. pervaporation time. Strong lipid adsorption seems to occur on the membrane surface when fermentation broth is used as feed, causing flux decline within a short period of time [189]. Oya and Matsumoto used a hydrophobic polypropylene porous hollow fiber membrane of surface area 0.3 m 2 , porosity 45%, and bubble point 12.5 kg/cm 2 , under reduced pressure [190]. Knapp et al. found a vinyl-type norbornene polymer with average molar mass ∼5000 to be useful as a pervaporation membrane, with a separation factor of ∼10 for separation of n-butanol and isobutanol [191]. Various membranes, such as styrene-butadiene rubber (SBR), ethylene-propylene-diene rubber (EPDM), plain polydimethylsiloxane (PDMS), and silicalite-filled PDMS, were studied for the removal of ABE solvents from binary aqueous mixtures and from a quaternary mixture. The overall performance of PDMS filled with 15% wt./wt. silicalite was the best for removal of butanol in the binary-mixture study, while SBR performed best for the quaternary mixtures studied [192]. Composite membranes containing adsorbents such as silicalite, or liquid extractants such as oleyl alcohol or other solvents, proved to be effective materials for ABE solvent removal from fermentation broth.
Thin-film silicalite-filled silicone composite membranes were fabricated by incorporating ultrafine silicalite-1 particles of 0.1-0.2 µm. With increasing silicalite content in the top active layer, the selectivity for n-butanol and the n-butanol flux increased, while the total flux decreased. When the silicalite-1 content was over 60%, the active layer showed defects (aggregation of silicalite-1 particles), which influenced the separation factor. By controlling the membrane thickness and silicalite-1 content, membranes with a total flux of 600-700 g m -2 h -1 (n-butanol flux of 300 g m -2 h -1 ) and a selectivity of 90-100 at 70 °C, using 10 g L -1 n-butanol as feed solution, were obtained. The effects of operating temperature and feed concentration on membrane performance were also studied [193]. A membrane with a silicalite-1 to polymer ratio of 1.5:1 (g:g), 306 µm thick, had butanol selectivities of 100-108 and a flux of 89 g m -2 h -1 at feed butanol concentrations ranging from 5 to 9 g L -1 and a retentate temperature of 78 °C (the adsorption capacities of the silicalite-1 for acetone, butanol, and ethanol were 8-12, 85-90, and <5 mg g -1 , respectively; there was no apparent difference in the adsorption rate of butanol at 36 °C and 79 °C, and desorption of butanol occurred efficiently at 78 °C and 1-3 Torr). A 170 µm silicone membrane under identical conditions had a selectivity and flux of 30 and 84 g m -2 h -1 , respectively. A thin silicalite membrane offered low selectivity and high flux, while a thick membrane offered high selectivity and low flux. The effect of butanol concentration (0.37-78 g L -1 ) on flux and selectivity was also studied [194]. Thongsukmak and Sirkar developed a new liquid membrane-based pervaporation technique to achieve high selectivity and avoid contamination of the fermentation broth. Trioctylamine as a liquid membrane was immobilized in the pores of a hydrophobic hollow fiber substrate having a nanoporous coating on the broth side. The coated hollow fibers demonstrated high selectivity and reasonable mass fluxes of solvents in pervaporation. The selectivities of butanol, acetone, and ethanol achieved were 275, 220, and 80, respectively, with mass fluxes of 11.0, 5.0, and 1.2 g m -2 h -1 , respectively, at a temperature of 54 °C for a feed solution containing 1.5 wt.% butanol, 0.8 wt.% acetone, and 0.5 wt.% ethanol. Mass fluxes were increased by as much as five times, with similar solvent selectivities, when an ultrathin liquid membrane was used [195]. Other long-chain trialkylamines such as trilaurylamine or tridecylamine could also be used as liquid membranes [196]. Acetic acid in the feed solution reduced the selectivity for the solvents without reducing the solvent fluxes, owing to coextraction of water, which increases the rate of water permeation to the vacuum side. The liquid membrane present throughout the pores of the coated substrate demonstrated excellent stability over many hours of experiments and essentially prevented both the loss of liquid membrane to the feed solution and the latter's contamination by the liquid membrane [195]. In order to exclude the toxic effect of released liquid membrane ingredients, an oleyl alcohol-based liquid membrane was developed; this liquid membrane was energy efficient and did not affect microorganism growth.
The oleyl alcohol liquid membrane proved useful for the separation of BuOH and isobutanol in a fermentation culture with immobilized Clostridium isopropylicum IAM 19239 [197]. An ionic liquid (IL)-polydimethylsiloxane (PDMS) ultrafiltration membrane (pore size 60 nm) guaranteed high stability and selectivity during ABE fermentation carried out at 37 °C. The overall solvent productivity of fermentation with continuous product removal by pervaporation was 2.34 g L -1 h -1 . The supported ionic liquid membrane (SILM) was impregnated with 15 wt.% of a novel ionic liquid (tetrapropylammonium tetracyanoborate) and 85 wt.% of polydimethylsiloxane. Pervaporation with the optimized SILM led to stable and efficient removal of the solvents butan-1-ol and acetone from a C. acetobutylicum culture [198]. Reverse osmosis for recovering water from the broth can also be used to concentrate ABE fermentation products. Polyamide membranes exhibited BuOH rejection rates ≤85%. Optimum rejection of BuOH occurred at a pressure of 5.5-6.5 MPa and hydraulic recoveries of 50-70%; the flux range was 0.5-1.8 L m -2 min -1 [199]. Other membranes exhibited rejection rates as high as 98%, and the optimal rejection of BuOH in the ferment liquor occurred at recoveries of 20-45%, with fluxes ranging between 0.05 and 0.6 L m -2 min -1 [200]. Dialysis fermentation relieves BuOH toxicity with increased product yield, and solvent extraction can be applied to the nongrowth side of the fermentor for concentration of the BuOH. C. acetobutylicum ATCC 824 and several other strains were studied for the fermentation of corn, potato, and glucose [201]. The ability of cyclodextrins to form crystalline insoluble complexes with organic components was explored for the selective separation of dilute ABE products from Clostridium fermentation systems. A product or a product mixture at a concentration of 0.150 mM each was treated with α-cyclodextrin or β-cyclodextrin in aqueous solutions or nutrient broth. In the acetone-butanol-ethanol system and in the butanol-isopropanol system, α-cyclodextrin selectively precipitated 48% and 46% of the butanol after 1 h agitation at 30 °C. However, β-cyclodextrin was superior for the butyric acid-acetic acid system because it selectively precipitated 100% of the butyric acid under the same conditions. Cooling the three-product system with α-cyclodextrin to 4 °C for 24 h significantly increased the precipitates but decreased the selectivity for either butanol or butyric acid [202]. Hypercrosslinked microporous ion-exchange resins proved to be suitable agents for adsorbing butanol into a solid phase from fermentation broth. This enables fermenting with a butanol-producing microorganism in a suitable fermentation medium and recovering the butanol from the fermentation medium [203]. Integration of the methods described above (Chapter 6) opens new possibilities for economic ABE solvent recovery. Some representative examples, without claim of completeness, are discussed here. Mawasaki et al. performed continuous extractive butanol fermentation with the microbe immobilized in gel beads and presented a system for recovery of the butanol from the solvent by pervaporation with a hollow fiber membrane. This system was expected to be advantageous in preventing membrane fouling, because butanol-oleyl alcohol mixtures obtained from extractive fermentation do not contain solid particles [204]. The pervaporation method could also be used for in situ alcohol recovery in continuous iso-PrOH-BuOH-EtOH fermentation with immobilized cells.
Fermentation was performed both in a stirred tank and in a fluidized bed reactor. In the integrated process, substrate consumption could be increased by a factor of 4 compared to continuous fermentation without pervaporation product recovery. Experiments with a pilot-plant plate-and-frame pervaporation module were described for the separation and dehydration of alcohols. This module was also coupled to continuous BuOH fermentation; however, sterilization of the module was troublesome, and it was frequently plugged by microbial cells [205]. ABE solvents were produced in an integrated fermentation-product recovery system using C. acetobutylicum and a silicalite-silicone composite membrane. Cells of C. acetobutylicum were removed from the cell culture using a 500,000 molecular weight cut-off ultrafiltration membrane and returned to the fed-batch fermentor. The ABE solvents were removed from the ultrafiltration permeate using a silicalite-silicone composite pervaporation membrane. The flux of the silicalite-silicone composite membrane (306 µm thick) was constant during pervaporation of fermentation broth at the same concentration of ABE solvents. Acetone-butanol selectivity was also not affected by the fermentation broth, indicating that the membrane was not fouled by the ABE fermentation broth. The silicalite-silicone composite membrane was exposed to fermentation broth for 120 h. Acetic acid and ethanol did not diffuse through the silicalite-silicone composite membrane at low concentrations. The fed-batch reactor was operated for 870 h. In total, 154.97 g L -1 of solvents was produced at a solvent yield of 0.31-0.35 [206]. Application of membrane-assisted extraction to butanol fermentation was investigated as a means of product separation and also as a way of alleviating the problems of end-product inhibition. The coupled reactor-separator system was stable enough to sustain continuous operation lasting several weeks. The data on the continuous run reaffirmed most of the advantages found in a previous study on a fed-batch system, in that the reactor-separator system rendered high productivity and yields, due primarily to reduced product inhibition. The improvement in productivity was particularly notable, as a fourfold increase over straight batch operation was obtained. In normal continuous operation, spontaneous cell deactivation occurred after 200-400 h of operation despite the removal of inhibitory products. The presence of autolysin was one of the probable causes of cell deactivation. Cell viability, however, was prolonged significantly when the bioreactor was operated under glucose-limited conditions [207]. A calcium alginate-immobilized continuous culture was used in a novel gas-sparged reactor to strip the solvents from the aqueous phase and reduce their toxicity. A dilution rate of 0.07 h -1 was found to give maximal solvent productivity at 0.58 g dm -3 h -1 , although at 0.12 h -1 the productivity was only slightly lower. In order to increase glucose uptake by the culture, the feed glucose concentration was increased over time in an attempt to acclimatize the culture. This resulted in productivities as high as 0.72 g dm -3 h -1 , although this production rate was unstable [208]. An extractive acetone-BuOH fermentation process was developed by integrating bioproduction, ultrafiltration, and distillation, providing simultaneous retention of biomass, selective removal of inhibitors from the permeate, and separation and purification of acetone, BuOH, and EtOH.
Successive batch fermentations were performed with normal-pressure distillation (98 °C), which permitted prolonging and enhancing (by a factor of 3) solvent production with very few volume exchanges of medium (average dilution rate 0.002 h⁻¹) and recovering the concentrated solvents online. Different operating conditions were also tested to study the presence of extracellular autolytic enzymes as inhibition factors. Extracellular autolytic activity was low most of the time, even without enzyme-inactivating heat treatment in the distillation boiler, whereas high-temperature distillation was deleterious to the culture medium. Improvements of the process were achieved, first, by running continuously with minimal renewal of the culture medium and, mainly, by decreasing the temperature and pressure of distillation. Solvent productivity reached 2.6 g L⁻¹ h⁻¹ at a 0.036 h⁻¹ average dilution rate, corresponding to a feed concentration of 156 g L⁻¹ glucose actually consumed [209]. Continuous extractive bioconversion processes were described for converting native starch granules to ABE solvents using a selective adsorbent. In fermentations of carbohydrates with C. acetobutylicum, selective synthetic zeolite or crosslinked divinylbenzene-styrene copolymer sorbents are integrated into the process to adsorb the products continuously from the medium [210]. The conversion of glucose to ABE solvents by C. acetobutylicum in an extractive fermentation combining membrane technology and solid adsorbents integrated into the fermentation process was also studied. The adsorbent used was a nitrated divinylbenzene-styrene copolymer. Its capacity for fermentation broth constituents was as follows: BuOH 82, EtOH 36, Me₂CO 51, butyric acid 99, and AcOH 21 mg/g sorbent. The polymer was then heat-treated to release the bound solvents. In a long-term experiment using an adsorption column, 400 g of glucose was added successively to the column and fermentation was allowed to proceed for 320 h; a total of 67 g of solvent was recovered by heating 930 g of polymer [211]. The in situ adsorption process using polyvinylpyridine as the adsorbent was found to enhance the fermentation rates and the reactor productivity of C. acetobutylicum. In the typical traditional acetone-butanol fermentation, only about 60 g/L of glucose can be used in batch operation, so at most about 21 g L⁻¹ total final product concentration can be achieved. In the adsorption-coupled system, an initial glucose concentration of 94 g L⁻¹ was fermented at an adsorbent-to-broth weight ratio of 3:10. An overall product concentration of 29.8 g L⁻¹ and a productivity of 0.92 g L⁻¹ h⁻¹ were achieved in the adsorptive batch fermentation. Compared with the control traditional batch acetone-butanol fermentation, the integrated process increased the final product concentration by 54% and the productivity by 130% [212]. Integration of a repeated fed-batch fermentation (C. acetobutylicum) with continuous product removal (poly(vinylpyridine) adsorption) and cell recycling reduced the inhibitory product concentration. Because of the reduced inhibition, a higher specific cell growth rate and thus a higher product formation rate were achieved. Cell recycle using membrane separation increased the total cell mass density and therefore enhanced the reactor productivity.
The repeated fed-batch operation overcame the drawbacks typically associated with batch operation, such as down times, a long lag period, and the limit on the maximum initial substrate concentration imposed by substrate inhibition. Unlike a continuous operation, the repeated fed-batch operation could be maintained for a long time at a relatively high substrate concentration without losing substrate in the effluent. As a result, the integrated process reached, on average over a 239.5-h period, 47.2 g L⁻¹ equivalent solvent concentration (including acetone, BuOH, and EtOH) and 1.69 g L⁻¹ h⁻¹ fermentor productivity. Compared with the control traditional batch acetone-BuOH fermentation, the equivalent solvent concentration and the fermentor productivity were increased by 140% and 320%, respectively [213]. Cells of C. acetobutylicum were immobilized by adsorption onto bonechar and used in packed bed or fluidized bed reactors for the continuous production of ABE solvents from whey permeate. At dilution rates of 0.35-1.0 h⁻¹, ABE solvent productivities of 3.0-4.0 g L⁻¹ h⁻¹ were observed, but lactose utilization was poor. When operated in an integrated system with product removal by liquid-liquid extraction, productivity decreased, but lactose utilization increased markedly. Of the three extractants tested, oleyl alcohol proved superior to both benzyl benzoate and dibutyl phthalate [214]. Shah and Lee studied simultaneous saccharification and extractive fermentation (SSEF) to produce ABE solvents from aspen. In SSEF employing cellulase enzymes and C. acetobutylicum, both the glucan and xylan fractions of pretreated aspen are concurrently converted into acetone and butanol. Continuous removal of fermentation products from the bioreactor by extraction allowed long-term fed-batch operation, and the use of membrane extraction prevented the problems of phase separation and extractant loss. Increasing the substrate feed and reducing the nutrient supply were found to suppress acid production and thereby improve the solvent yield. Because of the prolonged low-growth conditions prevailing in fed-batch operation, the butanol-to-acetone ratio in the product was significantly higher, 2.6-2.8, compared to the typical value of about two [215]. An integrated bioreactor-extractor was also tested in SSEF for the production of ABE solvents from pretreated hardwood by C. acetobutylicum and cellulase enzymes. The SSEF system was constructed so that the fermentation products were extracted from the broth through a semipermeable membrane. In situ removal of inhibitory products helped sustain cell viability, thus allowing fed-batch operation of the bioreactor over a period of several weeks. Hardwood chips were pretreated with monoethanolamine in such a way that hemicellulose and cellulose were retained in high yield, and the feed material thus prepared was readily converted by SSEF. The ability of C. acetobutylicum to ferment both glucose and xylose was a major factor in simplifying the overall process into a single-stage operation [216].

Perspectives of butanol as biofuel

Biobutanol has excellent fuel properties compared to ethanol; thus it can be used directly as a fuel or as a blending component for both diesel- and gasoline-powered internal combustion engines [217-221].
Butanol is not corrosive, and its miscibility with gasoline and its water tolerance are higher than the corresponding properties of ethanol or methanol [222]. Butanol can also be used as a hydrogen source for fuel cells [223]; it has proved useful as the esterification alcohol in fatty acid ester type biodiesel production [229-233] and as a raw material for producing dibutyl ether [236] or butoxylated butyl diesels [237], and it can be converted into aromatic hydrocarbons on zeolite catalysts [238-241].

Butanol as fuel and blending component in fuel mixtures

Although ethanol as a gasoline extender has received a great deal of attention, it has numerous problems, such as aggressive behaviour toward engine components and a relatively low energy content; the properties of butanol and of butanol-containing gasoline, diesel, and biodiesel fuel compositions are more advantageous than the analogous properties of ethanol or ethanol-containing fuels [222]. The performance of gasoline and diesel engines powered with gasoline containing 0-20% BuOH and diesel fuel containing 0-50% BuOH was evaluated. Tests showed that BuOH can be used as a gasoline or diesel fuel supplement in amounts of ≤20% and ≤40%, respectively, without significantly affecting the performance of an unmodified engine. BuOH slightly decreased the octane rating of a 20% BuOH-gasoline blend, but in diesel fuel ≤40% BuOH had no detectable effect on the ignition of the fuel blend [217]. Diesel engines can be powered with mixtures of 25-75% of a butyl alcohol and 25-75% of a vegetable oil, which are normally liquid under operating conditions. A fuel mixture composed of 50% corn oil and 50% n-BuOH was used as the fuel for two tractors; the engine performance and the behaviour of the fuel were entirely satisfactory, the engines running smoothly and evenly without significant smoke or odor, with quick acceleration and smooth idling. The above blend could be mixed in any proportion with No. 2 diesel oil without significant change in engine performance [218]. A diesel precombustion-chamber engine powered with 70% BuOH-30% diesel fuel had, at an average pressure of 5.9 bar, an ignition delay only 10% longer than when operated with diesel alone. The maximum pressure increase during operation remained higher in both combustion chambers with 70 vol.% BuOH than with diesel alone. BuOH-diesel fuel mixtures offer a high potential for improving exhaust gas quality, especially with regard to smoke value, particulate emissions, and nitrogen oxides, while engine performance remains similar to that with diesel fuel alone. The cold-starting problem of an engine powered with a diesel-BuOH mixture is avoided by using an electrically heated spark plug that maintains ~1000 °C in the precombustion chamber. More than 200 h of satisfactory operation was attained in an engine powered with a BuOH-diesel mixture [219]. Substitute diesel fuel compositions consist of gas oil (boiling range 167-359 °C) 20-55 wt.%, a 75:25 (wt.) BuOH-Me₂CO mixture 30-40 wt.%, and fatty acid esters 15-40 wt.%. Thus, a substitute diesel fuel composition containing 20 wt.% gas oil, 40 wt.% BuOH-Me₂CO mixture, and 40 wt.% fatty acid esters had a cetane number of 40.6 and gave normal tractor operation for 50 h [220]. Coupled biodiesel and ABE production technology proceeds by extraction of the ABE-containing broth with biodiesel oils, forming a mixture which can be applied directly as fuel for diesel engines [224].
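The efficiency of such extractive recovery is governed by how butanol partitions between the biodiesel and the aqueous broth. As a rough orientation only, a single equilibrium stage can be sketched as below; the partition coefficient is an assumed illustrative input, not a value from the cited studies:

import numpy as np

def single_stage_recovery(K, v_org_over_v_aq):
    """Fraction of butanol moved into the extractant in one equilibrium stage.

    K               : assumed butanol partition coefficient (c_org / c_aq)
    v_org_over_v_aq : extractant-to-broth volume ratio
    """
    return K * v_org_over_v_aq / (1.0 + K * v_org_over_v_aq)

# A partition coefficient near 1 at a 1:1 volume ratio gives ~50% recovery,
# of the same order as the 45-51% reported below for soybean biodiesel.
print(single_stage_recovery(K=1.0, v_org_over_v_aq=1.0))  # 0.5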
Using soybean-derived biodiesel as the extractant at a 1:1 volume ratio to the aqueous phase, butanol recovery ranged from 45 to 51% at initial butanol concentrations of 150 and 225 mM, respectively. With biodiesel-derived glycerol as feedstock for butanol production, the production of a biodiesel/butanol fuel blend could be a fully integrated process within a biodiesel facility [225]. The presence of surfactants had an important influence on the amount of butanol extracted with biodiesel prepared from waste cooking oil [226]. This extraction was integrated into the fermentation process: a large quantity of gas (H₂ and CO₂) is released during fermentation and carries the produced butanol and acetone into the extractant phase. Surfactants decreased the tension of the gas-liquid interface and broke the large bubbles down, so the released gas passed through the extractant phase in the form of small bubbles. The mass transfer rate of products from the aqueous to the extractant phase was thereby enhanced and the equilibration time shortened accordingly, and consequently the fermentation productivity improved. Using waste cooking oil derived biodiesel as extractant, the butanol concentration in the extractant phase was increased by 21.2% compared to the control at a surfactant (Tween-80) concentration of 0.140% (w/v) in the culture medium. Under these conditions, gross solvent productivity was increased by 16.5% [226]. When biodiesel derived from crude palm oil was used as extractant, the fuel properties of the biodiesel-ABE mixture were comparable to those of No. 2 diesel, but its cetane number and the boiling point of the 90% fraction were higher [227]. Biodiesels prepared from some waste oils proved somewhat toxic toward C. acetobutylicum; even under this condition, the butanol concentration in the biodiesel phase reached a level of 6.44 g L⁻¹ [228].

Butyl-ester type biodiesels

Biodiesel is typically synthesized from triacylglycerides derived from vegetable oils and an alcohol with base catalysis, yielding the fatty acid ester type biodiesel. Wahlen et al. determined conditions that allowed rapid, high-yield conversion of oil feedstocks containing significant concentrations of free fatty acids into biodiesel using an acid-catalyzed reaction with longer-chain alcohols such as n-butanol at a slight molar excess. Biodiesel yields >98% were achieved in <40 min. Key properties of the resulting butyl-diesel were determined, including cetane number, pour point, and viscosity [229]. The batch and continuous-flow preparation of biodiesel from vegetable oil and 1-butanol using a microwave apparatus has been reported. The methodology allows the reaction to be run under atmospheric conditions and in continuous-flow mode. It can be used with new or used vegetable oil and 1-butanol at a 1:6 molar ratio of oil to alcohol, with sulfuric acid or potassium hydroxide as catalyst [230]. High conversion could be reached when the transesterification of triglycerides with 1-butanol was performed under near-critical or supercritical conditions with microwave heating [231]. Biodiesel synthesis by butanolysis of vegetable oils (soybean, sunflower, and rice bran) catalyzed by Lipozyme RM-IM, and the optimization of the enzyme stability over repeated batches, have been described. The enzyme showed the highest activity at a 9:1 BuOH:oil molar ratio and in the 30-35 °C temperature range [232].
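To put the reported alcohol-to-oil molar ratios into practical terms, the butanol charge per kilogram of oil follows from simple stoichiometry; the average triglyceride molar mass used below is an assumed typical value, not taken from the cited studies:

M_BUOH = 74.12   # g/mol, n-butanol
M_OIL = 880.0    # g/mol, assumed average triglyceride molar mass

def butanol_charge_per_kg_oil(molar_ratio):
    """Mass of n-butanol (g) charged per kg of oil at a given BuOH:oil
    molar ratio. The stoichiometric demand is 3 mol BuOH per mol of
    triglyceride; the excess drives the equilibrium toward the butyl esters."""
    mol_oil = 1000.0 / M_OIL
    return molar_ratio * mol_oil * M_BUOH

# 6:1 ratio of [230]: ~505 g BuOH per kg oil; 9:1 ratio of [232]: ~758 g
print(butanol_charge_per_kg_oil(6), butanol_charge_per_kg_oil(9))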
Transesterification of sunflower oil with butanol catalyzed by immobilized lipases can be carried out without an auxiliary solvent. Immobilized porcine pancreatic lipase (PPL) and Candida rugosa lipase (CRL) showed satisfactory activity in these reactions. The activities of the immobilized lipases were greatly increased in comparison with the free lipases because their active sites became more accessible. The immobilized enzyme could be reused repeatedly without a difficult separation step, and no large decrease in its activity was observed [233].

Other types of butanol-based biofuels

A fuel blending mixture with viscosity-breaking and cloud-point-decreasing abilities was prepared by reacting acetone (a by-product of biobutanol production) with glycerol (a by-product of biodiesel or butyl-diesel production) in the presence of acidic catalysts such as sulfuric acid, p-toluenesulfonic acid, or strongly acidic cation exchangers. A mixture of 2,2-dialkoxypropanes, 2,2-dimethyl-4-hydroxymethyl-1,3-dioxolane, and 2,2-dimethyl-5-hydroxy-1,3-dioxane was formed [176]. A similar reaction of an oxidized ABE mixture consisting of butyraldehyde, acetaldehyde, and acetone gave a mixture containing 2,2-dialkoxypropanes, 1,1-dialkoxyethanes, 1,1-dialkoxybutanes, and the 2,2-dimethyl-, 2-methyl-, or 2-propyl derivatives of the corresponding 4-hydroxymethyl-1,3-dioxolanes and 5-hydroxy-1,3-dioxanes [176]. In this way, the by-product of biodiesel or butyl-biodiesel production (glycerol) and the acetone from biobutanol production (or the oxidized ABE solvent mixtures) can be used completely as fuel components [176]. Fuel characteristics of a biodiesel blended with 15% of such a biofuel, prepared from an oxidized ABE mixture and methanol-containing glycerol, are given in Table 4.

Table 4. Fuel parameters of a biodiesel oil containing 15% of an acetal mixture prepared from glycerol, ethanol, butanol, acetaldehyde, butyraldehyde, and acetone [176].

Butanol and butyric acid were prepared by optimized batch or fed-batch fermentation of wheat flour hydrolysate with selected Clostridium strains; butanol was then recovered from the fermentation broth by distillation and butyric acid by solvent extraction. Esterification could be performed with a lipase in the solvent of extraction [234]. The butyl butyrate formed has great value as a novel biofuel [235]. D'Amore et al. developed a catalytic process for making dibutyl ether, a transportation fuel and diesel blending component, from aqueous butanol solutions [236]. Butoxylation of the unsaturated fraction of biodiesel offers the potential benefit of reduced cloud point without compromising ignition quality or oxidation stability. Butyl biodiesel derived from canola oil was epoxidized via the in situ peroxyacetic acid method, and the epoxy butyl biodiesel was then butoxylated with n-butanol over a sulfuric acid catalyst without the use of solvents. Optimal conditions for the butoxylation of epoxy butyl biodiesel were 80 °C, 2% sulfuric acid, and a 40:1 molar ratio of n-butanol over a period of 1 h. Conversion of the epoxy butyl biodiesel was 100%, and selectivity for butoxy biodiesel was 87.0%. Butoxy biodiesel can delay the onset of crystallization owing to the decrease in unsaturated content, but only at lower concentrations [237]. One-step conversion of ABE products (6:3:1 by volume BuOH-acetone-EtOH), with and without water, to aromatic hydrocarbons over a molecular shape-selective zeolite was carried out by Anunziata et al.
The presence of water in the feed increased catalyst life. Deactivation experiments toward aromatic hydrocarbon synthesis with product-H₂O mixtures (50:50, 85:15, 99:1 by volume) showed the influence of secondary alkylation reactions leading to substituted aromatic hydrocarbons, whose yields were related to the deactivation time of the catalyst [238]. Costa et al. studied the conversion of n-BuOH/Me₂CO mixtures to C₁₋₁₀ hydrocarbons on ZSM-5-type zeolites with different Si:Al ratios. Best results were obtained with an HZSM-5 zeolite (Si/Al = 36:1) using a 30 wt.% Na-montmorillonite binder. The formation of gaseous olefins and non-aromatic liquid hydrocarbons decreased with increasing reaction temperature or space velocity, whereas the amounts of aromatic hydrocarbons and gaseous paraffins increased. The total yield of liquid hydrocarbons increased with pressure, although the aromatic content showed a smooth maximum at 1 atm. The yield of aromatic hydrocarbons decreased with increasing water content in the feed. A hydrocarbon distribution similar to that obtained from the anhydrous mixture could be obtained with water-containing feedstock, but lower space velocities were necessary [239]. Orio et al. described the conversion of low molecular weight oxygenated compounds, such as the ABE solvent mixture, into gasoline components over HZSM-5 zeolites. The reagents were used in non-anhydrous form. The formation of C₂₋₄ hydrocarbons decreases and that of aromatic hydrocarbons increases with increasing temperature; the formation of C₅₋₈ hydrocarbons increases to a maximum at ~300 °C and then decreases. The yields of aromatics from all reactants were ~60 to ~90%; the yields of C₂₋₄ and C₅₋₈ hydrocarbons were <30% and <10%, respectively. The highest production of aromatic hydrocarbons was attained with the fermentation products of starch (6:3:1 BuOH-acetone-EtOH) [240]. Butanol produced by the fermentation of starch can serve as a key compound for producing diesel and jet fuel: it can be converted into butyl esters or into 1-butene, which is catalytically oligomerized in an H₂ atmosphere into a hydrocarbon fuel [241].

Conclusion

Biobutanol proved to be a superior fuel substitute and blending component in gasolines and diesel fuels. It can be used as a raw material in the preparation of so-called butyl-diesel (long-chain fatty acid butyl esters), in the butoxylation of unsaturated fatty acid esters, and in the preparation of dibutoxy-acetals. Butanol can easily be transformed via butyraldehyde into 2-butoxy-4-hydroxymethyl-1,3-dioxolane or 2-butoxy-5-hydroxy-1,3-dioxane fuel additives using the waste glycerol of biodiesel or butyl-diesel production. The new fermentation techniques using renewable lignocellulosic raw materials, their integration with various recovery technologies and membrane techniques, together with new fermentor types and genetically engineered microorganisms, form a solid base for a new generation of economic biobutanol production processes.
A posteriori error estimation for model order reduction of parametric systems

This survey discusses a posteriori error estimation for model order reduction of parametric systems, including linear and nonlinear, time-dependent and steady systems. We focus on introducing the error estimators we have proposed in the past few years and comparing them with the most closely related error estimators from the literature. For a clearer comparison, we have translated some existing error bounds proposed in function spaces into the vector space C^n and provide the corresponding proofs in C^n. Some new insights into our proposed error estimators are explored. Moreover, we review our newly proposed error estimator for nonlinear time-evolution systems, which is applicable to reduced-order models solved by arbitrary time-integration solvers. Our recent work on multi-fidelity error estimation is also briefly discussed. Finally, we derive a new inf-sup-constant-free output error estimator for nonlinear time-evolution systems. Numerical results for three examples show the robustness of the new error estimator.

Introduction

For every model order reduction (MOR) method or algorithm to be eventually used in real applications, accuracy and efficiency of the method play key roles. While many MOR methods have been shown numerically to be efficient, not all of them are guaranteed to be reliable. In other words, not all numerically demonstrated efficient MOR methods are associated with computable error estimators, let alone fast-to-compute error estimators. This work reviews a posteriori error estimators for projection-based MOR of parametric systems. Many projection-based MOR methods for parametric systems [1] have been proposed, for example, the multi-moment-matching methods [2,3], methods based on (transfer function, projection matrix, or manifold) interpolation [4-11], the proper orthogonal decomposition (POD) methods [12-14], as well as the reduced basis methods [15-18]. We call these MOR methods for parametric systems pMOR methods. However, error estimation for some of the pMOR methods is not yet widely discussed, for example, error estimation for interpolation-based pMOR methods. While some a posteriori error bounds [15,17-28] have been proposed for reduced-order models (ROMs) obtained from the reduced basis method, most of them are derived using the weak form of the finite element method (FEM). In contrast, we have proposed a posteriori error estimators [29-35] which are independent of the numerical discretization method: the error estimators are expressed in terms of the already discretized matrices and (nonlinear) vectors. Many of the existing error bounds and error estimators are applicable to ROMs constructed via global projection matrices, regardless of which pMOR method is used for the ROM construction. For the reduced basis method, the projection matrix and the ROM are usually constructed via a greedy process. Multi-fidelity error estimation was recently proposed in [36] to accelerate the greedy algorithm for constructing the projection matrix.
We further discuss a newly proposed error estimator [35] which is independent of the numerical time-integration scheme and is therefore able to estimate the error of a ROM solved with any time integrator. This is desired in many engineering applications, where commercial software is often used to solve the original dynamical systems; it is then also desirable that the error estimator can measure the ROM error while the ROM is solved with the same software. However, existing error estimators (bounds) cannot achieve this, since they usually require a pre-defined, non-adaptive time-integration scheme. This limits the wide use of those error estimators (bounds).

Finally, we propose an inf-sup-constant-free output error estimator for nonlinear time-evolution systems, which avoids the computation of the smallest singular value σ_min(μ) of a large matrix at each queried sample of the parameter. This not only improves the accuracy of the error estimator for problems with σ_min(μ) close to zero, but also saves a large amount of computation, as computing the singular value has computational complexity of at least O(N) for each parameter sample, where N is usually large.

Most of the error estimation methods reviewed in this work are based either on the residual of the ROM approximation or on both the residual and a dual system. Such techniques, using the residual of an approximate solution and a dual system, can be traced back to error estimation for FEM approximations, see, e.g., [37].

For clarity, we summarize the new contributions of this survey, which cannot be found in the referenced articles:

• Theorem 2. It transforms the error bound presented in function space in [19,27] into an error bound in the vector space C^n. New proofs are provided in the Appendix.
• Theorem 4. It derives an error bound with quadratic decay in C^n.
• Theorem 5 and its proof. It uses a slightly different dual system (25) and a slightly different auxiliary output ỹ^k(μ) to derive the same output error bound as in [29,30]. Please see Remark 8 for the detailed differences.
• Theorem 7. It quantifies the state error estimator proposed in [38] with computable upper and lower bounds.
• The "Inf-sup-constant-free error estimator for time-evolution systems" section. It proposes a new inf-sup-constant-free output error estimator for parametric time-evolution systems.
In the next sections, we discuss error estimation for both time-evolution systems and steady systems. The "Problem formulation" section formulates the problems considered in this work, including the original large-scale models and the corresponding ROMs. We first review rigorous error bounds for both classes of systems and provide some new proofs in the "Rigorous a posteriori error bounds" section. Then, in the "A posteriori error estimators" section, we review error estimators which are no longer rigorous but decay faster than the error bounds; they usually also have lower computational complexity. The "Error estimator for ROMs solved with any black-box time-integration solver" section reviews the newly proposed error estimator that is applicable to black-box solvers. The recently proposed multi-fidelity error estimation for large and complex systems is reviewed in the "Multi-fidelity error estimation" section; it is shown that for some complex systems, the greedy process of constructing the reduced basis can be largely accelerated with multi-fidelity error estimation. The "Inf-sup-constant-free error estimator for time-evolution systems" section proposes a new inf-sup-constant-free output error estimator for nonlinear time-evolution systems and presents numerical results. We conclude this survey in the "Conclusion" section. This review is not exhaustive; it contains our contributions to this topic and the most closely related ones from the literature. Other error estimators, in particular error estimators for different types of systems, e.g., error estimation for ROMs of second-order non-parametric systems [39,40], are not discussed. The proper generalized decomposition (PGD) method [41], a non-projection-based MOR method, and the corresponding error estimation [42-44] are not considered in this survey either.

Problem formulation

Consider the following parametric time-evolution system of differential-algebraic equations (DAEs):

E(μ) (d/dt) x(t, μ) = A(μ) x(t, μ) + f(x(t, μ), μ) + B(μ) u(t),
y(t, μ) = C(μ) x(t, μ),   (1)

where t ∈ [0, T] and μ ∈ P ⊂ R^p, with P the parameter domain. x(μ) ∈ R^N is the state vector of the system, E(μ), A(μ) ∈ R^{N×N}, B(μ) ∈ R^{N×n_I}, C(μ) ∈ R^{n_O×N}, ∀μ ∈ P, are the system matrices, f: R^N × P → R^N is the nonlinear system operator, and u: t → R^{n_I} is the external input signal. Such systems often arise from discretizing partial differential equations (PDEs) using numerical discretization schemes, or directly from physical laws. System (1) is called the full-order model (FOM) in the context of MOR. The number of equations N in (1) is often very large to ensure high resolution of the underlying physical process. Numerically solving the FOM is expensive, especially for multi-query tasks, where the FOM has to be solved at many instances of μ. When n_I > 1 and n_O > 1, the system has multiple inputs and multiple outputs; such problems are common in electrical or electromagnetic simulation [36]. When we consider error estimation, we usually first assume n_I = n_O = 1; the obtained error estimation is then extended to the more general case n_I > 1 and n_O > 1. The extension is straightforward if the error is measured in the matrix-max norm [31,33,38]. Therefore, unless mentioned explicitly, we consider the case n_I = n_O = 1, so that (1) can be written as

E(μ) (d/dt) x(t, μ) = A(μ) x(t, μ) + f(x(t, μ), μ) + b(μ) u(t),
y(t, μ) = c(μ) x(t, μ).   (2)

Here, the input signal u(t) and the output response y(t, μ) become scalar-valued functions of time and μ, respectively. Consequently, the system matrices b(μ) ∈ R^N and c(μ)^T ∈ R^N are now vectors. All other quantities remain the same as in (1). We will briefly mention the extension to n_I > 1 and n_O > 1 at proper
places. Projection-based MOR techniques obtain a ROM for (2) in the following form:

Ê(μ) (d/dt) x̂(t, μ) = Â(μ) x̂(t, μ) + f̂(x̂(t, μ), μ) + b̂(μ) u(t),
ŷ(t, μ) = ĉ(μ) x̂(t, μ),   (3)

where V ∈ R^{N×n} is the parameter-independent projection matrix whose columns are the reduced basis vectors, Ê(μ) = W^T E(μ) V, Â(μ) = W^T A(μ) V, b̂(μ) = W^T b(μ), ĉ(μ) = c(μ) V, and f̂(x̂, μ) := W^T f(V x̂, μ) is the reduced nonlinear vector. The number of equations n in (3) should be much smaller than N in (2), i.e., n ≪ N, so that the ROM can readily be used for repeated simulations. When V = W, the projection is referred to as Galerkin projection. We focus on Galerkin projection, though the error estimators discussed in this work apply straightforwardly to Petrov-Galerkin projection, too.

For steady problems, the parametric system is time-independent,

f(x(μ), μ) = 0,   (4)

where x(μ) ∈ R^N and f: R^N × C^p → R^N is the nonlinear system operator. Projection-based pMOR obtains a steady parametric ROM as below,

V^T f(V x̂(μ), μ) = 0,   (5)

where f̂(•, μ) := V^T f(V •, μ). When the system is linear, the steady system becomes

M(μ) x(μ) = b(μ), y(μ) = c(μ) x(μ),   (6)

where M(μ) ∈ R^{N×N}, ∀μ ∈ P. The corresponding steady parametric ROM is

M̂(μ) x̂(μ) = b̂(μ), ŷ(μ) = ĉ(μ) x̂(μ),   (7)

where M̂(μ) = V^T M(μ) V. In the following, we mainly discuss error estimation for the solutions obtained from the ROMs (3) and (7). The norm ||·|| refers to the vector 2-norm or the matrix spectral norm throughout the article. The ROM in (3) or (7) is constructed using a global reduced basis V. The error estimation methods reviewed in this work can be applied to measure the error of a ROM obtained with a global reduced basis, irrespective of the method used to construct that basis. In this sense, the error estimation methods are generic and applicable to multi-moment-matching methods, POD methods, reduced basis methods, and some interpolation-based methods.

Remark 1 We point out that if the FOMs (1), (2), (4), (6) are obtained from numerical discretization of PDEs, the error estimation discussed in this work, and in most of the works referenced in the introduction, does not involve the discretization error. This is the case for most of the reduced basis literature. As the spatial discretization and the model reduction are mostly two separate steps, this is common practice. We note that given knowledge of the discretization error, e.g., in adaptive FEM, one can adapt the model reduction error tolerance to this error so that model reduction does not contribute further to the magnitude of the approximation error, e.g., by choosing the model reduction tolerance to be 1-2 orders of magnitude lower than the discretization error. This is common practice, but beyond the scope of this paper. For works on error estimation including both the discretization error and the ROM error, please refer to [45-47]. The error estimation reviewed in this work could be combined with the discretization error estimator [37] to realize adaptivity of the mesh size, by checking the two estimated errors separately during a joint greedy process for both spatial discretization and MOR. Moreover, there are FOMs that are not derived by numerical discretization of PDEs but rather from physical laws; for example, the modified nodal analysis (MNA) in circuit simulation directly results in systems of DAEs. For such systems, we consider the solutions of the FOMs as the exact solutions.
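To make the projection step concrete, here is a minimal sketch of how the reduced operators of (3) and (7) are assembled by Galerkin projection; dense NumPy arrays stand in for the (usually sparse) FOM matrices:

import numpy as np

def galerkin_rom(E, A, b, c, V):
    """Assemble the Galerkin ROM operators (W = V) for the linear parts of
    (2)/(3): E_hat = V^T E V, A_hat = V^T A V, b_hat = V^T b, c_hat = c V."""
    return V.T @ E @ V, V.T @ A @ V, V.T @ b, c @ V

# The nonlinear term is reduced analogously at each evaluation:
# f_hat(x_hat, mu) = V.T @ f(V @ x_hat, mu); hyperreduction (e.g., DEIM)
# would replace f by an interpolated approximation to avoid the O(N) cost.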
Rigorous a posteriori error bounds

This section reviews rigorous a posteriori error bounds for estimating the ROM error; they are upper bounds of the true errors and therefore rigorous. For time-evolution systems, most approaches estimate the error at discrete time instances. There are error bounds for the state error and error bounds for the output error. Output error bounds usually need a dual system to achieve faster decay. We review the error bounds for time-evolution systems and steady systems in separate subsections.

Error bounds for time-evolution systems

The standard error estimation approaches proposed for the reduced basis method are residual based [17-19,23,24]. In order to derive the error bound, knowledge of the temporal discretization scheme used to integrate the FOM and the ROM is assumed, e.g., the implicit Euler method, the Crank-Nicolson method, or an implicit-explicit (IMEX) method. Computing the error bound involves determining the residual vector r(μ) ∈ R^N at each time instance; some goal-oriented output error estimation approaches also require the residual of a dual system. Suppose (2) is discretized in time using a first-order IMEX scheme [48]: the linear part is discretized implicitly, while the nonlinear vector f(x(t, μ), μ) is evaluated explicitly. The resulting time-discrete system is

A_t(μ) x^k(μ) = E(μ) x^{k-1}(μ) + δt f(x^{k-1}(μ), μ) + δt b(μ) u(t_k),
y^k(μ) = c(μ) x^k(μ),   (8)

with A_t(μ) := E(μ) − δt A(μ). Here, δt is the temporal discretization step. The error estimation methods discussed in this work may also apply to time-varying δt; for simplicity, we use δt to represent the time-varying case, too. The ROM (3) can be discretized in the same way:

Â_t(μ) x̂^k(μ) = Ê(μ) x̂^{k-1}(μ) + δt f̂(x̂^{k-1}(μ), μ) + δt b̂(μ) u(t_k),
ŷ^k(μ) = ĉ(μ) x̂^k(μ),   (9)

where Â_t(μ) := Ê(μ) − δt Â(μ). The residual of the ROM approximation is computed by substituting the approximate state vector x̃^k(μ) := V x̂^k(μ) into (8). The resulting residual at the k-th time step is

r^k(μ) = E(μ) x̃^{k-1}(μ) + δt f(x̃^{k-1}(μ), μ) + δt b(μ) u(t_k) − A_t(μ) x̃^k(μ).   (10)

The nonlinear part of the ROM (9) is not yet hyperreduced. When hyperreduction [49,50], e.g., discrete empirical interpolation (DEIM), is applied to (9), we get the hyperreduced ROM (11), in which the reduced nonlinear term is replaced by its interpolated approximation. It is clear that in order to obtain the residual r^k(μ), the temporal discretization scheme for the ROM must be the same as that for the FOM, so that x̂^k(μ) in (10) and x^k(μ) in (8) correspond to the same time instance t_k.

State error bound

An a posteriori error bound Δ^k(μ) for the approximation error e^k(μ) := x^k(μ) − x̃^k(μ) can be computed from the residual as below.

Theorem 1 (Residual-based error bound) Suppose that the nonlinear quantity f(x(t, μ), μ) is Lipschitz continuous in its first argument for all μ, i.e., there exists a constant L_f such that ||f(x, μ) − f(x', μ)|| ≤ L_f ||x − x'||. Further assume that for any parameter μ the projection error at the first time step is ||e^0(μ)|| = ||x^0(μ) − x̃^0(μ)|| = ||x^0(μ) − V V^T x^0(μ)||, and that A_t(μ) is invertible, ∀μ ∈ P. The error of the approximate state vector x̃ at the k-th time step is then bounded by

||e^k(μ)|| ≤ Δ^k(μ) := Σ_{i=1}^{k} ζ(μ)^{k-i+1} ξ(μ)^{k-i} ||r^i(μ)|| + [ζ(μ) ξ(μ)]^k ||e^0(μ)||,   (12)

where ζ(μ) := ||A_t(μ)^{-1}|| and ξ(μ) := ||E(μ)|| + δt L_f.

Proof A proof of the above theorem can be found in [35].

Remark 2 When the system has multiple inputs, a state error bound corresponding to each column of B(μ) can be obtained from Theorem 1. The final state error bound is taken as the maximum over all the column-wise derived state error bounds.
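A minimal NumPy sketch of the recursion underlying (12) follows; it assumes the one-step estimate ||e^k|| ≤ ζ(ξ||e^{k-1}|| + ||r^k||) behind Theorem 1, and all names are illustrative:

import numpy as np

def state_error_bound(residual_norms, zeta, xi, e0=0.0):
    """Accumulate the residual-based state error bound of Theorem 1.

    residual_norms : sequence of ||r^i(mu)||, i = 1..K, for one parameter mu
    zeta           : ||A_t(mu)^{-1}||  (reciprocal of the smallest singular value)
    xi             : ||E(mu)|| + dt * L_f
    e0             : initial projection error ||x^0 - V V^T x^0||
    """
    bounds = []
    e = e0
    for r in residual_norms:
        # one step of the recursion ||e^k|| <= zeta * (xi * ||e^{k-1}|| + ||r^k||)
        e = zeta * (xi * e + r)
        bounds.append(e)
    return bounds  # bounds[k-1] corresponds to Delta^k(mu)

# With A_t = E - dt*A assembled for a given mu, one could take
# zeta = 1.0 / np.linalg.svd(A_t, compute_uv=False)[-1]
# xi   = np.linalg.norm(E, 2) + dt * L_f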
In [16,18,19,51], similar state error bounds using the residual r^k(μ) are derived, where only linear systems are considered in [16,18]. The error bound proposed in [19] for nonlinear systems includes the error of hyperreduction of the nonlinear function f(x(t, μ), μ), so that an additional term appears in the bound. The error bound of [19] is expressed in function space and is not straightforward to translate into the vector space C^n considered here. An error bound in the vector space C^n that accounts for hyperreduction is provided in [29,52]. A state error bound based on an implicit temporal discretization scheme is proposed in [51], where the hyperreduction error is also considered. In summary, all the error bounds discussed in [16,18,19,29,51,52], as well as the one in Theorem 1, involve summing up the residuals r^k(μ) over the discrete time steps.

Remark 3 In [24], a state error bound is derived for the linear version of the time-continuous ROM in (3), i.e., the nonlinear functions in (2) and (3) are both assumed to be zero. The error bound is also a function of continuous time and of the parameter. The sum of the residuals r^i(μ) over discrete time steps becomes an integral of the residual over the time interval [0, T].

Output error bound

A straightforward output error bound can be derived from (12) of Theorem 1 by noticing that |e_o^k(μ)| ≤ ||c(μ)|| ||e^k(μ)|| [24]. Finally, we have

|y^k(μ) − ŷ^k(μ)| ≤ ||c(μ)|| Δ^k(μ).   (13)

The above output error bound is nevertheless rather conservative, especially when ||c(μ)|| is large. Moreover, the bound depends only on the primal residuals ||r^i(μ)||, leading to a linear decay. Primal-dual output error bounds are obtained in [19,24,26,27] for linear time-evolution systems, so that the resulting bounds possess quadratic decay w.r.t. both the primal residual and the dual residual. The output error bound in [19,26,27] is formulated in function space based on the weak form of the original PDEs. To be consistent with the matrix-vector form of system (9), we translate the error bound of [19] into the vector space C^n for the ROM in (9); Theorem 2 states the translated bound. The assumptions of the theorem correspond to those of [19] in function space; no additional or stronger limitations are assumed in Theorem 2. Although the proof can more or less follow the idea of the proof in [19], it is quite different and is therefore provided in this work. Note that the proof in [19] is divided into several lemmas and thus consists of a sequence of proofs.

Theorem 2 (Output error bound for linear systems) Given a linear FOM in (8), where f(x^k(μ), μ) = 0, consider the output error of its ROM in (9), where f(V x̂^k(μ), μ) = 0. Assume that there is no error in the initial condition, e^0(μ) = x^0(μ) − x̃^0(μ) = 0, and that both −A(μ) and E(μ) are symmetric positive definite. Then the error of a corrected output of the ROM admits the bound (14), which involves products of the primal residual norms ||r^i(μ)|| and the dual residual norms ||r_du^j(μ)||, summed over the time instances; here, r_du^j(μ) denotes the residual at the j-th time instance induced by the ROM of the dual system, a parameter-dependent scalar enters the bound, and δ_{K_du}(μ) is a scaled upper bound for the dual ROM state error ||x_du^K(μ) − V_du x̂_du^K(μ)||_2 at the final time step t = t_K. The dual ROM is obtained by Galerkin projection of the dual system onto a dual reduced basis V_du.

Proof See Appendix.

Remark 4 For systems with multiple inputs and multiple outputs, an output error bound corresponding to each column of B(μ) and each row of C(μ) can be derived from Theorem 2.
Then the final output error bound is taken as the maximum over all column-row-wise derived output error bounds. Please refer to the more detailed derivation for steady systems in the "Error bounds for steady systems" subsection below.

Remark 5 In [24], a similar primal-dual based output error bound is obtained for the time-continuous ROM in (3). That output error bound estimates the output error at the final time T; the sums of the primal residuals r^k(μ) and the dual residuals r_du^k over time instances in (14) then become two integrals over the time variable from 0 to T. The initial approximation error ||e^0(μ)||_2 was assumed to be zero in [19], while it appears in the error bound of [24]; the parameter-dependent scalar is also defined differently in [24]. In contrast to [24], where the error estimation is derived in the vector space C^n, a time-continuous output error bound is derived in [26] in function space based on the weak form of the original PDEs. The error bounds proposed in [15,24,27] are also reviewed in the survey paper [28] on the reduced basis method.

Remark 6 Theorem 2 is restricted in the sense that both E(μ) and −A(μ) are assumed to be symmetric positive definite. The error estimators to be discussed in the "A posteriori error estimators" section do not need this assumption.

The primal-dual based output error bound in (14) has a quadratic behavior in the sense that it multiplies the primal residuals with the dual residuals. Therefore, it is expected to decay faster than the primal-only output error bound in (13). Note that all the error bounds reviewed above estimate the error by accumulating the residuals over time.

Error bounds for steady systems

In this subsection, we discuss error bounds for steady systems as in (6). Analogous to the time-evolution case, the error bounds for both the state error and the output error rely on the spectral norm of M^{-1}(μ), i.e., on the smallest singular value of the matrix M(μ) for any given μ.

State error bound

The state error bound for the state error e(μ) = x(μ) − V x̂(μ) can easily be derived by noticing that M(μ) e(μ) = b(μ) − M(μ) V x̂(μ) =: r(μ). Finally,

||e(μ)|| ≤ Δ_s(μ) := ||M(μ)^{-1}|| ||r(μ)||.   (18)

Similar error bounds for the state error have been proposed for the reduced basis method [17,18] based on the weak form of the PDEs; they are written in functional form. Here, we derive the error bound in the vector space C^n for the spatially discretized system (6) written with matrices and vectors. For systems with multiple inputs, b(μ) is a matrix, and then e(μ) is also a matrix. Considering the i-th column e_i(μ) of e(μ), we get [38] ||e_i(μ)|| ≤ ||M(μ)^{-1}|| ||r_i(μ)||, where r_i(μ) is the i-th column of r(μ). The final bound Δ_s(μ) is then defined as the maximum of these column-wise bounds. In [17,23,53], error bounds for the nonlinear steady systems (4) and (5) are also obtained, where ||M(μ)^{-1}|| on the right-hand side of (18) is replaced by the smallest singular value of a properly defined Jacobian matrix in [17], whereas in [23] it is replaced by a lower bound on the coercivity constant of a linear operator. In [53], under some assumptions, e(μ) is bounded as in (19), in terms of ||ê(μ)|| and the residual r_r(μ) := r(μ) − J_f(μ) ê(μ) induced by the approximation ê(μ) of e(μ). Here, ê(μ) is a properly computed approximation of e(μ), obtained as the solution of the residual system with system matrix J_f(μ), the Jacobian matrix of f in (4) w.r.t. x(μ), and right-hand side r(μ).
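A minimal sketch of evaluating the rigorous bound (18) for a single parameter value follows; dense linear algebra is used for brevity, and all names are illustrative:

import numpy as np

def steady_state_error_bound(M, b, V, x_rom):
    """Rigorous state error bound (18) for the linear steady ROM (7).

    M, b  : FOM matrix and right-hand side at a fixed parameter mu
    V     : projection matrix (N x n)
    x_rom : ROM solution x_hat(mu) of V^T M V x_hat = V^T b
    """
    r = b - M @ (V @ x_rom)                      # primal residual r(mu)
    sigma_min = np.linalg.svd(M, compute_uv=False)[-1]
    return np.linalg.norm(r) / sigma_min         # ||M^{-1}|| * ||r||

# For large sparse M, sigma_min would instead be estimated iteratively,
# e.g., via scipy.sparse.linalg.svds; this is exactly the N-dependent
# cost that the estimators of the later sections avoid.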
In summary, the error bound Δ_s(μ), as well as the error bounds derived in [17,18,23], all depend on the spectral norm of the inverse of a properly defined matrix, or on a coercivity constant (usually a lower bound of it), whose computation entails a complexity depending on the large dimension N. Moreover, σ_min(M(μ)) or the coercivity constant is sometimes so small that the resulting error estimation is very rough.

Output error bound

Analogous to the time-evolution case, estimating the output error y(μ) − ŷ(μ) of the ROM (7) is also based on a dual system and its ROM, defined respectively as

M(μ)^T x_du(μ) = c(μ)^T,   (20)
M̂_du(μ) x̂_du(μ) = ĉ_du(μ),   (21)

where M̂_du(μ) = V_du^T M(μ)^T V_du and ĉ_du(μ) = V_du^T c(μ)^T. The following theorem states an output error bound using the dual system (20) and its ROM (21).

Theorem 3 (Output error bound for linear steady systems) For the FOM in (6) and the ROM in (7), the output error satisfies

|y(μ) − ŷ(μ)| ≤ ||M(μ)^{-1}|| ||r_du(μ)|| ||r(μ)|| + |(V_du x̂_du(μ))^T r(μ)|,   (22)

where r_du(μ) := c(μ)^T − M(μ)^T V_du x̂_du(μ) is the dual residual.

The above primal-dual based output error bound is motivated by the primal-dual error bounds proposed early in [20-22], etc., although the derivations in [20-22] are in function space and therefore differ. An even earlier primal-dual based output error bound in function space can be found in [25]. For systems with multiple inputs and multiple outputs, an error bound in the matrix-max norm can be derived [33]. To this end, we first get the error bound for the (i, j)-th entry of the output error matrix e_o(μ), which follows straightforwardly from (22) with the dual residual r_du,i(μ) and the primal residual r_j(μ); here, x̂_du,i(μ) and x̂_j(μ) are the i-th and j-th columns of x̂_du(μ) and x̂(μ), respectively. The error bound for ||e_o(μ)||_max is then defined as the maximum over these entry-wise bounds (23).

The error bounds in (22) and (23) do not decay quadratically, since there is a second term which is not a quadratic function of the two residual norms. However, with some modifications or assumptions, we can derive error bounds that are quadratic.

Theorem 4 (Output error bound for linear steady systems with quadratic behavior) When b(μ) = c^T(μ), if we modify the output of the ROM in (7) to ȳ(μ) = ŷ(μ) + (V_du x̂_du)^T r(μ), then the error of the corrected output satisfies

|y(μ) − ȳ(μ)| ≤ ||M(μ)^{-1}|| ||r_du(μ)|| ||r(μ)||.   (24)

Proof When b(μ) = c^T(μ), the dual system (20) is the same as the primal system (6), so that V_du = V and x̂_du(μ) = x̂(μ). By Galerkin orthogonality, this leads to [V_du x̂_du(μ)]^T r(μ) = 0 in (22) or (23), and the quadratic bound (24) follows.

With the corrected output, the output error bound has a quadratic behavior. The same technique was previously used in [20-22,25] for error analysis in function space; the error bound in (24) is in agreement with the analysis in, e.g., [20]. It is worth pointing out that using a corrected output to obtain error estimation with quadratic decay was proposed early on for the finite element method (FEM) [37]. Analogous to the state error bound in (18), the smallest singular value of M(μ) must be computed for every queried μ in order to evaluate the output error bound.

A posteriori error estimators

This section discusses a posteriori error estimators that may lose the rigor of the error bounds. In exchange, these estimators aim to close the often large gap between the error bounds and the true error. Usually the ratio (error bound)/(true error) or (error estimator)/(true error) is considered as the effectivity of an error bound/estimator. The a posteriori error estimators discussed in this section aim at effectivities closer to 1 than those of the error bounds; at the same time, they usually have lower computational complexity than the error bounds.
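Effectivity is a simple ratio, but it is usually reported as statistics over a validation set of parameters; a minimal sketch (illustrative names, not the authors' code) is:

import numpy as np

def effectivity_stats(estimated, true_errors, eps=1e-14):
    """Effectivity = estimator / true error; good estimators cluster near 1.

    estimated, true_errors : arrays sampled over a validation parameter set
    """
    eff = np.asarray(estimated) / np.maximum(np.asarray(true_errors), eps)
    # values < 1 indicate underestimation (loss of rigor),
    # values >> 1 indicate a pessimistic bound
    return {"min": eff.min(), "mean": eff.mean(), "max": eff.max()}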
Error estimators for time-evolution systems

The error estimators discussed in this subsection aim to estimate the error of the time-discrete ROM in (9). The works [29-31] propose output error estimators which avoid accumulating (summing up) the residuals over the time evolution, resulting in much tighter estimates than the primal-dual based error bounds of the "Error bounds for time-evolution systems" subsection. Furthermore, the output error estimators in [29,30] apply to both nonlinear and linear systems; for nonlinear systems, the error estimators can also include the approximation error of hyperreduction. We review those error estimators in the following theorems.

Output error estimators

The output error estimators need a dual system, defined as

A_t(μ)^T x_du(μ) = c(μ)^T.   (25)

The ROM of the dual system is derived by Galerkin projection onto a dual reduced basis V_du:

Â_du(μ) x̂_du(μ) = ĉ_du(μ),   (26)

where Â_du(μ) = V_du^T A_t(μ)^T V_du and ĉ_du(μ) = V_du^T c(μ)^T.

Remark 7 The dual system defined in (25) is slightly different from that in [29-31], where the right-hand side is −c(μ)^T instead of c(μ)^T. To be consistent with the definition of the corrected output ȳ(μ) for steady systems in Theorem 4, we use the dual system (25); based on it, the corrected output for nonlinear time-evolution systems, to be defined later, has the same uniform form as the corrected output for steady systems.

The residual induced by the approximate solution V_du x̂_du computed from the ROM in (26) is r_du(μ) = c(μ)^T − A_t(μ)^T V_du x̂_du(μ). Define an auxiliary residual r̃^k(μ), which differs only slightly from r^k(μ) in (10). With r̃^k(μ), a direct relation between r̃^k(μ) and the state error x^k(μ) − x̃^k(μ) can be derived. This relation aids the derivation of the output error estimation and leads to Theorem 5, which establishes the output error estimator Δ̄^k(μ) in (33) by means of an auxiliary output ỹ^k(μ), yielding the same output error bound as in [29,30].

Remark 8 Recall that the dual system defined in (25) is slightly different from that of [29,30]; therefore the proof in [29,30] does not directly apply here. Although the proof of Theorem 5 is similar to the proofs in [29,30], it is not the same; in particular, the auxiliary variable ỹ^k differs from that in [29,30].

When hyperreduction is applied to the ROM (9), we get the hyperreduced ROM in (11). An output error estimation for the hyperreduced ROM is derived in [29,30] as (34), where the hyperreduction (interpolation) error Δ_I^k(μ) can be computed using, e.g., the technique in [51]. The output error estimators in (33) and (34) do not include the sum of the residuals over time instances and are expected to be much tighter than the rigorous output error bounds. In the numerical results of [29] for a linear system, the estimator indeed yields a more accurate estimation of the true error than the error bound of [19,27].

Remark 9 For multiple-input multiple-output systems, the corresponding output error estimator can be obtained using the matrix-max norm, as explained for (23) and (24).

The error estimators in (33) and (34) do not decay quadratically w.r.t. the two residuals because of the second, non-quadratic part of Δ̄^k(μ). In [31], we use a corrected output of the ROM, so that the finally derived error estimator contains a much smaller contribution of that second part; this makes the error estimator decay almost quadratically.
Define a corrected output ȳ^k(μ) = ŷ^k(μ) + (V_du x̂_du^k(μ))^T r^k(μ) for the ROM in (9). With the same assumptions as in Theorem 5, and the Lipschitz continuity of f(x(t, μ), μ), the error ē_o^k(μ) = y^k(μ) − ȳ^k(μ) of the corrected output can be estimated as in [31], see (35). Comparing the error estimator in (35) with that in (33), we find that the second, non-quadratic term is still present, but with a scaling factor |1 − ρ| instead of ρ. It is analyzed in [30] that, under certain assumptions, ρ gets closer to 1 as the POD-greedy algorithm used to compute the projection matrix V converges, meaning that |1 − ρ| will be closer to 0. This makes the second part of (35) tend to zero, while the second part of (28) remains bounded away from zero. Therefore, the error estimator for the corrected output error should give a tighter estimation. The derivation follows almost exactly that in [31] and the proof of Theorem 5, noting that the dual system and the corrected output are slightly different from those in [31]; we do not repeat it here.

With simple calculations, the error of the corrected output of the hyperreduced ROM in (11) can be estimated analogously [31], see (36).

Error estimators for linear steady systems

Several error estimators [33,34,38,54] for linear steady systems were proposed in order to avoid estimating/computing the spectral norm ||M(μ)^{-1}|| involved in the error bounds of the "Error bounds for steady systems" subsection. An approach based on randomized residuals is proposed in [54], where randomized systems are defined to obtain error estimators for both the state error and the output error. It is discussed in [34] and [38] that the estimators in [54] are theoretically less accurate than the estimators proposed in [33,38]: the estimators of [54] underestimate the true error more easily than those of [33,38], which is also demonstrated numerically in [33,38]. Here we review the error estimators proposed in our recent work [33,34,38].

State error estimators

The error estimator proposed in [38] estimates the state error for linear steady systems. For the FOM in (6), the error e(μ) := x(μ) − V x̂(μ) of the approximate state V x̂(μ) computed by the ROM (7) can be estimated as

||e(μ)|| ≈ ||V_r x̂_r(μ)||,   (37)

where x̂_r(μ) is the solution of the following ROM:

M̂_r(μ) x̂_r(μ) = r̂(μ).   (38)

Here, M̂_r(μ) = V_r^T M(μ) V_r and r̂(μ) = V_r^T r(μ), with V_r being properly derived and r(μ) = b(μ) − M(μ) V x̂(μ). System (38) is the ROM of the following residual system:

M(μ) e(μ) = r(μ).   (39)

Remark 10 We note that a similar technique of using an approximate solution of a residual system as an error estimator for the state error was already proposed for the finite element method (FEM) (see [37] and the references therein). There, however, the approximate solution was not obtained from a ROM of the residual system.

The accuracy of the error estimator ||V_r x̂_r(μ)|| in (37) is quantified in [38]:

Theorem 6 (Quantifying the error estimator [38]) The state error ||e(μ)|| is lower and upper bounded as

||V_r x̂_r(μ)|| − δ(μ) ≤ ||e(μ)|| ≤ ||V_r x̂_r(μ)|| + δ(μ),

where δ(μ) := ||e(μ) − V_r x̂_r(μ)||. Whenever the ROM (38) of the residual system is accurate enough, δ(μ) will be small. However, how to further quantify the error δ(μ) was left open. We derive the following theorem with computable upper and lower bounds.

Theorem 7 With r_r(μ) := r(μ) − M(μ) V_r x̂_r(μ) denoting the residual induced by the ROM (38) of the residual system, the deviation δ(μ) in Theorem 6 satisfies

||r_r(μ)|| / ||M(μ)|| ≤ δ(μ) ≤ ||M(μ)^{-1}|| ||r_r(μ)||,   (40)

so that, in particular, ||e(μ)|| ≤ ||V_r x̂_r(μ)|| + ||M(μ)^{-1}|| ||r_r(μ)||.

Proof The proof can be done easily. We notice that M(μ)(e(μ) − V_r x̂_r(μ)) = r(μ) − M(μ) V_r x̂_r(μ) = r_r(μ), so that δ(μ) = ||M(μ)^{-1} r_r(μ)||, from which both inequalities in (40) follow. Note that for linear systems, the upper bound in (40) is only half the upper bound in (19).
Output error estimators

The error estimators in [33,34] estimate the error e_o(μ) := y(μ) − ŷ(μ) of the output ŷ(μ) computed from the ROM (7). In [34], we derive the primal-dual based output error estimator

Δ_o1(μ) := |(V_du x̂_du(μ))^T r(μ)|,

where x̂_du(μ) is the solution of the reduced dual system

M̂_du(μ) x̂_du(μ) = ĉ_du(μ),   (41)

and M̂_du(μ) = V_du^T M(μ)^T V_du, ĉ_du(μ) = V_du^T c(μ)^T. The reduced dual system is a ROM of the dual system

M(μ)^T x_du(μ) = c(μ)^T.   (42)

Remark 11 In [37] and the references therein, the FEM approximation error was estimated using a similarly defined dual system in function space. However, the approximate solution of the dual system there is not the solution of a ROM of the dual system. The approximate dual solution is multiplied with the residual of the FEM approximation of the original PDEs to constitute a primal-dual based error estimator for the output error of the FEM approximation.

The randomized output error estimator in [54] is based on the output error estimator Δ_o1(μ). On the one hand, it is analysed in [34] that Δ_o1(μ) is more accurate than the randomized output error estimator; on the other hand, it is also numerically demonstrated in [34] that Δ_o1(μ) is nevertheless less accurate than the other estimators proposed in [33,34]. In the following, we first introduce the primal-dual output error estimator of [33], which involves a dual-residual system defined as M(μ)^T x_rdu(μ) = r_du(μ), and its ROM M̂_rdu(μ) x̂_rdu(μ) = r̂_du(μ), where M̂_rdu(μ) = V_rdu^T M(μ)^T V_rdu and r̂_du(μ) = V_rdu^T r_du(μ), with V_rdu being properly computed. The dual residual r_du(μ) := c(μ)^T − M(μ)^T V_du x̂_du(μ) is the residual induced by the approximate solution V_du x̂_du computed from the dual ROM (41). The primal-dual and dual-residual based output error estimator proposed in [33] is stated as follows: for the FOM in (6), the output error e_o(μ) of the ROM (7) can be estimated as

Δ_o2(μ) := |(V_du x̂_du(μ))^T r(μ)| + |(V_rdu x̂_rdu(μ))^T r(μ)|.   (43)

The error estimator Δ_o2(μ) in (43) has the additional term |(V_rdu x̂_rdu(μ))^T r(μ)| compared to Δ_o1(μ). We now discuss the accuracy of both estimators through the next theorems. We can observe that the deviations δ_1(μ) and δ_2(μ) of Δ_o1(μ) and Δ_o2(μ) from the true output error admit the upper bounds

δ_1(μ) ≤ δ̄_1(μ) := ||M(μ)^{-1}|| ||r_du(μ)|| ||r(μ)||,   δ_2(μ) ≤ δ̄_2(μ) := ||M(μ)^{-1}|| ||r_rdu(μ)|| ||r(μ)||,

where r_rdu(μ) := r_du(μ) − M(μ)^T V_rdu x̂_rdu(μ) is the residual induced by the ROM of the dual-residual system. Although we have no proof yet, it is expected that ||r_rdu(μ)|| ≤ ||r_du(μ)|| in general, since r_rdu is the residual induced by the ROM of the dual-residual system, whose right-hand side is r_du. Consequently, we should have δ̄_2(μ) ≤ δ̄_1(μ), indicating that Δ_o2(μ) should be more accurate than Δ_o1(μ). On the other hand, we know that |e_o(μ)| ≥ Δ_o1(μ) − δ_1(μ) and |e_o(μ)| ≥ Δ_o2(μ) − δ_2(μ); then δ̄_2(μ) ≤ δ̄_1(μ) implies that underestimation of the true error by Δ_o2(μ) should be milder than by Δ_o1(μ).

In [34], we further proposed another output error estimator variant, Δ_o3(μ), which has lower computational complexity than Δ_o2(μ) but similar, or sometimes even better, accuracy. It depends neither on the dual system nor on the dual-residual system, as Δ_o1(μ) and Δ_o2(μ) do, but only on the primal residual system in (39). Δ_o3(μ) is defined as

Δ_o3(μ) := |c(μ) V_r x̂_r(μ)|.   (44)

Comparing Δ_o3(μ) with the state error estimator ||V_r x̂_r(μ)|| in (37), we see that the only difference is the output vector c(μ); both are derived by employing the primal residual system (39).
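A minimal sketch of the residual-system estimators (37) and (44) follows, assuming Galerkin ROMs and a precomputed auxiliary basis V_r; all names are illustrative:

import numpy as np

def residual_system_estimators(M, b, c, V, V_r):
    """State estimator ||V_r x_r|| of (37) and output estimator
    Delta_o3 = |c V_r x_r| of (44) for the linear steady FOM M x = b, y = c x.

    V   : primal reduced basis; V_r : reduced basis of the residual system (39)
    """
    # primal ROM solve
    x_hat = np.linalg.solve(V.T @ M @ V, V.T @ b)
    r = b - M @ (V @ x_hat)                       # primal residual r(mu)
    # ROM (38) of the residual system M e = r
    x_r = np.linalg.solve(V_r.T @ M @ V_r, V_r.T @ r)
    e_est = V_r @ x_r                             # approximate state error
    return np.linalg.norm(e_est), abs(c @ e_est)  # (37) and (44)

Note that no smallest singular value of M(μ) is needed here, only two small dense solves; this is the computational advantage over the bounds (18) and (24).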
Theorem 10 (Quantifying the output error estimator Δ_o3(μ) [34]) The output error e_o(μ) is bounded as

Δ_o3(μ) − δ_3(μ) ≤ |e_o(μ)| ≤ Δ_o3(μ) + δ_3(μ),

where δ_3(μ) := |c(μ)(e(μ) − V_r x̂_r(μ))|. With simple calculations, an upper bound of δ_3(μ) can be derived as δ̄_3(μ) := ||c(μ)|| ||M(μ)^{-1}|| ||r_r(μ)||. It can easily be seen that computing Δ_o3(μ) requires only one additional ROM, namely the ROM (38) of the primal residual system, while computing Δ_o2(μ) requires two additional ROMs. Theoretically, the upper bound δ̄_2(μ) should decay faster than the upper bound δ̄_3(μ), implying that Δ_o2(μ) should be more accurate than Δ_o3(μ). However, in our numerical simulations on several different problems [34], Δ_o3(μ) is even more accurate than Δ_o2(μ).

Error estimator for ROMs solved with any black-box time-integration solver

The error bounds and error estimators reviewed in the previous sections are all residual based. In particular, for time-evolution systems, they need the residual r^k(μ) at the corresponding time instances t_k, k = 1, . . ., K. Clearly, to compute r^k(μ), the temporal discretization scheme applied to the FOM must be known, so that r^k(μ) in (10) can be derived by inserting the approximate solution x̃^k(μ) into the temporal discretization scheme, e.g., (8), and subtracting the left-hand side of the first equation from its right-hand side. Moreover, the temporal discretization scheme for the ROM (3) must be the same as that for the FOM, to make sure that x̂^k(μ) computed from the ROM corresponds to the true solution x^k(μ) at the same time instance t_k. These two requirements on the FOM and the ROM become limitations for the error bounds (estimators) when the FOM is simulated by a black-box time-integration solver and/or when the ROM is also desired to be solved using the same black-box time-integration solver.

In [35], we propose a new error estimator which is applicable to the situation where both the FOM and the ROM are solved by a black-box solver. We make use of a user-defined implicit-explicit (IMEX) temporal discretization scheme to derive the new error estimator. Although potentially any IMEX scheme can be applied, we consider the first-order IMEX scheme (8) in this survey; note that a second-order IMEX scheme is also used in [35].

Since the first-order IMEX scheme (8) differs from the black-box solver, a defect, or mismatch, arises when we insert the solution snapshots x^k(μ) computed by the black-box solver into the first-order IMEX scheme. Although the time-integration scheme of the black-box solver is invisible, we can use the solution snapshots x^k(μ),
x^k(μ), k = 0, . . ., K, at some samples of μ to learn the defect vector d^k(μ). We then use d^k(μ) to correct the user-defined scheme (8), such that its solution recovers the solution x^k(μ) computed by the black-box solver; the temporal discretization scheme of the black-box solver then becomes visible via the corrected time-discrete FOM (45). It is clear that if d^k(μ) can be accurately learned, then not only does x^k_c(μ) in (45) recover x^k(μ), but the FOM in (45) is also equivalent to the temporal discretization scheme of the black-box solver. The ROM of the FOM in (45) can be obtained as in (46), where Â_t(μ), Ê(μ), f̂(·, ·), b̂(μ), ĉ(μ) are defined as in (3) and d̂^k(μ) = V^T d^k(μ). We make use of both the corrected FOM (45) and the corresponding ROM (46) to derive an estimation of the output error |y^k(μ) − ŷ^k(μ)|, where y^k(μ) and ŷ^k(μ) are the outputs of the FOM in (2) and the ROM in (3) at any time instance t_k, respectively. Both systems can be solved using any black-box solver. Given the FOM in (2), assuming that A_t(μ) is non-singular for all μ ∈ P, that the nonlinear function f(x(t, μ), μ) is Lipschitz continuous w.r.t. x(t, μ), and that the defect vector d(μ) can be accurately learned, the output error |y^k(μ) − ŷ^k(μ)| of the ROM (3) can be estimated as in (47) [35], where the residual is the one induced by the d-corrected ROM (46), and ȳ^k_c(μ) is the corrected output computed from (46). ρ̄^k(μ), V, V_du, x̂^k_du(μ) and r^k_du(μ) are defined as before.
The corrected output ȳ^k_c(μ) is defined a bit differently than in [35]. Its corresponding dual system in [35] is also slightly different from that in (25); please also refer to Remark 7. However, the derivation of the error estimator is very similar to that in [35] and is not repeated here.
When hyperreduction is considered, the ROM (3) becomes the hyperreduced ROM (48), and its defect-corrected counterpart (49) is the d-corrected hyperreduced ROM. Error estimation for the output error of the ROM (48) is stated in (50), where the residual is the one induced by the d-corrected ROM (49), and Δ^k_I(μ) is the hyperreduction error defined as before.
Now we come to the problem of accurately learning d(μ), so that (47) gives an accurate error estimation for ROMs solved with black-box solvers. In [35], we used proper orthogonal decomposition (POD) combined with radial basis function (RBF) interpolation or with a feed-forward neural network (FFNN) to learn d(μ). POD is first used to project d(μ) ∈ R^N onto a lower-dimensional subspace; RBF or FFNN is then used to learn the projected short vector d̂(μ). The POD basis for d(μ) is computed from a two-stage singular value decomposition (SVD) of the snapshot matrix D := [d^0(μ_1), . . ., d^K(μ_1), . . ., d^0(μ_s), . . ., d^K(μ_s)], where each d^i(μ_j) is the defect vector evaluated at time instance t_i and parameter sample μ_j. All details can be found in [35]; a minimal sketch of this learning step is given below.
Remark 12 While the new error estimator is based on our earlier proposed output error estimator in [31], the idea can be directly applied to derive a posteriori state error estimators (bounds).
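As a rough illustration of this POD-plus-RBF learning step, the sketch below compresses the defect snapshots with a single SVD (where [35] uses a two-stage SVD) and interpolates the reduced coefficients with scipy's RBFInterpolator; the interface is an assumption, not the implementation of [35].

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def learn_defect(D, params, rank):
    """POD + RBF learning of the defect vectors, in the spirit of [35].

    D      : snapshot matrix [d^0(mu_1), ..., d^K(mu_1), ..., d^K(mu_s)], N x n_s
    params : (n_s, p) array of (mu, t_k) coordinates, one row per column of D
    rank   : dimension of the POD subspace for d
    (A single SVD is used here for brevity; [35] uses a two-stage SVD.)
    """
    U, _, _ = np.linalg.svd(D, full_matrices=False)
    W = U[:, :rank]                 # POD basis of the defect snapshots
    coeffs = W.T @ D                # projected "short" defect vectors
    rbf = RBFInterpolator(params, coeffs.T)
    # Return a predictor d(mu, t) ~ W @ d_hat(mu, t)
    return lambda query: W @ rbf(np.atleast_2d(query))[0]
```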
Multi-fidelity error estimation
This section briefly reviews our recent multi-fidelity error estimation used for accelerating the weak greedy algorithm. Weak greedy algorithms are often used to iteratively construct the reduced basis V for MOR of parametric steady systems; a sketch of the algorithm is given as Algorithm 1.
Some key points for the greedy algorithm to converge fast are a properly chosen training set Ξ and an efficient, fast-to-compute error estimator (bound) Δ(μ). For some complex problems, even though the cardinality of the training set Ξ is not large, computing Δ(μ) over Ξ at each iteration is slow. In [36], we propose the concept of multi-fidelity error estimation to accelerate the greedy iteration.
We start with a coarse training set Ξ_c with even smaller cardinality, i.e., |Ξ_c| ≤ |Ξ|, and evaluate Δ(μ) only over Ξ_c at each greedy iteration. At the same time, a surrogate estimator is constructed based on the already available values of Δ(μ) over Ξ_c. This surrogate is supposed to be much cheaper to compute than Δ(μ), so that it can be evaluated quickly over a fine training set Ξ_f with much larger cardinality than |Ξ|. Using the values of the surrogate estimator over Ξ_f, we enrich Ξ_c with the parameter sample selected by the surrogate, namely the sample corresponding to the largest surrogate value. The parameter sample corresponding to the smallest value of Δ(μ) over Ξ_c is simultaneously removed from Ξ_c. This way, we always keep Ξ_c small over the iterations, while it is continually updated so as to contain only the important parameter samples. In the greedy process, those samples correspond to large ROM errors and are good candidates for greedy parameter selection in the next iterations.
This process of using a surrogate estimator in the greedy algorithm was originally proposed in [55] for time-evolution nonlinear systems. In [36], we define this as bi-fidelity error estimation, since both the original estimator Δ(μ) and a surrogate estimator are used for estimating the error in the greedy process; a minimal sketch of this bi-fidelity loop is given below. Based on that, we further propose multi-fidelity error estimation, which exploits the structure of the original error estimator Δ(μ) [36]. Taking the output error estimator Δ^o_3(μ) as an example, two projection matrices V and V_r have to be constructed in order to compute Δ^o_3(μ). When we replace Δ(μ) in Algorithm 1 with Δ^o_3(μ), we need to iteratively update both V and V_r with snapshots, by solving the FOM in (6) at two greedily selected parameter samples at each iteration. If at a certain stage (e.g., when the estimated ROM error is smaller than a small value θ < 1 but still larger than the error tolerance tol) we stop updating V_r, then the FOM in (39) no longer has to be solved at one of the two selected parameter samples. Consequently, we save the runtime of solving a large FOM in the subsequent iterations. At the same time, the original Δ^o_3(μ) is degraded to a low-fidelity error estimator. The surrogate estimator is then constructed based on this low-fidelity estimator in the later stage of the greedy process. In total, we employ the original estimator Δ^o_3(μ), a low-fidelity variant of it, and their respective surrogates over the whole greedy process; we call this multi-fidelity error estimation. We sketch the concept in Fig. 1. It is shown in [36] that the greedy process employing multi-fidelity error estimation is much faster than the standard weak greedy algorithm for some large-scale time-delay systems with hundreds of delays.
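The bi-fidelity loop described above can be sketched as follows; all callables are hypothetical placeholders, and the sketch omits the switch to the low-fidelity estimator that turns the loop into full multi-fidelity estimation.

```python
import numpy as np

def bi_fidelity_greedy(Xi_f, estimator, fit_surrogate, extend_basis,
                       n_coarse=10, tol=1e-4, max_iter=50):
    """Sketch of the bi-fidelity greedy sampling described above.

    estimator(mu, V)        : expensive estimator Delta(mu) for basis V
                              (should return a large value for V=None)
    fit_surrogate(mus, vals): returns a cheap surrogate callable
    extend_basis(V, mu)     : solves the FOM at mu and enriches V
    """
    rng = np.random.default_rng(0)
    Xi_c = list(rng.choice(len(Xi_f), size=n_coarse, replace=False))
    V = None
    for _ in range(max_iter):
        # Expensive estimator evaluated over the coarse set only
        vals = {i: estimator(Xi_f[i], V) for i in Xi_c}
        i_star = max(vals, key=vals.get)
        if vals[i_star] < tol:
            break
        V = extend_basis(V, Xi_f[i_star])            # greedy FOM solve
        # Cheap surrogate trained on the available Delta values and
        # evaluated over the (much larger) fine training set
        surr = fit_surrogate(np.array([Xi_f[i] for i in Xi_c]),
                             np.array([vals[i] for i in Xi_c]))
        i_new = int(np.argmax(surr(np.asarray(Xi_f))))
        i_drop = min(vals, key=vals.get)             # least important sample
        Xi_c.remove(i_drop)
        if i_new not in Xi_c:
            Xi_c.append(i_new)
    return V
```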
The error estimators presented in the previous sections have been numerically compared in the individual papers. For an overview, we list the comparisons below; the sections named are those of this survey where the corresponding error estimators are reviewed.
• In [29,30], the error estimator proposed there ("Error estimators for time-evolution systems" section) is numerically compared with the error bound in [19,27] ("Error bounds for time-evolution systems" section) for parametric time-evolution systems.
• In [31], the error estimator with corrected output ("Error estimators for time-evolution systems" section) is numerically compared with the error estimator in [29,30] for parametric time-evolution systems.
• In [38], the proposed state error estimator ("Error estimators for linear steady systems" section) is compared with the state error bound ("Error bounds for steady systems" section) for parametric steady systems from computational electromagnetics.
• In [33], the newly proposed output error estimator Δ^o_1(μ) ("Error estimators for linear steady systems" section) is compared with the output error bound (22) in [32] ("Error bounds for steady systems" section) for parametric linear steady systems. It is also compared with an existing randomized error estimator from [54].
• In [34], further output error estimators are proposed and compared with each other; they are also compared with the output error estimator Δ^o_1(μ) proposed in [33] ("Error estimators for linear steady systems" section).
• In [35], a new error estimator ("Error estimator for ROMs solved with any black-box time-integration solver" section), which is applicable to the situation where both the FOM and the ROM are solved by a black-box solver, is compared with the output error estimator in [31] for parametric nonlinear time-evolution systems.
• In [36], the multi-fidelity error estimation ("Multi-fidelity error estimation" section) is numerically compared with the standard greedy process with only a single high-fidelity error estimator for time-delay systems with more than one hundred delays.
Fig. 1 The concept of multi-fidelity error estimation in a greedy process, where Δ(μ) represents any original error estimator, Δ_lf(μ) is a low-fidelity error estimator obtained when we stop updating partial information of Δ(μ), and Δ_s(μ) is a surrogate of Δ(μ); likewise, Δ_lf,s(μ) is a surrogate of Δ_lf(μ). tol and ε are defined in Algorithm 1; θ, with tol < θ < 1, is a user-defined small value.
Inf-sup-constant-free error estimator for time-evolution systems
While the error estimators for time-evolution systems described in the "Error estimators for time-evolution systems" section are accurate, their computation involves the quantities ρ^k(μ) and ρ̄^k(μ), for which the term ‖[A_t(μ)]^{-1}‖ = 1/σ_min(A_t(μ)) needs to be evaluated for every μ, where σ_min(A_t(μ)) is the smallest singular value of the matrix A_t(μ). In function space, σ_min(A_t(μ)) corresponds to the inf-sup constant of a linear operator [56]. This poses two challenges. Firstly, the complexity of computing the smallest singular value is at least linearly dependent on N for each parameter sample. When the number of parameter samples is high (typical for problems with several parameters or parameters having a wide range), this can lead to a significant increase of the offline computational cost. Secondly, for some applications the matrix A_t(μ) can be poorly conditioned, so that σ_min(A_t(μ)) is close to zero, which can make the estimated error blow up. While methods exist in the literature [56-58] to address the increased computational cost, these approaches are somewhat heuristic and careful tuning of the involved parameters is needed to achieve good results. In the following theorem, we derive a new output error estimator applicable to time-evolution systems that avoids the inf-sup constant.
Remark 13 For the sake of exposition, we derive the new inf-sup-constant-free error estimator based on the derivation of the output error estimator in Theorem 5, but a similar process can be repeated to derive inf-sup-constant-free versions of the error estimators presented in (34), (35) and (36). Furthermore, a straightforward extension of the inf-sup-constant-free output error estimator is applicable to the output error estimators in (47) and (50), which deal with the case of black-box time-integration solvers.
Theorem 11 (Primal-dual inf-sup-constant-free output error estimator) For the time-discrete FOM (8) and the time-discrete ROM (9), assume the time step δt is constant, so that A_t(μ) does not change with time, and let all the assumptions in Theorem 5 be met. Then the output error e^k_o(μ) = y^k(μ) − ŷ^k(μ) at the time instance t_k can be bounded as derived in the proof below. Here, x̂_du(μ) and r_du(μ) are defined in (26) and (27), respectively.
Proof We start with the expression (32) from Theorem 5. Since A_t(μ) does not depend on time, we can safely remove the superscript k from r_du(μ). Unlike what is done in (32), we do not apply the matrix sub-multiplicative property in the second line to the term ‖[A_t(μ)]^{-T} r_du(μ)‖. Instead, the expression [A_t(μ)]^{-T} r_du(μ) =: e_du(μ) is seen to be the solution of the linear system [A_t(μ)]^T e_du(μ) = r_du(μ). (51) We call this linear system the dual-residual system corresponding to the dual system (25). Using the dual-residual system and the expression ỹ(μ) = ŷ^k(μ) + (V_du x̂_du(μ))^T r^k(μ), and noting that x̂_du(μ) does not change with time (so that the superscript k can also be removed), we define ĕ(μ) := ‖e_du(μ) + V_du x̂_du(μ)‖ and use ρ^k(μ) = ‖r̃^k(μ)‖/‖r^k(μ)‖ to obtain the desired error bound. Finally, we approximate the ratio ρ^k(μ) by the quantity ρ̄(μ) to obtain the inf-sup-constant-free output error estimator (52).
Algorithm 2 Simultaneous construction of the projection bases for the inf-sup-constant-free output error estimator applicable to time-evolution systems
Input: Dual system matrices A_t(μ)^T, c(μ)^T, a training set Ξ composed of parameter samples taken from the parameter domain P, error tolerance tol < 1.
Output: Projection matrices V_du and V_e.
Computational aspects
In (52), evaluating ĕ(μ) involves determining e_du(μ) by solving the dual-residual system (51) for every parameter sample μ. This step can be computationally expensive. To address this, we propose to construct a ROM for (51), such that we can approximate e_du(μ) ≈ V_e ê_du(μ). The ROM reads [Â_e(μ)]^T ê_du(μ) = r̂_du(μ), (53) where Â_e(μ) = V_e^T A_t(μ) V_e and r̂_du(μ) = V_e^T r_du(μ). The dual residual r_du(μ) is the residual induced by the approximate solution V_du x̂_du(μ) computed from the dual ROM (26). We propose a greedy algorithm in which V_e and the projection matrix V_du for the dual-system ROM (26) are constructed simultaneously. For an appropriately computed V_e, we have e_du(μ) ≈ V_e ê_du(μ), and hence the inf-sup-constant-free error estimator (52) can be further approximated as the estimator (54), with ĕ_e(μ) := ‖V_e ê_du(μ) + V_du x̂_du(μ)‖. Next, the greedy algorithm to simultaneously construct V_du and V_e is detailed.
Simultaneous and greedy construction of V_du and V_e
The greedy algorithm is sketched in Algorithm 2. The inputs to the algorithm are the system matrices corresponding to the dual system, viz. A_t(μ)^T and c(μ)^T, a properly chosen training set Ξ, and a tolerance tol. The outputs of the algorithm are the two projection bases V_du and V_e, which are needed to evaluate ĕ_e(μ) in the inf-sup-constant-free error estimator (54). In Step 1, the initial greedy parameters μ* and μ*_e are initialized, ensuring that μ* ≠ μ*_e. The projection matrices V_du, V^0_e and V_e are initialized as empty matrices. In Steps 3 and 5, the FOM (25) is solved at μ* and μ*_e, respectively. The resulting dual-system snapshots are then used to update V_du and V^0_e in Steps 4 and 6, respectively, using, e.g., the modified Gram-Schmidt process with deflation. Step 7 constructs the projection matrix V_e. Following this, the ROM in (53) is solved to evaluate ‖V_e ê_du(μ)‖ for all μ ∈ Ξ, which is then used as an error estimator to choose the next greedy parameter μ* in Step 8. Furthermore, in Step 9, the norm of the residual r_du(μ) − [A_t(μ)]^T V_e ê_du(μ) induced by the ROM (53) of the dual-residual system is evaluated to determine the second greedy parameter μ*_e for the next iteration. In Step 10, the maximum estimated error at the current iteration is set to the maximum estimated error from Step 8, i.e., ε = ‖V_e ê_du(μ*)‖.
Remark 14 In Step 8, we have used the criterion ‖V_e ê_du(μ)‖ to select the parameter μ* for constructing V_du for the ROM (26). Recalling the state error estimator (37) for steady parametric systems, it is easy to see that ‖V_e ê_du(μ)‖ is exactly the state error estimator for the state error x_du(μ) − V_du x̂_du(μ) of the ROM (26). We use this state error estimator to iteratively construct the projection matrix V_du for the ROM (26). In order to evaluate the state error estimator, we also need to construct V_e; in [38], we have explained how to construct V_e in detail. In particular, a different criterion is used for the greedy construction of V_e, namely the norm ‖r_du(μ) − [A_t(μ)]^T V_e ê_du(μ)‖, in order to avoid μ* = μ*_e. The vector r_du(μ) − [A_t(μ)]^T V_e ê_du(μ) is nothing but the residual induced by the ROM (53) of the dual-residual system (51).
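Evaluating ĕ_e(μ) in (54) then amounts to two small reduced solves and one residual assembly, with no smallest-singular-value computation. The sketch below assumes a dense, non-parametric stand-in A_t and an output vector c; names and shapes are illustrative assumptions rather than the survey's implementation.

```python
import numpy as np

def breve_e(A_t, c, V_du, V_e):
    """Sketch of evaluating e_breve_e(mu) = ||V_e e^_du + V_du x^_du|| in (54)."""
    # Dual ROM (26): reduced version of A_t^T x_du = c^T
    x_du = np.linalg.solve(V_du.T @ A_t.T @ V_du, V_du.T @ c)
    # Dual residual (27) induced by the approximate dual solution
    r_du = c - A_t.T @ (V_du @ x_du)
    # ROM (53) of the dual-residual system A_t^T e_du = r_du
    e_du_hat = np.linalg.solve(V_e.T @ A_t.T @ V_e, V_e.T @ r_du)
    # No inf-sup (smallest singular value) computation is needed
    return np.linalg.norm(V_e @ e_du_hat + V_du @ x_du)
```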
To obtain the ROM (3) corresponding to the FOM (2), we apply the adaptive POD-Greedy algorithm [31] to construct the projection matrix V. As the first test case (TC1), we apply the adaptive POD-Greedy algorithm using the output error estimator presented in Theorem 5. For the second test case (TC2), we apply the adaptive POD-Greedy algorithm with the new inf-sup-constant-free error estimator Δ̃_iscf(μ) in (54). To compute the projection bases for evaluating (54), we make use of Algorithm 2: we first run Algorithm 2 to obtain the projection matrices V_du and V_e, as well as the reduced quantities x̂_du(μ) and ê_du(μ) corresponding to V_du and V_e. During each iteration of the POD-Greedy algorithm, those quantities are then used to compute the output error estimator Δ̃_iscf(μ) in (54).
Numerical examples
Next, we illustrate the benefits of using the inf-sup-constant-free error estimator Δ̃_iscf(μ) in (54) with two numerical examples: the Burgers' equation and the FitzHugh-Nagumo equations. It is demonstrated that, firstly, the inf-sup-constant-free error estimator offers accurate performance when used in the POD-Greedy algorithm to construct V; secondly, the new approach yields a significant reduction of the offline computational costs by avoiding the solution of several large-scale eigenvalue problems for obtaining the inf-sup constant.
For the adaptive POD-Greedy algorithm, we plot the maximum estimated errors computed using the respective error estimators for TC1 and TC2 over the training set at every iteration. We define this as the maximum of Δ(μ) over the training set, where Δ(μ) is either (33) in the case of TC1 or (54) in the case of TC2.
1-D Burgers' equation
The viscous Burgers' equation, defined in the 1-D domain Ω := [0, 1] with spatial variable z and time variable t ∈ [0, 2], is given by (55). We spatially discretize (55) with the finite difference method. The mesh size is Δz = 0.001, which results in a discretized FOM of dimension N = 4000. As the variable parameter, we consider the viscosity μ ∈ P := [0.005, 1]. The output variable of interest is the value of the state at the node just before the right boundary. The ROM tolerance is set to tol = 1 × 10^-4. We generate 100 sample points in P using np.logspace in Python, out of which 80 randomly chosen samples constitute the training set.
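The training-set sampling just described can be reproduced in a few lines; the random seed below is an assumption, as none is stated.

```python
import numpy as np

# 100 log-spaced viscosities in P = [0.005, 1]; 80 random samples
# of these form the training set.
rng = np.random.default_rng(0)          # seed is an assumption
mu_pool = np.logspace(np.log10(0.005), np.log10(1.0), 100)
training_set = rng.choice(mu_pool, size=80, replace=False)
```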
For TC1, we first use the standard greedy Algorithm 1 to compute the projection basis V_du for the ROM of the dual system. The error estimator used in Algorithm 1 is the state error bound (17), so the inf-sup constants ‖[A_t(μ)]^{-1}‖ for all μ ∈ Ξ are pre-computed before starting the greedy iteration. These are then used to evaluate the output error estimator (33) during the POD-Greedy algorithm for constructing V. The greedy Algorithm 1 for constructing V_du converges in 1.1 s; however, computing the inf-sup constants took 164.8 s. For solving the eigenvalue problem at every parameter, we make use of the scipy library for Python. The POD-Greedy algorithm needs 255.7 s to converge, with ROM dimension n = 4.
In the case of TC2, Algorithm 2 is first used to compute the projection bases V_du and V_e simultaneously, requiring a total time of 0.98 s. The POD-Greedy algorithm using the inf-sup-constant-free error estimator takes the same 255.7 s to converge, and the final ROM dimension is also n = 4. Convergence of the POD-Greedy algorithm for ROM generation in the case of TC1 and TC2 is plotted in Fig. 2 (1-D Burgers' equation: error (estimator) decay for TC1 and TC2). It is clear that using the inf-sup-constant-free error estimator results in little loss of accuracy in the error estimation, while speeding up the offline basis generation by 1.6×.
2-D Burgers' equation
We next consider the 2-D coupled Burgers' equation in the square domain Ω := [0, 2] × [0, 2], given by the governing equations (56), with Dirichlet boundary conditions imposed on the boundary and initial conditions at time t = 0 given in terms of φ_1 = φ_2 = 10 e^{-(z_1 - 0.8)^2 - (z_2 - 1.0)^2}. In (56), v_1(z_1, z_2, t) and v_2(z_1, z_2, t) denote the state variables and represent, respectively, the velocity components in the canonical x and y directions. Further, (z_1, z_2) ∈ Ω and t ∈ [0, 1]. Similar to the 1-D case, we spatially discretize the 2-D Burgers' equation using the finite difference method with step sizes Δz_1 = Δz_2 = 0.011 (90 divisions along both the x-axis and the y-axis). This results in a coupled FOM of dimension N = 2 · 8100. The viscosity μ ∈ P := [0.01, 1] is the parameter of interest. As the output, we take the mean of the x-component velocities in the region [0.7, 1.4] × [0.7, 1.4]. A first-order implicit-explicit scheme with time step size Δt = 0.0025 is used. The ROM tolerance is tol = 1 × 10^-3. We generate 60 logspace-sampled (with np.logspace in Python) points from P, out of which 48 samples are randomly chosen to constitute the training set.
For TC1, the computation of the dual-system projection matrix V_du takes 36.5 s, and computing the inf-sup constant by solving an eigenvalue problem for every parameter in the training set took 3,380 s. Following this, the POD-Greedy algorithm is used to obtain the projection matrix V; it requires 5,808 s to reach the desired tolerance of 1 × 10^-3 in 11 iterations. The ROM dimension is n = 44. The simultaneous generation of V_du and V_e with Algorithm 2 needs 63 s in the case of TC2. The POD-Greedy algorithm using the inf-sup-constant-free error estimator takes 5,811 s, which is close to the time taken by the greedy algorithm in the case of TC1; the resulting ROM has the same dimension as before, viz. n = 44. The convergence of the estimated and true errors of TC1 and TC2 is plotted in Fig. 3. Evidently, the use of the inf-sup-constant-free output error estimator results in no loss of accuracy of the estimated error. The overall speedup achieved in the case of TC2 is 1.6-fold, compared to the offline time for TC1. Since the system is of much larger dimension than in the 1-D case, computing the inf-sup constants takes much longer (3,380 s), and using the inf-sup-constant-free error estimator thus saves almost one hour of offline computational time.
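For reference, one step of a first-order IMEX scheme of the kind used here can be sketched as follows; the semi-discrete form E x' = A x + f(x) + b u assumed below is an illustration, not necessarily the exact formulation of (8).

```python
import numpy as np

def imex_euler_step(E, A, f, b, u_k, x_k, dt):
    """One step of a first-order implicit-explicit (IMEX) scheme.

    Assumed semi-discrete FOM: E x' = A x + f(x) + b u, with parametric
    dependence frozen. The linear part is treated implicitly (backward
    Euler), the nonlinear term and input explicitly (forward Euler).
    """
    lhs = E - dt * A                           # implicit linear part
    rhs = E @ x_k + dt * (f(x_k) + b * u_k)    # explicit nonlinearity + input
    return np.linalg.solve(lhs, rhs)
```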
FitzHugh-Nagumo equations
The FitzHugh-Nagumo system models the response of an excitable neuron or cell to an external stimulus. It finds applications in a variety of fields such as cardiac electrophysiology and brain modelling. The nonlinear coupled system of two partial differential equations is defined in the domain Ω := [0, L]. As done for the previous examples, we first consider TC1 for the FitzHugh-Nagumo system. In this example, the greedy algorithm needs 1.6 s to obtain V_du, while computing the inf-sup constants takes 174.7 s. The POD-Greedy algorithm based on the error estimator in Theorem 5 converges to the desired tolerance in 8 iterations, taking 291.2 s; the resulting ROM is of dimension n = 48. Applying TC2 to this example, Algorithm 2 requires just 3.6 s to obtain V_du and V_e. The POD-Greedy algorithm converges in 8 iterations with a runtime of 292 s; the ROM dimension is again n = 48. The convergence of the POD-Greedy algorithm for ROM generation in the case of TC1 and TC2 is plotted in Fig. 4. Likewise, using the inf-sup-constant-free error estimator results in no loss of accuracy, but yields a 1.6× speedup. For these examples, the inf-sup constants, i.e., the smallest singular values of A_t(μ) at all the training samples of μ, are close to 1, so that the effectivity of the inf-sup-constant-free error estimator in TC2 shows almost no difference from that in TC1. This can be seen from Figs. 2-4.
Conclusion
A posteriori error estimation is vital not only to quantify the accuracy of ROMs but also to construct ROMs of small dimension in a computationally efficient and adaptive manner. In this review, we have presented a wide range of a posteriori error estimators applicable to (non)linear parametric systems, covering both steady and time-dependent systems. Furthermore, we have discussed multi-fidelity error estimators as a means to improve the computational efficiency of error estimation. As a novel contribution, we have introduced an inf-sup-constant-free output error estimator that is applicable to nonlinear time-dependent systems. This new error estimator is attractive for its improved efficiency and for its applicability to systems with a potentially ill-conditioned left-hand side matrix, e.g., A_t(μ) with smallest singular values close to zero. Results on three numerical examples illustrate the reduced computational costs offered by the inf-sup-constant-free output error estimator, achieved with smaller effort and with little loss of accuracy. Going ahead, we envisage an important potential for accurate error estimation in applications such as digital twins, where model updates can be done on-the-fly based on the accuracy quantified by error estimators.
Recent Advancements towards Sustainability in Rotomoulding
Rotational moulding is a unique low-shear process used to manufacture hollow parts. The process is well suited to batch production and yields minimal waste and stress-free parts. However, it has drawbacks such as long cycle times, gas dependency and a limited palette of materials relative to other processing methods. This review aimed to shed light on the current state-of-the-art research contributing towards sustainability in rotational moulding. The scope of this review broadly assessed all areas of the process, such as material development, process adaptations and development, modelling, simulation and contributions towards applications shaping a more sustainable society. The PRISMA literature review method was adopted, finding that the majority of publications focus on material development, specifically on the use of waste, fillers, fibres and composites as a way to improve sustainability. A significant focus on biocomposites and natural fibres highlighted the strong research interest, while recyclate studies appeared to be less explored to date. Other research paths are process modification, modelling and simulation, motivated by increased energy efficiency, reduction in scrap and attempts to reduce cycle time with models. An emerging research interest in rotational moulding is the contribution towards the hydrogen economy, particularly type IV hydrogen vessels.
Introduction
Sustainability is now a key consideration across all industries, academia, governmental institutions and wider society. Sustainable development is a core principle and a priority objective of the European Union and, generally, across the globe [1]. Key aspects relating to sustainability are summarised in the Sustainable Development Goals (SDGs) outlined by the United Nations (UN) as part of Agenda 2030 [2]. In total, 17 SDGs were identified in 2015, focusing on objectives such as affordable and clean energy (7); decent work and economic growth (8); industry, innovation and infrastructure (9); responsible consumption and production (12); and climate action (13), shown in Figure 1, all of which contribute to 169 targets.
The polymer processing sector has a responsibility as a manufacturing industry to commit to all of these goals in the areas of material design, process modification and scientific understanding. Badurdeen and Jawahir [3] provide a comprehensive definition of sustainable manufacturing as follows: 'the product, process and systems levels must demonstrate reduced negative environmental impact, offer improved energy and resource efficiency, generate minimum quantity of wastes, provide operational safety and offer improved personnel health, while maintaining and/or improving the product and process quality with overall lifecycle cost benefits'. There is a growing interest in sustainable materials and manufacturing methods across industry, and this is reflected in a growing body of research literature and various industry sector reviews. Authors have emphasised the use of polymers for battery research, increasingly sustainable synthesis routes and polymers from renewable resources like biomass [4-7]. The review theme continues into the polymer processing field, for example, the use of recycled waste and the upcycling of both plastic and biomass waste, 3D printing developments and applications towards sustainable environmental solutions [8-10]. Although the relationship between sustainability and additive manufacturing is strongly represented in the literature, studies in extrusion, composite manufacturing and recycling also demonstrate the importance of the relationship between plastics and sustainability [11-13]. Reviews align on a common objective: to establish the current state of the art in material design, processing and processes towards sustainability for the environment and industry. However, in the case of rotational moulding, there appeared to be no publications reviewing contributions towards sustainability as of today.
The rotational moulding (RM) process offers benefits that cannot be achieved by other polymer processing techniques such as injection or blow moulding. Its manufactured products are stress-free due to the absence of shear during processing, it generates minimal waste, it requires inexpensive tooling, it has a high degree of versatility and it is excellent for short production runs [14], all of which justify the selection of RM as a process method for manufacturers and designers [14,15]. Despite these exclusive advantages, the process has some drawbacks related to the limited selection of materials (95% of products are manufactured from PE), greater raw material costs due to grinding and antioxidant addition, long cycle times and a current dependency on gas as an energy source [14,16]. Polyethylene dominates the industry due to its relatively low melting temperature and good flow characteristics that complement the low-pressure process; this polyolefin thermoplastic, although used in high volume, has limitations for certain applications [14]. RM as a polymer processing method is a biaxial process consisting fundamentally of four stages with a heating and cooling cycle, as shown in the schematic (Figure 2). It is necessary to understand the basic principles of the process for the context of the further review, as many of the limitations on material selection are governed by the nature of the process. Firstly, the tool, which is often fabricated from aluminium or steel, is charged (loaded) with the selected material and closed [14]. The tool can be closed by a variety of clamping methods, while areas of the tool remain open for venting during the process cycle. Stage 2 begins when the tool enters an oven and is heated; during mould rotation, the material starts to heat and layer up on the surface of the tool. This is caused by the tool passing through the powder pool at the bottom of the mould; as the temperature rises, the material starts to become 'tacky' or 'sticky', allowing it to adhere to the mould wall [14]. As the biaxial rotation and heating continue, layering increases and the powdered material starts to sinter, melt and coalesce, promoting the formation of a polymer melt with trapped air voids [17]. As the temperature rises towards the end of the heating cycle, densification occurs, reducing the voids within the polymer melt. The tool is then removed from the oven once the optimum internal air temperature (IAT) is achieved. During this period, as demonstrated in Figure 2, stage 3 cooling with fans is commonly used; however, other methods can be selected, such as water mist/spray or ambient cooling [14]. The material reaches a peak internal air temperature (PIAT) shortly after removal from the oven, before the internal air temperature in the tool starts to decrease. The tool and material begin cooling and consequently undergo crystallisation [17]. The product continues to cool before releasing from the tool surface as the material transitions from a viscous polymer melt to a solid moulded part. Once the tooling has cooled sufficiently and it is safe to do so, the part is de-moulded, concluding the process at stage 4.
RM must adapt and progress towards sustainable development; in rotational moulding, this offers unique challenges and opportunities. For example, the high material volume per part (in some cases exceeding 1000 kg) processed in rotational moulding means plenty of opportunity to reduce the net quantity of virgin materials used by utilising waste and renewably sourced materials; furthermore, the adoption of such alternatives can have a positive impact on the carbon footprint associated with the process and its parts. In addition, greater process understanding, modelling and process development can have a considerable impact due to the part volume. A drawback of the process is the long cycle times, which require extended periods of gas consumption, so improving efficiency by exploring alternative energy sources for processing is essential in future work. Sustainability is a fundamental requirement rather than a current trend, given the current climate crisis and the challenges around the generation of waste. To maintain substantial growth, continued progress and impactful research, it is crucial to evaluate the current state of the art in response to this increased focus. This study aims to address the gap found in reviewing the research and to outline the position of research in RM towards sustainable development, whether this concerns reuse, renewable materials, greater scientific understanding or reduction in energy usage.
Methods
This literature review was undertaken with a systematic approach similar to that outlined by PRISMA [18]. Scopus was selected as the search engine, and keywords were used to extract relevant studies from the publication databases of the past decade, 2013-2023. Scopus was chosen as a credible source of the scientific literature, with a reputation as a well-established, highly regarded platform for research. The time period was set to capture the latest state-of-the-art research and the most recent contributions to sustainable development, and it has also been a period during which this focus has been increasingly topical. Searches were performed in early 2024, including the keywords outlined in Table 1. In addition to these keywords, "rotational moulding" or "rotomoulding" remained a constant search criterion. Figure 3 outlines the process of the literature search and the selection of publications for inclusion. The process was constructed from the identification of studies from the literature search (stage 1), then screening and selection based on abstracts, then full publication assessment (stage 2), before conclusion and selection (stage 3). Stage 1 collected all research retrieved from the search while removing any duplicates and any studies not related to rotational moulding by title. Stage 2 first consisted of reviewing abstracts, and those not relating to the sustainability criteria were excluded before stage 2, screening 2.
In stage 2, screening 2, publications were read in full, and if the contribution to sustainability was justified, studies proceeded to stage 3 and were reviewed in greater detail for inclusion within this review. The criteria for selection were an assessment of contributions to sustainability or sustainable development in rotational moulding research and industry. The definition of sustainable manufacturing given by Badurdeen and Jawahir [3], outlined previously, formed the basis for the selection criterion. Contextually, in rotational moulding, this has been defined for the purposes of this research review as contributing towards the areas outlined in Figure 4. These criteria were used to select relevant studies to be included in this review.
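To illustrate how the wildcard keywords of Table 1 operate in such a screen, a hypothetical keyword filter is sketched below; the actual searches were performed through the Scopus interface, so the patterns and helper function here are illustrative assumptions only.

```python
import re

# Wildcard keywords in the style of Table 1: 'Recyc*' matches
# Recyclate, Recycling, Recycled, etc.
PATTERNS = [r"\bsustainab\w*", r"\brecyc\w*", r"\bwaste\b",
            r"\bbio\w*", r"\bnatural\b", r"\benergy\b"]
BASE = r"\b(rotational moulding|rotomoulding)\b"  # constant search criterion

def stage1_keep(title_abstract: str) -> bool:
    """Stage-1-style screen: rotomoulding-related and matching any keyword."""
    text = title_abstract.lower()
    return bool(re.search(BASE, text)) and any(re.search(p, text) for p in PATTERNS)
```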
Literature Search Results
The total literature search conducted on Scopus returned 321 results (sum of Table 1); after removal of duplicates and of studies not relating to RM by title, 173 studies remained at stage 1. The initial search, prior to stage 1 sorting, showed that the greatest quantities of results were collected for the keywords 'Energy', 'Bio*' and 'Natural', returning 52, 58 and 51 results, respectively. On the other hand, 'Sustainab*', 'Waste' and 'Recyc*' returned the lowest numbers of results, with 20, 26 and 32 results, respectively. The '*' was used to capture multiple suffixes of keywords; for example, 'Recyc*' would successfully find studies including Recyclate, Recycling and Recycled. All keywords are outlined in Table 1 and represented graphically in Figure 5a. Table 2 presents the main categories and sub-categories of the retrieved works; the greatest proportion of the studies concerned material-based research. Overall, 75% of the publications appearing within this review are categorised as materials research. As a result of this category being so dominant, sub-categories based on publication themes, namely use of waste and recyclate; natural fibres and fillers; and biopolymers and composites, have been established. The greatest area of research returned was around natural fibres, with steady growth in interest from 2013 to 2023. The use of waste and recycled materials appeared to be an emerging trend from 2018 and has grown in the past 5 years, and studies in this area remain novel to the RM field. This does not relate only to polyethylene recyclate but also to other feedstocks, such as tyres, cable waste and biopolymers, appearing only in the past 5-6 years. Other categories, such as process research, modelling and simulation, and developments for sustainable applications, yielded similar numbers of results. Figure 5b showcases the categories of research and the quantity of studies in each category.
Table 2. Categories and sub-categories of the retrieved works:
• Material research: use of waste and recyclate (9 studies); natural fibres and fillers (14 studies)
• Modelling and simulation (4 studies)
• Developments for sustainable applications (2 studies)
Using Recyclate and Waste Materials
A summary of the publications reviewed and their conclusions is presented in Table 3. A variety of waste or recycled materials have been used throughout the literature. Chaisrichawla and Dangtungee reported adopting a dry-blended approach, mixing at various compositions [22]. The authors assessed the use of rHDPE (recycled high-density polyethylene) between 0 and 50 wt% melt blended with LLDPE (linear low-density polyethylene). A low-MFI 'blowing process' recyclate grade was used and good mechanical performance was suggested; the authors presented small increases in tensile strength with 10 wt% rHDPE compared to neat LLDPE. Young's modulus was said to significantly increase with the addition of rHDPE. LLDPE also recorded a lower impact strength compared to rHDPE; furthermore, increasing rHDPE content increased the impact strength of the LLDPE/rHDPE blends [22]. Shafigullin et al.
[28] investigated the use of collected water tanks and water-filled barriers manufactured from a rotational moulding grade. The addition of a plasticising masterbatch, grade PF0010/1-PE, at 4 wt% was described as achieving physical-mechanical performance not inferior to that of virgin PE. The authors stated that 'using products of polyethylene recycling ensures high-quality products' and that, in this case, it is suitable for holding tank production [28]. Continuing with studies reporting the use of rPE recyclate, Dou and Rodrigue (2022) studied the use of recycled rHDPE and the chemical blowing agent (CBA) azodicarbonamide (AZ) in the RM process [27]. The authors claimed that foam structures were successfully achieved, concluding that a foam structure of 100% recycled HDPE can be obtained, although optimisation was suggested. The study also reported reductions in thermal conductivity and increased cell diameter with increasing CBA addition when using rHDPE. Within a similar theme, Saifullah et al. [24] also studied foams, more specifically sandwich structures with non-reprocessed and reprocessed materials. Reprocessed material was defined as rotational moulding scrap, purge materials and rejected parts. Low-velocity impact (LVI) and flexure-after-impact (FAI) were of interest for both materials; it was reported that reprocessed sandwich structures produced lower impact performance. Specifically, at 30 J of impact, crack formation was thought to have occurred in reprocessed structures, while the virgin structure was stated to have maintained structural integrity. However, the authors claimed that catastrophic failure was not observed for either material. FAI was reported to be lower in the reprocessed material, retaining 91% and 66% of the virgin structure strength at 15 J and 30 J, respectively [24]. The authors attributed this to degradation induced by reprocessing, influencing the polymeric chain structure and reducing impact strength.
There is significant variability in the PE recyclate used by authors throughout the literature, with MFI ranging from 0.18 to 4 g/10 min (190 °C/2.16 kg). This appears to depend on the source of the recyclate and the extent of the mechanical recycling process. Pick et al. [19] aimed to provide 'baseline data' on polyethylene from recycled rotomoulded tanks and mixed post-consumer (PCR) waste for comparison with virgin polyethylene [22]. A compatibiliser was reportedly used in an attempt to increase miscibility; in this case, an ionomer of an ethylene acid copolymer was studied. A reduction in tensile strength of over 70% compared to the virgin performance was suggested for the recyclate blends, while improvements in Young's modulus were reported and thought to be due to the presence of stiffer PP (polypropylene) domains and of fractional-melt PEs. The impact performance of the recyclate materials was 0.5 J/mm and 1 J/mm for peak and total impact strength, respectively. The authors believed that the most significant contributing factor was the presence of PP and lower-melt-flow PE grades. The large increase in viscosity at low shear rates was also thought to be a significant factor regarding performance when using PCR [19]. Cestari et al. [20] reported a similar study to Pick et al.
[19]; in this study, the authors compared various polymer blends of fractional-MFI rHDPEs sourced from post-consumer feedstock: bottles, pipes and household waste (PCR). Fundamentally, they studied and compared the performance of compression-moulded plaques and rotomoulded samples without the presence of a compatibiliser [20]. Improvements in the compression moulding process with the incorporation of recyclate were suggested, whereas the opposite was reported for the rotational moulding process, with a particularly significant reduction in impact strength compared to neat MDPE. The authors concluded that the lack of shear acting on the recyclate materials during the process, together with incomplete melting, was causing a discontinuous structure and hence the reported loss in properties. The researchers suspected that the lower cohesion also accounted for the 20-30% reductions in the flexural modulus. The use of PCR PP and HDPE from alcohol and mineral water bottles in rotationally moulded parts was investigated by Ferreira et al. [23], who focused on construction building blocks. The expected benefits were removing waste from nature, increasing the life cycle of the polymer, generating income from waste and using large amounts of waste. The formation of construction blocks was reported as successful, and the authors commented that HDPE (high-density polyethylene) blocks offer 'excellent material for use in modular construction, in special considering light weighting'. This proof-of-concept study was also said to achieve the UL-94 vertical flammability V-0 classification (the highest classification under the UL-94 vertical standard) with a 5 wt% addition of alumina, thus satisfying specific construction industry requirements. Cestari et al. [29] also combined the circular economy and rotational moulding by using recyclate in building block applications.
Within the literature reviewed, not only PCR has been reported on; some attempts have also been made using post-industrial waste/recyclate streams (PIW/PIR). For instance, Díaz et al. focused on the potential reuse of waste cable materials as a filler in rotomoulded samples [21]; the objective was to valorise residual cable covers, as the metal wire is extensively recycled and the covers are widely available. Unlike the studies discussed above, this study reported on multi-layer, dual-layer and monolayer approaches. According to the authors, this method allowed the addition of material step by step without the concern of phase separation; however, longer production times and the increased complexity of the cycles should also be considered when proposing this layering strategy. The specimens obtained in the work were reported to show a strong reduction in impact strength, and a linear reduction in Young's modulus and maximum stress with increasing fraction of recyclate, demonstrated for both mono- and multilayer structures. The authors attributed this to the inability of the thermoset cable waste to melt [21]. Studies assessing the use of valorised PLA cups for rotational moulding were undertaken by Aniśko et al. [25]. Varying particle sizes, ranging from 400 to 1400 µm, were reported during the study, and according to Aniśko et al., the materials were sieved to assess the influence of particle size. The highest tensile strength was achieved with finer material (<400 µm) and annealing, owing to a void volume fraction of 0.54%. Aniśko et al. concluded that annealed PLA powder and previously extruded PLA provided the most balanced approach between performance and preparation [25].
Recycled rubber has also been proposed as a filler in the rotational moulding process [26]; in particular, ground tyre rubber (GTR) and LDPE (low-density polyethylene) were blended to produce thermoplastic elastomer materials in RM. Shaker and Rodrigue reported on both dry blending and melt compounding, the scope being to achieve improved impact strength from the addition of rubber. Materials were prepared at 0, 20, 35 and 50 wt% GTR; in addition, the authors discussed the use of regenerated and non-regenerated GTR, describing regenerated rubber as de-vulcanised rubber with a reduced number of crosslinks. The effects of pre-treating the recyclate before blending were discussed and thought to be distinguishable by characterisation. According to the researchers, SEM on the fractured rotomoulded parts showed a homogeneous distribution of GTR in the PE (polyethylene) matrix up to 20 wt%. Shaker and Rodrigue reported that non-regenerated rubber gave up to a 20% greater flexural modulus than regenerated rubber for both techniques. The authors also noted that the elongation at break at 20 wt% was above 100%; good thermoplastic elastomers were therefore achieved, which was attributed to the elastic nature of the recyclate. Further work on improving surface pre-treatment was suggested [26].
Some of the main challenges in the reviewed literature were high viscosity, porosity in the final article, reductions in mechanical properties and the potential need to achieve compatibilisation. The importance of minimal contamination was discussed, as blend properties were thought to diminish in the presence of polypropylene. Melt compounding and dry blending were the adopted techniques for material preparation; however, it remains unclear which provides the greater benefit to overall performance, due to the absence of like-for-like studies comparing the same materials, the exception being the studies with GTR. Thermal stability, and the potential improvements it may bring to material performance, was not directly addressed.
Fibres and Fillers
One of the findings from the systematic search was the use of particles in the form of fibres as a filler added to the polymer used for rotational moulding. This was selected as a category due to the wealth of publications in the rotational moulding literature; however, this trend is not unique to RM and is a topic of interest for the wider polymer processing field. In total, 10 studies were assessed under this field. Table 4 provides a summary of the current state of the art retrieved, showing different plant species used in RM at different loadings. Most research works focus on the use of different grades of PE, due to its wide commercial adoption; the thermomechanical behaviour of the composites does not show any significant differences, regardless of the number of layers or the weight of fibre used. As described by Ortega et al., another route towards more sustainable solutions is using residues or natural fibres [40]. This is thought to be due to the mechanical reinforcement and increases in other mechanical properties which can be achieved with such organic or inorganic materials, complemented by reducing the dependency on virgin plastics.
From the results retrieved, four recent review articles (from 2022 and 2023) exist on this topic. These studies are titled 'Recent Developments in Inorganic Composites in Rotational Moulding' by Ortega et al.
[40], 'A review of polymers, fibre additives and fibre treatment techniques used in rotational moulding processing' by Khanna et al. [41], 'Effect of Manufacturing Techniques on Mechanical Properties of Natural Fibres Reinforced Composites for Lightweight Products-A Review' by Sasi Kumar et al. [42], and 'A comprehensive review to evaluate the consequences of material, additives, and parameterisation in rotational moulding' by Yadav et al. [43]. These review papers show, in detail, the relationships and areas of focus between fibres and fillers in RM. In the summary of the review conducted by Khanna et al., the authors reported that the benefits of using natural fibres were their low cost and eco-friendly nature [41]. However, some limitations were identified, i.e., the poor bonding at the interface between the polymer and the natural fibre, where significant effort has gone into pre-treating fibres to reduce their hydrophilic nature and overcome such limitations [41]. There were four main focal areas within these reviews: polymers used in RM, fillers/additives, natural fibres/artificial fibres and pre-treated fibre addition. While Khanna et al. outline many resins used during rotational moulding research, types of fibres and properties achieved, a large body of the work focuses on pre-treatment. Research on pre-treatments of fibres in rotational moulding was described thus: a 'recommendable piece of work is done to increase the mechanical and thermal properties of fibre-polymer composites' to date. The authors discussed publications on the following treatments and their efficiency in relation to mechanical properties: mercerisation [44,45], maleation treatment [46][47][48], silane treatments [48][49][50], benzoylation treatment [51], peroxide treatment [52] and plasma treatment [53,54]. On the other hand, the study by Ortega et al. [40] identified a growing interest in the use of waste and industrial bioproducts from other processes. Again, in an attempt to lower the environmental footprint and the cost associated with virgin plastics, both dry-blended and melt-compounded material preparation methods were assessed. The authors also highlighted the need for life cycle assessment studies of the materials used, to quantify the overall balance in environmental impact. The reviewers focused on topic areas such as composites with glass fibres or particles [49,[54][55][56], nanoparticles such as zinc oxides [57], nanoclay and carbon nanofibres [53,[58][59][60][61][62], and finally other fillers, reporting on calcium carbonate [63], copper slag waste [64] and residues from mining processes and basalt powders [65,66]. The motivation of that review was to fill the gap for inorganic fillers in sustainable applications for rotational moulding, considering the huge amounts of by-products/wastes produced by industrial processes that are not sensitive to temperature and are available in powder form. The authors believed that energy consumption in the processing of composites was unexplored and required further assessment, and they also suggested future work assessing the environmental behaviour of materials in application. Yadav et al.
[43] reviewed both natural fibres and inorganic fillers in their work. The recent review was divided into topics on how the addition of such material affects the matrix, namely, the effect on flow and viscosity, the effect of particle size, the effect of heating and cooling rate, and potential degradation and ageing effects. The authors reviewed each material and concluded that particle loading up to 10 wt% created improvements in impact strength, and that treated fibre loading can be increased by up to 30-40% compared to untreated fibre loading [43]. The authors noted that a reduced particle size of the filler/fibre can increase strength, according to reports within the literature. Most significantly, from the investigation, Yadav et al. suggest that a critical particle size of around 30 nanometres can provide an increase in the modulus, based on the literature. Yadav et al. went on to summarise the literature with comments on the importance of antioxidant packages for PE, specifically for maintaining a good impact strength over different processing times [43].

Finally, Sasi Kumar et al. [42] offered a broader review of natural fibres for lightweighting across many processing methods, such as hand layup, compression, injection, filament, spray and rotational moulding. The authors highlight the use of coir, flax, agave, sisal, pineapple and palmyra fibres whilst discussing the benefits of the RM process. Given that this review has a broader scope, fewer RM data were presented. The only RM study discussed, by Abhilash et al. [44] (also mentioned by Khanna et al. [41]), reported an 'acceptable strength' for a 10 wt% wood dust addition to LLDPE [42].

In addition to these studies, more publications have appeared within this area of research, which were returned as results in the literature search but not mentioned in the previous reviews. All publications are from the past 5 years. Natural fillers such as black tea, giant reed, wood, hemp and rice were among the findings, accompanied by experimental work with recycled carbon fibres [30][31][32][33][34][67].

Abhilash et al. [34] reported that the addition of rice husk, at up to 15 wt%, improved the vibrational properties measured by experimental modal analysis (EMA), alluding to suitability for vibrating automobile applications.

Findings from Ortega et al. reported on the addition of giant reed fibres from an invasive plant species, dry blended with PE and PLA. The authors claimed that impact performance was acceptable at 5% addition [30]. It was reported that sieving allowed a greater quantity of fibre to be introduced, and that 20 wt% can be achieved without a significant reduction in properties. Not only typical monolayer composites but also composite foams have been studied recently, by Vazquez Fletes et al. [31]. The researchers studied bilayer materials combining foamed wood composites and linear medium-density polyethylene. The authors declared that poor interfacial adhesion reduced impact strength by between 47% and 52% at 10 wt% and 15 wt% additions of wood fibres. Increases in the flexural modulus were reported with agave addition, and further work was suggested to limit gas migration during foaming [31]. The combination of recycled polymers and natural fibres has also been studied, and Arribasplata-Seguin et al.
[32] investigated the use of recycled high-density polyethylene (rHDPE), from injection-moulded bottle caps, together with capirona wood particles. This was found to be the only study combining recycled polymers and natural fibres. Accordingly, moulded parts formed well, and trends similar to the other literature were observed, whereby increasing content impeded the sintering process, giving inferior performance for the composites compared to the neat rHDPE. Arribasplata-Seguin et al. identified the optimal particle size in the study, with some improvements in the tensile modulus, as outlined in Table 4. Alternatively, recycled fibres were researched by Oliveira et al., and the authors found that recycled carbon fibres gave a superior Young's modulus. There was an increase in performance relative to the neat material of 350% in Young's modulus and a 45% increase in tensile strength when treated with MAPE. Hybrid mixtures of hemp fibre and recycled carbon fibre at 5 wt% were declared to achieve the same increase in the modulus. It was also reported that the Young's modulus of the hybrid composites increased with a greater proportion of the recycled fibre; for example, increasing from 20 to 50% recorded such an effect, and the authors stated that this was due to the high stiffness of the carbon fibre and the reduced porosity in the final moulded part. Treatment of the fibres with nitric acid was thought to improve adhesion with maleic anhydride grafted polyethylene (MAPE) [33].

The PRISMA search presented some more recent studies from the past 12 months. For example, Ortega et al. [35] reported on the addition of ignimbrite from quarries. The authors found that the addition of the inorganic dust, together with pressurisation, achieved up to a 27% saving in cycle time. With pressurisation, the composites produced a thermomechanical response similar to neat PE in dynamic mechanical analysis (DMA) [35]. The study offers scope for further pressurisation and composite studies, given the cycle time savings and property enhancement. Studies assessing multilayer composites with banana fibres have been explored using melt compounding and dry blending techniques by Ortega et al. and Kelly-Walley et al. [36,39]. Kelly-Walley et al. reported on the size reduction and changes in the aspect ratio after pulverisation for melt-compounded composites [36]. Differences in rheological properties were highlighted, showing a reduction in viscosity due to the reduction in fibre size and aspect ratio, and cycle times were extended due to multi-shot processing. It was found that mechanical properties were inferior to neat polyethylene. Ortega et al. [39] used dry blending to avoid any thermal degradation due to compounding, also determining that NaOH pre-treatment of the fibres enables increased performance. In summary, the study claimed that 10 wt% composites did not achieve consolidated parts; it was also reported that PEMA did not offer any increased performance, and a significant increase in cycle time (at least 10% per part in the heating cycle) was recorded. More work focusing on the use of banana fibres has been undertaken by Ramkumar et al.
[38]. The study focused upon LLDPE/banana fibre composites prepared via dry blending at between 5 and 40 wt%. Fibres were prepared by heating to remove moisture and then 'crushed' to a fine powder. The researchers concluded that the MFI declines upon increasing the fibre content of the composite: the MFI fell from 3.5 g/10 min at 5 wt% to below 1.5 g/10 min at 30 wt%. In the study, 10 wt% fibre was recommended as the optimal loading [38].

Hybrid systems have also been explored with the introduction of TiO2-lignin. Bula et al. [37] reported that when lignin is used with TiO2 in a dual-filler system, higher thermal stability than with lignin alone can be obtained. According to Bula et al., the hybridised system with both fillers enabled the rotomoulded containers to exhibit improved compression resistance with slightly lower impact resistance. The compression tests, characterising the load-deflection relationship, found that all composites had a lower minimum energy-to-crack resistance, between 4.5 and 7.5 J, compared to the 9 J achieved by the LLDPE. A mean compression force exceeding that of LLDPE was achieved by three different composite formulations, while the maximum compression values (specimen 1) for 5% TiO2-lignin (1:1) exceeded LLDPE by over 70 N. The results were attributed to the narrow particle size and low polydispersity of the TiO2, resulting in reduced agglomeration [37].

Generally, there is significant interest in natural fibres in the RM composite field. This is a potential way to incorporate natural materials and become more resourceful. The next section outlines more of the same, with a greater focus on biopolymers and biocomposites, choosing instead a bio-based or biodegradable matrix. Some recommendations for future research are outlined in the reported literature, for example, the assessment of the benefits to energy consumption and processing when utilising fillers and fibres [35]. Finally, preparation strategies focusing on pre-treatment were a strong theme, along with their influence on the properties of articles produced with and without these considerations.

Biopolymers and Biocomposites

In present-day industrial interest and the polymer processing literature, another topic of significant focus is biopolymers and biomaterials. An area of research observed in the search results was biopolymers and biocomposites, where the polymeric matrix is derived from renewable bio-feedstocks rather than fossil-based sources. The benefits of such materials can include feedstocks from renewable resources; availability of feedstock; advanced functionalities; potential for biodegradation, thus easing plastic pollution; use in multiple processing methods; and recyclability [68]. The current state-of-the-art biopolymers and biocomposites in rotational moulding are summarised in Table 5. (Table 5 excerpts: dry blending recorded migration of fibres away from the mould wall, while melt compounding produced micrometric powders with a reduced aspect ratio; no significant increases in modulus were recorded, attributed to an uneven distribution not achieving reinforcement in the composite.) Some studies retrieved were included in the review by Khanna et al. [41] and investigated buckwheat and polylactic acid (PLA), PLA and agave fibres, and attempts to improve compatibility between natural fibres and green bio-polyethylene [46,47,69].
Aniśko and Barczewski studied bio-based PE and black tea waste from a tea distribution company [67]. The black tea would otherwise have been sent to landfill. However, the tea was of interest due to the antioxidant activity it can provide through its catechin extracts; the theaflavins and thearubigins still present in spent tea are able to scavenge radicals and protect the polymer chains [67]. Black tea was found to reduce the carbonyl index (a measure of the extent of oxidation induced during processing) compared to neat PE. The greatest reduction, to 0 from around ~1.7, was reported at 10 wt%. Despite this improvement, further findings from Aniśko and Barczewski included that mechanical strengths showed a 'downward trend' with increasing filler content; the Young's modulus decreased by nearly four times at 10 wt% of black tea, for example [67]. In addition, some studies in this area assessed the blending of natural fibres specifically with bio-based materials such as PLA/PE. To list a few studies: Robledo-Ortiz et al. assessed natural fibre and green polyethylene biocomposites with Bio-PE/agave composites [46], Barczewski et al. investigated the use of waste copper slag as a filler in PLA [64], PE/PLA with the addition of a husk filler was assessed by Andrzejewski et al. [69], and Perez-Fonseca et al. explored wood/PLA composites, comparing rotational processing and compression moulding [46]. In the case of the publication by Barczewski et al. [64], the residue from the filler and the degradation of the polymer shown by TGA were attributed to metal oxides in the copper slag, which were thought to degrade the PLA matrix by depolymerisation and by hydrolytic degradation due to released residual water [64]. The PLA was almost completely amorphous, and the filler addition had negligible nucleating ability. For composites up to 10 wt%, copper slag achieved increased stiffness and hardness; the highest G' (storage modulus) (at 25 °C and 80 °C) from the thermomechanical analysis and the highest Young's modulus were recorded at 10 wt%. Furthermore, Robledo-Ortiz et al. [47] explored the combination of PLA and agave fibre for the construction of biocomposites. PLA modified with maleic anhydride (MA) was used in an attempt to improve adhesion, continuing the common focus on increasing fibre-polymer interaction. According to Robledo-Ortiz et al., the MAPLA (maleic anhydride grafted polylactic acid) layer on the natural fibre increased the effectiveness of stress transfer and the mechanical properties. Flexural strength was said to increase, and the flexural modulus increased from 2.4 to 3.0 GPa at 10 wt% using MAPLA; in accordance with other studies, however, fibre addition significantly reduced impact properties. Reportedly, impact performance was reduced upon the addition of fibres, and with the treatment, the treated fibres had further reduced impact strength [47]. The researchers reported that the treated fibres had greater adhesion, resulting in fracture of both the fibre and the matrix, while untreated fibres experienced 'pull-out', allowing greater energy dissipation [47]. Similarly, Andrzejewski et al.
investigated PLA as a matrix for biocomposites with buckwheat husks (BHs) but also assessed a bio-based polyethylene matrix [69]. It was observed that density decreased and porosity increased with increasing filler content, and the authors also found, via rheological analysis, that viscosity increased for both materials with increasing addition, attributed to PLA degradation in the one case and to growing hydrodynamic interactions for the PE. SEM was interpreted to show a greater interaction between the filler and matrix in PLA, with only minor interfacial separation compared to full gaps and strong debonding behaviour in PE. A lack of modulus change recorded by Andrzejewski et al. upon tensile testing was thought to be due to high porosity and the increasingly brittle behaviour of the PLA/BH composites. Other studies reported on the use of valorised Ricinus communis particles in PE and PLA, communicating significant reductions in impact strength when exceeding 5 wt%, and reductions in tensile strength in PE but some increases in PLA [71]. Robledo-Ortiz et al. [46] noticed poor compatibility between bio-based PE and natural fillers. The studies investigated bio-based 'Green-PE' with agave and coir fibres using maleic anhydride (MA) grafted polyethylene (MAPE). The authors applied the treatment after drying the fibres, by dissolving MAPE in xylene and adding the fibres to the solution. Untreated fibres were found to reduce the tensile strength by 45% at 30 wt%; treated fibre addition, however, recorded increases compared to neat PE of up to around 40%. Impact strength was suggested to reduce significantly (a 50% reduction or more) compared to neat Green-PE; despite this, the treated fibres outperformed non-treated ones [46]. Other efforts to compatibilise natural fibres have been made with PLA as the matrix, specifically further studies from Robledo-Ortiz et al. involving the use of glycidyl methacrylate grafted polylactic acid (GMA-g-PLA) to increase the interfacial adhesion between PLA and agave fibre. This treatment was also reported to reduce water uptake; for example, at 25 wt%, untreated, GMA-treated and twice-GMA-treated fibres had water uptakes of 23.8%, 20% and 15%, respectively. From SEM imaging, it was suspected that this is a result of reduced porosity and improved compatibility, thus explaining the lower moisture absorption [70].

In summary, it appears that the addition of such natural materials is preferred via the dry blending method, due to the reduced thermal exposure of the fibres preventing degradation. Addition above 20 wt% was generally seen to significantly reduce material performance. The use of compatibilisers achieved increases in performance but was not always included when formulating such composites. There was some novel use of fillers to increase the thermal stability of a matrix, black tea for example. Life cycle assessment relative to other materials was not included, but was mentioned by authors as being beneficial.

Rotational Moulding Process Development

This section focuses on the improvements to be made in the rotomoulding process, mainly regarding energy consumption and cycle time reductions. These are key variables in improving process sustainability.

Focusing specifically on rotation speed, and therefore mould speed, Glogowska et al.
[73] investigated energy consumption. Moulding linear low-density polyethylene (LLDPE) at various combinations of auxiliary and major axis speeds and varying rotation coefficients allowed a full characterisation of the influence of mould speed on energy consumption. The highest energy consumption was reported for a 4:1 ratio and the lowest for 1:1. Overall, the authors also stated that the greater the main axis rotation, the higher the energy consumption becomes. Based on the data published by Glogowska et al., the study suggested that mould speeds could potentially reduce energy consumption and have a statistically significant bearing on material performance and part thickness. Another approach assessed the ability to reduce energy consumption using microwave-active composite materials. Specifically, microwave-susceptible inorganic compounds (MWSICs) combined with a methyl phenyl silicone resin were of interest to Luciano et al. [74]. The authors focused on modifying the conventional rotational moulding process. Inorganic compounds such as silicon carbide (SiC), iron(II) silicate (Fe2SiO4), ferric oxide (Fe2O3), titanium oxide (TiO2) and barium titanate (BaTiO3) were investigated. The researchers commented that dielectric heating using microwaves, when measuring the absorbed power, showed savings in time and energy compared to the conventional electric resistance heating process [74]. This was justified by the absorbed power calculated from the dielectric constants of the MWSIC materials [74]. Testing to ISO 527 [75] showed performance from the microwave-assisted process comparable to classically processed materials, according to Luciano et al.

Continuing the theme of energy consumption, McCourt et al. [76] critically evaluated the environmental, economic and productivity benefits of new industrial technologies. A comparison between conventional and robotic rotational moulding machines was performed. According to the authors, the starkest difference between the methods is that the robotic system is able to heat the rotational moulding tool directly. McCourt et al. identified that, when reaching an identical peak internal air temperature (PIAT), the electrically heated method was reported to be over 14 times more efficient (efficiency increasing from 35% to 51%). The authors concluded that time savings of 22% per part were achieved with the conductive technology compared to the conventional oven process. Finally, a post-process development was reported by Tyukanko et al. [77]. The focus was on ensuring that well-processed products were obtained, and the authors investigated three degrees of sintering (under-processed, normally processed and degraded material) at varying thicknesses (7.5 mm, 8.5 mm and 9.5 mm) using ultrasonic signals (USSs). It was reported that it is possible to determine the quality of PE sintering in a rotationally moulded part through an analysis of the amplitude of the third harmonic of the ultrasonic signal using the mirror-shadow method.
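The harmonic analysis underlying this kind of ultrasonic quality check can be illustrated with a minimal sketch: a probe tone is transmitted through the part, the received signal is Fourier transformed, and the amplitude at three times the excitation frequency is read off. The frequencies, sample rate and distortion level below are illustrative assumptions, not values from Tyukanko et al.

    import numpy as np

    def harmonic_amplitude(signal, fs, f0, n=3):
        # Single-sided amplitude of the n-th harmonic of a tone at f0 Hz,
        # read from the discrete Fourier transform of `signal` sampled at fs Hz.
        spectrum = np.fft.rfft(signal)
        freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
        idx = int(np.argmin(np.abs(freqs - n * f0)))  # bin nearest n*f0
        return 2.0 * np.abs(spectrum[idx]) / len(signal)

    # Hypothetical received signal: 1 MHz tone with weak third-harmonic
    # distortion, sampled at 64 MHz so the harmonics fall on exact FFT bins.
    fs, f0 = 64e6, 1e6
    t = np.arange(4096) / fs
    rx = np.sin(2 * np.pi * f0 * t) + 0.05 * np.sin(2 * np.pi * 3 * f0 * t)
    a3 = harmonic_amplitude(rx, fs, f0, n=3)   # ~0.05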
Studies have made progress towards increasing and measuring efficiency, and hence towards potential modifications to reduce the amount of scrap and energy usage. Considering the progress towards the industries of the future, artificial intelligence (AI) and increasing degrees of automation, it is fair to expect the exploration of such technologies for RM, as previously mentioned by Crawford and Kearns [14]. There is not one dominant technique in this review; rather, various methods were trialled, such as microwave technologies, rotation ratios and speeds, ultrasonic signals, and the measurement of efficiencies for conventional and more recent processes.

Modelling and Simulation for Rotational Moulding

Prediction via numerical modelling and simulation with various packages can allow a better estimation of processing parameters, thus reducing energy expenditure, raw materials and time before optimising the process. Various areas employing such methods for this processing method have been explored. The models will not be outlined in detail; only their names, outcomes and applications will be shared. For specific model information, the cited publications in the References section should be consulted.

Chandrasekar et al. [78] developed a data-driven economic model predictive control (EMPC) scheme for the rotational moulding process. The model was developed on data from a uniaxial lab-scale machine. In the methodology, the impact energy and the 'sinkhole' or pinhole area are measured and then placed into an EMPC scheme to complete the model. Details of the model and data can be found in the publication; applications of such a model were expected to achieve good product quality and specification-compliant parts while minimising operating costs. The authors stated that the rotational moulding process exhibits non-linearities and is a multi-stage process, and that, due to this, a single model may not be sufficient. It is expected from the authors' comments that further work will explore a re-identification algorithm for rotational moulding specifically.

Cai et al. [79] focused on coupling the Smoothed Particle Hydrodynamics (SPH) method with a Mohr-Coulomb material model. The authors reported comparisons to a Discrete Element Method (DEM) numerical benchmark and validated against two experimental results. The findings reported the technique to be accurate when the ratio between the SPH particle radius and the drum radius was equal to 0.01. The observations were thought to deliver a more appropriate method for industrial-scale processes: DEM is suited to systems with fewer than 20,000 particles, whereas SPH can be used when millions of particles need to be predicted [79]. This method could offer improvements in understanding the contact between the mould surface and the powder pool, and address challenges such as non-uniform wall thickness, thinning and difficulty covering specific areas. Seregar et al.
focused specifically on warpage simulation [80]. A drawback outlined was the limited data in the literature on temperature-dependent material properties. Despite this, simulation results were found to be within a deviation range of 1.2-21.2% (with others at 69%), depending on processing conditions. The authors suggested that, despite some model limitations, this is a major contribution to rotomoulding simulation. Key highlights from the study were that the cooling rate was directly proportional to the degree of warpage and that the maximum warpage occurred when the greatest temperature difference was evident between the part and the mould, offering practical considerations for rotational moulding when warpage is experienced. Ubene and Mhaskar [81] explored a multiphase model for the rotational moulding process. The authors concluded that a multiphase subspace identification (MPSSID) approach was introduced with a good degree of success for three-phase modelling [81]. Reportedly, the model was tested on pre-existing data from a uniaxial process, and it was found that a three-phase model best predicted the temperature trajectory of the internal air trace, translating into improvements in product quality prediction. It was suggested that improvements towards a more robust model predictive control (MPC) would ease upscaling from laboratory to production-scale use.

Modelling of rotational moulding has focused on heating, cooling and particle movement. Many of the studies have been based upon, or validated with, lab-style data. The authors suggested that the models can be used in a manufacturing environment with some adjustments and further work, which is a positive contribution towards improving process control and prediction, and thus the overarching theme of sustainable development.

Sustainable Applications Using Rotational Moulding

This literature review highlighted product development work for applications which will contribute to a sustainable society. Specifically, search results were found to focus on contributions to the emerging hydrogen economy, with applications directly related to hydrogen storage and type IV hydrogen vessels.

Chashchilov et al. [82] selected the rotational moulding process for prototyping high-pressure storage cylinders, justified by the benefits of the batch process and the ability to run a small production series. Polyethylene powder was used to produce the liner of the article before glass fibre was wound around the rotationally moulded polyethylene part. The binder composition and binding hardener in the polymer composite shell were assessed. This example demonstrates the potential use of RM products in the storage of high-pressure gaseous fuels. Motaharinejad et al.
[83] investigated the metallic connector of hydrogen storage tanks and how adhesion can be improved between this part and the polymer. In this case, rotational moulding was outlined as preferable to blow moulding methods, as rotational moulding allows the simultaneous manufacture of the liner and assembly of the boss in one process; blow moulded articles would require an extra unit operation for welding and assembly. The authors assessed the influence of pre-treatment on adhesion, and the methods adopted were anodising, flaming, sandblasting (at different particle sizes) and a PEG (polyethylene grafted) coating. The characterisation techniques of optical microscopy and SEM were used to assess topography and the mechanical behaviour at the aluminium (Al)/polymer interface. SEM and optical techniques were interpreted to show that sandblasting achieved the greatest roughness value and the removal of aluminium oxides. Upon the addition of the PEG coating, the authors reported greater adhesion, attributed to hydrogen bonds between the grafted carboxyl groups in the PE and hydroxyl groups on the aluminium. Data obtained from the shear test showed the PEG treatment to give a shear stress of 26.71 ± 0.1 MPa, compared to 1.61 ± 0.1 MPa for a non-treated sample.

It has been demonstrated that developments in hydrogen storage involve using RM as the polymer processing technique for manufacturing. The pre-treatment of the boss and prototyping are the focuses returned within this literature search. No material development for this application was presented, nor process monitoring; this may be a limitation of the literature review method, but such areas may be of interest in future publications.

Conclusions

This review outlines many developments in the rotational moulding method: material development, process development, modelling and contributions towards sustainable applications. The wealth of literature demonstrates the current state of the art.

• The current state-of-the-art rotational moulding literature contributing to sustainability, surveyed with an approach comparable to the PRISMA literature review method, focused heavily on materials and composite characterisation and assessment. Of the retrieved studies, 75% were categorised as materials, 9% as process, 9% as modelling and 4% as sustainable applications. The introduction of further search engines that include non-academic publications, such as Google Scholar, might be beneficial for future reviews; similarly, the introduction of conference papers would add some interesting insights. However, this was not addressed within this work due to the difficulty of obtaining the full papers for such works, as they are not usually easily accessible.

• Recycling appears to be an area of increasing interest in rotomoulding, growing steadily as it is for many other processing methods. The research work performed so far highlights the challenges with viscosity, degradation and impact performance when using polyethylene, including its use in blends. A limitation in this area is the low number of studies, of which still fewer focus on polyethylene specifically. This makes it challenging to draw conclusions on the general trends in the adoption of recyclate polyethylene in rotational moulding.
• Few publications adopted natural fibres and recyclate material simultaneously. There was also limited cross-over between the other subsections. Potentially, in future work, once each research area grows, with greater published data and increased material usage in industry, this will facilitate the modelling and simulation of more novel materials such as biocomposites, biopolymers and recyclate. This also highlights the relevance of the literature review performed here and the need for further work to close this gap.

• Fibre/filler research was seen to be heavily dominated by natural fibres and fillers. A keen research interest was evident, with four reviews recently published in this area. Work on biomaterials and biocomposites reported waste materials providing thermal stability; such novel findings offer scope for further research.

• As mentioned by several authors, material-based research and process modifications would benefit from life cycle assessment (LCA), or an indication of the degree of carbon footprint reduction achieved as a result of the work. This would enable a clearer consideration of the potential for progress towards sustainability and the current limitations.

• The study of sustainable applications, like those supporting the hydrogen economy, is new and emerging. Only two returns were made, which may highlight a limitation of the review method. Such limitations could be addressed by collecting further studies and reviewing this area individually, allowing stronger conclusions on the connection between hydrogen and rotomoulding. Developments for future use are expected to continue in the hydrogen economy, as well as in other prospects such as the automotive sector, tanks and leisure, with a shift towards materials of a sustainable nature; developments in each sector can benefit from more sophisticated technologies and automation.

Figure 2. Schematic of the rotational moulding process, simplified to 4 key stages. Red and blue arrows represent heating and cooling respectively. Black arrows indicate biaxial rotation.

Figure 3. Systematic literature review process from collection to selection.

Figure 4. Sustainability criteria in rotational moulding for the purposes of this review.

Figure 5. (a) Returns from search, population for each search criteria. (b) Publications reviewed for each subtitle category.

Table 1. Number of results retrieved from Scopus based on keywords.

Table 2. Categories of literature review based on the search returns from the Scopus systematic review.

Table 4. Summary of all filler/fibre literature reviewed.

Table 5. Summary of biopolymer and biocomposite literature results.
(Table 5 excerpt: the density of all PE/BH composites was lower than that of pure PE, with porosity increasing with increasing BH content; potential applications were highlighted as consumer products, furniture accessories and garden equipment.)
14,126
2024-05-28T00:00:00.000
[ "Environmental Science", "Engineering", "Materials Science" ]
The Volumetric Source Function: Looking Inside van der Waals Interactions
The study of van der Waals interactions plays a central role in the understanding of bonding across a range of biological, chemical and physical phenomena. The presence of van der Waals interactions can be identified through analysis of the reduced density gradient, a fundamental parameter at the core of Density Functional Theory. An extension of Bader's Quantum Theory of Atoms in Molecules is developed here through combination with the analysis of the reduced density gradient. Through this development, a new quantum chemical topological tool is presented: the volumetric source function. This technique allows insight into the atomic composition of van der Waals interactions, offering the first route towards applying the highly successful source function to these disperse interactions. A new algorithm has been implemented in the open-source code, CRITIC2, and tested on acetone, adipic acid and maleic acid molecular crystals, each stabilized by van der Waals interactions. This novel technique for studying van der Waals interactions at an atomic level offers unprecedented opportunities in the fundamental study of intermolecular interactions and molecular design for crystal engineering, drug design and bio-macromolecular processes. Recently, Johnson et al. [26][27][28][29] developed an approach to visualize NCIs based on topological analysis of the electron density, dubbed NCI surfaces. Within their method, the reduced density gradient (RDG) is analyzed and an interaction map is generated where $s(\mathbf{r}) \to 0$, i.e., at critical points. The magnitude of $\rho(\mathbf{r})$ is used to scale the relative strengths of these interactions, and the nature of the interaction is identified by analysis of the Hessian of the electron density. This analysis leads to a highly visual representation of NCIs, without need for expert knowledge in electron density analysis. The intuitive nature of this approach has made it immensely popular among both the experimental and theoretical communities. In particular, NCI surface analysis has found widespread application in the materials, chemistry, and biology communities, offering new insight into the structure of bonding interactions [26][27][28][29] and self-assembly phenomena [10],[29][30][31]. Furthermore, this technique has allowed for the identification of non-conventional hydrogen bonds, which are typically unobserved within the frameworks of other theoretical approaches [31][32][33][34][35]. Despite its conceptual power, analysis of NCI surfaces does not permit atomic-level interpretation. That is to say that while the NCI surface shows the entire interaction surface between molecules, no information is obtained as to which atoms contribute to this interaction. Such information would offer invaluable insights into the tunability of NCIs. A topological tool, the source function (SF) [36][37][38][39][40][41][42][43][44][45], is already known to provide this insight, doing so within the framework of the Quantum Theory of Atoms in Molecules (QTAIM) [43]. NCIs are typically described by the SF at well-defined critical points, although SF reconstructions along a line, on a surface or within a volume were previously introduced and discussed, both for the electron density and the electron spin density [46][47][48]. Here, we describe an application of the SF double integration procedure through its combination with the study of NCI surfaces.
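Since everything that follows builds on locating regions where the RDG becomes small, a minimal numerical sketch of the two fields at the heart of NCI analysis may be useful: the standard RDG, $s(\mathbf{r}) = |\nabla\rho|/(2(3\pi^2)^{1/3}\rho^{4/3})$, and the classification field $\mathrm{sign}(\lambda_2)\,\rho$, both evaluated on a uniform density grid. The grid spacing and density array are assumed inputs; this is an illustrative sketch, not the NCIplot implementation.

    import numpy as np

    def reduced_density_gradient(rho, spacing):
        # s(r) = |grad rho| / (2 (3 pi^2)^(1/3) rho^(4/3)); low-value
        # regions flag covalent and non-covalent bonding.
        gx, gy, gz = np.gradient(rho, spacing)
        grad = np.sqrt(gx**2 + gy**2 + gz**2)
        c = 2.0 * (3.0 * np.pi**2) ** (1.0 / 3.0)
        return grad / (c * np.maximum(rho, 1e-30) ** (4.0 / 3.0))

    def signed_density(rho, spacing):
        # sign(lambda_2) * rho, with lambda_2 the middle Hessian eigenvalue,
        # used to classify low-s regions as bonding, vdW or repulsive.
        grads = np.gradient(rho, spacing)
        H = np.empty(rho.shape + (3, 3))
        for i in range(3):
            for j in range(3):
                H[..., i, j] = np.gradient(grads[i], spacing, axis=j)
        lam2 = np.linalg.eigvalsh(H)[..., 1]   # eigenvalues sorted ascending
        return np.sign(lam2) * rho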
This combination of the SF with the NCI analysis allows, for the first time, insight into the atomic contributions to NCIs, and hence represents a new tool in the targeted design of molecular recognition. Through this work, we develop a fundamental extension to the traditional bond critical point (BCP) [43] seen by QTAIM, dubbed the vdW volume ($V_{\mathrm{vdW}}$). Extension of the SF to accommodate the $V_{\mathrm{vdW}}$ leads to a further development: the volumetric source function (VSF). Analysis of NCI surfaces by the VSF provides, for the first time, a chemically intuitive, quantitative approach to the study of non-directional interactions. This technique adds a new method to the chemist's toolbox for investigating the nature and structure of vdW interactions, with unique atomic-level insight. To demonstrate the potential of our extension to QTAIM, a brief introduction to the fundamental theories and developments is provided, followed by a description of the corresponding algorithmic developments. The theoretical developments are tested on acetone, adipic acid and maleic acid molecular crystals, discussing the interpretation of vdW interactions with the new topological tool presented here, the VSF.

Theoretical Background

Introduction to QTAIM and the source function. QTAIM is based on the separation of molecules into distinct atomic subunits, familiar to chemists. These atomic subunits (known as atomic basins) are delimited by a surface, the so-called zero-flux surface. Such a surface is made up of an infinity of points $\mathbf{r}_S$ for which the dot product of the gradient of the electron density, $\nabla\rho$, and the surface normal vector, $\mathbf{n}$, is zero (the zero-flux boundary condition), $\nabla\rho(\mathbf{r}_S)\cdot\mathbf{n}(\mathbf{r}_S) = 0$. That is, the surface of an atom is defined by the set of points at which the normal vectors $\mathbf{n}(\mathbf{r}_S)$ are perpendicular to $\nabla\rho$. Thus, there is no change (or flux) of electron density across this surface, and it is 'zero-flux'. Particular critical points, namely those where the Hessian of $\rho(\mathbf{r})$ has one positive and two negative eigenvalues, are known as BCPs and define the unique bond paths of covalent (or semi-covalent, e.g., HB) interactions. In order to describe bonding at BCPs intuitively, an extension to QTAIM was developed, dubbed the SF. The SF describes the contribution of each atomic basin to the BCP and hence describes the atomic contributions to individual covalent bonds [43] or intermolecular hydrogen bonds [44,45]. Only in very specific cases can the SF be used with success to describe bonding in vdW interactions [45]: in principle, vdW interactions may be described in terms of SF reconstructions, as for any other point, surface or volume, but serious numerical accuracy problems may arise. The electron density at a point $\mathbf{r}$ that comes from an atomic basin $\Omega$ is given by the sum of two contributions: (i) the integral of $\nabla^2\rho(\mathbf{r}')$ evaluated over all points $\mathbf{r}'$ within $\Omega$, each weighted as an inverse function of its distance from the point of interest, $|\mathbf{r} - \mathbf{r}'|^{-1}$; and (ii) the flux of the electric field density, $\varepsilon(\mathbf{r} - \mathbf{r}_S)$, across the boundary of $\Omega$, calculated at $\mathbf{r}$. This term depends on the electron density on the surface of $\Omega$, $\rho(\mathbf{r}_S)$ [36][37][38][39][40][41][42][43][44][45]. The electron density at a point $\mathbf{r}$ within an atomic basin $\Omega$ can thus be written as

$$\rho(\mathbf{r}) = -\frac{1}{4\pi}\int_\Omega \frac{\nabla^2\rho(\mathbf{r}')}{|\mathbf{r} - \mathbf{r}'|}\, d\mathbf{r}' + \frac{1}{4\pi}\oint_{S(\Omega)}\left[\frac{\nabla\rho(\mathbf{r}_S)\cdot\mathbf{n}(\mathbf{r}_S)}{|\mathbf{r} - \mathbf{r}_S|} - \rho(\mathbf{r}_S)\,\frac{\partial}{\partial n}\frac{1}{|\mathbf{r} - \mathbf{r}_S|}\right] dS. \qquad (2)$$

For a closed system with boundary at infinity, the second term of Equation 2 reduces to zero.
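A minimal numerical sketch of the volume term in Equation 2 may make the construction concrete: given the density on a uniform grid and a boolean mask marking which grid points belong to a basin $\Omega$, the basin's source contribution at a reference point $\mathbf{r}$ is the discretised integral of the local source. The grid-based Laplacian, the basin mask and the crude treatment of the singular self-term are all simplifying assumptions of this illustration, not the CRITIC2 implementation.

    import numpy as np

    def source_function(rho, spacing, basin_mask, r_idx):
        # S(r; Omega) = -(1/4pi) * integral over Omega of lap(rho)(r')/|r - r'|,
        # on a uniform grid. `basin_mask` selects the grid points of Omega and
        # `r_idx` is the (i, j, k) index of the reference point r.
        axes = [np.arange(n) * spacing for n in rho.shape]
        X, Y, Z = np.meshgrid(*axes, indexing="ij")
        lap = sum(np.gradient(np.gradient(rho, spacing, axis=a), spacing, axis=a)
                  for a in range(3))
        dist = np.sqrt((X - X[r_idx])**2 + (Y - Y[r_idx])**2 + (Z - Z[r_idx])**2)
        dist[r_idx] = np.inf          # crude removal of the singular self-term
        integrand = np.where(basin_mask, lap / dist, 0.0)
        return -integrand.sum() * spacing**3 / (4.0 * np.pi)

For a single basin this reproduces the volume term of Equation 2; the sum over all basins, introduced next, recovers $\rho(\mathbf{r})$ for a closed system.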
For a system with more than one atomic basin, $\rho(\mathbf{r})$ becomes a sum of the contributions of each atomic basin, with each individual contribution known as the SF for the respective basin,

$$\rho(\mathbf{r}) = \sum_{\Omega} S(\mathbf{r}; \Omega), \qquad S(\mathbf{r}; \Omega) = -\frac{1}{4\pi}\int_\Omega \frac{\nabla^2\rho(\mathbf{r}')}{|\mathbf{r} - \mathbf{r}'|}\, d\mathbf{r}'. \qquad (3)$$

The integrand in Equation 3 is defined as the Local Source (LS).

Non-covalent interaction surfaces. The RDG is defined as

$$s(\mathbf{r}) = \frac{|\nabla\rho(\mathbf{r})|}{2(3\pi^2)^{1/3}\,\rho(\mathbf{r})^{4/3}}.$$

For regions far from the various nuclei of a system (i.e., where the density decays exponentially to zero), the RDG adopts large positive values. In contrast, the RDG values approach zero in regions of covalent and non-covalent bonding. Hence, the magnitude of $s$ offers a good indication of the position of NCIs. The nature of the NCI is subsequently defined by analyzing the Laplacian of the electron density, $\nabla^2\rho(\mathbf{r})$. To characterize interactions, the Laplacian is often decomposed along its three principal axes of maximum variation, via the three eigenvalues $\lambda_n$ of the Hessian of the electron density, $\nabla^2\rho(\mathbf{r}) = \lambda_1 + \lambda_2 + \lambda_3$, $\lambda_1 \le \lambda_2 \le \lambda_3$. As per convention, $\lambda_1$, $\lambda_2$ and $\lambda_3$ are in ascending order. For BCPs it is then clear that $\lambda_3$ is always greater than zero. This was arbitrarily assumed, in view of the fact that the low-RDG isosurfaces bound volumes where, in most cases, the RDG goes to zero and so enclose an electron density critical point (CP). When this is a BCP (bonding interaction), $\lambda_2$ is negative; when this CP is a ring CP (associated with non-bonding/repulsive interactions at the ring CP between the atoms forming the ring), $\lambda_2$ is positive; and for vdW interactions, $\lambda_2 \approx 0$. Hence, the nature of interactions at points of $s(\mathbf{r}) \to 0$ can be defined by the corresponding value of $\mathrm{sign}(\lambda_2)\cdot\rho(\mathbf{r})$. It is generally accepted that the magnitude of $\rho(\mathbf{r})$ corresponds to the relative strength of the interaction. As a rule of thumb, limits of $\pm 0.02$ a.u. have been suggested.

Development of the volumetric source function. The above discussion of the SF relates to the decomposition of atomic basin contributions to a well-defined point. This is generally taken to be a BCP, and hence the SF has traditionally been used to interpret covalent and well-defined NCIs such as HBs. Unfortunately, due to their non-directionality, vdW interactions are not associated with such well-defined critical points. Hence, these interactions have remained beyond the scope of QTAIM analysis. In order to extend QTAIM for this purpose, we define a $V_{\mathrm{vdW}}$ via the infinity of points within the volume defined by NCI analysis, $\mathbf{r}_{\mathrm{vdW}} \in V_{\mathrm{vdW}}$. The number of electrons within $V_{\mathrm{vdW}}$ for a closed system with boundary at infinity is given by the sum over all $M$ atomic basins of the VSF, which is defined as the integral of the SF over the $V_{\mathrm{vdW}}$, giving the contribution of an atomic basin to that volume,

$$n_e^{V_{\mathrm{vdW}}} = \sum_{\Omega = 1}^{M} \mathrm{VSF}(\Omega), \qquad \mathrm{VSF}(\Omega) = \int_{V_{\mathrm{vdW}}} S(\mathbf{r}; \Omega)\, d\mathbf{r}.$$

The general workflow developed for calculation of the VSF is summarized in Scheme 1. In order to investigate the nature of vdW interactions within the solid state, the input structures were first relaxed in the solid state using plane-wave Density Functional Theory (PW-DFT). The electron density of the relaxed structure was subsequently obtained with a plane wave kinetic energy cut-off of 100 Ry. This value has been previously shown to facilitate reliable calculation of the SF [48]. The resulting electron density was analyzed for NCI surfaces using the NCIplot code, as implemented in CRITIC2 [49,50]. The NCI surfaces were generated for $-0.02 \le \mathrm{sign}(\lambda_2)\cdot\rho \le 0.02$ a.u. The NCI surfaces corresponding to the vdW dimers that compose the reticular motif of the acetone, adipic acid and maleic acid crystal structures were subsequently isolated and taken as the definition of the $V_{\mathrm{vdW}}$.
To assess the suitability of PW basis sets, the electron density in the dimers was recalculated with a localized basis set (LBS) at the MP2 [51][52][53][54][55]/aug-cc-pVTZ [56] level. This density was subsequently projected onto the NCI surface obtained above (i.e., onto the $V_{\mathrm{vdW}}$). The SF for each atomic basin at each sampled point within $V_{\mathrm{vdW}}$ was calculated using the YT integration scheme [57]. The error was found to reach its asymptote at 1000 points. Thus, to guarantee the most reliable NCI description, the $V_{\mathrm{vdW}}$ volumes were sampled with more than 1000 points, neglecting points with $\rho$ less than $10^{-6}$ e/bohr$^3$.

Scheme 1. Workflow diagram describing the methodology used for correct reproduction of the VSF using localised basis set (LBS) electron densities within MP2/aug-cc-pVTZ and plane wave (PW) electron densities. The function f1 is the Gatti reliability parameter [42], which describes the percentage error in the reconstruction of the electron density within the vdW volume ($V_{\mathrm{vdW}}$).

The PW86PBE [64,65] exchange-correlation functional was used in combination with the exchange-hole dipole moment (XDM) dispersion correction with damping factors $a_1 = 0.6836$ and $a_2 = 1.5045$ [66][67][68]. XDM uses the interaction of induced dipoles (and higher-order multipoles) to model dispersion: the source of the instantaneous dipole moments is taken to be the dipole moment of the exchange hole [66][67][68]. The wavefunction was expanded in a plane wave basis set to 100 Ry, and Brillouin zone integration was performed on a 6×6×6 Monkhorst-Pack k-point grid [69]. SCF convergence was accepted when changes were less than $10^{-8}$ Ry, and geometry relaxations were accepted once residual atomic forces fell below $10^{-8}$ Ry/bohr. A comparison of the experimental and relaxed structures is given in Table E.S.I. 1. The NCI surfaces were generated from the PW density, using the NCI implementation within CRITIC2. Values of $0.45 \le s \le 0.55$ were chosen for computationally generated densities, as suggested in the original NCI work [26][27][28][29]. The VSF was calculated using an in-house code, according to the workflow shown in Scheme 1. The LBS electron density used in the final integration step was obtained for the isolated dimers using Gaussian v.16 [70] at the MP2 [51][52][53][54][55] level with the aug-cc-pVTZ basis set [56] for each atom. The dimers extracted for electron density analysis were used as input for SAPT [25] analysis of the NCI energies. The scaled SAPT zero energies were calculated within the PSI4 software using the jun-cc-pVDZ basis set [71], the bronze-level SAPT method identified by Parker et al., with an overall error of ±2.05 kJ/mol [72]. To offer a more direct numerical comparison, the VSF values in this article are provided in terms of VSF%,

$$\mathrm{VSF\%}(\Omega) = 100 \times \frac{\mathrm{VSF}(\Omega)}{n_e^{V_{\mathrm{vdW}}}}.$$

As an analogue of the role of $\rho_{\mathrm{bcp}}$ in HB systems [44,45], one would expect the value of $n_e^{V_{\mathrm{vdW}}}$ to correlate with the total interaction energy across the dimer. Following the correlation previously explored by Saleh et al. [73] (2015) between values of the electron density and interaction energies calculated with SAPT [25], we compare $n_e^{V_{\mathrm{vdW}}}$ with the SAPT interaction energies (Table 1), and once again we do not find any notable correlation. This suggests that, unlike $\rho_{\mathrm{bcp}}$, the integral of the electron density has a different character and is not directly indicative of the strength of this category of intermolecular interaction. This is presumably due to the fact that a $V_{\mathrm{vdW}}$ composed of points with $\lambda_2 \approx 0$ contains loci of both attractive and repulsive character.

VSF from localized basis set (LBS) electron density.
In order to assess the validity of the VSF, we adopt the approach suggested by Gavezzotti for electron density decomposition [24]. Here, the nuclear positions are extracted from the relaxed crystal structure and the electron density is recalculated using an LBS with the post-Hartree-Fock correlation scheme MP2. This approximation is supported by the fact that the Laplacian of the electron density decreases exponentially far from the nucleus, as $e^{-2x}$, where $x$ represents the relative distance from the nucleus at the point $R$ [43]. The dimers can hence be extracted from the crystals, given that the primary neighbours will negligibly affect the intermolecular interactions between the dimer pairs. To best correlate our values to the crystalline structures, the electron densities obtained using the MP2/LBS approach were projected onto the $V_{\mathrm{vdW}}$ volumes calculated from the periodic calculations.

Table 1. Number of electrons, $n_e^{V_{\mathrm{vdW}}}$, within vdW volumes ($V_{\mathrm{vdW}}$) coming from the local basis set electron density for each of the three vdW dimers: acetone (AC), adipic acid (AA) and maleic acid (MA). Note that symmetry-adapted perturbation theory (SAPT) energies are given in kJ/mol. The SAPT energy is decomposed into electrostatic (ele.), exchange (exch.), inductive (ind.) and dispersion (disp.) contributions.

Decomposing the LBS-based electron density within the $V_{\mathrm{vdW}}$, we obtain f1 values of 0.14%, 1.19% and 1.32% for acetone, adipic and maleic acids, respectively. Considering the VSF more closely, we simply decomposed $n_e^{V_{\mathrm{vdW}}}$ into its atomic contributions for each dimer, starting from the electron density contribution of each atom inside the $V_{\mathrm{vdW}}$ (Fig. 2). In each structure there is a single oxygen atom which clearly dominates in its contribution to $n_e^{V_{\mathrm{vdW}}}$. Hence, there is a single oxygen atom in each case which can be suggested as being largely responsible for the vdW interaction. There is no corresponding density sink visible in any of these three systems. Instead, it appears that the electron density donated by the oxygen atom is spread largely uniformly across the remaining atoms in the system. This is consistent with prevailing theories of vdW interactions, suggesting that the interaction is indeed dependent on the total number of atoms into which the donated density can distribute.

VSF from plane wave (PW) electron density. Recent work has, however, demonstrated that the SF can be reliably reproduced from PW densities, provided sufficiently high kinetic energy cut-offs are used [48]. It is not immediately evident whether the same holds for the reconstruction of regions of very low electron density, such as $V_{\mathrm{vdW}}$, and whether the VSF can be calculated from the SF evaluated in such regions. We therefore extend the above discussion to the VSF as obtained with a PW basis, with contributions to the $V_{\mathrm{vdW}}$ taken from atoms throughout the entire structure, i.e., throughout the unit cell. Hence, the $n_e^{V_{\mathrm{vdW}}}$ will differ from that calculated from the LBS electron density of the extracted dimers. As in the case of the LBS electron densities, f1 values for the PW electron densities were obtained for acetone, adipic and maleic acids (i.e., 22.18%, 26.33% and 30.22%, respectively). As expected from the above discussion, the f1 values for PW electron densities are higher than for LBS electron densities. Compared with the atomistic VSF contributions obtained by LBS above, the qualitative picture obtained for acetone is well reproduced (Fig. 3A).
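The bookkeeping behind these atomic decompositions is simple once per-basin SF values at the sampled points are available; a minimal sketch, assuming one quadrature weight per sampled point, is given below. The array names and shapes are illustrative, not those of the in-house code used here.

    import numpy as np

    def vsf_percent(local_sf, weights):
        # local_sf: (n_points, n_atoms) SF values S(r; Omega) at the points
        #           sampled inside the vdW volume V_vdW.
        # weights:  (n_points,) quadrature weights of the volume sampling.
        vsf = weights @ local_sf          # VSF(Omega): SF integrated over V_vdW
        return 100.0 * vsf / vsf.sum()    # VSF% contribution per atomic basin

The dominant-oxygen picture described above corresponds to one entry of this vector being much larger than the rest.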
Again, only a single oxygen atom contributes dominantly to the NCI surface, while the remaining atoms exhibit approximately equal acceptor contributions. Unfortunately, for the systems which exhibit much lower values of the electron density, and consequently of $n_e^{V_{\mathrm{vdW}}}$, the picture is poorly reproduced when the VSF is generated from PW densities (Fig. 3B,C). Hence, the attempt to use PW densities to derive SF values within the $V_{\mathrm{vdW}}$ was found not to be as feasible as with LBS densities, although in previous works SF values derived from PW densities were seen to be comparable with those obtained with LBS densities at the BCP [43]. This may be due to the fact that the reconstruction of the electron density at a point depends on the value that the density itself assumes at that point. Thus, for electron densities of the order of $10^{-3}$ e/bohr$^3$, as seen at BCPs, PW densities work well, but for electron densities below $10^{-3}$ e/bohr$^3$ PW densities are not sufficient, even if taken in large numbers. Furthermore, as a general note, it is worth mentioning that the atomistic VSF contributions (Tables E.S.I. 5-7) suggest that atoms which exist within the unit cell but outside of the dimer pairs contribute on average no more than 1% to $n_e^{V_{\mathrm{vdW}}}$. Thus, it is clear that all atoms within the periodic structure contribute to the structure of the $V_{\mathrm{vdW}}$, but the atomic contributions of those not directly involved are reasonably negligible, further confirming the work of Gavezzotti with PIXEL [24].

Conclusions

Here we outline a different approach to deconvolving vdW interactions in terms of their atomic composition, whose description has been a long-standing challenge for theorists. This approach is rooted in QTAIM [56] and represents a vdW analogue of the previously developed SF analysis. The development includes the description of a $V_{\mathrm{vdW}}$ and the corresponding VSF. Based on the approach for electron density decomposition proposed by Gavezzotti [24], it was found that electron densities based on a localized basis set allowed reliable calculation of the VSF. Analysis in this way suggests that vdW surfaces are generated through the charge donation of a single heteroatom, here oxygen. The reciprocating charge-accepting behaviour is shared across the remaining atoms, consistent with prevailing theories of vdW interactions. This suggests that the VSF method indeed presents a consistent picture of these weak interactions. Despite correctly reproducing the number of electrons, $n_e^{V_{\mathrm{vdW}}}$, inside the $V_{\mathrm{vdW}}$, plane wave basis sets were found to be less capable of providing reliable calculation of the VSF. This was particularly true for systems with low values of the electron density within the $V_{\mathrm{vdW}}$, and is presumably due to the inaccuracies associated with reconstructing points of low electron density. Nevertheless, this work reports a new approach for the quantum chemical topological investigation of weak vdW interactions. The methodology is based on an empirical definition of the vdW interaction using a fundamental dimensionless quantity in DFT that describes the deviation from a homogeneous electron distribution, the so-called RDG [73][74][75]. This leads to atomistic-level detail of the interacting surface and hence offers the first approach for the rational design of these ubiquitous interactions.
4,814.6
2020-05-08T00:00:00.000
[ "Chemistry" ]
On the condition number of the critically-scaled Laguerre Unitary Ensemble
We consider the Laguerre Unitary Ensemble (aka, Wishart Ensemble) of sample covariance matrices $A = XX^*$, where $X$ is an $N \times n$ matrix with iid standard complex normal entries. Under the scaling $n = N + \lfloor \sqrt{ 4 c N} \rfloor$, $c>0$ and $N \rightarrow \infty$, we show that the rescaled fluctuations of the smallest eigenvalue, largest eigenvalue and condition number of the matrices $A$ are all given by the Tracy--Widom distribution ($\beta = 2$). This scaling is motivated by the study of the solution of the equation $Ax=b$ using the conjugate gradient algorithm, in the case that $A$ and $b$ are random: For such a scaling the fluctuations of the halting time for the algorithm are empirically seen to be universal.

Introduction

Consider the sample covariance matrix $A = XX^*$ where $X$ is an $N \times n$ matrix with iid entries with some distribution $F$. Our main result is a limit theorem (see Theorem 1.3) for the condition number of these matrices when $F \sim X_c$, where $X_c$ has the standard complex normal distribution, and

$$n = N + \lfloor \sqrt{4cN} \rfloor, \qquad c > 0. \qquad (1.1)$$

The study of this scaling is motivated by a computational problem discussed below. In the case that $F \sim X_c$ we refer to the matrices $\{A\}$ as lying in the Laguerre Unitary Ensemble (LUE). The Tracy--Widom ($\beta = 2$) distribution is given by the Fredholm determinant

$$F_2(s) = \det\big(I - K_{\mathrm{Ai}}|_{L^2((s,\infty))}\big).$$

Here $K_{\mathrm{Ai}}|_{L^2((s,\infty))}$ represents the integral operator with kernel $K_{\mathrm{Ai}}$ acting on $L^2((s,\infty))$. The function $F_2(s)$ is the distribution function for the largest eigenvalue of a random Hermitian matrix in the edge-scaling limit as $N \rightarrow \infty$ and is known as the Tracy--Widom ($\beta = 2$) distribution [25]. For a positive Hermitian matrix $A$, let $\lambda_{\max}$ be the largest eigenvalue of $A$, $\lambda_{\min}$ be the smallest and let $\kappa = \lambda_{\max}/\lambda_{\min}$ be the condition number. Fix $c > 0$. Theorems 1.1-1.3 give Tracy--Widom limit theorems for $\lambda_{\min}$, $\lambda_{\max}$ and $\kappa$ under this scaling.

History of the problem. The study of the eigenvalues, and in particular, the condition number, of random positive definite matrices has a rich history in mathematics and statistics going back at least to the seminal paper of Goldstine and von Neumann [10]. The exact distributions of the largest and smallest eigenvalues of sample covariance matrices, with iid columns, were computed in [23] and [17], respectively, in terms of infinite series and hypergeometric functions for any finite $N$ and $n$. When $F$ is either a standard real or standard complex Gaussian distribution and $\alpha := n - N = 0$, Edelman [7] determined the scaling limit of the smallest and largest eigenvalues and the condition number as $N \rightarrow \infty$. It can also be shown that the condition number distribution is heavy-tailed for finite $N$ because the density of $\lambda_{\min}$ does not vanish near zero; $\lambda_{\min}$ is exponentially distributed with parameter $N/2$ [7]. As noted by Johnstone [15], a by-product of Johansson's work on last-passage percolation [14] is that the fluctuations of the largest eigenvalue of the LUE with $\alpha$ fixed are given in terms of the Tracy--Widom distribution $F_2(t)$ as $N \rightarrow \infty$. When $\alpha = cN$, $c > 0$, it can be shown that the smallest eigenvalue also has Tracy--Widom fluctuations in the $N \rightarrow \infty$ limit [1] and that the condition number has fluctuations given by the convolution of two independent, but scaled, Tracy--Widom distributions. This result does not appear to be explicitly stated in the literature but it follows from the asymptotic independence of the extreme eigenvalues [2]. See [13] for the case of $n \sim N^3$.
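The ensemble and scaling under study are easy to probe empirically; the sketch below draws a matrix from the LUE at the critical scaling (1.1) and returns the extreme eigenvalues and condition number. The centring and scaling constants needed for a Tracy--Widom comparison are omitted; the function name and parameter values are illustrative only.

    import numpy as np

    def lue_extremes(N, c, rng):
        # A = X X* with X an N x n matrix of iid standard complex normal
        # entries (real and imaginary parts each of variance 1/2), and
        # n = N + floor(sqrt(4 c N)) as in (1.1).
        n = N + int(np.floor(np.sqrt(4.0 * c * N)))
        X = (rng.standard_normal((N, n)) +
             1j * rng.standard_normal((N, n))) / np.sqrt(2.0)
        eigs = np.linalg.eigvalsh(X @ X.conj().T)   # ascending order
        return eigs[0], eigs[-1], eigs[-1] / eigs[0]

    rng = np.random.default_rng(0)
    kappas = [lue_extremes(400, 1.0, rng)[2] for _ in range(200)]
    # A histogram of `kappas`, suitably centred and scaled, should approach
    # the Tracy--Widom (beta = 2) law as N grows.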
In terms of interpolation between the limiting condition number distribution F E (t) at α = 0 and the convolution of two Tracy-Widom distributions when α = cN , we see that the scaling (1.1) is sufficiently strong to force λ min away from zero and give pure Tracy-Widom statistics for the condition number. Assuming independence of the smallest and largest eigenvalues, from Theorems 1.1 and 1.2 we have Here ξ (1) GUE and ξ (2) GUE are iid random variables with P(ξ (1) GUE ≤ t) = F 2 (t). Then using a Neumann series we have a formal expansion If α = cN , the first two terms in this expansion are of the same order and dominate the expansion. Thus, it is clear why the convolution is involved. Also, for α = 0 it is clear that such an expansion cannot be justified. In the case of (1.1), α ν and just the first term is dominant. Lemma 4.6 rigorously justifies the formal expansion in this case. It does not appear that the scaling (1.1) has been treated previously in the literature. Remark 1.1. Our calculations in this paper are for the case α = (4cN ) γ , γ = 1/2. Note that the first term in the right-hand side of (1.3) is still dominant for 0 < γ < 1. For this reason we anticipate a similar Tracy-Widom limit theorem for the condition number for 0 < γ < 1. In addition, we anticipate that the conclusions of the numerical studies that we discuss below when γ = 1/2 will be unchanged for 0 < γ < 1. Our method of proof makes use of the asymptotics of Laguerre polynomials {L (α) n (x)} n≥0 which are orthogonal with respect to the weight x α e −x dx on [0, ∞). These asymptotics are derived via the Deift-Zhou method of nonlinear steepest descent as applied to orthogonal polynomials [4] (see also [3] for an introduction). For fixed α, this problem was addressed by Vanlessen [27] for the generalized weight x α e −Q(x) dx (see also [21] for Q(x) = x). From the classical work of Szegö [24] and [27], the asymptotic expansion of L (α) N (x) as N → ∞ near x = 0 is given in terms of Bessel functions. Forrester [9] noted that this implies the statistics of the smallest eigenvalue are given in terms of a determinant involving the so-called Bessel kernel. As is seen in Theorem 1.1, under the scaling (1.1), the asymptotics of Laguerre polynomials is given in terms of the Airy function, giving rise to the Airy kernel and producing Tracy-Widom statistics. This was noted first by Forrester [9]. This transition from Bessel to Airy can also be seen by considering the weight x α e −x−t/x dx for varying t [28]. The difference here is that this transition is induced via the parameter α that is naturally present in the Laguerre polynomials. A computational problem. Our motivation for considering the scaling (1.1) comes from a computational problem. In numerical analysis, the condition number of a positive-definite N × N matrix A is arguably the most important scalar quantity associated to the matrix. Specifically, it controls the loss in precision that is expected when solving the system Ax = b. The condition number can also be tied directly to the difficultly encountered in solving the system by iterative methods. This is evident, in particular, in the conjugate gradient algorithm used to solve Ax = b [12]. The conjugate gradient algorithm is stated as follows (see [11] for an overview): Given an initial guess x 0 (we use x 0 = b), compute r 0 = b − Ax 0 and set p 0 = r 0 . For k = 1, . . . , N , 1. Compute the residual r k = r k−1 − a k−1 Ap k−1 where 4 a k−1 = r k−1 , r k−1 2 p k−1 , Ap k−1 2 . 
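Steps 2-3 of the listing above appear to have been lost in extraction; the following is a minimal sketch of the full iteration, using the standard iterate and search-direction updates consistent with step 1 (and with x_0 = b, as in the text).

```python
import numpy as np

def conjugate_gradient(A, b, eps=1e-12, max_steps=None):
    """CG for Ax = b with A Hermitian positive definite, starting from x_0 = b."""
    x = b.copy()
    r = b - A @ x
    p = r.copy()
    rr = np.vdot(r, r).real
    steps, max_steps = 0, max_steps or 10 * len(b)
    while np.sqrt(rr) > eps and steps < max_steps:
        Ap = A @ p
        a = rr / np.vdot(p, Ap).real   # a_{k-1} = <r,r>_2 / <p,Ap>_2, as in step 1
        x = x + a * p                  # update the iterate
        r = r - a * Ap                 # update the residual
        rr_new = np.vdot(r, r).real
        p = r + (rr_new / rr) * p      # new search direction
        rr = rr_new
        steps += 1
    return x, steps
```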
If A is strictly positive definite, x k → x = A −1 b as k → ∞. Geometrically, the iterates x k are the best approximations of x over larger and larger affine Krylov subspaces K k , as k ↑ N . In exact arithmetic, the method takes at most N steps; in calculations with finite-precision arithmetic, the number of steps can be much larger than N . The quantity one monitors over the course of the conjugate gradient algorithm is the norm r k 2 . By [16], one has the bound (1.4), from which we see that the larger κ is, the slower the convergence. It is remarkable that the bound (1.4) does not depend explicitly on n, only implicitly through κ. We note that (1.4) was derived in [16] under the assumption of exact arithmetic, but (1.4) is also useful in calculations with finite precision, provided the effect of rounding errors is anticipated to be small. We are interested in the statistical behavior of the conjugate gradient algorithm (1)-(3) when A and b are chosen randomly. Let b have iid entries distributed according to a distribution F̄ which may differ from the matrix-entry distribution F . We use the pair E = (F, F̄) to refer to the ensemble, encoding the distribution of the entries of both A and b in Ax = b. Let ε > 0, E, N > 0 and n > 0 be given. For a pair (A, b) we define the halting time T ε,E,N,n = T ε,E,N,n (A, b) to be the smallest integer such that r k 2 = r k (A, b) 2 ≤ ε. The residuals in the conjugate gradient method decrease monotonically, thus r k (A, b) 2 ≤ ε for k ≥ T ε,E,N,n . In [6], the authors performed a numerical study of T ε,E,N,n as a random variable. For n = N + 2 √ N , Monte Carlo simulations for different ensembles E were used to show that the fluctuations of T ε,E,N,n had a universal limiting form 5 . The motivational goal of this paper is to further our understanding of the universality that is observed in [6] under the scaling (1.1). To this end, we focus on the condition number κ = κ N,n = κ N,n (A) of the matrix A and use the rigorous results above, numerical simulations of τ ε,E,N,n [6] and κ N,n , and the estimate (1.4), to infer properties of the halting time distribution. Numerical results presented in Appendices A.1-A.3 illustrate the sharply different behavior of the condition number and halting time distributions in the cases α = 0, α = 2 √ N and α = N . Universality appears to depend critically on the choice α = 2 √ N : if α = 0 or α = N , the histogram for τ ε,E,N,n does not appear to have a universal form. The numerical experiments also reveal an interplay between the tightness of the condition number distribution and the tightness of the halting time distribution that is consistent with the upper bound (1.5) on T ε,E,N,n , which is obtained by taking a logarithm of (1.4). In order to use this bound to estimate the expectation of T^j ε,E,N,n , j = 1, 2, . . ., one must estimate the corresponding moments of the right-hand side of (1.5). We make some observations: (i) The bound (1.5) does not provide an a priori bound on the higher moments of T ε,E,N,n , N and n fixed, if the distribution P(κ N,n ≤ s) is heavy-tailed. (ii) The right-hand side of (1.5) may diverge as N → ∞ even when all moments are finite for finite N . (iii) When the system is well-conditioned, for example if κ N,n = C + N −a ξ, a > 0, C ≥ 1, for a fixed, positive random variable ξ with exponential tails, all moments of T ε,E,N,n are bounded by an N -independent constant. 5 By universal, we mean that the histogram for τ ε,E,N,n is independent of E for N sufficiently large and ε sufficiently small.
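A hedged sketch of the halting-time experiment described here, using the CG sketch above: the normalization A = XX^T/N (so that the eigenvalues are O(1)) and the tolerance are modeling assumptions on our part, and the classical CG error bound quoted in the last comment is presumably the bound (1.4) cited from [16].

```python
import numpy as np

rng = np.random.default_rng(1)

def halting_time(N, n, eps=1e-12):
    """Smallest k with ||r_k||_2 <= eps for CG on A = X X^T / N, b uniform on [-1,1]^N."""
    X = rng.standard_normal((N, n))
    A = X @ X.T / N
    b = rng.uniform(-1.0, 1.0, N)
    _, steps = conjugate_gradient(A, b, eps=eps)   # sketch above
    return steps, np.linalg.cond(A)

N = 500
n = N + int(np.sqrt(4 * N))                        # critical scaling, c = 1
T, kappa = zip(*(halting_time(N, n) for _ in range(100)))
T = np.array(T)
tau = (T - T.mean()) / T.std()                     # rescaled fluctuations, as in [6]
# The classical bound ||x - x_k||_A <= 2((sqrt(kappa)-1)/(sqrt(kappa)+1))^k ||x - x_0||_A
# suggests roughly 0.5*sqrt(kappa)*log(2/eps) iterations suffice.
```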
In Appendix A.1, we present numerical experiments on both κ N,n and τ ,E,N,n in the ill-conditioned case, n = N , assumingF is uniform on [−1, 1] and F ∼ X c or F is Bernoulli. The numerical results in Figure 11(b) and 11(c) show that τ ,E,N,n does not converge as n, N → ∞ (Note from Figure 11(b), in particular, that the kurtosis for both distributions does not converge.). This case illustrates point (i): from [7] we see that the condition number distribution has infinite expectation and hence the right hand side of (1.5) is infinite. This is consistent with the numerical results, since the empirical mean of the halting time is (much) larger than N . Since the maximum number of steps in the conjugate gradient method is N in exact arithmetic, this also shows that round-off errors have degraded the accuracy of the computation. The well-conditioned case, n = 2N , is considered in Appendix A.2. We assumeF is uniform on [−1, 1] and F ∼ X c or F is Bernoulli. In contrast with the case above, the condition number κ N,n satisfies (iii) in the limit N → ∞ with a = 2/3 implying, in particular, by (1.5) The numerical results in Figures 12(b) and 12(d) indicate that the random variable τ ,E,N,n remains discrete in the large N limit. Indeed, this is necessarily true for any sequence of integer-valued random variables (X N ) N ≥0 with sup N Var[X N ] < ∞: Clearly, in such a case the fluctuations cannot have a limiting distribution with a density. The only universal limit possible here is a statistically trivial point mass. Finally, we turn to the critical scaling, n = 2 √ N in Appendix A.3. It follows from Theorem 1.3 that As in (ii), the estimates on the right hand side of (1.5) diverge as N → ∞. This divergence is consistent with an empirically observed divergence in the mean and variance of T ,E,N,n . Indeed, in contrast with the well-conditioned case, such a divergence is necessary to obtain a non-trivial limiting distribution for τ ,E,N,n . Further, in contrast with the ill-conditioned case, the empirical mean of the halting time, while large, is (much) smaller than N and rounding errors do not appear to play a dominant role. The paper is organized as follows. In Section 2 we review the global eigenvalue density for LUE and discuss its connection to Laguerre polynomials and hence to a Riemann-Hilbert problem. In Section 3, we use classical Riemann-Hilbert analysis to rigorously determine the asymptotics of the Laguerre polynomials that appear in the global eigenvalue density. In Section 4 we use these asymptotics to prove limit theorems for the distribution of the largest and smallest eigenvalues, along with the condition number. This final section contains the proofs of the main results. As the behavior of the conjugate gradient algorithm is universal with respect to the choice of E above, it is sufficient to consider one particular ensemble. For this reason we have decided to study the analytically tractable case E = LUE. We include a table of notation to guide the reader. N N = 2N (2 log 2 + 1), page 12 The branch of the logarithm that is cut on [0, ∞) with a real limit from above, page 10 The branch of the root that is cut on [0, ∞) with a positive limit from above, page 10 The analytic prefactor for S ← (z), page 20 We first recall the definition of the Laguerre kernel. These ideas are well-known and may be found in [9,Section 2]. Let A = XX * where X is an N × (N + α) matrix of iid standard complex Gaussian random variables. 
Then the eigenvalues 0 ≤ λ min = λ 1 ≤ λ 2 ≤ · · · ≤ λ N = λ max of A have the joint probability density The statistics of eigenvalues are more conveniently expressed as determinants involving Laguerre polynomials. Recall that the Laguerre polynomials, {L , are a family of orthogonal polynomials on [0, ∞), orthogonal with respect to the weight e −x x α . We normalize them as follows [20] L (α) We then define the associated wavefunctions, orthogonal with respect to Lebesgue measure on [0, ∞), and the correlation kernel The kernel K N defines a positive, trace-class operator on L 2 ([a, b]). Since K N has finite rank, it is clearly trace class. To see that K N is positive, consider f ∈ C ∞ ((s, t)) with compact support and note that It is by now classical that the statistics of the eigenvalues λ 1 . . . < λ N may be expressed in terms of Fredholm determinants of the kernel K N [3,9] (which are well-defined since K N is trace-class). In particular, the statistics of the extreme eigenvalues are recovered from the determinantal formula By the Christoffel-Darboux formula [24], we may also write Thus, questions about the asymptotic behavior of K N (x, y) as N → ∞ reduce to the study of the large N asymptotics of L (α) What is new in this paper is the study of these asymptotics in the scaling regime for α, see (1.1). The Riemann-Hilbert approach to Laguerre polynomials To compute the asymptotics of the Laguerre polynomials we use their representation in terms of the solution of a Riemann-Hilbert problem and follow [27] for the general theory. We also refer to [21] for some explicit calculations. We note that we use Riemann-Hilbert theory as opposed to using the integral representation for Laguerre polynomials to determine the appropriate asymptotics. This is because in the scaling region of interest the Riemann-Hilbert method gives a direct and algorithmic approach to the difficulties arising from turning point considerations. Define the rescaled polynomials (cf. [21]) Here {π j (x)} are the monic orthogonal polynomials for the weight This scaling is chosen so that the asymptotic density of the zeros ofL N (x) as N → ∞ is supported on the interval [0, 1] (see [21] for α fixed). Following [8] we define where C Γ denotes the Cauchy integral operator and, by (2.1), In the remainder of the manuscript we use the notation 6 Y ± (z) = Y ± (z), z ∈ Γ to denote the boundary values of an analytic function Y (z) from the left (−) or the right (+) side of as one moves along an oriented contour Γ. See Figure 1 for schematic of a contour Γ. We also use the notation By general theory, the matrix Y (z) defined in equation (2.6), is the unique solution to the following Riemann-Hilbert problem: 6 We allow the ± to be in either the sub-or super-script for notational convenience. Since det(Y (z)) ≡ 1, this identity is equivalent to equation (2.7). The goal of the reminder of this section is to solve for Y (z) asymptotically as N → ∞. This requires a series of explicit transformations or deformations: • T → S so that the jump matrices for S tend uniformly to the identity matrix on closed subsets of C \ [0, 1], and • S → E where the jump matrices of E tend to the identity matrix in L 2 ∩ L ∞ . This procedure is now standard and a general reference is [4]. The first deformation: normalization at infinity Before we proceed we introduce additional notation to fix branch cuts. For γ ∈ R let have its branch cut on (−∞, 0] such that (z) γ ← > 0 for z > 0, i.e., the principal branch. 
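As a numerical aside, the wavefunctions and correlation kernel defined above can be evaluated directly with standard special-function routines; the sketch below uses one common orthonormalization for the weight x^α e^{−x}, which may differ from the paper's normalization by constants.

```python
import numpy as np
from scipy.special import eval_genlaguerre, gammaln

def phi(k, alpha, x):
    """Wavefunction sqrt(k!/Gamma(k+alpha+1)) x^(alpha/2) e^(-x/2) L_k^(alpha)(x),
    orthonormal for Lebesgue measure on [0, inf) in this convention."""
    logc = 0.5 * (gammaln(k + 1.0) - gammaln(k + alpha + 1.0))
    return np.exp(logc + 0.5 * alpha * np.log(x) - 0.5 * x) * eval_genlaguerre(k, alpha, x)

def K_N(N, alpha, x, y):
    """Correlation kernel K_N(x, y) = sum_{k=0}^{N-1} phi_k(x) phi_k(y)."""
    return sum(phi(k, alpha, x) * phi(k, alpha, y) for k in range(N))
```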
By contrast, let have its branch cut on [0, ∞) in order that (z) γ →,+ > 0 for z > 0, when the real axis is oriented left-to-right, i.e. the limit from above is positive. Note that for Im z > 0, (z) We similarly define branches of the logarithm. The map z → log ← (z) = log(z), denotes the principal branch of the logarithm, whereas has its branch cut on [0, ∞) so that log →,+ (z) > 0 for z > 1. It is then clear that if Im z > 0, log ← (z) + 2πi, if Im z < 0. In this notation the left arrow ← is used for emphasis and signifies the principal branch. It is sometime omitted when there is no confusion. Also √ · is always used to denote the non-negative square root of a non-negative number. As in [5], we remove the polynomial behavior of Y (z) at infinity using the log transform of the equilibrium measure as follows. In our case α/N → 0 as N → ∞ and the equilibrium measure (EM) is the so-called Marchenko-Pastur distribution [19] dµ(s) = 2 π 1 − s s χ [0,1] (s)ds, (2.8) where χ A denotes the characteristic function of the set A. We define the log transform We also introduce the following functions to simplify the analysis of the asymptotic behavior of g as z → ∞ and its jumps across the interval (0, ∞): (2.14) Proof. Parts (i) and (ii) follow directly from the definition of g(z) and log → (z). To establish (iii), we first show that extends to an entire function of z. First, it is clear that the function is bounded in the finite plane, and analytic for Re z < 0. For Re z > 0 we check the boundary values. For 0 < Re z < 1 we have Now, if z > 1 we use (2.13) and (2.14) and (2.15) holds. This shows that g(z) + φ → (z) is an entire function because 0 and 1 would have to be bounded, isolated singularities. To establish (2.12) we turn to an explicit integration of φ → (z). Consider for 0 < z < 1 Then, we must consider the analytic continuation of this function so that it is analytic in C \ [0, ∞). We find that if θ = arcsin √ z then If we take the (−) sign, we actually need π − θ to consider the correct branch. From this, it follows that First, it can be shown that if ψ → (z) = c ∈ R then z > 1. But for z > 1, ±ψ ± → (z) > 0 so that ψ → must map into either the upper-or lower-half plane. We find that Im ψ → (z) > 0 for z ∈ C \ [0, ∞). From this we find the analytic continuation to the whole complex plane where log ← an be replaced with log → because Im ψ → (z) ≥ 0. Also, for this reason, log ← ψ → (z) is analytic anywhere ψ → (z) is. We also have This proves (2.12). The second deformation: lensing We factor the matrix J T on (0, 1) as follows: This allows a lensing of the problem. Let Γ ↑ and Γ ↓ be contours as in Figure 2 and set if z is outside the region enclosed by Γ ↑ and Γ ↓ , , if z is inside the region enclosed by [0, 1] and Γ ↑ , , if z is inside the region enclosed by [0, 1] and Γ ↓ . We obtain the following Riemann-Hilbert problem for S. Riemann-Hilbert Problem 2.3. The function S(z) satisfies the following properties: Figure 2: The jump contours Γ for S. The region bounded by Γ ↑ and Γ ↓ is called the "lens". This deformation is valid for any choice of contours Γ ↑ and Γ ↓ arranged as in Figure 2. Proof. We now obtain an upper bound on the real part of φ → . For Im z > 0 the function (z) It is also clear that for arg z ∈ (0, π), Im (z) From this, the simple estimate follows for Im z > 0 Assume 0 ≤ Re z ≤ 1, Im z > 0 and consider the real part of the integral The first term is purely imaginary and the only contribution to the real part is from the second term. 
Then This quantity on the right-hand side is bounded uniformly above by a negative constant provided Im z ≥ δ > 0. A similar argument follows for Im z < 0 resulting in the same estimate. From this discussion, it follows that for some positive constants D δ , c δ . For z > 1 we also look to estimate the real part of φ → (z). It follows that which is necessarily a monotonic, strictly increasing function giving the estimate for some new positive constants D δ , c δ . These estimates hold even in the case α → ∞ because e −(2α+2)z+α log z ≤ 1 for z ≥ 1. The lemma follows from a redefinition of c δ , D δ . We require the following lemma in the sequel. Proof. We first claim that sign Im Assume that there exists two points a, b ∈ C + so that It follows that there exists a point z * on the line that connects a and b so that Im z * −1 because g is positive in the upper-half plane, f + (s) = 0 for 0 < s < 1 and f + (s) > 0 for s < 0. Also, for For z ∈ C − , φ → (z) = φ → (z * ) * because the two functions are equal for z < 0. Thus, it remains to show that φ + → (z) vanishes on [0, 1] only at z = 0. From is a strictly monotone function on (0, 1) and the lemma follows. This leads us to consider the solution of the Riemann-Hilbert problem obtained by removing the jumps on Γ ↑ , Γ ↓ and [1, ∞). Riemann-Hilbert Problem 2.4. We seek the function S ∞ (z) with the following properties: We now show that S ∞ exists by an explicit construction. We first start with the determination of a matrix function N which satisfies the following conditions Direct calculation shows that M = U −1 N U satisfies the conditions It is also important to note that for We write the jump condition for S ∞ (z) as We make the ansatz . Then (2.24) turns out to be equivalent to the condition d + (z)d − (z) = eŵ (z) . Formally, by taking the logarithm and letting h(z) = log d(z) To find h(z) we let h(∞) = 0 (d(∞) = 1) and we recall (2.16). Following [21], we claim that is an appropriate choice. First, we have that for z > 1, Finally, lim z→∞ h(z) = 1 2 α(2 log 2+1)+ 1 2 follows from (2.16). Then D(z) = e h(z)σ3 , D ∞ := e (α log 2+ 1 2 (α+1))σ3 and then We note that we can also write, using (2.13), The asymptotics of critically-scaled Laguerre polynomials From the above calculations, one may guess that S(z) ≈ S ∞ (z) in some sense as N → ∞. However, this is not justified because the convergence of the jump matrix of S to the jump matrix of S ∞ is not uniform. Actually, because of the singularity behavior of S ∞ (z) at z = 0, we'll see that S(z) ≈ S ∞ (z). We now develop local parametrices to solve the Riemann-Hilbert problem for S locally near z = 0 and z = 1. This requires the construction of the so-called Airy and Bessel parametrices. Remark 3.1. Unless otherwise noted we use the convention Specifically, O(·) should be treated as a scalar, but it can be a different function in each component. The classical Airy parametrix. First, we divide the complex plane for variable ξ into sectors, see Figure 3. Let ω = e 2πi/3 and Ai(ξ) denote the Airy function. Define From the asymptotic calculations in Appendix B.1, it follows that P Ai solves the following Riemann-Hilbert problem: Riemann-Hilbert Problem 3.1. We will come back to this and use it heavily later. We also rewrite the the asymptotics in a more convenient form: The classical Bessel parametrix. The Airy parametrix has the characteristic that four contours exit from the origin in the ξ plane. This is the case, locally, near z = 1 in our Riemann-Hilbert problem for S. 
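The sectoral structure of the Airy parametrix rests on the classical connection formula Ai(ξ) + ωAi(ωξ) + ω²Ai(ω²ξ) = 0 with ω = e^{2πi/3}, which can be checked numerically; a minimal sketch:

```python
import numpy as np
from scipy.special import airy   # supports complex arguments

Ai = lambda z: airy(z)[0]
omega = np.exp(2j * np.pi / 3)
xi = 0.7 - 0.3j                  # arbitrary test point
residual = Ai(xi) + omega * Ai(omega * xi) + omega**2 * Ai(omega**2 * xi)
print(abs(residual))             # ~ 1e-16: the three rotated solutions sum to zero
```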
Near z = 0 it, the contours look more like those in Figure 4. Define (3.2) Here I α , K α , H (2) α and H (1) α are the modified Bessel and Hankel functions [20]. From the calculations in (B.2) and the jump condition established in [18] we have the following: Figure 4: Dividing the complex plane to define the Bessel parametrix with the contour Σ Bessel = β 1 ∪β 3 ∪β 3 . Riemann-Hilbert Problem 3.2. When α is given by (1.1), the function P Bessel solves: As in the case of the Airy parametrix, we rewrite this in a more convenient form: These asymptotics apply for all ξ with | arg ξ| < π. Furthermore, the asymptotics remain valid up to the boundary, arg ξ = ±π. Mapping the Airy parametrix. As it stands, the function P Ai solves a Riemann-Hilbert problem that resembles the jumps of S near z = 1 but not exactly. We perform a change of variables and pre-multiply by an analytic matrix function to make this match exact. There is an additional constraint. We want the resulting local solution to also match with S ∞ in an appropriate manner. Consider for z > 1 whereĜ(z) has a convergent power series in s − 1 with real coefficients. Define the function which is analytic function in a neighborhood of z = 1. Here (·) 2/3 denotes the principal branch. We then have , in the sense that (f ← (z)) 3/2 = 3 2 φ ← (z) for z close to one. We establish the following facts about f ← : This follows from (3.3). This shows that, in particular, f ← is one-to-one (conformal) near z = 1. This follows becauseĜ(z) is real for z real. First, let δ be sufficiently small so that f ← is one-to-one on B(1, δ ). Let L > 0 be the largest value such Assume that Im f ← (a) > 0 and Im f ← (b) < 0 for a, b ∈ B(1, δ) ∩ C + . Therefore, on the line that connects a to b there must be a value z * such that f (z * ) ∈ (−L, L) and this contradicts that f ← is one-to-one. Then f ← (B(1, δ) ∩ C + ) must be mapped into either the upper-or lower-half planes. we see that Im f ← (1 + i ) > 0 for sufficiently small and the claim follows. Figure 6: The pullback of K δ ∩ Σ Ai to create the contour Γ Ai . Let δ > 0 be sufficiently small so that f ← is one-to-one, analytic and maps C + into C + when restricted to B(1, δ). Let K δ = f ← (B(1, δ). It follows that K δ is an open neighborhood of the origin. Define a contour Figures 5 and 6 for a graphical representation of this procedure and how this affects the precise definition of Γ ↑ and Γ ↓ inside the ball B(1, δ). We prove the following lemma in Appendix B.3. Lemma 3.1. The functions S ← (z) and M Ai (z) have the following properties: has the same jumps as S(z) in a neighborhood of z = 1. Mapping the Bessel parametrix. We perform the same steps for P Bessel so that it solves the Riemann-Hilbert problem for S(z) near z = 0. As before, we would want the resulting local solution to match with S ∞ but we will see, this is impossible and this complication will modify the entire discussion that follows. Consider for z < 0 Now because √ 1 − s has a convergent Taylor series about s = 0, with real coefficients, we find for an analytic function T (z) whose Taylor series has real coefficients. Define It is clear that this function is analytic in a neighborhood of z = 0. We further note that for z ∈ [0, ∞ sufficiently small because f → (z) and −φ → (z) are both positive for z < 0. We establish the following facts concerning f → : This follows directly from the definition of f → . This follows from the fact that the Taylor series of √ 1 − s has real coefficients. 
The same argument used to show that f ← (B(1, δ) ∩ C ± ) ⊂ C ± can be applied to −f → (z) and this claim follows. Consider Note that φ → (z) only vanishes at z = 0 from (2.22). Consider the function In the same way as before, let δ > 0 be sufficiently small so that f → is one-to-one, analytic and maps C + into C − when restricted to B(0, δ). Let L δ = f → (B(0, δ). It follows that L δ is an open neighborhood of the origin. Define a contour Γ Bessel := f −1 ← (L δ ∩ Σ Bessel ). We reverse the orientation of all contours. See Figures 7 and 8 for a graphical representation of this procedure and how this affects the precise definition of Γ ↑ and Γ ↓ inside the ball B(0, δ). We prove the following lemma in Appendix B.4. • M Bessel (z) is analytic in a neighborhood of z = 0. To account for the fact that (3.9) has a factor of e 2c/φ→(z)σ3 we consider an additional Riemann-Hilbert problem. First, we look closer at the jump matrix Note that A ∞ is independent of N . This jump matrix is analytic across (0, 1): Indeed, for 0 < z < 1 Following standard theory, the solution is unique if it exists. We now construct an integral representation for A ∞ (z) assuming it exists. Once we have the representation it is a rather simple matter to check that the formula gives a bonafide solution. First consider, B(z) := A ∞ (z)N (z). Then B(z) is analytic in C \ Γ δ , Γ δ = ∂B(0, δ) ∪ [0, 1] and it has the following properties be the first row of B(z). It follows that f 2 (z) = O(z −1 ) as z → ∞ and we are led to consider g(z) = (z(z − 1)) 1/2 f 2 (z)f 1 (z). Then for some ∈ C to be determined below g + (z) = g − (z), |z| = δ or 0 < z < 1, g(∞) = . Because B(z) should have at most fourth-root singularities at z = 0, 1, we require g(z) to be bounded at z = 0, 1. Thus g(z) is entire, g(z) ≡ and hence We are led to the following Riemann-Hilbert problem for f 1 (z): (3.10) Now, using the principal branch of the logarithm, consider the function r(z) = (log f 1 (z))(z(z − 1)) −1/2 : We choose to enforce the last condition. The function r(z) is the sum of a function that satisfies the first jump condition and a function that satisfies the second condition. We first claim that . This is true as the argument of the logarithm is never negative. Indeed, if But because (z(z − 1)) 1/2 > 0 for z > 1 and (z(z − 1)) 1/2 < 0 for z < 0 the claim follows. It also follows thatr + (z) +r − (z) = log(z(1 − z)) and hence Then r(z) = r 1 /z + O(z −2 ) as z → ∞ where We must have r 1 = 0, so we choose = (c) by Because φ → (z) = φ → (z) for z ∈ [0, ∞), the exponent is real, implying that ∈ iR + . Then Figure 9: The full definition of the contours Γ ↑ and Γ ↓ . The definition inside the shaded regions is given by Γ Ai and Γ Bessel and the remainder is a straight line connecting the two contours. Note that the contours are continuous but not necessarily smooth. It is easy to see that both f 1 and f 2 have at worst fourth-root singularities at z = 0, 1. Using (3.4) (3.14) From here on, we let 0 < δ < 1/2 be sufficiently small so that that f ← and f → are one-to-one on B(1, δ) and B(0, δ), respectively. We establish the following: Riemann-Hilbert Problem 3.4. The function E(z) satisfies the following properties: and continuous up to the contour Σ R . The jump conditions for E(z) are given by where the error terms tend to zero in L 2 ∩ L ∞ as N → ∞. Using (3.14) and (3.13) and Lemmas 3.2 and 3.1 it is clear that the jump matrix for E has the correct behavior on ∂B(0, δ) and ∂B(1, δ). 
On Γ ↑ , Γ ↓ and (0, ∞) the jump matrix for E(z) is given by Because A ∞ (z), A −1 ∞ (z), N (z) and N −1 (z) are all uniformly bounded on the contours being considered we only need to consider D(z)J S (z)D −1 (z). For z ≥ 1 + δ From classical theory [3] (see also [26]) it follows that It then directly follows that E(z) → I, E (z) → 0 uniformly for z bounded away from Σ R . We have thus proved the following theorem concerning the asymptotics of Laguerre polynomials. The extreme eigenvalues and the condition number In this section, we prove Theorem 1.1, Theorem 1.2 and Theorem 1.3. The proof of Theorem 1.1 and Theorem 1.2 relies on the asymptotic results of Section 3, and some basic operator theory. Since the kernel K N is related to Y through equation ( 2.7), we apply Theorem 3.1 to show that after suitable rescaling, the kernels K N converge to the Airy kernel K Ai at both the soft and hard edge. Standard results on operator theory are then applied to establish convergence of the associated Fredholm determinants. The proof of Theorem 1.3 follows from these results, though an additional lemma is necessary to account for the fact that the largest and smallest eigenvalues are not independent. Uniform convergence of K N at the hard edge We first consider the smallest eigenvalue. Let δ * be as in Theorem 3.1, fix δ ∈ (0, δ * ) and consider x in the interval (0, δ]. We use equations (3.6)-(3.7) relating S to the Bessel kernel and Theorem 3.1(a) to obtain It follows from the definition of M Bessel (z) thatM Bessel (z) := D ∞ M Bessel (z)M − 1 2 σ3 is an analytic function in B(0, δ) that is independent of N (also of M and α). It is convenient to introduce the following functions: A straightforward calculation then yields Thus, we may define the analytic function B(y) := E(y)M Bessel (y) and rewrite equation ( 2.7) in the form The study of the asymptotics of K N has now been reduced to the asymptotics of V and W . We further simplify V and W using the definition and properties of the Bessel parametrix. Comparing (3.2) and ( 4.1), we see that we must consider the Bessel parametrix P Bessel (ξ) with argument ξ = M 2 f → (x) with x ∈ (0, 1). The relevant regime here for the Bessel parametrix is (III). Indeed, for x ∈ (0, 1), φ + → (x) is purely imaginary with a positive imaginary part so that f → (x) < 0. Furthermore, because f → maps the upper-plane to the lower-half plane (at least locally) ξ 1/2 = − 1 2 M φ + → (x) (negative imaginary part) and (−ξ) 1/2 = − i 2 M φ + → (x) (positive real part). Recall also that P −1 Bessel is easily computed since P Bessel has determinant one. From (3.2)(III), we obtain ) . (4.3) In the second line we have used the Bessel function identity J α (x) = 1 2 H (1) α (x) + 1 2 H (2) α (x). A similar calculation for W yields The above formulas hold in the region x ∈ (0, 1). It is convenient to extend the range of definition of V and W by adopting the convention V (x) = W (x) = 0 for x ≤ 0. We now apply classical asymptotic formulas for the Bessel function J α to obtain quantitative decay estimates on V and W . These will imply convergence results for K N . More precisely, define the rescaled variables and the rescaled kernel ,K N (x, y)dy := K N (x,ŷ)dŷ. We then have the following convergence result. Proposition 4.1. As N → ∞ the rescaled kernels converge pointwise, and the convergence is uniform for (x, y) in any compact subset of (−∞, L] 2 for any L ∈ R. If x = y then the limit is determined by continuity. 
Further, there exists a positive, piecewsie-continuous function G : (−∞, L) 2 → (0, ∞), such that The technical lemmas that underly this result are stated below, but proved in Appendix C. otherwise. Assume x ∈ (−∞, L], L ∈ R. Then there exists a constant C L > 0 and A > 0 such that if α > A then , , where the error terms are uniform in x. Uniform convergence of K N at the soft edge We write the expansion for 1 < z < 1 + δ From (2.12) and (2.13) we have We simplify these expressions using and noting that B(z) := E(z)A ∞ (z)D ∞ M Ai (z)M − 1 6 σ3 has no N dependence, is analytic on (1 − δ, 1 + δ) and has a constant determinant. In view of (2.7) we define It follows from [20, (9.2.8)] that det P Ai (M 2/3 f ← (z)) = (2πi) −1 so that By analytic continuation, the same formula holds for 1 − δ < z < 1. And then for appropriate values of x and y. Next, for z ≥ 1 + δ we examine Define the N -independent functionB(z) = E(z)A ∞ (z)N (z) and the two functions And then for appropriate values of x and y. Consider the scaling operatoř uniformly for x in a compact set. DefineǨ N (x, y) through the equalityǨ N (x, y)dy = K N (x,y)dy. The following follows directly from (4.9). where the error terms are uniform in x. Proofs of the main theorems Our main tools for our proofs are from [22]: Here J 1 is the set of trace class operators with norm From [3] it is known that det(I − K N | L 2 ((t,s)) ), gives the probability that are is no eigenvalues in the interval (t, s). Thus taking into account the initial scaling by ν P(λ min /ν ≥ t) = det(I − K N | L 2 ((0,t)) ), P(λ max /ν ≤ t) = det(I − K N | L 2 ((t,∞)) ). The same statements hold forǨ N (x, y) andK N (x, y). From Theorems 4.2 and 4.1 we have that Also, ν = 4N + o(α −1 ) = α 2 /c so that this result can be simplified. We use the following lemma with Proof. First, consider Because is arbitrary, the lemma follows. To see that our choice of c N ,c N , d N andd N fits the hypotheses of this lemma, note that and then The theorem follows. Following the arguments in the proof of Theorem 1.1 we have sufficient conditions for Written another way, using that ν = 4M , This proves Theorem 1.2. Before we prove Theorem 1.3 we prove a critical lemma from first principles. Lemma 4.6. Assume two sequences of random variables (X n ) n≥0 , (Y n ) n≥0 and two sequences of real numbers (a n ) n≥0 , (b n ) n≥0 satisfy the following properties: • Y n > 0 a.s., a n , b n > 0, • Y n = a n + b nŶn so thatŶ n → ξ in distribution, and • a n /b n → ∞ and an bn |X n − 1| → 0 in probability. Then Proof. We then claim that for each t ∈ R (where t is a point of continuity for F (t)) where F (t) = P(ξ ≤ t). To see this, compute P X n Y n ≤ 1 a n + b n t = P X n a n + b nŶn ≤ 1 a n + b n t = P a n b n (X n − 1) + X n t ≤Ŷ n For 1 > > 0, consider P a n b n (X n − 1) + X n t ≤Ŷ n = P a n b n (X n − 1) + X n t ≤Ŷ n , a n b n |X n − 1| ≥ + P a n b n (X n − 1) + X n t ≤Ŷ n , a n b n |X n − 1| < . It is clear, by the fourth assumption that the first term here vanishes as n → ∞ so we concentrate on the latter. It is certainly true that for t ≥ 0 and sufficiently large n lim sup n→∞ P a n b n (X n − 1) + X n t ≤Ŷ n , a n b n |X n − 1| < ≤ lim sup Then we use the fact that P(A ∩ B) = P(A) + P(B) − P(A ∪ B) to find A n = a n b n (X n − 1) + X n t ≤Ŷ n , B n = a n b n |X n − 1| < , P a n b n (X n − 1) + X n t ≤Ŷ n , a n b n |X n − 1| < = P(A n ∩ B n ) = P a n b n (X n − 1) + X n t ≤Ŷ n + P(B n ) − P(A n ∪ B n ). 
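The Fredholm determinants appearing in these formulas are convenient to evaluate numerically. Below is a minimal quadrature sketch of F₂(s) = det(I − K_Ai|_{L²((s,∞))}) in the style of Bornemann's method, truncating the interval at s + L; the truncation length and node count are illustrative choices.

```python
import numpy as np
from scipy.special import airy

def tracy_widom_F2(s, m=60, L=12.0):
    """F_2(s) = det(I - K_Ai restricted to L^2((s, inf))), with the Airy kernel
    discretized by Gauss-Legendre quadrature on [s, s+L]; the tail beyond s+L
    is negligible for moderate L."""
    nodes, weights = np.polynomial.legendre.leggauss(m)
    x = s + 0.5 * L * (nodes + 1.0)              # map [-1, 1] -> [s, s+L]
    w = 0.5 * L * weights
    ai, aip, _, _ = airy(x)
    num = np.outer(ai, aip) - np.outer(aip, ai)  # Ai(x)Ai'(y) - Ai'(x)Ai(y)
    den = np.subtract.outer(x, x) + np.eye(m)    # x - y, diagonal patched below
    K = num / den
    np.fill_diagonal(K, aip**2 - x * ai**2)      # diagonal limit of the kernel
    sw = np.sqrt(w)
    return float(np.linalg.det(np.eye(m) - sw[:, None] * K * sw[None, :]))

print(tracy_widom_F2(0.0))  # approximately 0.97
```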
It is also clear by the third assumption that lim n→∞ P(B n ) = lim n→∞ P(A n ∪ B n ) = 1. We use the estimate P a n b n (X n − 1) + X n t ≤Ŷ n ≥ P( + (1 + )t ≤ Y n ). We have shown that Letting ↓ 0 demonstrates the claim. For t < 0, this argument can be adapted by replacing (1 ± ) with (1 ∓ ). We now modify things and consider for t being a point of continuity of F (t) P X n Y n ≤ 1 a n 1 − b n a n t . We note that (for fixed t) 1 a n 1 − b n a n t = 1 a n + b n t and consider a n b n (X n − 1) = a n b n (X n − 1) + a n b n E n (t)X n . It then follows that an bn |X n − 1| prob → 0 and from this we may apply (4.11) with X n replaced byX n to state that for any s where s is a point of continuity of F (s), If we set s = t we find This proves the lemma. Proof of Theorem 1.3: The proof of this theorem relies critically on on Lemma 4.6. We note that For any L > 0 there exists N * > 0 such that for N > N * lim sup because ν/α → ∞. A similar estimate follows for Then because L is arbitrary we have This proves the theorem. A Motivating numerical calculations In this appendix, we discuss the simulations of the halting time T ,E,N,n that motivated the study of the critically-scaled where T ,E,N,n and σ ,E,N,n represent the sample average and sample standard deviation, respectively, taken over the M 1 samples. We plot the histogram of τ ,E,N,n in Figure 10 for three choices of F . This computation indicates universality for τ ,E,N,n . Notation. When F is a Bernoulli random variable, taking values ±1 with equal probability, we call the resulting ensemble the positive definite Bernoulli ensemble (PBE). We also refer to the pair E = (F,F ) as PBE (or LUE of F ∼ X c ). HereF is understood to be uniform on [−1, 1]. A.1 Ill-conditioned random matrices In this section we consider the distribution of T ,E,N,N , i.e. n = N in the case of LUE and PBE. The limiting distribution for the condition number is given in (1.2) for LUE. We plot a simulated histogram for the condition number when N = 100 and when N = 196 in Figure 11(a) again for LUE. In Figure 11(b) we plot the corresponding simulated halting time distribution once again for LUE. The computed moments are shown in Figure 11(c) for LUE and PBE and indicate that the fluctuations are not universal (compare with Figure 13(c) below). A.2 Well-conditioned random matrices Here we consider the distribution of T ,E,N,2N , i.e. n = 2N in the case of LUE and PBE. It is known that the distribution of the condition number has a limit with finite mean. This is demonstrated in Figure 12(a) for N = 100 and N = 196 for LUE. A simulated histogram for T ,E,N,2N is shown in Figure 12(b) for LUE. From this plot, it is apparent that the discrete nature of the distribution will persist as N → ∞ in agreement with the discussion in the introduction. 4cN (c = 1) in the case of LUE and PBE. In Figure 13(a) we examine the condition number which we know by Theorem 1.3 has Tracy-Widom fluctuations for LUE. In Figure 13(b) we examine the distribution for T ,E,N,N + √ 4N for LUE. The calculations show a limiting form for the halting time (see also Figure 10). In this case, the moments of the condition number are unbounded as N → ∞ but the limiting distribution is not heavy-tailed and a non-trivial limit exists. Numerical experiments also indicate that this phenomenon persists for the scalings n = N + (4cN ) γ if 0 < γ < 1. Universality is also apparent from Figure 13 B The Parametrices In this appendix we present the asymptotic calculations for the Airy and Bessel parametrices. 
B.1 The Airy parametrix We use the connection formulas (ω = e 2πi/3 ) and refer to Figure 3 for the sectors I, II, III and IV: Here Ai(ξ) is the Airy function. We also calculate the large-ξ asymptotics of the matrix functions in (3.1) using uniformly for | arg ξ| ≤ π − for any > 0. B.3 Proof of Lemma 3.1 We consider and seek a function M Ai (z) so that matches with the outer solution S ∞ . We make the ansatz Using M = N + 1 2 (α + 1), (2.17),(2.18) and (2.26) we find uniformly for |z − 1| = δ This calculation depends critically on (2.26). We now show that M Ai (z) is analytic in a neighborhood of z = 1. First note that Then by direct calculation From this we find that Then, it remains to show that the ratio f←(z) z(z−1) is analytic and does not vanish in a neighborhood of z = 1. But this follows directly from the expression for f ← (z) in terms of a power series. Finally, we check the jumps of S ← (z). In the following calculations we leave of ± signs for boundary values for functions that are analytic in a neighborhood of the point under consideration. Recall that the contours Γ ↑ and Γ ↓ in a neighborhood of z = 1 are defined in Figure 6. For z ∈ Γ ↑ , f ← (z) ∈ γ 2 and therefore The same calculation follows for z ∈ Γ ↓ . For z ∈ (1, 1 + δ) we have f ← (z) > 0 so We note that both S ← and S −1 ← are analytic in B(1, δ) \ Γ Ai and are continuous up to the contour. This follows from the fact that S −1 ← has unit determinant. To see this, note that P Ai has constant determinant in each sector of C \ Σ Ai by Liouville's formula and the fact that this constant is the same in each follows from the fact the jump matrices have unit determinant. Thus S ← (z) has a determinant that is independent of both M and z and the determinant is found to be unity by examining its asymptotics. B.4 Proof of Lemma 3.2 We consider We first check the asymptotic behavior for |z| bounded away from zero. This follows from (2.25). Note the extra factor of e 2c/φ→(z)σ3 when comparing this with (B.4). We now check that M Bessel (z) is analytic. So, we must consider From this, the question of analyticity of M Bessel (z) is reduced to the question of analyticity of the function . This is clearly analytic in a neighborhood of z = 0 because f → (0) = 0 but f → (0) < 0. Now, we check the jumps. Recall that Γ ↑ and Γ ↓ in a neighborhood of z = 0 are defined in Figure 8. For z ∈ Γ ↑ , f → (z) ∈ β 3 (see Figure 4). The limit to Γ ↑ from above (+ side) is the same as limit into β 3 from below (− side) so that Then we find that e −(α+1)πi+2N φ→(z)+w(z) = e 2N φ→(z)+ŵ(z) and the jump agrees with the corresponding jump for S(z). For z ∈ Γ ↓ , f → (z) ∈ β 1 and Then because log → z = log ← z + 2πi we have e (α+1)πi+2N φ→(z)+w(z) = e 2N φ→(z)+ŵ(z) and again, this is the same as the corresponding jump for S(z). Finally, for z ∈ (0, δ) we have f → (z) ∈ β 2 and 0 . Define t as a function of z by t(z) = − M α iφ + → (z) so that J α (αt) = J α (−iM φ + → (z)) to match (4.3) and (4.4). From the estimates in (C.2) it follows that t(z) ≤ 4 M α √ z so that if t(z * ) = 1 then α 4M 2 ≤ z * and ζ(t(z)) > 0 for z ≤ z * , as t(z) is an increasing function of z. Furthermore, we know that ζ ≥ 2 1/3 (1 − t) for 0 < t ≤ 1 so that From the last line of (C.2) we have As we will see (C.4) is useful near z = 0 and (C.3) is useful for slightly larger values of z. Using (C.1) we have for a constant C 1 and any 0 ≤ and C 2 is determined below in (C. 10). 
It also follows that zJ α (z) + J α (z) = z −1 (α 2 − z 2 ) J α (z) and therefore Then we have for 0 < z ≤ δ This follows because 1 − (1 + y) 1/2 ≥ −y/2 for y ≤ 0. Then 1 − 4M c/α 2 = O(N −1/2 ) and the second term in brackets vanishes as N → ∞. Thus, for sufficiently large N Next, for −1 ≤ x ≤ L consider the inequality The right-hand side is a decreasing function of x so we consider Given L, we choose ζ 1 so that this condition holds and the estimate for −ζ 1 α −1 ≤ ζ ≤ 0 in (C.5) can be used for −1 ≤ x ≤ L. Then We note that by (C.8) this choice of C 2 satisfies C 2 ≤ α 4M 2 . We estimate the derivative using (C.7) From (C.6) we havê We first consider the second term in the expression. We note that g(x)g(y) ≤ D L g(x)g(y). For the first term, we assume |x − y| ≥ 1 and we have 1 2πi V (x)W (ŷ) x − y ≤ C 2 L g(x)g(y). To determine the uniform limit ofǨ N we use (D. This proves the proposition.
Forecast and analysis of aircraft passenger satisfaction based on RF-RFE-LR model Airplanes have always been one of the first choices for people to travel because of their convenience and safety. However, due to the outbreak of the new coronavirus epidemic in 2020, the civil aviation industry of various countries in the world has encountered severe challenges. Predicting aircraft passenger satisfaction and excavating the main influencing factors can help airlines improve their services and gain advantages in difficult situations and competition. This paper proposes a RF-RFE-Logistic feature selection model to extract the influencing factors of passenger satisfaction. First, preliminary feature selection is performed using recursive feature elimination based on random forest (RF-RFE). Second, based on different classification models, KNN, logistic regression, random forest, Gaussian Naive Bayes, and BP neural network, the classification performance of the models before and after feature selection is compared, and the prediction model with the best classification performance is selected. Finally, based on the RF-RFE feature selection, combined with the logistic model, the factors affecting customer satisfaction are further extracted. The experimental results show that the RF-RFE model selects a feature subset containing 17 variables. In the classification prediction model, the random forest after RF-RFE feature selection shows the best classification performance. Finally, combined with the four important variables extracted by RF-RFE and logistic regression, further discussion is carried out, and suggestions are given for airlines to improve passenger satisfaction. and the theory of customer satisfaction has also made continuous development. High and stable customer satisfaction is considered an important determinant of an organization's long-term profitability. Yeung 5 found a positive relationship between customer satisfaction and a range of financial performance indicators by using the American Consumer Satisfaction Index. Research has also shown a significant moderate-to-strong association between satisfaction and a company's financial and market performance. More specifically, customer satisfaction is strongly linked to retention, revenue, earnings per share, and stock price 6 . For the aviation industry, some studies have used aviation services to build an index model for passenger satisfaction. Based on the combination of China's customer satisfaction index model and the actual situation of China Southern Airlines' satisfaction management, Zhang 7 designed the China Southern Airlines customer satisfaction evaluation index and proposed nine secondary indicators with air transport characteristics: flight operation quality satisfaction degree, ticketing service satisfaction, ground service satisfaction, air service satisfaction, arrival station service satisfaction, irregular flight service satisfaction, consumption value perception, overall satisfaction, and customer loyalty. There are also studies using flight data or text reviews to predict passenger satisfaction. Sankaranarayanan et al. 8 used a logistic model tree (LMT) machine learning approach to predict passenger satisfaction levels based on factors such as airport punctuality, number of flights, punctuality rankings, average delays, and queue times for inferring passenger perceptions of punctuality and delay-related event satisfaction. Kumar et al. 
9 predicted passengers' positive or negative attitudes by textually evaluating passengers' flight reviews. Many studies have discussed the impact of passenger satisfaction on modern businesses, and how passenger satisfaction affects business performance and development in the airline industry, so it is crucial to obtain timely and accurate information on passenger satisfaction with airlines. Service quality and passenger satisfaction. To achieve high levels of customer satisfaction, service providers should provide high levels of service quality, as service quality is often considered a prerequisite for customer satisfaction 10. Since passengers are the direct recipients of services, service quality indirectly affects enterprise development by affecting passenger satisfaction. Therefore, airlines can understand the quality of the services provided by passengers' satisfaction with each service, check the services, and then improve the service quality. Park et al. 11 show that airlines that provide services that meet customer expectations enjoy higher levels of passenger satisfaction and value perception. Jiang et al. 12 took China Eastern Airlines (CEA) as a case study to investigate the domestic passengers of China Eastern Airlines at Wuhan Tianhe International Airport, China. Hu et al. 13 found that poor service quality can lead to customer dissatisfaction by using the Kano model to design a quality risk assessment model. Chow et al. 14 studied the relationship between customer satisfaction measured by customer complaints and service quality using fixed-effect Tobit analysis, discussed the seasonality of customer satisfaction, and compared customer satisfaction between state-controlled and nonstate-controlled operators. In many studies, it is proposed to combine passenger satisfaction information to construct a service quality evaluation framework 15 . Therefore, the influence mechanism between service quality and satisfaction should be that service quality affects satisfaction, and satisfaction is a direct and effective indicator of service quality. Therefore, this study combines service quality with passenger satisfaction, predicts overall satisfaction with passengers' ratings of service quality and uses satisfaction information to analyze airline service quality. Important service attributes. For the aviation industry, it is more efficient to improve customer satisfaction by accurately understanding the main factors affecting passenger satisfaction and making improvements based on service priorities. Several studies have investigated the main factors influencing passenger satisfaction. By constructing a nested logit model of airport-airline choice in the "two-step" decision-making process of air passengers, Suzuki 16 determined that the factors that play an important role in airline choice are ticket price, frequency of flight service provided to desired destinations and frequent flyer membership. Tsafarakis et al. 17 proposed that the improvement of onboard entertainment onboard Wi-Fi services can improve airline passenger satisfaction according to the multi-standard satisfaction analysis method. Hess 18 looked at these factors separately for several market segments and concluded that visit times, flight times and airfares are important for both business and holidaymakers. With the development of text mining technology, some researchers have mined customer comment text on web pages to analyze passenger satisfaction and the main influencing factors. Lucini et al. 
19 used a text mining method to analyze online customer comments and predict passengers' attitudes, concluding that different cabin classes call for different priorities: customer service for first-class passengers, comfort for premium-economy passengers, and checked baggage and waiting times for economy passengers. Cabin staff, in-flight service and cost performance were found to be the three most important dimensions in predicting whether passengers recommend an airline. According to an online review study by Brochado 20 , which used Leximancer to perform quantitative content analysis of airline passenger web reviews, on-board service, airport operation, ground service and other factors have a significant impact on service quality assessment. Many previous studies evaluated passenger satisfaction by statistical tests, by building a satisfaction index system 7 or by text analysis 20 . This study instead uses machine learning to determine the main factors affecting satisfaction based on airline passenger satisfaction survey data and prioritizes the services that airlines need to pay attention to, providing strategic support for companies committed to improving passenger satisfaction. Feature selection based on RF-RFE. Feature selection refers to removing redundant or irrelevant features from a set of features. Whether the samples contain irrelevant or redundant information directly affects the performance of the classifier, so choosing an effective feature selection method is very important. Guyon 21 reviewed the existing variable and feature selection methods: filter, embedded and wrapper methods. The filter method ranks the features in a preprocessing step and is independent of the learning algorithm, which means that the selected features can be transferred to any modeling algorithm. Filters can be further classified according to the filtering measures used, i.e., information, distance, dependence, consistency, similarity and statistical measures. The embedded method performs feature selection as part of the modeling algorithm itself; the selected features therefore depend on the learning algorithm. Two typical examples are the lasso and various decision-tree-based algorithms. The wrapper method uses an independent algorithm to train a prediction model on candidate feature subsets and uses greedy strategies, such as forward or backward selection, to identify the optimal feature subset among all possible subsets during learning. Recursive feature elimination (RFE) is a sequential backward selection algorithm belonging to the wrapper class. This method uses a specific underlying algorithm to recursively shrink the feature set and thereby select the required features. Guyon 21 proposed RFE based on the SVM model and showed that SVM-RFE achieves very good results in gene selection, where it has become a widely used method 22 . Marcelo et al. combined RFE with logistic regression (LR) and a support vector machine (SVM) to select features from tobacco spectral information, which improved the prediction accuracy of the model 23 . Wei et al. 24 proposed the SVM-RFE-SF method, which divides the original gene set into several gene subsets and lets RFE remove the subset with the smallest ranking-criterion score, effectively alleviating the heavy computation caused by eliminating only one gene at a time.
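For reference, wrapper-style RFE of the kind described above is available in standard libraries; a minimal scikit-learn sketch with logistic regression as the underlying estimator (as in the tobacco-spectra study), where the feature matrix X, labels y and subset size are assumptions for illustration:

```python
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Rank features by recursively refitting LR and dropping the weakest coefficient.
rfe = RFE(LogisticRegression(max_iter=1000), n_features_to_select=10, step=1)
model = make_pipeline(StandardScaler(), rfe)
model.fit(X, y)              # X, y: assumed feature matrix and labels
print(rfe.support_)          # boolean mask of the retained features
print(rfe.ranking_)          # 1 = selected; larger = eliminated earlier
```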
The feature selection method using random forests in most studies is the wrapper 25 . In research on feature selection based on RF-RFE, Wu 26 combined RF with RFE and used the importance ranking output of the RF algorithm to select variables. Chen 27 tested two completely different molecular biology data sets and found that RF-RFE feature selection improves the quality of the model and makes model construction more efficient than a model without any feature selection. Shang et al. 28 used the RF-RFE algorithm to select important variables from the initial variables for evaluating traffic event detection and, taking the important variables as input, showed that the model has better performance. This study uses machine learning models to predict the overall satisfaction of passengers with the full service provided by the airline. Combined with feature selection, the prediction models before and after feature extraction are compared, and the best prediction model is selected. We aim to select the important factors that affect passenger satisfaction. The preliminary feature selection is combined with the logistic model for further selection, and the model results are used to prioritize the influencing factors. Methodology In this section, we present a detailed introduction to the preliminary feature selection method RF-RFE (random forest-based recursive feature elimination) and the various classification models used for passenger satisfaction in this study. Recursive feature elimination based on random forest. RF. The RF proposed by Breiman 29 is a parallel ensemble algorithm based on decision trees. Because of its relatively good precision, robustness and ease of use, it has become one of the most popular machine learning methods. A decision tree can change completely under small perturbations of the data, so it is not sufficiently stable. RF reduces the variance brought by a single decision tree, improves the prediction performance of the plain decision tree, and can give an importance measure for the variables, which brings substantial improvement to the decision tree model. RF uses decision trees as base learners to construct a bagging ensemble. Bagging is a parallel ensemble learning algorithm based on bootstrap sampling. Each sampled set is used to train a base learner, and these base learners are then combined. When combining the prediction outputs, simple voting is usually used for classification tasks. Let the training set be D = {(x 1 , y 1 ), (x 2 , y 2 ), . . . , (x n , y n )}; the prediction for a new sample x is then Eq. (1): H(x) = arg max y∈Y Σ t I(h t (x) = y), where Y is the set of output categories and h t (x) is the prediction of the t-th base learner for the new sample x. RF introduces random attribute selection on top of bagging. Unlike a single decision tree, which selects one optimal attribute when splitting, RF adopts random selection for the attribute set at each node of the decision tree: it first randomly selects an attribute subset from all attributes and then selects an optimal attribute from that subset. Therefore, on top of the sample perturbation brought by bagging, RF further introduces attribute perturbation, which increases the generalization performance of the ensemble. The algorithm description of RF is shown in Table 1.
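A small sketch of the bagging-plus-voting prediction of Eq. (1), reproducing the vote explicitly from the trees of a fitted scikit-learn forest; note that scikit-learn itself averages class probabilities rather than counting hard votes, so the two predictions can differ marginally. The synthetic data set is an assumption for illustration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Eq. (1): each tree h_t casts a vote; the class with the most votes wins.
votes = np.stack([tree.predict(X) for tree in rf.estimators_])
H = (votes.mean(axis=0) > 0.5).astype(int)   # majority vote over t = 1..b
```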
Importance of RF characteristics. The importance measures based on RF include the mean decrease impurity (MDI), based on the Gini index, and the mean decrease accuracy (MDA), based on OOB data 30 . The former reflects the importance of features through the contribution of attributes across the decision trees of the RF. This paper chooses the MDI method based on the Gini index to measure feature importance. When constructing a CART decision tree, RF calculates the Gini gain of all attributes at a node and takes the attribute with the largest Gini gain as the splitting attribute. The Gini index represents the probability that a randomly selected sample from the sample set is misclassified; letting p k be the proportion of class-k samples, it is calculated as Eq. (2): Gini(D) = 1 − Σ k p k ², and the Gini index of attribute a is Eq. (3): Gini(D, a) = Σ v=1..V (|D v |/|D|) Gini(D v ), where V is the number of possible values of attribute a and |D v | is the number of samples taking value v on attribute a. Based on this, feature importance is calculated in the following steps: (1) For each decision tree, let A be the set of nodes at which feature α appears; the change in the Gini index before and after the branch at node i is calculated as Eq. (4): ΔGini i = Gini(i) − Gini(l) − Gini(k), where Gini(l) and Gini(k) are the Gini indices of the new nodes after branching. (2) The importance of feature α in the tree is given by Eq. (5), the sum of ΔGini i over the nodes in A at which feature α appears. (3) Suppose n is the number of decision trees; the importance of feature α is the sum over all trees, Eq. (6), and the importances of all features are then normalized in Eq. (7), where c is the number of features. (4) The larger the value of IM(α), the more important the feature is to the prediction of the result. Recursive feature elimination based on RF. RF-RFE uses RF as the underlying learning algorithm for feature selection: it computes the importance of the features in each round's feature subset and removes the features with the lowest importance, recursively shrinking the feature set, with the feature importances recomputed in each round of model training. Over the resulting sequence of feature subsets, this study uses cross-validation to determine the feature set with the highest average score based on classification accuracy. The algorithm flow chart is shown in Fig. 1, and a code sketch is given at the end of this subsection. The RF-RFE flow is as follows: (1) Bootstrap sampling is carried out from the training set T containing all samples to obtain a training sample set T* with a sample size of n. A decision tree is built from T*, and repeating this process generates b decision trees; (2) The prediction results of the decision trees are combined by voting, and the RF model is evaluated by classification accuracy under fivefold cross-validation; (3) The importance IM(α) of each feature α in the feature set is calculated from MDI, and the features are sorted; (4) Following sequential backward selection, the feature with the lowest importance is deleted, and steps 1-3 are repeated for the remaining feature subset until it is empty. From the cross-validation results of each feature subset, the feature subset with the highest classification accuracy is determined. Satisfaction prediction based on machine learning algorithms. According to whether the processed data are manually labeled, machine learning can generally be divided into supervised learning and unsupervised learning. Supervised learning data sets include initial training data and manually labeled objects.
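As referenced above, the RF-RFE flow (steps (1)-(4)) maps directly onto scikit-learn's RFECV, which drives recursive elimination with the RF's MDI importances and scores each subset by cross-validated accuracy; the final logistic step of the paper's pipeline is sketched as well. X and y denote the survey feature matrix and satisfaction labels and are assumptions here.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFECV
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold
from sklearn.preprocessing import StandardScaler

# Steps (1)-(4): drop one feature per round by MDI importance, score each
# subset by fivefold cross-validated accuracy, keep the best-scoring subset.
selector = RFECV(RandomForestClassifier(n_estimators=200, random_state=0),
                 step=1, cv=StratifiedKFold(5), scoring="accuracy")
selector.fit(X, y)
print(selector.n_features_)          # size of the best-scoring subset

# Final step of the pipeline: logistic regression on the selected features;
# with standardized inputs, |coefficient| ranks the influencing factors.
Xs = StandardScaler().fit_transform(X[:, selector.support_])
lr = LogisticRegression(max_iter=1000).fit(Xs, y)
order = np.argsort(-np.abs(lr.coef_.ravel()))
```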
Satisfaction prediction based on machine learning algorithms. According to whether the processed data are labeled manually, machine learning can generally be divided into supervised learning and unsupervised learning. Supervised learning data sets include initial training data and manually labeled targets. The machine learning algorithm learns from the labeled training data set, tries to find the pattern that separates the objects, and takes the labeled data as the final learning goal. The learning effect is generally good, but the acquisition cost of labeled data is high. Unsupervised learning processes unclassified and unlabeled sample data without prior training, hoping to find internal regularities in the data through learning and thereby obtain the structural characteristics of the sample data, but the learning efficiency is often low. The satisfaction status in this study is the data set label. In the training process, the supervised machine learning algorithms learn the correspondence between features and labels and apply this relationship to the test set for prediction.

K-nearest neighbors (KNN). KNN is a supervised learning algorithm. Because its training time overhead is zero, it is also a representative of "lazy learning" 31 . K-nearest neighbors has long been used as a nonparametric technique in statistical estimation and pattern recognition. The working principle is as follows: for a given new sample, find the K samples closest to it in the training set under some distance measure, and assign the new sample to the category most common among those K samples. The samples are not processed in the training stage, which is why KNN belongs to "lazy learning". As shown in Fig. 2, if there are 3 squares, 2 circles and 1 triangle around a data point, the data point is considered likely to be a square. The parameter K in KNN is the number of nearest neighbors used in the majority vote.

LR. LR is used to evaluate the relationship between a dependent variable and one or more independent variables, and the classification probability is obtained using the logistic function 32 . It is a learning algorithm with the logistic function at its core: the logistic function compresses the output of a linear equation into (0, 1). The logistic function is defined as Eq. (8):

σ(z) = 1 / (1 + e^{−z})    (8)

Consider the binary classification problem with data set D = {(x_1, y_1), (x_2, y_2), ..., (x_N, y_N)}, x_i ∈ R^n, y_i ∈ {0, 1}, i = 1, 2, ..., N. Let P be the probability that a sample is a positive example; LR estimates the coefficients β_0, β_1, ..., β_k by the maximum likelihood method [Eqs. (9) and (10)]:

P = 1 / (1 + e^{−(β_0 + β_1 x_1 + ... + β_k x_k)})    (9)

ln(P / (1 − P)) = β_0 + β_1 x_1 + ... + β_k x_k    (10)

When P is greater than a preset threshold, the sample is classified as a positive example, and vice versa. P/(1 − P) is called the odds, the ratio of the probability that an event occurs to the probability that it does not occur. By Eq. (10), the log-odds is a linear function of the variables' coefficients. When the features have been standardized, the greater the absolute value of a coefficient, the more important the corresponding feature. If the coefficient is positive, the feature is positively correlated with the probability that the target value is 1; if the coefficient is negative, the feature is positively correlated with the probability that the target value is 0.
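The coefficient-based ranking described above can be reproduced with a short scikit-learn sketch; the standardization step before fitting matches the requirement in the text, while the data itself is a placeholder.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=1000, n_features=5, random_state=0)

# Standardize so coefficient magnitudes are comparable across features.
Xs = StandardScaler().fit_transform(X)
lr = LogisticRegression().fit(Xs, y)

# |coefficient| ranks feature importance; the sign gives the direction.
order = np.argsort(-np.abs(lr.coef_[0]))
for i in order:
    print(f"feature {i}: coef = {lr.coef_[0][i]:+.3f}")
```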
Gaussian Naive Bayes (GNB). Naive Bayes (NB) is a simple supervised machine learning algorithm 33 . The NB classifier is based on the Bayesian probability theorem and predicts future outcomes according to previous experience. NB assumes that the input variables are conditionally independent given the class [Eq. (11)]:

P(X_1, ..., X_n | Y) = Π_{i=1}^{n} P(X_i | Y)    (11)

where X is the input vector (X_1, X_2, ..., X_n) and Y is the output category. On the basis of NB, GNB further assumes that the class-conditional distribution of each feature is Gaussian, that is, the probability density function is as in Eq. (12):

P(X_i = x | Y = y_k) = (1 / √(2π σ_{ik}^2)) exp(−(x − μ_{ik})^2 / (2 σ_{ik}^2))    (12)

For a given test sample X = (X_1, X_2, ..., X_n), the posterior probability is calculated as [Eq. (13)]:

P(Y = y_k | X_1, ..., X_n) = P(Y = y_k) Π_i P(X_i | Y = y_k) / Σ_j P(Y = y_j) Π_i P(X_i | Y = y_j)    (13)

and the class of the sample is determined as [Eq. (14)]:

y = argmax_{y_k} P(Y = y_k) Π_i P(X_i | Y = y_k)    (14)

RF. The working principle of RF 34 is to combine the results of individual decision trees, as shown in Fig. 3. This strategy has better estimation performance than a single random tree: the estimate of each decision tree has low bias but high variance, and aggregation realizes a trade-off between overall bias and variance while also providing the importance of the predictor variables for predicting the outcome variable. RF has good prediction performance in practical applications and can be used to address multiclass classification problems, categorical variables, and sample imbalance problems.

Backpropagation neural network (BPNN). BPNN is one of the most widely used neural network models and is trained with the classic error backpropagation algorithm 35 . Since the emergence of BPNNs, much research has been done on the selection of activation functions, the design of structural parameters, and the mitigation of network defects. The main idea of the BP algorithm is to divide the learning process into two stages: forward transmission and backward feedback. In the forward transmission stage, the input sample passes from the input layer through the hidden layers to the output layer, where an output signal is formed. In the backpropagation stage, error signals that do not meet the precision requirement are propagated back layer by layer, and the weight matrices between neurons are corrected over repeated forward and backward passes. Learning stops when the iteration termination condition is met.

(1) Forward transmission. Let X be the input vector of a sample, T the corresponding target output vector, m the number of neural units in the input layer, and P the number of nodes in the output layer. The input to node j in the forward pass is given by Eq. (15):

S_j = Σ_{i=1}^{m} w_{ij} x_i − θ_j    (15)

where j indexes the hidden layer nodes, w is the weight matrix between input layer nodes and hidden layer nodes, and θ_j is the threshold of node j. The output value of node j is Eq. (16):

y_j = f(S_j)    (16)

where f is the activation function applied to the node input; it can be linear or nonlinear.

(2) Backward feedback. Calculate the error between the true value of the sample and the output value. For binary classification, two neural units are often used in the output layer; if the output value of the first unit is greater than that of the second, the sample is considered to belong to the first category. For a sigmoid activation, the error of output node j takes the standard form (Eq. (17)):

E_j = y_j (1 − y_j)(T_j − y_j)    (17)

The error of a node in a hidden layer is accumulated, with weights, from the node errors of the next layer (Eq. (18)):

E_j = y_j (1 − y_j) Σ_k E_k W_{jk}    (18)

where E_k is the error of the k-th node of the next layer and W_{jk} is the weight from the j-th node of the current layer to the k-th node of the next layer. The weights and thresholds are then updated, respectively (Eq. (19)):

W_{ij} ← W_{ij} + λ E_j y_i,   θ_j ← θ_j − λ E_j    (19)

where λ is the learning rate, with a value between 0 and 1. Training stops when a set number of iterations is reached or the accuracy exceeds a set value.
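A minimal numpy sketch of one forward/backward pass following Eqs. (15)-(19), assuming a sigmoid activation and a single hidden layer; the network sizes and inputs are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda s: 1.0 / (1.0 + np.exp(-s))

m, h, p, lam = 4, 3, 2, 0.5            # input, hidden, output sizes; learning rate
W1, th1 = rng.normal(size=(m, h)), np.zeros(h)
W2, th2 = rng.normal(size=(h, p)), np.zeros(p)
x, T = rng.normal(size=m), np.array([1.0, 0.0])

# Forward transmission, Eqs. (15)-(16): S_j = sum_i w_ij x_i - theta_j, y_j = f(S_j)
y1 = sigmoid(x @ W1 - th1)
y2 = sigmoid(y1 @ W2 - th2)

# Backward feedback, Eqs. (17)-(18): output-node and hidden-node errors
E2 = y2 * (1 - y2) * (T - y2)
E1 = y1 * (1 - y1) * (W2 @ E2)

# Eq. (19): update weights and thresholds with learning rate lambda
W2 += lam * np.outer(y1, E2); th2 -= lam * E2
W1 += lam * np.outer(x, E1);  th1 -= lam * E1
```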
Passenger satisfaction prediction

Based on the preliminary selection of features, this study establishes models to predict passenger satisfaction. After comparing the prediction performance of the various classification models before and after feature selection, we select the model with the best prediction performance. Figure 4 shows the research framework of the prediction analysis process.

Data source. The data used in this study are the passenger satisfaction data set of an American airline on Kaggle (https://www.kaggle.com/binaryjoker/airline-passenger-satisfaction). Through a survey of passengers who arrived at the airport in 2015, a sample of 129,880 passengers using the full service of the airline was collected. There are 23 attributes in the data set; the input variables include 4 continuous numerical variables, 4 discrete categorical variables and 14 ordinal qualitative variables indicating the customer's satisfaction with the relevant services (0-5 points). The data used in this study mainly cover three dimensions of information: basic information, flight information and satisfaction information. The constituent elements, the specific variable names and the variable attributes are shown in the data description table.

Feature selection based on RF-RFE. In this study, RFE is combined with the cross validation method: the feature set selected at each RFE stage is evaluated by cross validation. Taking accuracy as the evaluation criterion, the number of features with the highest accuracy and the corresponding feature subset are finally determined. The RF-RFE feature selection results are shown in Fig. 5; the line chart makes it easy to compare the accuracy obtained with different numbers of features, from which the best feature subset is determined.

Model evaluation indexes. This study evaluates the classifiers using accuracy, precision, recall, the F value and the AUC value. In general, precision and recall contradict each other: the higher the precision, the lower the recall, and when the recall is high, the precision is often low. To consider precision and recall jointly, the F value is introduced to evaluate the classification performance of a classifier more comprehensively. F is the weighted harmonic mean of precision and recall, computed as Eq. (23):

F = (1 + β^2) · precision · recall / (β^2 · precision + recall)    (23)

where β reflects the relative importance of precision to recall. When β = 1, that is, precision is as important as recall, the F value is the commonly used F1 value; its equation is Eq. (24):

F1 = 2 · precision · recall / (precision + recall)    (24)

The ROC curve sorts the samples according to the classifier's prediction scores and, taking each sample in turn as the positive-prediction cutoff, plots the curve shown in Fig. 6 with the false positive (FP) rate as the abscissa and the true positive (TP) rate as the ordinate; the AUC value is the area under the ROC curve. The AUC value directly evaluates the quality of a classifier: the larger the AUC value, the better the classifier's performance.

Model prediction results. Using the feature subset (17 features) obtained by RF-RFE feature selection, classification models are constructed to predict the satisfaction of passengers in the test set, including KNN, LR, GNB, RF and BPNN. During BPNN training, the activation function is set to "ReLU", the L2 penalty (regularization) parameter is 0.0001, and the solver is the default "Adam", which works well in terms of training time and validation score on relatively large data sets.
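The five evaluation indexes used below can be computed as in this short sketch; the predictions and labels are dummy values for illustration.

```python
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score)

y_true  = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred  = [1, 0, 1, 0, 0, 1, 1, 0]                   # hard class predictions
y_score = [0.9, 0.2, 0.8, 0.4, 0.1, 0.7, 0.6, 0.3]   # positive-class probabilities

print("accuracy :", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred))
print("recall   :", recall_score(y_true, y_pred))
print("F1       :", f1_score(y_true, y_pred))        # Eq. (24)
print("AUC      :", roc_auc_score(y_true, y_score))  # area under the ROC curve
```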
This study uses five evaluation indexes (accuracy, precision, recall, F1 value and AUC value) to compare the classification performance of the classifiers both vertically and horizontally. Table 3 shows the classification performance of each classification model before and after RF-RFE feature selection. Among all classifiers, the RF model based on RF-RFE feature selection performs best; all five indexes (accuracy, precision, recall, F1 value and AUC value) are greater than those of the other classifiers. The indexes (except precision) of the KNN model increased after RF-RFE. The five indexes of the logistic model decreased slightly after RF-RFE. After RF-RFE feature selection, two indexes (accuracy and precision) of the GNB model increase, while the other indexes are slightly lower than those of the model without feature selection. After feature selection, the indexes (except recall) of the BPNN model are slightly reduced. Overall, comparing the five classification models, RF is better than BPNN, followed by KNN and logistic regression, and GNB is the worst model.

Discussion on important variables

In this study, the most important factors affecting passenger satisfaction were selected by the feature selection method. In the "Model prediction results" section, we used RF-RFE to make a preliminary selection of the feature set, initially selecting 17 of the 22 features, but the number of features was still large. Therefore, we used the logistic regression model to select features further, and the features with the largest coefficients were analyzed in more detail.

Extracting variables from the logistic model. The LR model is constructed on the features chosen by RF-RFE feature selection, and further feature extraction is carried out through LR. Figure 7 shows the coefficients of the LR variables. The results show that except for the variable cleanliness, all variables passed the significance test. In LR, when the features are standardized, the greater the absolute value of a coefficient, the more important the feature. If the coefficient is positive, the characteristic is positively correlated with the probability that the output is 1; otherwise, it is positively correlated with the probability that the output is 0. Among the 17 variables, the five with the largest absolute coefficients were taken as the most important influencing factors.

Analysis and suggestions. In airline competition, service quality is the key to winning passengers' choice. Airlines should adopt a customer-oriented service evaluation method and improve service by understanding which service strategies are most effective at winning more passengers. Based on the airline's passenger big data, the results of this study give the priority of services that airlines need to pay attention to and offer suggestions for the services with higher priority through the extraction of important variables. The five important variables extracted by RF-RFE-logistic are travel type (personal travel and business travel), customer type (loyal and non-loyal), customer class (economy class and business class), inflight Wi-Fi service and online boarding. The travel type is uncontrollable, so we further analyze three aspects: customer type and customer class, inflight Wi-Fi service, and online boarding.

Customer type and customer class. Figure 8 shows the distribution of satisfaction across different passenger types and different cabin classes. As seen from the figure, loyal customers account for more than 80%.
Among loyal customers, the number of satisfied passengers is close to that of neutral or dissatisfied passengers. In the 1980s, an airline first proposed the Frequent Flyer Program (FFP); in 1994, Air China was the first to implement one in China. It is a plan proposed by airlines to reduce the risk of losing business and of losing the existing customer base. It usually includes selling points and mileage to program partners that can be redeemed against air tickets, upgrades or other rewards, and more advanced members earn additional points as an incentive for high-value passengers. Since the cost of developing a new customer is often higher than that of maintaining a loyal existing customer, airlines' FFP members are rapidly becoming the core competitiveness of business development. Therefore, airlines should invest more effort in improving frequent flyer programs, raising the satisfaction of loyal users and increasing user stickiness.

When satisfaction is broken down by customer class, it can be found that the overall satisfaction of economy class passengers is significantly lower than that of business class passengers. Most business class passengers are satisfied with the service, while most economy class travelers tend to be neutral or dissatisfied. To determine the main factors behind this difference, the 14 satisfaction-related variables, such as seat comfort and online boarding service, are compared across customer classes. The radar chart in Fig. 9 shows the average satisfaction scores of the 14 variables for passengers in different classes. The average scores of business class passengers and economy class passengers differ markedly on seat comfort, leg room, onboard entertainment and online boarding, but there is no significant difference on food and drinks.

Inflight Wi-Fi service. The RF-RFE-LR result shows that the most important type of service is inflight Wi-Fi service, and its satisfaction score is the lowest among all services. Mobile Internet has become indispensable in daily life, and people's diversified lifestyles stand in sharp contrast to traditionally monotonous inflight entertainment. Passengers' demand for connecting personal electronic devices to the Internet is becoming ever stronger. If airlines can provide Wi-Fi service with free Internet access or reasonable Wi-Fi prices, they can attract more passengers. According to an INMARSAT survey in 2016, more than half (54%) of passengers would choose Wi-Fi over an onboard meal. According to the 2018 survey, onboard Wi-Fi is listed as the fourth most important factor when passengers choose airlines worldwide, after airline reputation, free checked baggage and additional leg room. If a flight provides high-quality Wi-Fi, 67% of passengers said they would book the airline again. In addition, passengers carrying their own personal electronic equipment can reduce the investment in and maintenance of cabin entertainment hardware such as electronic displays. The improvement of cabin facilities should keep pace with the times, and increasing the availability of cabin Internet is the general trend. Therefore, from the perspective of improving passenger satisfaction, airlines have good reason to provide and upgrade onboard Wi-Fi services.

Online boarding. The next service that needs to be improved is online boarding.
The CAAC proposed the concept of "paperless travel" at the 2018 national civil aviation work conference, replacing paper boarding passes with paperless forms such as electronic boarding passes. With an ID card and an e-ticket, passengers can check in directly on an Internet platform without printing a boarding pass. This not only realizes a paperless boarding process but also spares passengers from waiting in line at the check-in counter, saving their time and energy and making boarding faster and more convenient. Therefore, in addition to providing ticket booking, seat reservation, rebooking and meal ordering services, an airline travel service app can add check-in and electronic boarding pass services to its design, optimize the user boarding experience, improve the competitiveness of the airline travel service app and increase user stickiness. Currently, thanks to the development of artificial intelligence, a "one card" mode combined with face recognition has become possible: passengers only need a second-generation ID card and a face recognition camera to unify their ticket information, making each step at the airport more efficient and improving the travel experience.

Conclusion

We proposed an RF-RFE-logistic feature selection model to extract the influencing factors of passenger satisfaction based on airline passenger satisfaction survey information. We first use the RF-RFE algorithm to extract a preliminary feature subset containing 17 variables and use a variety of classification learning algorithms to predict passenger satisfaction. RF on this feature subset shows the best classification performance (accuracy: 0.963, precision: 0.973, recall: 0.942, F1 value: 0.957, AUC value: 0.961). Then, we use a logistic model trained on the feature subset selected by RF-RFE to further extract the important variables affecting passenger satisfaction. Finally, the satisfaction of different passenger types and class types is compared and analyzed, and suggestions are given from the perspectives of online boarding and onboard Wi-Fi services.

There are also some deficiencies in this study. The evaluation indicators of the passenger satisfaction survey in the data set are not sufficient. In addition, we used the default parameters in the prediction models and did not consider the prediction results under different parameters. Based on these limitations, we suggest that: (1) the ground service indicators could be extended with aspects such as baggage claim speed and transfer service, and satisfaction with related services under abnormal flight conditions could also be added; (2) the model parameters should be optimized; (3) several other variables also affect passenger satisfaction to a certain extent and should not be completely ignored.

Data availability

Data and methods used in the research have been presented in sufficient detail in the paper.
8,285.2
2022-07-01T00:00:00.000
[ "Engineering", "Business", "Computer Science" ]
Level and width statistics of the open many-body systems

The level and width statistics of two kinds of random matrix models coupled to the continuum are analyzed. In the first model, the Gaussian orthogonal ensemble with random couplings to the continuum, not only do the width statistics deviate from the Porter-Thomas distribution due to the super-radiant mechanism, but the distribution of the nearest neighbor level spacings simultaneously deviates from the Wigner one. In the second model, the two-body random ensemble with correlated couplings to the continuum, the correlation between the target and the compound states leads to a global energy dependence of the widths. Within the narrow energy interval containing states whose widths deviate from this global energy dependence, the distributions behave in a similar way to the random-coupling case. Namely, within the models we investigated, the deviation of the nearest neighbor level spacing statistics from the Wigner distribution and the deviation of the width statistics from the Porter-Thomas distribution take place simultaneously.

Introduction

The description of the isolated neutron resonances (INR) has a close connection with random matrix theory and the phenomenon of quantum chaos. To begin with, N. Bohr's compound nuclear picture [1], which tried to explain the narrow widths of the neutron resonances, implies chaotic motion of the nucleons in the nuclei. In the 50's, Wigner's random matrix theory (RMT) was proposed as the theory of the INR [2]. The nearest neighbor level spacings (NNLS) following the Wigner surmise, and the widths following the Porter-Thomas (PT) distribution, are well explained by the RMT. In the 80's, the idea of the nuclear data ensemble (NDE) was proposed [3], in which the data of different nuclei are combined and treated as one ensemble; it was regarded as strong evidence for the RMT. At the same time, it was established that the RMT is a generic model of quantum chaos. Hence, there had been no question that the INR are well described by the RMT and are typical examples of quantum chaos.

However, in 2010 it was claimed that the widths of the INR of Th isotopes do not follow the PT distribution at the 99.997% significance level [4]. A further claim was raised that a precise re-examination of the NDE indicates exclusion of the RMT at the 98.17% significance level [5]. Although the significance levels claimed there might be overestimated [6], these new analyses cast serious doubt on the RMT. This issue is not restricted to whether a model is correct or not: as the RMT has a close relation with quantum chaos, the problem concerns the whole picture of the compound nuclear states. It is natural that nuclear theory has been nudged by these claims [7].
There are several arguments that the distribution of the widths may not be directly connected with chaoticity. The expectation that the widths follow the Porter-Thomas distribution arises from the assumption that the randomness of the matrix elements of the intrinsic states is reflected directly in the width distribution. There are, however, several possibilities giving rise to a deviation. The penetration factor may have a modified energy dependence [8], the neutron width may differ from the simple square of the coupling to the continuum due to the super-radiant mechanism [9], and/or the randomness of the matrix elements of the intrinsic states may not manifest itself in the couplings to the continuum because of the correlation between the compound states and the target-plus-one-neutron states [10].

Super-radiance is a phenomenon which occurs in the limit of strong coupling to the continuum. In such a situation, only a few states (as many as there are open channels) monopolize the coupling to the continuum, and the remaining states have only small widths. If the strength is only modestly large, the collectivization is weak and some states have relatively larger widths than the others, which may cause the deviation from the PT distribution. Although the super-radiant mechanism together with the correlation between the compound and the target nuclei may explain the deviation of the width statistics from the PT distribution, the effect of the same mechanism on the level statistics has not been investigated so far. Since the level interaction occurs in the complex plane, there must be some effect on the level statistics, as seen in a past result [11]. In this talk, adopting the same models as refs. [9] and [10], we investigate the relation between the width statistics and the level statistics.

The model

The energies and the widths of the neutron resonances are given [12] as the eigenvalues of the effective Hamiltonian

𝓗 = H − (i/2) Σ_c P_c W_c W_c^T,

where the real part H is the Hamiltonian of the compound nuclei in the bound state approximation, W_c stands for the coupling between the bound states and the channel vector associated with the channel c, and P_c is the strength of the continuum coupling for the channel c. We investigate two models. In one model, we adopt the Gaussian Orthogonal Ensemble (GOE) as H and random numbers with normal distribution as the elements of W_c [9]. We call this the GOE + random coupling (GOE+RC) model. In the other model, we adopt the two-body random ensemble (TBRE) as H and the values of the overlaps between the compound states and the target-plus-one-particle states as the elements of W_c [10]. This we call the TBRE + correlated coupling (TBRE+CC) model. In both cases, the discussion is restricted to the single-channel case, so we omit the subscript c hereafter.

The deviations from the GOE limit, namely the Wigner surmise in the case of the NNLS and the PT distribution in the case of the widths, are characterized in terms of the parameter β of the Brody distribution and the number of degrees of freedom ν of the χ² distribution, respectively. Here,

P_β(s) = (β + 1) b s^β exp(−b s^{β+1}),  b = [Γ((β + 2)/(β + 1))]^{β+1},    (3)

is the Brody distribution for the NNLS s in units of the mean spacing, and

P_ν(y) = (ν/2)^{ν/2} y^{ν/2 − 1} exp(−ν y/2) / Γ(ν/2)    (4)

is the χ² distribution for the normalized widths y = Γ/⟨Γ⟩; β = 1 corresponds to the Wigner surmise and ν = 1 to the PT distribution.

GOE+random coupling model

We first check the GOE+RC model. Here, H is composed of random numbers, namely

H_{ij} = H_{ji} ∼ N(0, 1 + δ_{ij}),

where N(r, s) is the normal distribution with average r and variance s. W is also composed of random numbers,

W_i ∼ N(0, 1).

The coupling constant P is scaled with the dimension Ω of the random matrix so that the dimensionless coupling strength κ stays well defined.
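A minimal numerical sketch of the GOE+RC model under the assumptions above; the GOE variance convention follows the reconstruction given here, and the scaling P = κ/Ω is our placeholder, not necessarily the paper's exact normalization.

```python
import numpy as np

rng = np.random.default_rng(1)
omega, kappa = 924, 1.0                  # matrix dimension and coupling strength

# GOE real part: symmetric, off-diagonal variance 1, diagonal variance 2.
A = rng.normal(size=(omega, omega))
H = (A + A.T) / np.sqrt(2.0)

# Random coupling vector to a single open channel.
W = rng.normal(size=omega)

# Effective Hamiltonian H_eff = H - (i/2) P W W^T (single channel).
P = kappa / omega                        # placeholder scaling with the dimension
H_eff = H - 0.5j * P * np.outer(W, W)

ev = np.linalg.eigvals(H_eff)
energies, widths = ev.real, -2.0 * ev.imag   # Gamma = -2 Im(E)
print(energies[:5], widths[:5])
```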
TBRE+correlated coupling model

The basic ingredients of the TBRE are the two-body interactions,

H = Σ_{i<j, k<l} V_{ij,kl} c†_i c†_j c_l c_k,

where c†_i is the creation operator of a single particle and the V_{ij,kl} are random. The number of single-particle levels is denoted l. The basis states with particle number m are constructed as

|{i_{n(α)}}; m⟩ = c†_{i_1(α)} ··· c†_{i_m(α)} |0⟩,

where {i_{n(α)}} indicates a set of m single-particle labels, and the basis states with particle number d = (m − 1) are constructed in the same way. The matrix elements of the real part of the Hamiltonian are those of H in the m-particle basis, while those of the target states are those of H in the d-particle basis. The coupling with the continuum is assumed to occur through the channel vector state |C; m⟩. In the TBRE+CC model, we choose as this state the result of adding a single particle to the target ground state |σ_0; d⟩. To define the single particle, we make use of the eigenstates of the density matrix,

ρ_{ij} = ⟨σ_0; d| c†_i c_j |σ_0; d⟩.

Namely, we compose a creation operator c†_a as

c†_a = Σ_i φ_a(i) c†_i,

where φ_a is an eigenvector of ρ. Then the channel vector is constructed as

|C; m⟩ = N c†_a |σ_0; d⟩,

where N indicates taking the normalization. Another assumption is that compound states with energy lower than the target ground state do not couple to the continuum. With these two assumptions, the coupling matrix elements to the continuum in the basis of the eigenstates of the compound Hamiltonian are expressed as

W_λ = θ(E_λ) ⟨λ; m | C; m⟩,

where E_λ is the energy of the compound state λ relative to the target ground state and θ is the step function. The coupling strength with the continuum is controlled in terms of the parameter κ, defined through the total coupling and the spectral scale λ² = (1/Ω) Tr(H²), where Ω is the size of the Hamiltonian H(m).

GOE+RC model

Figure 1 shows the deviation from the GOE limit as the coupling to the continuum increases. The filled squares show the parameter β of eq. (3), while the filled circles show the number of degrees of freedom ν of the χ² distribution of eq. (4). As shown, when κ is increased, both β and ν decrease, until κ becomes of order 1; with a further increase of κ, both β and ν again become close to unity.

To show the reason for this behavior, we plot the positions of the eigenvalues in the E − Γ plane for κ = 0.1 (panel a), 1.0 (panel b), and 10.0 (panel c) in fig. 2. As seen in panel (a), the widths do not have any energy dependence for small κ. For κ = 1.0, panel (b) shows some widths that are relatively large but not isolated from the others around E = 0. In this situation, these states with large widths may have real energies close to those of other states with small widths, since the energy repulsion takes place in the complex plane. Therefore, the mechanism that drives the width distribution away from the PT one also causes the deviation of the NNLS distribution from the Wigner one, as states with a large difference in width (the imaginary axis) may have a small level distance (the real axis). With a further increase of κ to 10.0, as seen in panel (c), the states with large widths (one state for each realization) are separated from the other states and play no role in the statistics, while the widths of the other states follow the PT distribution, as they are created by small, random real perturbations to a state with a large width. In fig. 3, the NNLS distribution (a) and the width distribution (b) are compared for κ = 0.1, 1.0, and 10.0 together with the simple GOE predictions, namely the Wigner one (a) and the PT one (b). These are combined plots for 10 realizations of the random matrix. For both κ = 0.1 and κ = 10.0 the distributions are very close to the GOE ones, while for κ = 1.0 there are significant differences.
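To illustrate how β can be extracted in practice, here is a hedged sketch fitting the Brody form of eq. (3) to a nearest neighbor spacing histogram with scipy; the energies are dummy values, and the unfolding step is simplified to a division by the mean spacing.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import gamma

def brody(s, beta):
    b = gamma((beta + 2) / (beta + 1)) ** (beta + 1)
    return (beta + 1) * b * s**beta * np.exp(-b * s**(beta + 1))

# 'energies' would be the real parts of the H_eff eigenvalues (see above).
energies = np.sort(np.random.default_rng(2).normal(size=924))
s = np.diff(energies)
s = s / s.mean()                      # crude unfolding to unit mean spacing

hist, edges = np.histogram(s, bins=30, range=(0, 4), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
beta_fit, _ = curve_fit(brody, centers, hist, p0=[0.5], bounds=(0, 1))
print("fitted Brody beta:", beta_fit[0])
```

The ν parameter of eq. (4) can be fitted to the normalized-width histogram in exactly the same way.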
TBRE + correlated coupling model

The most significant difference of the TBRE+CC model from the GOE+RC model is the strong E_λ dependence of the coupling W_λ. In fig. 4, the average of the coupling strength W²_λ over an interval ΔE and over realizations is plotted as a function of E_λ. For small κ, the coupling can be treated perturbatively, hence the widths have a similar energy dependence. In fig. 5, the positions of the complex eigenvalues in the E − Γ plane are shown for κ = 0.1, 1.0, and 10.0. Another difference is that the energy interval in which the states with large widths deviate from the global energy dependence for κ ∼ 1 is much narrower in the case of the TBRE+CC model. This strong dependence of the average Γ on the energy produces a completely different distribution of Γ, as seen in [10], if the states over the whole energy region are considered. However, in the real situation of the INR, neutron widths are analyzed within very small intervals compared with the whole energy region of the excited states of nuclei. Therefore, we restricted our analysis to a small energy interval in which the effect of the global energy dependence of the average width is, if not negligible, small. In fig. 6, the NNLS distribution and the width distribution obtained within such an interval are compared for κ = 0.1, 1.0, and 10.0.

Summary

Motivated by the recent claim that the neutron resonance width distribution does not support random matrix theory, we examined the consequences of the super-radiant mechanism in the strong continuum coupling, and the effect of the correlation between the target and the compound nuclei, on the statistics of both the levels and the widths simultaneously, by controlling the coupling strength to the continuum.

If the coupling to the continuum is large, several states have relatively large widths, resulting in deviations from the GOE limit both in the nearest neighbor level spacing distribution and in the width distribution. However, in the limit of very strong coupling, the behaviors of both levels and widths are restored to the GOE ones, because only one state monopolizes the coupling to the continuum in this limit, and the remaining states recover the normal statistical properties.

The effect of the correlation between the target and the compound states results in a global excitation energy dependence of the widths. Therefore, the width distribution for the whole spectrum is completely different from the GOE one. However, if the width distribution is evaluated within a relatively narrow energy interval, it approximately follows the Porter-Thomas distribution as long as the coupling to the continuum is weak.

When the coupling to the continuum becomes large, both the NNLS distribution and the width distribution deviate from those of the GOE in the energy interval where the states with large widths crowd. However, in other energy intervals away from this "width concentrating region", the distributions are similar to the GOE ones. With a further increase of the strength, both the NNLS distribution and the width distribution again come close to the GOE ones.

Thus, the deviation of the width distribution from the Porter-Thomas one and the deviation of the NNLS distribution from the Wigner one occur coincidentally within the models we adopted.

Figure 1. Fitted results of β, the parameter of the Brody distribution, and ν, the number of degrees of freedom of the χ²-distribution, shown as functions of κ, the parameter controlling the coupling to the continuum. Filled squares are for β, filled circles for ν. The dimension of the Hamiltonian is 924, which coincides with the dimension of the TBRE Hamiltonian for l = 12 and m = 6. The result is calculated from the accumulation of 50 realizations of the random matrix.
Figure 2. The positions of the eigenvalues in the E − Γ plane for the GOE+RC model, for κ = 0.1 (a), 1.0 (b), and 10.0 (c).
Figure 3. The NNLS distribution (a) and the width distribution (b) obtained from the GOE+RC model for κ = 0.1 (dotted line), κ = 1.0 (solid line), and κ = 10.0 (dashed line), compared to the Wigner one (a) or the Porter-Thomas one (b). Ω = 924, and the number of realizations is 50. For κ = 0.1 and κ = 10.0 the results are close to the Wigner or PT one, while for κ = 1.0 the results are different.
Figure 4. The average coupling strength ⟨W²_λ⟩ over an interval ΔE and realizations, plotted as a function of E_λ for the TBRE+CC model.
Figure 5. The positions of the complex eigenvalues in the E − Γ plane for κ = 0.1 (a), 1.0 (b), and 10.0 (c). The Hamiltonian is for l = 14 and m = 7, and the plots are accumulated results of 10 realizations.
Figure 6. The NNLS distribution (a) and the width distribution (b) obtained from the TBRE+CC model in the energy interval 5.0 ≤ E ≤ 25.0 for κ = 0.1 (dotted line), κ = 1.0 (solid line), and κ = 10.0 (dot-dashed line), compared to the Wigner one (dashed line) (a) or the PT one (b). l = 14 and m = 7, and the number of realizations is 50. For κ = 0.1 and κ = 10.0 the results are close to the Wigner or PT one, while for κ = 1.0 the results are different.
Figure 7. β (squares) and ν (circles) obtained from the result of the TBRE+CC model with l = 14, m = 7. States in the energy interval 5 ≤ E ≤ 20 are considered. The number of realizations is 50.
Figure 8. ν obtained from the TBRE+CC model with l = 14, m = 7. The circles are the results obtained in the energy interval 5 ≤ E ≤ 20 (same as in fig. 7), while the pluses (crosses) are those in the interval 20 ≤ E ≤ 35 (35 ≤ E ≤ 50), respectively. The number of realizations is 50.
3,481.8
2016-06-01T00:00:00.000
[ "Physics" ]
Rule-based Morphological Inflection Improves Neural Terminology Translation

Current approaches to incorporating terminology constraints in machine translation (MT) typically assume that the constraint terms are provided in their correct morphological forms. This limits their application to real-world scenarios where constraint terms are provided as lemmas. In this paper, we introduce a modular framework for incorporating lemma constraints in neural MT (NMT) in which linguistic knowledge and diverse types of NMT models can be flexibly applied. It is based on a novel cross-lingual inflection module that inflects the target lemma constraints based on the source context. We explore linguistically motivated rule-based and data-driven neural inflection modules and design English-German health and English-Lithuanian news test suites to evaluate them in domain adaptation and low-resource MT settings. Results show that our rule-based inflection module helps NMT models incorporate lemma constraints more accurately than a neural module and outperforms the existing end-to-end approach with lower training costs.

Introduction

Incorporating terminology constraints in machine translation (MT) has proven useful for adapting translation lexical choice to new domains (Hokamp and Liu, 2017) and for improving its consistency within a document (Ture et al., 2012). In neural MT (NMT), most prior work focuses on incorporating terms in the output exactly as given, using soft (Song et al., 2019; Dinu et al., 2019; Xu and Carpuat, 2021) or hard constraints (Hokamp and Liu, 2017; Post and Vilar, 2018). These approaches are problematic when translating into morphologically rich languages, where terminology should be adequately inflected in the output, while it is more natural and flexible to provide constraints as lemmas, as in a dictionary. To the best of our knowledge, only one paper has directly addressed this problem for neural MT: Bergmanis and Pinnis (2021) design an NMT model trained to copy and inflect the terminology constraints using target lemma annotations (TLA), synthetic training samples in which the source sentence is tagged with automatically generated lemma constraints. While this approach improves translation quality, the end-to-end training set-up prevents fast adaptation to lemmas and inflected forms that are rare or unseen at training time. Its impact is also limited to a specific neural architecture, and it is unclear whether its benefits carry over to more generic sequence-to-sequence models.

In this paper, we introduce a modular framework for inflecting terminology constraints in NMT. It relies on a cross-lingual inflection module that predicts the inflected form of each lemma constraint based on the source context only. The inflected lemmas can then be incorporated into NMT using any of the aforementioned constrained NMT techniques. Compared with TLA, this framework is more flexible, as it can be applied to diverse types of NMT architectures and inflection modules, and it facilitates fast adaptation to new terminologies without retraining the base NMT model from scratch. This flexibility is enabled by the cross-lingual nature of the inflection module, which predicts the inflected form of each target lemma based on the source context only. This differs from traditional inflection models, which predict inflected forms based on pre-specified morphological tags or monolingual target context.
Based on this framework, this paper makes the following contributions:
• We construct and release test suites to evaluate models' ability to inflect terminology constraints for domain adaptation (English-German health) and low-resource MT (English-Lithuanian news).
• We show that integrating linguistic knowledge through a simple rule-based inflection module improves over its neural counterpart in intrinsic and end-to-end MT evaluations.
• Our framework improves autoregressive and non-autoregressive translation, and outperforms the existing TLA approach for inflecting terminology translation. We open-source the code to facilitate replication and extensions.

Background

Autoregressive NMT with Constraints. Terminology constraints can be incorporated in autoregressive NMT models via 1) constrained decoding, where constraint terms are incorporated in the beam search algorithm (Hokamp and Liu, 2017; Post and Vilar, 2018), or 2) constrained training, where NMT models are trained to incorporate constraints using synthetic parallel data augmented with constraint terms on the source side (Song et al., 2019; Dinu et al., 2019). These approaches all assume that the constraints are provided in the correct inflected forms and can be directly copied into the target sentence. Bergmanis and Pinnis (2021) extended the constrained training approach of Dinu et al. (2019) to incorporate lemma-form constraints in an end-to-end way: the inflected forms of the lemma constraints are predicted jointly during translation. This approach requires a dedicated NMT model architecture to integrate constraints as additional inputs to the encoder, and it learns inflection solely from the parallel data. By contrast, our approach can be applied to multiple NMT architectures and uses linguistically motivated rules that generalize better to rare and unseen terms.

Non-Autoregressive NMT with Constraints. Instead of generating the output sequence incrementally from left to right, non-autoregressive NMT generates tokens in parallel (Gu et al., 2018; van den Oord et al., 2018; Ma et al., 2019) or by iteratively editing an initial sequence (Lee et al., 2018; Ghazvininejad et al., 2019). Architectures differ in the nature of the edit operations: the Levenshtein Transformer (Gu et al., 2019) relies on insertion and deletion, while EDITOR (Xu and Carpuat, 2021) uses insertion and repositioning (where each input token can be repositioned or deleted). Edit-based non-autoregressive generation provides a natural way to incorporate constraints in NMT: the constraints can be placed in the initial sequence and edited to produce the final translation (Susanto et al., 2020; Xu and Carpuat, 2021; Wan et al., 2020). Our approach can augment this family of techniques by inflecting constraints before they are used for further editing.

Morphological Inflection. Morphological inflection is the process of altering the morphological form of a lexeme to express the morpho-syntactic information of the word in a sentence (e.g., tense, case, number). Traditionally, morphological inflection as a computational task is framed as predicting the inflected form of a word given its lemma and a set of morphological tags (e.g., N;ACC;PL represents a plural noun used in the accusative case) (Cotterell et al., 2017). The task was traditionally tackled using hand-engineered finite state transducers that rely on linguistic knowledge (Koskenniemi, 1984; Kaplan and Kay, 1994), while recent work has shown impressive results by modeling it with neural sequence-to-sequence models (Faruqui et al., 2016).
More recently, a context-based inflection task has been proposed in which the inflected form of a lemma is predicted given the rest of the sentence as context (Cotterell et al., 2018). The state-of-the-art models for this task are neural models trained on supervised data (Cotterell et al., 2018; Kementchedjhieva et al., 2018). The inflection module in our framework differs from those for the context-based inflection task in that it requires cross-lingual context-based inflection: it predicts the inflected form of a target lemma based only on the source language context.

Morphologically-Aware Translation. In phrase-based MT, modeling morphological compounds on the source (Koehn and Knight, 2003) and target sides (Cap et al., 2014) improves translation quality. In NMT, morphologically-aware segmentation is also useful when translating from or into morphologically complex languages (Huck et al., 2017; Ataman and Federico, 2018; Banerjee and Bhattacharyya, 2018). Tamchyna et al. (2017) propose to overcome the data sparsity caused by inflection by training NMT models to predict the lemma form and morphological tag of each target word. Different from prior work, we incorporate grammatical and morphological knowledge in an inflection module for terminology constraints in NMT.

Inflecting Target Lemmas Given the Source Context

We introduce a modular framework for inflecting terminology constraints for NMT: we first build an inflection module that predicts the inflected form of each target lemma term based on the source sentence, and then incorporate the inflected constraints in NMT using any of the aforementioned techniques. By framing the problem this way, we assume that the inflected forms can be inferred based only on the source context and integrated into a fluent translation by the NMT models. In cases where there are multiple possible inflected forms corresponding to different ways of translating the source, the inflection module can predict one of the possible forms, and the NMT model can generate a translation conditioned on the predicted forms of the constraints. Compared with Bergmanis and Pinnis (2021), our framework is more flexible: it can be combined with any NMT model that enables translation with constraints and can leverage diverse types of morphological inflection modules in which linguistic knowledge can be easily incorporated. Formally, given a source sequence x and k target lemma words z̃ = (z̃_1, z̃_2, ..., z̃_k) that need to be inflected, the inflection module Θ predicts the inflected form of each target lemma z = (z_1, z_2, ..., z_k) independently, Eq. (1):

z_i = Θ(x, z̃_i),  i = 1, 2, ..., k    (1)

Rule-Based Inflection Module

One can predict the inflected form of a target word given its lemma and the source context in two steps: first predict the morphological tag of the target word based on the source context, and then predict the inflected form based on the lemma and the morphological tag. The second step can be modeled using traditional inflection models (Cotterell et al., 2017), while the first step can be performed by rule-based inference using linguistic knowledge. McCarthy et al. (2020) present a universal morphological (UniMorph) paradigm with universal morphological tags for hundreds of world languages. In UniMorph, the morphological tag of a verb includes information about the tense (past, present, or future), mood (indicative, conditional, imperative, or subjunctive), and the number (singular or plural) and person (first, second, or third person) of the subject.
The tag of a noun or adjective includes information about gender (masculine or feminine), number, and grammatical case. Some of these can be inferred from the target lemma (e.g., the gender of a noun) or the source term (e.g., the number of a noun), while others need to be inferred from the grammatical function of the source term in the sentence (e.g., grammatical case) or from sentence-level semantics (e.g., mood). Many of the inference rules are shared across a wide range of languages, except for the tense and mood of verbs, as well as the gender and some grammatical cases of nouns and adjectives. In our rule-based inflection module, we extract the morphological features, part-of-speech tags, and dependency parse tree of the source sentence using pre-trained Stanza models 2 and infer the aforementioned classes based on grammar rules and validation examples. The tense and mood of a verb are inferred from the morphological form of the corresponding source term, 3 while the number and person of its subject are inferred from the morphological form of its subject. For nouns and adjectives, the number can be inferred from the morphological form of the source term or the modified noun, while the gender can be determined from the target lemma. To infer the grammatical case of a noun or adjective, one needs to reason about the grammatical role of the source term in the sentence. For example, in Lithuanian there are seven main cases: nominative, genitive, dative, accusative, instrumental, locative, and vocative. Figure 1 shows examples of how the case of a Lithuanian noun can be inferred from the dependency parse tree of the source sentence. Some of the cases can easily be distinguished from the others, while some are more difficult to infer. In this example, the nominative case is comparatively easy to infer: the noun should be in the nominative case when the corresponding source term is the root or subject of the sentence. However, to distinguish between the dative, accusative, instrumental, and locative cases, one needs to reason about the grammatical and semantic role of the source term. In our rule-based module, we only take into account the most common scenarios.

Figure 1: Examples showing how the grammatical case of a target lemma is inferred from the dependency parse tree of the source sentence. In each example, the reference usage of the target constraint is underlined, and its corresponding source term is boldfaced and highlighted in the yellow, outlined box in each dependency tree. Figure (a) shows an example where the constraint term "smuikas" is used in the nominative case in the reference, since it is the root of the dependency tree. In Figure (b), the same constraint term is used in the accusative case in the reference, since it is the object of the root verb "bought". However, not all objects should be in the accusative case. As shown in Figure (c), "smuikas" is used in the instrumental case, since it is the instrument with which the subject performs the action.

Finally, given a lemma and its morphological tag, one can look up its inflected form in a morphological dictionary. We use DEMorphy (Altinok, 2018) for German and Wiktionary 5 for Lithuanian. Since most Lithuanian nouns follow a set of declension rules, 6 we inflect Lithuanian nouns based on these rules for lemmas unseen in the dictionary.
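A toy sketch of the two-step rule-based procedure: infer a UniMorph-style tag from source-side analysis, then look the inflected form up in a morphological dictionary. The tag-inference rules and the dictionary entries here are illustrative stand-ins, not the paper's actual resources.

```python
# Hypothetical miniature morphological dictionary: (lemma, tag) -> inflected form.
MORPH_DICT = {
    ("smuikas", "N;NOM;SG"): "smuikas",
    ("smuikas", "N;ACC;SG"): "smuiką",
    ("smuikas", "N;INS;SG"): "smuiku",
}

def infer_case(deprel: str) -> str:
    """Map the source term's dependency relation to a grammatical case
    (a simplified stand-in for the paper's dependency-tree rules)."""
    if deprel in ("root", "nsubj"):
        return "NOM"
    if deprel == "obj":
        return "ACC"
    if deprel == "obl":          # e.g. the instrument of the action
        return "INS"
    return "NOM"                 # fall back to the most common case

def inflect(lemma: str, deprel: str, number: str = "SG") -> str:
    tag = f"N;{infer_case(deprel)};{number}"
    # Fall back to the lemma itself if the form is not in the dictionary.
    return MORPH_DICT.get((lemma, tag), lemma)

print(inflect("smuikas", "obj"))   # -> smuiką (accusative object)
```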
Neural Inflection Module. As prior work shows that BERT-style architectures (Devlin et al., 2019) can encode morphological information in their hidden representations and disambiguate morphologically ambiguous forms via contextualized encoding (Edmiston, 2020), we build the neural inflection module as a substitution model based on the encoder-decoder Transformer architecture, which embeds the source sentence through the encoder and the target lemmas through the decoder. The decoder then predicts the inflected form of each target word in parallel. The inflection module resembles the architecture of the conditional masked language model (CMLM) (Ghazvininejad et al., 2019) but differs in its decoder input and output: CMLM takes the target sentence with some tokens masked out as input and is trained to predict only the masked tokens conditioned on the unmasked ones, while our inflection module takes target tokens in their lemma forms as input and predicts their inflected forms. CMLM only allows one-to-one substitution of subwords; in the case of inflection, however, the number of subwords that constitute a lemma may differ from the number that constitute its inflected form.

Table 1 (excerpt):
Source: The expert who played the carillon in July called it something else: "A cultural treasure" and "an irreplaceable historical instrument".
Constraint: carillon → karilionas
Reference: Liepos mėnesį karilionu grojęs ekspertas pavadino jį kitaip: "kultūros lobiu" ir "nepakeičiamu istoriniu instrumentu".

Health Test Suite. We construct the health test suite to test models' ability to integrate terminology translations for fast domain adaptation. The test set contains English health information text annotated with domain-specific terminology translations and the human-translated sentences in German. We extract English→German test examples from the Himl Test Set (http://www.himl.eu/test-sets), which consists of English health information texts manually translated into German. We extract keyphrases from each source sentence using Yet Another Keyword Extractor (YAKE) (Campos et al., 2020), which extracts n-grams as keyphrases based on word casing, frequency, position, and sentence context, and filter out phrases with high or medium frequency in the training corpora (frequency > 100 in the WMT news training data), since they are mostly common and domain-generic phrases. We extract terminology translations from WikiTitles 13 and an online English-German dictionary, 14 and annotate the keyphrases whose dictionary translations match the reference translation. As shown in Table 1, each source sentence in the test set is annotated with health-related terminology translations in lemma form, some of which can be directly copied into the final translation while others need to be inflected based on the context.

News Test Suite. The news test suite simulates the scenario where a user looks up the keyphrases of a document in a bilingual dictionary and picks the top translation of each keyphrase as a constraint to help low-resource MT. We choose English→Lithuanian as an example of low-resource translation. The test suite is constructed from English→Lithuanian test examples from the WMT 2019 news test sets. We first extract keyphrases from each source document using YAKE. Then, we find the top translation of each keyphrase (for many terms only one translation is available) in an online dictionary. 15 We filter out the keyphrases whose translations do not match the reference. Table 1 shows two examples from the same document in the test suite. All occurrences of a keyphrase in one document are annotated with its target translation to encourage consistent translation of keyphrases within a document. (Interestingly, in Lithuanian, masculine foreign names are usually translated by appending a suffix to the name to reflect its inflected form. In this example, the foreign name "Johnson" is translated as "Johnsonas" in the nominative form in the dictionary, while in the reference it becomes "Johnsono" in the genitive form.) Table 2 shows the number of sentences and constraints in each test suite.
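A small sketch of the keyphrase-extraction step using the yake package; the frequency filter mirrors the threshold described above, while the corpus counts here are dummies.

```python
import yake  # pip install yake

text = ("The expert who played the carillon in July called it something else: "
        '"A cultural treasure" and "an irreplaceable historical instrument".')

# YAKE scores n-grams by word casing, frequency, position, and sentence context.
extractor = yake.KeywordExtractor(lan="en", n=2, top=5)
candidates = extractor.extract_keywords(text)  # list of (phrase, score) pairs

# Dummy training-corpus frequencies; the paper drops phrases with frequency > 100.
corpus_freq = {"carillon": 3, "cultural treasure": 12, "expert": 2500}
keyphrases = [kw for kw, _ in candidates if corpus_freq.get(kw.lower(), 0) <= 100]
print(keyphrases)
```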
Experimental Settings

Training Data. For English→German (En-De), we use the training corpora from WMT14 (Bojar et al., 2014) and newstest2013 for validation. For English→Lithuanian (En-Lt), we use the training data from WMT19 (Barrault et al., 2019) and newsdev2019 as the validation set. For preprocessing, we apply normalization, tokenization, true-casing, and BPE (Sennrich et al., 2016); see the Appendix for preprocessing details and data statistics.

Baselines. We compare our model with the following baselines:
• Auto-Regressive (AR) baseline without integrating terminology constraints.
• AR with Constrained Decoding (CD) to incorporate hard constraints (Post, 2018).
• AR with Target Lemma Annotation (TLA), which integrates lemma constraints as an additional input stream on the source side (Bergmanis and Pinnis, 2021).
• Non-AutoRegressive (NAR) baseline based on the EDITOR model (Xu and Carpuat, 2021).
• NAR with constraints (NAR+C), which integrates constraints as the initial sequence in EDITOR without explicit inflection.

MT Models. All models are based on the base Transformer (Vaswani et al., 2017); see the Appendix for details. All models are trained with the Adam optimizer (Kingma and Ba, 2015) with an initial learning rate of 0.0005 and effective batch sizes of 32k tokens for AR models and 64k tokens for NAR models, for a maximum of 300k steps; as shown in prior work (Zhou et al., 2020), the batch sizes for training NAR models are typically larger than for AR models. We select the best checkpoint based on validation perplexity. NAR models are trained via sequence-level knowledge distillation (Kim and Rush, 2016). For decoding, we use beam search with a beam size of 4 for AR and AR with TLA, while for AR with CD we use a beam size of 20, as suggested in prior work (Post and Vilar, 2018). To enhance constraint usage in NAR models, we adopt the techniques of Susanto et al. (2020): we prohibit deletions of constraint tokens and insertions within constraint segments.

Neural Inflection Model. Its synthetic training data is derived from the MT parallel data. We first lemmatize and part-of-speech tag the target sentences using Stanza. We then randomly select adjectives, verbs, nouns, and proper nouns from each target sentence and train the inflection module to predict their inflected forms based on their lemma forms and the source sentence. Following Bergmanis and Pinnis (2021), we draw the proportion of words selected in each target sentence from the uniform distribution over (0, 0.4]. For training, we initialize the encoder parameters from the NAR baseline encoder and train with the Adam optimizer with a batch size of 32k tokens for a maximum of 200k steps.
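A hedged sketch of how the synthetic inflection training data could be derived with Stanza, as described above; the sampling ratio follows the (0, 0.4] range, and the German sentence is a placeholder.

```python
import random
import stanza  # assumes the German model was downloaded: stanza.download("de")

nlp = stanza.Pipeline("de", processors="tokenize,pos,lemma")
OPEN_CLASS = {"ADJ", "VERB", "NOUN", "PROPN"}

def make_example(target_sentence: str):
    words = [w for s in nlp(target_sentence).sentences for w in s.words]
    ratio = random.uniform(0.0, 0.4)         # proportion of words to lemmatize
    candidates = [w for w in words if w.upos in OPEN_CLASS]
    k = min(len(candidates), max(1, int(ratio * len(candidates))))
    chosen = set(random.sample(candidates, k)) if candidates else set()
    # Input: tokens with selected words replaced by lemmas; output: originals.
    inp = [w.lemma if w in chosen else w.text for w in words]
    out = [w.text for w in words]
    return inp, out

print(make_example("Der Experte spielte das Glockenspiel im Juli."))
```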
Evaluation. We evaluate translation quality using sacreBLEU (Post, 2018). To evaluate how well the translation preferences are incorporated in the translation outputs, we measure the lemma usage rate by first lemmatizing the translation output and then computing the percentage of lemma terms that appear in the lemmatized output. To evaluate whether the terms are inflected correctly, we measure term usage accuracy by matching each lemma constraint with its inflected form in the reference and computing the percentage of reference inflected terms that appear in the translation output.

Results and Discussion

Intrinsic Inflection Accuracy. To evaluate the quality of the inflection modules, we first compare the inflection accuracy of the neural and rule-based inflection modules against the term usage accuracy of the TLA model. The rule-based inflection module achieves higher inflection accuracy than the neural module on both test suites: the neural module obtains 81.2% accuracy on the En-De health set and 15.4% on the En-Lt news set, while the rule-based module achieves 87.6% on En-De and 77.4% on En-Lt. The rule-based module achieves accuracy close to that of TLA on En-De (89.2% term usage accuracy) and higher accuracy on En-Lt (67.9% term usage accuracy).

To investigate why neural inflection underperforms the rule-based module, we examine how the training and validation perplexity change over the number of training epochs (see Appendix). For both languages, the validation perplexity stops decreasing after a few training epochs (10 epochs for En-De and 20 epochs for En-Lt) while the training perplexity decreases very slowly. The final training perplexity remains around 5.1 on En-De and 5.7 on En-Lt, which is high considering the number of possible inflected forms of a German or Lithuanian lemma. This indicates that the neural module does not effectively learn generalizable inflection rules from the data.

Table 3: BLEU, lemma, and term usage rates on the En-De health and En-Lt news test suites. For lemma and term usage, we report scores on all constraints (All), constraints that require no inflection (No Inf), and constraints that require inflection (Inf). We boldface the highest scores and their ties based on the paired bootstrap significance test (Clark et al., 2011) with p < 0.05.

Table 3 shows the impact of the rule-based and neural inflection modules on top of a range of AR and NAR baselines. The NAR baseline without constraints achieves BLEU competitive with the AR baseline on En-Lt and slightly lower BLEU on En-De, as in Xu and Carpuat (2021). Given lemma constraints, AR with CD without inflection obtains lower term usage accuracy and lower BLEU than AR with TLA, as in Bergmanis and Pinnis (2021). Similar to AR with CD, NAR+C without inflection obtains lower term usage and close or lower BLEU than AR with TLA.

Adding rule-based inflection helps all models leverage lemma constraints more accurately. On En-De, it significantly improves the term usage accuracy of AR with CD by +4.7% and of NAR+C by +5.1%. 20 On En-Lt, it significantly improves both the lemma usage rate and the term usage accuracy of AR with CD (+3.2% on lemma usage and +10.7% on term usage) and of NAR+C (+3.5% on lemma usage and +11.0% on term usage). Remarkably, it also improves the term accuracy of En-Lt AR with TLA, which is already trained to inflect the target lemma constraints. When evaluating only on constraints that require inflection, the rule-based module improves by 4.4-8.3% on TLA, 38.6-46.5% on CD, and 38.7-48.1% on NAR+C.
As expected based on the inflection accuracy results, rule-based modules outperform neural-based ones across the board. These improvements in term usage preserve or slightly improve BLEU (the BLEU improvement is statistically significant for NAR+C on En-De, but not for the other models), as can be expected since the constraints only constitute a small portion of the tokens in the translation outputs. Overall, these results indicate that our proposed framework is model-agnostic and support our hypothesis that the lemma constraints can be effectively inflected based on the source context alone. End-to-End MT Evaluation We now compare our framework against TLA. Rule-based inflection combined with NAR+C achieves lemma and term usage rates close to TLA (∆ ≤ 2%) on En-De, and +11.8% higher lemma usage and +7.8% higher term usage accuracy on En-Lt (the improvements are significant). On En-Lt, the largest improvements are on constraints that require inflection: +20.5% on lemma usage and +10.6% on term usage. Incorporating the constraints preserves translation quality, with no significant difference in BLEU. Overall, these results show the benefits of integrating linguistic knowledge via rule-based inflection over purely data-driven approaches. Our approach is also more adaptive, as NAR+C with rule-based inflection does not require re-training the whole NMT model to incorporate new lemma terms. Instead, new terms can be incorporated by updating the morphological dictionary used in the inflection module. Cost Trade-offs Implementing the rule-based inflection module for the first target language (Lithuanian) took around 6 hours (including the time for learning the grammar knowledge from Wikipedia) by a computer scientist without prior knowledge of the target language or formal linguistics training. The second language (German) implementation took only 3 hours, since some rules are shared across languages. By contrast, the neural-based module was implemented in about 3 hours but took around 38 hours to train a single model for one language pair on 2 GeForce GTX 1080 Ti GPUs. While these numbers do not provide a controlled comparison, they highlight that the rule-based module is relatively simple to build, as it can be done for both languages in 7-15% of the time required to train the neural model. Term Frequency We analyze where rule-based inflection helps the most by computing the term usage accuracy on terms in different frequency buckets. As shown in Figure 2, the trends differ between En-De and En-Lt. On En-De, CD + rule slightly improves over TLA on terms with frequency between [5, 100) rather than on the rare terms. One reason is that the German morphological dictionary that we use to determine the gender of a word and its inflection forms only covers around 70% of the constraint terms in the health test suite. In addition, NAR+C + rule underperforms CD + rule on some constraint terms with frequency between [30, 100). This might be a side effect of knowledge distillation, which yields frequent errors for words that are rare in the training data (Ding et al., 2021). In the En-Lt test set, 68% of the constraint terms are used in inflection forms that are unseen in the training data. As shown in the figure, both CD + rule and NAR+C + rule bring substantial improvements over TLA on terms that are unseen in the training data. This is because most Lithuanian nouns and adjectives are inflected based on a fixed set of rules; thus, even when the target lemma is unseen in the training data or morphological dictionary, it can still be inflected correctly. As a result, the rule-based inflection module can effectively incorporate linguistic knowledge in translation models and thus generalizes better to rare and unseen terms.
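The frequency-bucket analysis above can be sketched in a few lines; the bucket edges follow the ranges quoted in the text, and all names are illustrative assumptions rather than the authors' code.

```python
from collections import Counter, defaultdict
import bisect

def accuracy_by_frequency(terms, hits, train_counts: Counter):
    """terms: reference inflected terms; hits: 1 if found in the output."""
    edges = [1, 5, 30, 100]   # bucket 0 = unseen, then [1,5), [5,30), ...
    buckets = defaultdict(lambda: [0, 0])
    for term, hit in zip(terms, hits):
        b = bisect.bisect_right(edges, train_counts[term])
        buckets[b][0] += hit
        buckets[b][1] += 1
    return {b: 100.0 * h / n for b, (h, n) in sorted(buckets.items())}
```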
Qualitative Analysis We examine a few randomly selected translation examples from TLA, NAR+C, and their counterparts with rule-based inflection. As shown in Table 4, TLA tends to copy constraint terms that are infrequent in the training data, and adding the rule-based inflection module helps TLA inflect the term correctly instead. In NAR+C models, the inflection module also improves the translation of the context around constraint terms, while the vanilla NAR+C model is prone to compounding errors caused by the uninflected constraints. Conclusion We introduced a modular framework for leveraging terminology constraints provided in lemma forms in neural machine translation. The framework is based on a novel cross-lingual inflection module that inflects the target lemma constraints given the source context and an NMT model that integrates the inflected constraints in the output. We showed that our framework can be flexibly applied to different types of inflection modules, including rule-based and neural-based ones, and different NMT models, including autoregressive and non-autoregressive ones, with minimal training costs. Results on the English-German health and English-Lithuanian news test suites showed that the linguistically motivated rule-based inflection module helps NMT models incorporate terminology constraints more accurately than both neural-based inflection and the existing end-to-end approach to incorporating lemma constraints. This work opens future avenues for further improving the inflection module by combining linguistic knowledge with data-driven approaches. Future work is needed to explore the strengths and weaknesses of this framework for languages with a broader range of morphological properties. A Data Preprocessing For preprocessing, we apply normalization, tokenization, true-casing, and BPE (Sennrich et al., 2016) with 37,000 and 24,500 merging operations for En-De and En-Lt, respectively. Table 5 shows the provenance and statistics of the preprocessed data. B Model and Training Details All models are based on the base Transformer (Vaswani et al., 2017) with d_model = 512, d_hidden = 2048, n_heads = 8, n_layers = 6, and p_dropout = 0.3. We tie the source and target embeddings with the output layer weights (Press and Wolf, 2017; Nguyen and Chiang, 2018). We add dropout to embeddings (0.1) and label smoothing (0.1). All models are trained with the Adam optimizer (Kingma and Ba, 2015) with an initial learning rate of 0.0005 and effective batch sizes of 32k tokens for AR models and 64k tokens for NAR models, for a maximum of 300,000 steps. We select the best checkpoint based on validation perplexity. Following Xu and Carpuat (2021), we train NAR models using sequence-level knowledge distillation: we replace the reference sentences in the training data with translation outputs from the AR models. To train the neural-based inflection module, we initialize its encoder parameters using the NAR baseline encoder and train it using the Adam optimizer with a batch size of 32k tokens for a maximum of 200,000 steps. Models are trained on 2 GeForce GTX 1080 Ti GPUs. Table 6 shows the number of parameters in each model.
C Evaluation Metric We evaluate translation quality using sacreBLEU (Post, 2018).
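For concreteness, the lemma usage rate from the evaluation section can be computed as follows; this is a minimal sketch, assuming a `lemmatise` callback (e.g. a Stanza pipeline) and a simple data layout of our own choosing. Term usage accuracy is computed analogously, with the reference's inflected forms in place of lemmas, and multi-token terms would require matching lemma n-grams rather than set membership.

```python
def lemma_usage_rate(outputs, constraints, lemmatise):
    """outputs: MT hypotheses; constraints: per-sentence lemma lists;
    lemmatise: callable mapping a sentence to its list of lemmas."""
    hits = total = 0
    for hyp, lemmas in zip(outputs, constraints):
        hyp_lemmas = set(lemmatise(hyp))
        hits += sum(1 for lem in lemmas if lem in hyp_lemmas)
        total += len(lemmas)
    return 100.0 * hits / total if total else 0.0
```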
6,756.4
2021-09-10T00:00:00.000
[ "Computer Science", "Linguistics" ]
UAV-Based Hyperspectral and Ensemble Machine Learning for Predicting Yield in Winter Wheat Winter wheat is a widely-grown cereal crop worldwide. Using growth-stage information to estimate winter wheat yields in a timely manner is essential for accurate crop management and rapid decision-making in sustainable agriculture, and to increase productivity while reducing environmental impact. UAV remote sensing is widely used in precision agriculture due to its flexibility and increased spatial and spectral resolution. Hyperspectral data are used to model crop traits because of their ability to provide continuous rich spectral information and higher spectral fidelity. In this study, hyperspectral image data of the winter wheat crop canopy at the flowering and grain-filling stages was acquired by a low-altitude unmanned aerial vehicle (UAV), and machine learning was used to predict winter wheat yields. Specifically, a large number of spectral indices were extracted from the spectral data, and three feature selection methods, recursive feature elimination (RFE), Boruta feature selection, and the Pearson correlation coefficient (PCC), were used to filter high spectral indices in order to reduce the dimensionality of the data. Four major basic learner models, (1) support vector machine (SVM), (2) Gaussian process (GP), (3) linear ridge regression (LRR), and (4) random forest (RF), were also constructed, and an ensemble machine learning model was developed by combining the four base learner models. The results showed that the SVM yield prediction model, constructed on the basis of the preferred features, performed the best among the base learner models, with an R2 between 0.62 and 0.73. The accuracy of the proposed ensemble learner model was higher than that of each base learner model; moreover, the R2 (0.78) for the yield prediction model based on Boruta’s preferred characteristics was the highest at the grain-filling stage. Introduction Winter wheat is one of the three major cultivated cereals and is the most widely-grown cereal crop in the world [1]. Wheat plays a crucial role in global food production, trade, and food security [2]. Estimating wheat yield prior to harvest on a large scale not only offers a scientific foundation for local governments to establish production goals, but also ensures food security [3]. Therefore, the timely and accurate estimation of winter wheat yield is crucial for intelligent agricultural management and people's livelihoods. The traditional yield assessment of winter wheat involves destructive sampling in the field in order to determine yield, which is not only time consuming, less objective, and lacking in robustness and sustainability, but also fails to monitor the crop growth throughout its reproductive life [4]. The development of remote sensing technology in recent years has provided a non-destructive, rapid, and efficient way to monitor crop growth [5]. Remote sensing techniques include ground-based platforms, satellite-based platforms, and UAV-based platforms [6]. The data collection from the ground-based poor repeatability [26,28,29]. Most of the previous studies have focused on the mining of spectral information and the exploration of regression techniques based on machine learning algorithms [30,31], and there has been little discussion and research on model fusion. Therefore, the potential problems of small sample size and single machine learning algorithms can limit the application to winter wheat yield estimation in practical production. 
In order to address this issue, we introduced decision-level fusion (DLF) models from ensemble machine learning. The DLF models fuse multichannel/multiscale information and typically produce more consistent and better prediction performance than individual models, have good noise immunity, can handle high-dimensional data, provide complete and detailed object information, and are simple to implement and fast to train [32,33]. These models are extensively used in the fields of injury detection, artificial intelligence, and image processing [34][35][36]. Based on previous studies, machine learning and hyperspectral imagery have been used successfully in many applications, but the strategy based on DLF model fusion has not yet been applied to crop yield prediction [37,38]. The aim of this study was to estimate winter wheat yield using hyperspectral imagery from a UAV. The specific objectives included the following: (1) investigating the potential of hyperspectral imagery for winter wheat yield prediction, (2) evaluating the performance of winter wheat yield prediction models under different feature selection methods, and (3) building a DLF model based on individual machine learning algorithms in order to improve prediction performance. Experimental Design and Data Collection This research trial was conducted in the 2019-2020 growing season at the experimental base of the Chinese Academy of Agricultural Sciences in Xinxiang, Henan Province (113°45′38″ E, 35°8′10″ N). During the winter wheat reproductive period (November 2019-June 2020), the total monthly rainfall, average monthly temperature, and average monthly sunshine hours all reached their maximums in May, while the monthly relative humidity reached its maximum in January (Figure 1). Rainfall was mainly concentrated in January, February, April, and May; temperature and sunshine hours both increased gradually from January onwards as the crop developed; and relative humidity was fairly constant throughout the season. The trial area shown in Figure 2 consisted of 180 plots with three irrigation treatments set at high irrigation (irrigation treatment 1, IT1), moderate irrigation (irrigation treatment 2, IT2), and low irrigation (irrigation treatment 3, IT3) during the full growth period, using large sprinklers corresponding to total irrigation water depths of 240 mm, 190 mm, and 145 mm, respectively. The irrigation schedule for each stage is shown in Table 1. Each irrigation treatment had 60 plots, 8 m long and 1.4 m wide, with an area of 11.2 m². Thirty varieties of winter wheat were selected for this experiment, and each irrigation treatment was replicated twice in a group of 30 wheat varieties to ensure the objectivity of the experiment. For production fields, pesticide and fertilizer management was performed according to local management practices. At maturity (3 June 2020), winter wheat yields were collected using a plot combine.

Table 1. Irrigation depth (mm) applied at each growth stage under the three treatments.
Stage           IT1   IT2   IT3
Tillering        35    35    35
Overwintering    35    35    35
Greening         35    25    20
Jointing         50    35    20
Heading          50    35    20
Grain filling    35    25    15
Total           240   190   145

Figure 1. Meteorological conditions during the wheat growth period from November to May: (a) total monthly rainfall, (b) average monthly temperature, (c) average monthly humidity, and (d) average monthly sunshine hours. Acquisition and Processing of Hyperspectral Data The M600 Pro (SZ DJI Technology Co., Shenzhen, China) was used as the flight platform with an onboard Resonon Pika L nano-hyperspectral push-broom scanner to acquire hyperspectral data.
The Resonon Pika L Nano-Hyperspec with meter-level accuracy is a lightweight (0.6 kg) hyperspectral sensor specifically designed for use on UAV platforms. This sensor has 300 spectral bands in the 400-1000 nm wavelength range with a band width of 2.1 nm, including the visible and near-infrared regions. It is operated as an externally mounted push-broom scanner with a choice of scanning angles (vertically downwards, horizontally, or at any angle). The Resonon Pika L Nano-Hyperspec features a focal length of 12 mm and offers a 22° field of view. Each scan line contains 640 pixels with a pixel pitch of 6 µm. The spectral resolution and resampling intervals are 6 nm and 2 nm, respectively. The sensor also includes a GPS/inertial measurement unit (GPS/IMU) navigation system, which enables the gathering of real-time altitude data from the UAV platform, allowing for better reflection calibration and geographic alignment. Depending on the environmental circumstances, certain criteria were established to fit the site size survey in this study. To ensure the quality of the data, hyperspectral data corresponding to the flowering (Zadoks 65) and grain-filling (Zadoks 85) stages of the wheat were acquired on 30 April 2020 and 13 May 2020, respectively. Both UAV flights were carried out between 10 a.m. and 2 p.m. in clear and cloudless weather conditions to minimize the effect of shadows. The UAV flew at a speed of 5 m/s at a height of 40 m, with a ground sampling distance of 2.5 cm. Three 0.25 m² reference panels that differed in brightness (95% white, 40% grey, and 5% black) were placed within the study area for postprocessing and measured with the spectrometer. In this study, 12 ground control points (GCPs) were evenly distributed across the field as precise georeferenced positions, and their centimeter-level positioning accuracy was obtained through differential global positioning systems. The acquired hyperspectral data was subjected to radiometric correction, atmospheric correction, and geometric correction. Hyperspectral images were acquired at a low altitude and under stable light conditions, so atmospheric correction was not required. SpectrononPro software (version 3.4.0, Resonon) was used for hyperspectral image correction. For hyperspectral radiometric corrections, empirical linear corrections were made using the measured images and field spectra of the wheat and reference panels. The hyperspectral image radiance data was converted to reflectance by the known reflectance of the white reference panel. Three standard panels with different reflectance properties were placed in the flight area to derive the three parameters for atmospheric correction. Geometric correction used position and attitude parameters from the GPS/IMU and the relationship between the GPS/IMU and the imager. The parameters were converted between their respective coordinate systems. Noise in the image data can cause large differences at the beginning and end of the spectral range shown by the image and field spectra, so it is necessary to eliminate certain bands from the image data. The background (shadows and dirt) was eliminated from each plot by thresholding the NIR band at a wavelength of 800 nm. According to previous studies, vegetation usually has higher reflection values than the background in the NIR region, which is the reason behind our filtering method: setting the threshold at 30%, and removing noise bands below 440 nm and above 960 nm.
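The background-removal and band-trimming steps just described reduce to a few array operations. A minimal sketch, assuming a float reflectance cube of shape (rows, cols, bands); the function and variable names are illustrative, not the authors' processing chain.

```python
import numpy as np

def clean_cube(cube: np.ndarray, wavelengths: np.ndarray):
    """cube: (rows, cols, bands) float reflectance image."""
    band_800 = int(np.argmin(np.abs(wavelengths - 800.0)))
    vegetation = cube[:, :, band_800] >= 0.30          # 30% NIR threshold
    keep = (wavelengths >= 440.0) & (wavelengths <= 960.0)
    trimmed = cube[:, :, keep].copy()
    trimmed[~vegetation] = np.nan                      # mask soil and shadow
    return trimmed, wavelengths[keep]
```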
Acquisition of Spectral Indices Hyperspectral data acquired using UAVs consist of hundreds of bands that contain a wealth of spectral information, and many of the adjacent bands are highly correlated with each other [39]. Sixty published spectral indices calculated using spectral reflectance were selected for predicting yield (Table 2), with each spectral index derived from two or more spectral bands. These spectral indices included the curvature index (CI), chlorophyll absorption index (CAI), normalized difference vegetation index (NDVI), simple ratio index (SR), pigment-specific normalized difference (Psnd), renormalized difference vegetation index (RDVI), triangular vegetation index (TVI), modified versions of these indices, such as the modified normalized difference (MND), modified simple ratio (MSR), and normalized difference (ND), and their combinations, such as MCARI/MTVI2, among others. The majority of the bands utilized are in the red, NIR, and red-edge spectral regions. Feature Selection Methods The choice of input features is as important as the choice of the algorithm to be used when building the model. In supervised learning, feature selection is often used prior to model development to minimize the feature set dimensionality and thus gain performance improvements in the learning algorithm. In this study, 60 spectral indices were chosen, so it was ideal to select the most sensitive spectral indices to reduce the number of features. The following three common feature selection methods were used in this study to rank the importance of features: recursive feature elimination (RFE), Boruta, and the Pearson correlation coefficient (PCC). Recursive feature elimination (RFE) [71] is a wrapper-based feature selection method that selects features with the help of a classification method. RFE requires training multiple classifiers to reduce the feature dimension; the training time increases with the number of classifiers trained, and each part of the analysis can continue to be iterated, saving computational time. Low-weighted features are eliminated in each iteration, while equal weights are assigned to relevant attributes. RFE is performed in three steps, as follows: (1) an estimator is used to estimate the initial features' importance scores, (2) the feature with the lowest significance score is eliminated, and (3) a rank is given to each deleted variable in the order in which it was removed. The Boruta [72] algorithm is a wrapper method built around the random forest algorithm. It provides criteria for a number of important factors and captures the outcome variables for all relevant features in the dataset by scoring all candidate features as well as shadow features. The importance values of the shadow features determine whether the candidate features are significantly relevant or not. The Boruta algorithm steps are as follows: (1) randomly shuffle the feature order to obtain the shadow feature matrix, (2) train the model with the shadow feature matrix as input, (3) take the maximum value among the shadow features and record the cumulative hits of the real features to mark the features as important or unimportant, and (4) remove the unimportant features and repeat the first three steps until all the features are marked. The Pearson correlation coefficient (PCC) [73] is a measure of the correlation between two variables, and it varies between −1 and 1. The PCC (r_xy) may be calculated using the following equation:

r_xy = Σ_{i=1}^{n} (x_i − x̄)(y_i − ȳ) / √( Σ_{i=1}^{n} (x_i − x̄)² · Σ_{i=1}^{n} (y_i − ȳ)² ),

where x̄ = (1/n) Σ_{i=1}^{n} x_i and ȳ = (1/n) Σ_{i=1}^{n} y_i denote the means of x and y, respectively, with n representing the sample size. The PCC is invariant to changes in the location and scale of the variables. The absolute value of the PCC was used to compute the feature significance scores.
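The authors implement these rankings in R ('caret'); the sketch below shows equivalent RFE and PCC rankings in Python with scikit-learn (Boruta has a comparable Python port in the boruta package). Here X is the plot-by-index matrix of the 60 spectral indices and y the plot yields; all names are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_selection import RFE

def rank_features(X, y, names):
    # Wrapper-based RFE: recursively drop the least important index.
    rfe = RFE(RandomForestRegressor(n_estimators=200, random_state=0),
              n_features_to_select=1, step=1)
    rfe.fit(X, y)
    rfe_order = [n for _, n in sorted(zip(rfe.ranking_, names))]
    # Filter-based ranking by |Pearson r| between each index and yield.
    r = [abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])]
    pcc_order = [n for _, n in sorted(zip(r, names), reverse=True)]
    return rfe_order, pcc_order
```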
Decision-Level Fusion Model for Ensemble Learning In this study, support vector machines (SVM), Gaussian process (GP), linear ridge regression (LRR), and random forest (RF) were the four regression models based on DLF for ensemble learning. The 'caret' R package in R 4.0.2 was used to build the individual learners and the DLF framework. The basic principle of DLF is shown in Figure 3. The hyperspectral index and winter wheat yield data pairs from 180 plots were randomly and uniformly divided into five groups, one group of which (n = 36) was randomly taken as the validation set and the remaining four groups (n = 144) as the training set. Predictions were made for each fold by training the model under five-fold cross-validation. In the five-fold cross-validation process, the winter wheat yield predictions were generated separately for each regression model, and the model effects could be observed by examining the results of the individual learners on the validation set. The m individual learners produce a prediction matrix of m × n dimensions after completing the above process (n being the number of samples in the training set and m the number of individual learners), and the results of the prediction matrix were then used to train the DLF model to make the final prediction. Importantly, a five-fold cross-validation method was used in all of the models to ensure the reasonableness of the comparison between methods. To avoid uncertainty in the results, the process of dividing the data into training and validation sets using the five-fold cross-validation method was repeated 40 times to generate 200 models, and the mean prediction accuracy of the validation set of these 200 models was used as the final evaluation metric. Regression Methods Based on a survey of previous research, and in order to assess the effectiveness of different machine learning algorithms and to better comprehend the non-linear connection between the dependent and independent variables, the following four widely utilized machine learning models were selected and used for comparison: SVM, GP, LRR, and RF. The four machine learning algorithms are described below, as follows: SVMs (support vector machines) [74], which benefit from statistical learning theory and the principle of minimal structural risk, are sparse and robust classifiers, mainly used for the classification and regression of high-dimensional samples. SVMs are increasingly popular in existing research areas because of their characteristics, such as good generalization ability and robustness to noise. SVMs are trained on a sufficient number of samples in order to fit a hyperplane that approximates the optimal output variables. The approximation of the hyperplane is governed by two important choices, the kernel function and the loss function. The radial basis function was utilized as the kernel function in this research, and the regularization parameters were tuned using a cross-validation method. GP (Gaussian process) [75] is a supervised learning process for estimating regression model parameters through sample learning.
GP belongs to the class of stochastic processes in probability theory and mathematical statistics in which any linear combination of the random variables conforms to a normal distribution. GP is now widely used in modelling in the field of remote sensing, and therefore this algorithm was used in this study. LRR (linear ridge regression) [76] is a biased-estimation regression method dedicated to the analysis of collinear data. LRR obtains more objective regression coefficients by losing some information and reducing precision. Typically, LRR has a low R 2 and high regression coefficients, and is widely used in collinear problems and research with a large amount of data. The LRR algorithm was used in this study to construct yield estimation models. RF (random forest) [76] is an ensemble learning method that constructs multiple decision trees and can perform decision making and regression. RF is able to model the relationship between dependent and independent variables based on decision rules. It can handle a large number of input variables, assess the importance of variables while deciding on categories, produce higher accuracy, balance errors, and quickly mine the data. Therefore, the RF algorithm was used for modelling in this study. The machine learning algorithms used in this study were all implemented independently. To improve the prediction accuracy of the models, we further processed these results to construct a DLF model [77], which fuses the results of different machine learning models through the trained weights obtained. Based on previous research, a weighted prior (WP) approach was introduced to construct the DLF model, taking into account the estimated variance of each model. The DLF and WP can further improve the model accuracy and generalization ability and minimize the result bias. The procedure for this method is as follows [78]:

ε^(i) = (1/N) · Σ_{j=1}^{N} (y_j^(i) − y_j)²,

where ε^(i) is the estimation variance, y^(i) is the predicted value from the i-th model, y is the observed value, and N is the total number of samples in the training set;

w_i = 1 / ε^(i),

where l denotes the total number of models and w_i denotes the weight of the i-th model, i = 1, …, l;

w*_i = w_i / Σ_{k=1}^{l} w_k,

where w*_i is the final DLF weighting; and

y_WP = Σ_{i=1}^{l} w*_i · y^(i),

where y_WP is the final result based on the WP method. The individual machine learning models were used as input to build the DLF model.
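Under the inverse-variance reading of the WP equations reconstructed above (an assumption on our part, since the original formulas were lost in extraction), the fusion step amounts to a few lines:

```python
import numpy as np

def wp_weights(preds: np.ndarray, y: np.ndarray) -> np.ndarray:
    """preds: (m models, N training samples) out-of-fold predictions."""
    eps = np.mean((preds - y) ** 2, axis=1)  # estimation variance per model
    w = 1.0 / eps                            # raw inverse-variance weights
    return w / w.sum()                       # normalized final DLF weights w*_i

def wp_predict(preds_new: np.ndarray, w_star: np.ndarray) -> np.ndarray:
    """preds_new: (m models, n samples) base-model predictions to fuse."""
    return w_star @ preds_new                # y_WP
```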
Cross-Validation and Parameter Optimization A five-fold cross-validation was used to form the prediction matrix in the individual machine learning process of the DLF, which can be used as external cross-validation. In addition, internal random grid-search cross-validation allows the fine-tuning of the hyperparameters of the individual learners, as shown in Figure 4. In external cross-validation, the original dataset is randomly divided into five equal parts (Figure 4), one of which is then used as the validation set and the remaining four as the training set each time. Each training set used for external cross-validation was also randomly divided into five equal parts, of which 1/5 was used as the validation set for internal cross-validation and the remaining 4/5 was used as the training set for internal cross-validation. The model was trained by setting different combinations of candidate hyperparameters for internal cross-validation, and the model was then validated on the internal cross-validation set. Each hyperparameter combination was validated five times, and after training and model evaluation, the hyperparameter combination with the highest average validation accuracy was applied to the outer cross-validation to construct the final model. Statistical Analysis In this study, the regression models were evaluated in the following four ways: coefficient of determination (R 2), root mean square error (RMSE), ratio of performance to interquartile distance (RPIQ), and ratio of performance to deviation (RPD). The criteria favour yield estimation models with higher accuracy, and an RPD of >1.5 is usually considered to indicate a reliable prediction. The formulae for the four evaluation metrics are as follows:

R 2 = 1 − Σ_{i=1}^{N} (y_i − ŷ_i)² / Σ_{i=1}^{N} (y_i − ȳ)²,
RMSE = √( (1/N) · Σ_{i=1}^{N} (y_i − ŷ_i)² ),
RPD = SD / RMSE,
RPIQ = (Q_3 − Q_1) / RMSE,

where y_i is the measured value, ŷ_i is the predicted value, ȳ is the mean of the measured values, N is the sample size, SD is the standard deviation of the measured values of the prediction set, Q_3 is the lower boundary of the third quartile, and Q_1 is the upper boundary of the first quartile. Descriptive Statistics The average yield of winter wheat across all of the test plots in this study was 6.55 t·ha−1, and the mean yields differed for the three irrigation treatments. The yield statistics for the test plots under each irrigation treatment and all of the plots are shown in Table 3. In general, the treatments with higher irrigation levels were associated with higher yields. IT1 had the highest average yield of 7.97 t·ha−1, followed by IT2 at 6.73 t·ha−1, and IT3 at 4.94 t·ha−1. The data ranges, quantile statistics, standard deviations (SD), and coefficients of variation (CV) for the yield datasets for all of the plots and the three experimental treatments showed significant yield differences between the treatments and well separated datasets. The simple linear regression coefficients of determination for each vegetation index at the flowering and the grain-filling stages are shown in Table A1. The results show that the R 2 of each spectral index in the grain-filling stage was mostly greater than that in the flowering stage. The RVSI index performed best at both stages, with R 2 values of 0.48 at the flowering stage and 0.49 at the grain-filling stage. The poorest performing index was CI in the flowering stage, with an R 2 of 0.08, and the index with the poorest performance in the grain-filling stage was TCARI/OSAVI, with an R 2 of 0.10. Feature Importance Ranking In this study, the RFE, Boruta, and PCC methods were used to rank the importance of 60 vegetation indices at the flowering and grain-filling stages. The results of the ranking of the importance of each vegetation index are shown in Table A2 of Appendix A. Comparing the ranking of feature importance at the flowering and grain-filling stages for the three feature selection methods revealed that RVSI ranked highly and performed consistently well overall. The ranking of each of the other vegetation indices for the different stages varied with the different feature selection methods. Of the 60 vegetation indices selected, 23 were composed of three or four bands, and about 15 of them were in the top 40 in order of importance. We also noted that two integrated indices, MCARI/MTVI2 and TCARI/OSAVI, were ranked in the top 40 by both the RFE and Boruta trait-screening methods in both of the wheat growth stages. Both of the indices were ranked in the top 25 after RFE screening at the grain-filling stage.
After the PPC trait-screening method, both of the indices were ranked outside of the top 40 at the flowering stage, and only MCARI/MTVI2 remained in the top 40 at the grain-filling stage. Comparison and Performance of Feature Selection Methods and Model Accuracy In order to further explore the high-performance features, a total of 60 features were iteratively added to the machine learning model, starting with the first feature in each order, and updating the model training performance until all of the 60 features were included. The training accuracy was calculated for four base models (SVM, GP, LRR, and RF), obtained under three feature selection methods, for the two wheat growth stages ( Figure 5). For the SVM model, the Boruta method performed best in both the flowering and grain-filling stages, followed by PCC and RFE, and the accuracy of the model improved as the number of features increased ( Figure 5(a1,2)). For the GP model, the flowering stage was more accurate when using the Boruta method, followed by the PCC and RFE methods, and the grain-filling stage was better with the Boruta method and PCC compared to RFE ( Figure 5(b1,2)). For the LRR model, the RFE method performed best at the flowering stage. The Boruta method performed the best at the grain-filling stage and PCC performed the worst ( Figure 5(c1,2)). In the RF model, the best accuracy was achieved at the flowering and grain-filling stages when the Boruta method was used to rank the models, with the PCC and RFE methods performing in general agreement at the flowering stage and the results of the RFE method were the worst at the grain-filling stage ( Figure 5(d1,2)). The combined results showed that the accuracy of all of the four models (SVM, GP, LRR, and RF) remained stable as the number of features increased after about 25 features. Therefore, this study used the top 25 features for the ensemble model development. Comparing the R 2 of the four models constructed for the two growth stages showed that the LRR model had the lowest accuracy, with R 2 ranging from 0.48 to 0.54 at the flowering stage and 0.48 to 0.63 at the grain-filling stage, and after the input features were stable, the R 2 values were 0.54 and 0.59, respectively. The R 2 of the GP model ranged from 0.12 to 0.72 for the flowering stage, and 0.55-0.81 for the grain-filling stage. The RF model had the highest accuracy, with R 2 ranging from 0.76 to 0.94 at the flowering stage and 0.86-0.95 at the grain-filling stage, and when the input features were stabilized, the R 2 values were 0.93 and 0.95, respectively. The five models (four base models and the DLF model) were trained using the full features of the training samples and selected features, and model performance was evaluated on the validation samples. The mean values of the validation accuracy obtained from 200 trials are shown in Table 4. Among the base models constructed in this study, the validation accuracy of the SVM model constructed using the RFE method with the preferred spectral indices at the flowering stage was the highest (R 2 = 0.63, RMSE = 1.03 t·ha −1 , RPIQ = 2.40, RPD = 1.60), and the validation set accuracy of the SVM model constructed using the Boruta method with the preferred features at the grain-filling stage was the highest (R 2 = 0.73, RMSE = 0.87 t·ha −1 , RPIQ = 2.74, RPD = 1.90). 
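Before turning to the DLF results, the incremental feature-evaluation loop described at the start of this subsection, where ranked features are added one at a time and cross-validated accuracy is tracked until it stabilises (here around 25 features), can be sketched as follows. The study was performed in R; this is an illustrative Python equivalent.

```python
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVR

def accuracy_curve(X, y, ranked_idx, model=None):
    """Mean 5-fold R2 when using the top-k ranked features, k = 1..all."""
    model = model or SVR(kernel="rbf")
    return [cross_val_score(model, X[:, ranked_idx[:k]], y,
                            cv=5, scoring="r2").mean()
            for k in range(1, len(ranked_idx) + 1)]
```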
Among the constructed DLF models, the best accuracy of the models constructed using the Boruta and PCC methods with the preferred features at the flowering stage achieved an R 2 of 0.66, and the highest accuracy of the models constructed using the Boruta method with the preferred features was at the grain-filling stage (R 2 = 0.78, RMSE = 0.79 t·ha −1 , RPIQ = 2.99, RPD = 2.08). Overall, all of the methods gave an R 2 of 0.56 or higher, indicating the effectiveness of these models in estimating the winter wheat yield. The DLF models outperformed all of the individual models. The R 2 values for the DLF models constructed using the preferred features were ≥0.65 for the flowering stage, and 0.63 for the DLF models constructed using all features. At the grain-filling stage, the R 2 values were ≥0.77 for the DLF models constructed using the preferred features, and 0.75 for the DLF models constructed using all of the features at the grain-filling stage. The accuracy of all of the feature selection methods was improved in this study relative to the full feature model, and the RFE method improved the most at the flowering stage. The R 2 values of the SVM, GP, LRR, RF, and DLF models improved by 0.04, 0.03, 0.04, 0.03, and 0.02, reaching 0.63, 0.59, 0.62, 0.60, and 0.65, respectively, at the flowering stage. The Boruta method improved the most at the grain-filling stage. The R 2 values for the five models increased by 0.05, 0.05, 0.06, 0.03, and 0.03, reaching 0.73, 0.72, 0.66, 0.68, 0.78, respectively. In addition, the accuracy of the models was higher at the grain-filling stage compared to the flowering stage. Scatter plots ( Figure A1) were used to better show the yield prediction performance of the models and the feature selection methods. In general, all of the models gave good results and performed well with all three of the feature selection methods. In addition, the accuracy of the DLF model varied by growth stage and feature selection method. The performance was stable across all of the feature selection methods at the different growth stages, indicating that it was more adaptable to different feature selection methods. Most of the observed and predicted yields obtained from the DLF model showed good agreement with each other, and it was good at simulating the high and low yields obtained at harvest for the different irrigation treatments. Yield Distribution A comparison of all of the models used in this study revealed that the DLF model, constructed using the Boruta method for preferential feature selection at the grain-filling stage, achieved the best accuracy, and it was therefore used to generate a distribution of predicted yields ( Figure 6). The results of the t-test analysis between the different irrigation treatments are shown in Table 5 and indicate that the yield distribution differed significantly between the three treatments, in the order IT1 > IT2 > IT3. Overall, the predicted yield distribution in the IT1 treatment was in the range of 5 to 10 t·ha −1 . Based on the observed results, the IT1 treatment had the highest yield of 5 to 9 t·ha −1 , followed by the IT2 and IT3 treatments; this is consistent with the yield distribution predicted by the DLF model and demonstrates the feasibility of using a model to estimate yield. Discussion We selected 60 hyperspectral narrow-band indices for this study, of which approximately 74% were associated with red-rimmed bands. 
At the flowering stage, more than six indices associated with the red-edge band were in the top 10 after sorting by the RFE, Boruta, and PCC methods. These red-edge spectral indices all provided better prediction performance than other band indices, which agrees with the findings of previous studies [79][80][81]. For example, Xie et al. [82] analyzed the relationship between yield and canopy spectral reflectance of winter wheat at maturity under low-temperature stress and found that the red-edge region was associated with grain yield. However, there was a large variation in the ranking of the different indices, which may be due to the use of different feature selection methods or the different environments to which the vegetation indices apply. Some of the spectral indices performed consistently well among the different feature selection methods at the two wheat growth stages, such as the three spectral indices RVSI, DSWI-4, and ND [553,682] . The RVSI index, which consists of three bands including the red-edge band, performed well in assessing wheat rust symptoms and constructing rice physiological trait models [83], and was in the top five in the different methodological feature rankings in this study. This could be because it provided more spectral information and was more sensitive to the yield of the different feature selection methods at the different growth stages. The DWSI-4 index, originally a variant of the plant disease-water stress index constructed using simple and normalized ratios, also had good stability and performance in crop disease prediction [84]. The ND [553,682] index can be used to estimate the chlorophyll content and can minimize the effect of shading and leaf area index size [85,86]. Our study showed that these three spectral indices can be used for yield estimation. MCARI/MTVI2 and TCARI/OSAVI are integrated indices. In previous studies, their performance was better than the individual MCARI, MTVI2, and OSAVI indices, because the integrated indices had richer band information and effectively eliminate the background effects. [87,88]. The Boruta method was second to the RFE method for winter wheat at the flowering stage and performed best at the grain-filling stage, probably due to the difference in the performance between the two methods in the different environments. The Boruta method is a fully correlated feature selection method that aims to select features that are truly correlated with the dependent variable and can be used for prediction, rather than model-specific selection, and can help us to understand the characteristics of the dependent variable more comprehensively and make better and more effective feature selections [24,89]. The RFE method takes into account the correlation between the features, continuously builds models to find the best features, has good generalization ability, and is a suitable method for small sample data sets [90]. The PCC method, which performed the worst in this study, is very commonly used in sensitivity feature selection in the crop science community. It does not require any model training, but does not objectively represent correlations when the correlations between the variables are complex. There is also a risk of multicollinearity between features [91,92]. In this study, the accuracy of the model construction, based on the preferred features under feature selection, was better than that of the model under the full feature condition, which was consistent with the findings of Hsu et al. 
2011 [93] and validated the effectiveness and generalizability of the feature selection method. In this study, four individual machine learning algorithms were used to construct winter wheat yield estimation models based on a subset of spectral indices obtained after feature selection. The RF model had the highest accuracy and performed best when trained using the training set data, but the RF model was not the best performer in the validation set of the model, probably due to the overfitting phenomenon of the RF model in the training set [94]. In the model training set, the LRR models all performed the worst, but in the model validation set the GP models performed the worst at the flowering stage and the LRR models performed the worst at the grain-filling stage. LRR models tend to have a lower R 2 than the ordinary regression models but can generate a value on covariance problems [95]. The GP models use the full sample for prediction, and as the dimensionality of the data rises, the effectiveness decreases [96]. The SVM models did not perform well in the training set but had the highest accuracy in the validation. SVM is a machine learning method based on the inner product kernel function. The wrong choice of kernel hyperparameters may cause a decrease in the accuracy of the model training set estimation. However, the high accuracy of the SVM model validation set was due to its better robustness, suitability for small sample data regression, and the lack of sensitivity to kernel functions with the ability to avoid dimensional catastrophe problems. [97,98]. We also found that the accuracy of yield estimation models constructed using the four independent machine learning algorithms, SVM, GP, LRR, and RF, at the two developmental stages of winter wheat also differed greatly. Based on the model validation set, the accuracy of each model at the grain-filling stage was higher than that at the flowering stage under the different feature selections. This was due to the dry matter stored in the wheat seeds through carbon assimilation in the winter wheat during grain filling, indicating that this stage contains more spectral information that can be used to predict yield. In addition, the spectral information collected from the winter wheat was increased in order to provide a more comprehensive and accurate reflection of the yield of the winter wheat [2,99]. A DLF (decision-level fusion) model was developed based on the individual machine learning models used in this study. The results showed that the DLF model performed significantly better than each of the other models when all of the features or the selected features were used. When using the selected features, the DLF model performed best at the flowering and grain-filling stages, and the model accuracy was better than that of the individual models. In addition, using selected features obtained under the different feature selection methods, the DLF model produced R 2 values of >0.65 at the flowering stage and >0.77 at the grain-filling stage. Overall, the DLF model gave more satisfactory and better results than the individual models. This was the same conclusion reached in a previous study [33] where the DLF model was able to minimize the individual model bias and improve the accuracy of the inverse model. Taken together, the above description suggests that adequacy and diversity are two important principles in the selection of base models in the decision-level fusion process [100]. 
This requires that the different base learners should all have a good predictive performance and be able to minimize inter-model dependencies and act as complementary information [101,102]. This prerequisite requirement is justified by the fact that the DLF methods fuse the prediction results of different independent machine learners so that the final fusion results are all influenced by each base model [103]. Furthermore, fusion of models with similar high performance will yield limited prediction results [104]. Based on the requirements of DLF and the limitation problem, this study used the SVM, GP, LRR, and RF machine learning algorithms with completely different training mechanisms to construct the yield estimation models and improved the model performance through parameter optimization, and the experimental results provided further evidence of the effectiveness of the underlying models. This study used the acquired hyperspectral image time series to predict the yield of winter wheat, and the yield prediction model constructed for the grain-filling period had a high accuracy rate. The use of hyperspectral data to construct yield estimation models has been widely used in previous yield estimation studies, and all have achieved high model accuracy, consistent with the findings of this paper [105,106]. For example, Chandel et al. (2019) [107] used hyperspectral indices to construct a yield prediction regression model and found that the yield of irrigated wheat was estimated with an accuracy of 96%. However, relying on hyperspectral data alone for yield estimation still has some limitations. In future research, we intend to integrate UAV RGB and multispectral image data into yield estimation models as well, in order to broaden the application area of yield estimation. In addition, we will also consider examining the effects of biotic (weeds, pests, and diseases) and abiotic (nutrients, temperature, and salinity) stresses based on UAV imagery and ground data. Finally, additional feature selection methods and integrated learning methods will be considered for yield estimation in order to further improve the prediction accuracy. Therefore, in the future, we will also analyze the impact of diseases, insects, and fertility on wheat yield. Conclusions In winter wheat production, real-time insight into yield conditions prior to harvesting can help to optimize crop management and guide the field practices. In this study, we developed a DLF-based machine learning model for winter wheat yield prediction using UAV-based hyperspectral imagery. The narrow-band hyperspectral indices were extracted, and the most important indices were selected for model development using each of the three feature selection methods. The results showed that the RFE-based method for feature selection at the flowering stage had a higher accuracy, the Boruta-based method for feature selection at the grain-filling stage had a higher accuracy, and the DLF model outperformed the base models and achieved the highest accuracy when using the preferred features. This study demonstrates the effectiveness of using hyperspectral images to build a model for yield estimation in winter wheat. Data Availability Statement: The data presented in this study are available within the article. Conflicts of Interest: The authors declare no conflict of interest. Disclaimer: The findings and conclusions in this article are those of the authors and should not be construed to represent any official USDA or U.S. Government determination or policy. 
Mention of trade names or commercial products in this publication is solely for the purpose of providing specific information and does not imply recommendation or endorsement by the U.S. Department of Agriculture and the Chinese Academy of Agricultural Sciences. USDA is an equal opportunity provider and employer.
9,244.2
2022-01-14T00:00:00.000
[ "Agricultural and Food Sciences", "Computer Science", "Environmental Science", "Engineering" ]
A Rapid Prototyping System, Intelligent Watchdog and Gateway Tool for Automotive Applications Hand in hand with the inevitable increase in vehicle connectivity solutions, the high security and safety demands for automotive embedded systems are emphasizing the importance of thorough testing of newly developed software components. The software quality is ensured through the usage of various tools at the development stage. This paper introduces an implementation of such a tool, which is based on a standard microcontroller platform and provides functionalities of a Rapid Prototyping System (RPS). The solution is based on the universal calibration protocol XCP and use of an XCP-Master controller. The tool also acts as an intelligent watchdog, as it enables close behavioural monitoring of novel functionalities within an Electronic Control Unit (ECU). Hence, the tool is specially apt for dependable AI testing in the scope of safety-critical applications. This is potentially a key building block of a safety net for testing of novel functionalities on a ’grey-box-like’ ECU. The XCP-Master controller also provides the possibility to utilize the platform as an Ethernet-CAN message gateway for access to remote devices. I. THE AUTOMOTIVE TOOLING CHALLENGE The peculiarities of the automotive applications exert unique challenges to the associated embedded systems [1]. The rigorous safety and security, with a frequently added realtime aspect, demand thorough testing measures in a realistic environment. This is matched by a constant rise in the complexity of the vehicle controls, hence enforcing more complex communication in the evolving system of systems [2]. The added complexity is posed by usage of Electronic Control Units (ECU) from a wide range of providers. These devices execute the embedded software, which can contain up to 100 million lines of code [3]. Inevitable time-consuming and inefficient software testing, prior to integration into ECUs, are creating a conflict between the drive to improve existing functionalities and the need for their meticulous testing. AVL regularly encounters this issue when performing verification, validation or calibration of the developed functions on in-situ ECUs. The challenge is tackled in collaboration with Graz University of Technology, which also brings additional competencies in the field of automotive safety. A. Rapid Prototyping System Rapid Prototyping Systems (RPS) offer a possible solution. Such systems are able to directly interface to existing integrated systems to conduct measurements and calibration. The added possibility to bypass ECU functions is exploited when isolating specific aspects or functions within the ECU. Such RPS, which are readily available on the market, enable rapid deployment and testing of new software components, without any hardware-specific considerations [4]. These high-end devices couple a multitude of functionalities with high processing power at a considerable financial cost. Hence, they are often shared between engineers and their use is limited. In contrast, the testing procedures frequently do not demand the full capabilities of these RPSs [5]. The tests often rely on available functionalities and could be performed at a fraction of the available processing power. Hence, we target the usage of a microcontroller-based platform, which provides the possibility to access an ECU and perform measurement and calibration. Aside from the conceptualisation, the presented work goes into the implementation of such a tool. B. 
Intelligent Watchdog Just as all computer systems, embedded systems are also prone to errors. These result from a range of factors, such as random bit flips when writing to the RAM, or environmental influence, such as radiation. Some faults cause permanent system failure, with severe consequences in the safety-critical scenarios. This can be prevented using watchdog devices. These system monitors form general fault detection schemes. They are much simpler systems than the ones they are monitoring and can be connected either as external devices, or implemented directly on the same board as the monitored system. Once it detects a fault, the watchdog's task is to trigger a system reset and thereby restore the system to its former, fully functional state [6]. Intelligent watchdogs monitor system state by evaluating gathered vital system data, as well as data of interest (e.g. safety-critical functionalities). Any unexpected behaviour can trigger not only a reset, but also supplementary actions from the intelligent watchdog. Those actions are not limited to only setting an alert or triggering a system reset. In this way higher dynamics in error cases can be achieved as the smart watchdog can implement any algorithm to maintain safe functionality. Such intelligent watchdogs are based on more advanced algorithms for system evaluation and complex decision making. Watchdogs are inevitable in safety-critical applications, such as automated driving (AD), which has zero tolerance for faults [7]. The trustworthiness of autonomous vehicles mandates safety and security. The trust is the key component for acceptance of AD by drivers [8] and other stakeholders. Hence, the vehicles must fully handle safetycritical situations [9], yielding that the safety-critical components must have redundant systems and system monitors for ensuring that the system does not fail under any circumstance. When needed, the built-in redundancy takes over the control of the system and a decision is reached in terms of operating mode [10]. The intelligent watchdog supports this need for monitoring of the safety-critical systems through utilization of the XCP protocol to directly access the memory and to gather the data of interest from the monitored system. These features of the intelligent watchdog are especially crucial for development of systems that adapt during operation time (such as adaptive systems or AI-based systems). C. Ethernet-CAN Gateway CAN remains the standard automotive communication protocol for message exchange between ECUs. However, many tools that access or test ECUs, rely on standard PC architecture, which does not contain a CAN module by default. The need for message exchange between Ethernet and CANbased devices poses a challenge when using remote devices. Implementation of an Ethernet-CAN gateway would enable Internet access to CAN-based devices from remote locations. II. THE METHOD The implementation relies on the Universal Calibration Protocol XCP for enabling RPS functionalities. The core of the offered solution is based on an integrated XCP-Master Controller, which manages all necessary processes. The platform connects to a target ECU via XCP on CAN and performs measurements, calibration, and function bypassing. The platform is also configured as an XCP-Slave and thus provides access for other XCP-Master tools such as CANape. This enables the run-time configuration of the RPS-parameters. As the RPS-implementation runs on a basic microcontroller platform, it shows resource limitation issues early on. 
Such findings are absent when using powerful commercial RPSs. Despite being computationally inferior, this RPS implementation offers added flexibility and, in many cases, serves as a low-cost alternative to commercially available RPSs. The added benefit is the ability to check the behaviour of the tested functions before their real-hardware integration. The XCP connection is used for monitoring the function behaviour within an ECU. It is easily configurable to gather ECU data and evaluate it according to a predefined algorithm within the platform itself. Thus, the platform also acts as an intelligent watchdog, which could aid the integration of AI-based or run-time adaptive systems in a safety-related context. This implementation is extremely useful during hardware-in-the-loop tests, for capturing the behaviour of certain signals over the complete testing procedure. Furthermore, as it is possible to configure the intelligent watchdog to freely evaluate and act upon the data, it can also be configured to execute the same ECU function and thus act as a redundant system. The Ethernet-CAN gateway relies on Ethernet and CAN transport layers. It enables message exchange between Ethernet and CAN-based devices. That establishes a remote connection to external CAN-based devices by communication over the Internet and through the Ethernet-CAN gateway. A. The RPS The RPS (Figure 1) is realised on an Infineon development platform, which is centred around the TC277 microcontroller from the Aurix™ family. The algorithms are written in the C programming language. Infineon's MultiCAN library handles the CAN module on the development platform. The open-source lwIP stack controls the Ethernet module. Aside from being geared toward the usage of the XCP protocol and the development of an XCP-Master Controller, this toolchain also integrates an XCP-Slave driver, which allows other calibration tools, such as CANape, to perform run-time configuration of the RPS. 1) The Implementation: The XCP-Master Controller enables the platform to access the ECU's memory and manipulate its data. The ensuing core capabilities include measurement and calibration. A derivative of these two combined capabilities is ECU function bypass. To perform measurement, calibration or bypass, the RPS must be aware of all addresses that correspond to the variables of interest, their sizes in memory and their values. Thus, two buffers are implemented. One of those buffers stores the measurement-related data, while the other one stores the calibration-related data, including the calibration-enabling switch variables. For easier configuration of the RPS parameters, a configuration tool is also provided [11]. Based on the a2l description file of the target ECU, the tool provides an overview of all available functions and variables of the ECU. The user can choose the variables to be measured/calibrated by name, and the tool automatically creates configuration files that include all necessary information about the variables. This specific implementation does not use all XCP-Master features. It focuses on the resources that enable connection, measurement and calibration. The communication channel to an ECU is the CAN bus, which is still the standard vehicular communication interface, and XCP on CAN is commonly present within ECUs. Therefore, a CAN transport layer is developed for the XCP-Master Controller. A key characteristic of the proposed RPS is the ease with which it adopts functions-under-test into its own structure and hence bypasses the function of interest in a target ECU. This is possible because of the ability to measure and calibrate ECU variables. Thereby, the RPS measures the input values of the function of interest from the target ECU and feeds those values to the function to be tested, which is integrated on the RPS platform. The RPS executes the function and sends the output values for calibration to the target ECU, thereby bypassing the ECU function. An integrated switch is an ECU software mechanism that enables manipulation by calibration tools. The proposed platform uses this switch to bypass the ECU-internal variables. This switch variable determines whether the program flow within the ECU should use the internally calculated value or the value from an external device. Therefore, care must be taken when setting/resetting these switch variables while performing calibration and bypass. This implementation sets the required calibration switch variables during the first calibration cycle.
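The bypass cycle just described can be summarised as a short sketch. The actual tool is written in C on the Aurix platform; the Python below is purely illustrative, and the `xcp` handle with `measure`/`calibrate` operations is our abstraction, not the tool's API.

```python
def bypass_cycle(xcp, inputs, outputs, switch_vars, function_under_test,
                 first_cycle: bool) -> None:
    # 1) Measure the function inputs from the target ECU over XCP on CAN.
    values = {v.name: xcp.measure(v.address, v.size) for v in inputs}
    # 2) Execute the function-under-test locally on the RPS platform.
    results = function_under_test(**values)
    # 3) On the first cycle only, set the calibration-enabling switch
    #    variables so the ECU uses externally calibrated values.
    if first_cycle:
        for sw in switch_vars:
            xcp.calibrate(sw.address, sw.size, 1)
    # 4) Calibrate the outputs back into the ECU, bypassing its own result.
    for v in outputs:
        xcp.calibrate(v.address, v.size, results[v.name])
```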
This specific implementation does not use all XCP-Master features. It focuses on the resources that enable connection, measurement and calibration. The communication channel to an ECU is the CAN bus, which is still the standard vehicular communication interface, and XCP on CAN is commonly present within ECUs. Therefore, a CAN transport layer is developed for the XCP-Master Controller. A key characteristic of the proposed RPS is the ease with which it adopts functions-under-test into its own structure and hence bypasses the function of interest in a target ECU. This is possible because of the ability to measure and calibrate ECU variables. Thereby, the RPS measures the input values of the function of interest from the target ECU and feeds those values to the function to be tested, which is integrated on the RPS platform. The RPS executes the function and sends the output values for calibration to the target ECU, thereby bypassing the ECU function. An integrated switch is an ECU software mechanism that enables manipulation by calibration tools. The proposed platform uses this switch to bypass the ECU-internal variables. This switch variable determines whether the program flow within the ECU should use the internally calculated value or the value from an external device. Therefore, care must be taken when setting/resetting these switch variables while performing calibration and bypass. This implementation sets the required calibration switch variables during the first calibration cycle.
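Under the assumptions above, one bypass iteration can be sketched as follows. The xcp handle with read/write helpers is hypothetical shorthand for the XCP-Master Controller's measurement and calibration services; the variable names reuse the cooling-pump example.

```python
def bypass_cycle(xcp, function_under_test, first_cycle):
    """One bypass iteration: measure the function input from the ECU,
    execute the function-under-test locally on the RPS, and calibrate
    the result back into the ECU (enabling the switch once)."""
    temp = xcp.read("coolant_temp")        # measure the input value
    duty = function_under_test(temp)       # execute locally on the RPS
    if first_cycle:
        # Set the calibration-enabling switch only in the first cycle,
        # so the ECU uses the externally calibrated value afterwards.
        xcp.write("bypass_switch", 1)
    xcp.write("pump_duty_cycle", duty)     # calibrate the output
```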
2) The Test Environment: The test setup uses a production ECU. Selection of the test function is based on ease of demonstration; the function is interchangeable, i.e. it is a representative example for the demonstration. In this instance, a function regulating the duty cycle of a cooling pump is chosen for testing. The source code of the function is integrated into the RPS platform and configured for usage as a bypass function of its equivalent counterpart, which is integrated into the ECU. At the start, the bypass function and its ECU counterpart are identical. As no changes were made to the function on the RPS platform, the measurements show a comparison between the RPS-calculated and the ECU-calculated values. The input value of the function is the temperature, which is simulated by the RPS and calibrated into the ECU (Number 1 in Figure 2). The values increase linearly from 0 °C to 100 °C, with cyclic repetitions. This temperature value, which is calibrated into the ECU, is again "measured" by the RPS. This ensures that the input values for the function to be bypassed in the RPS are provided by the ECU (Number 2 in Figure 2). Upon obtaining the input signals, the RPS executes the function to be bypassed and generates the outputs (Number 3 in Figure 2). Finally, the output values, together with the calibration switch variables (only during the first calibration cycle), are sent for calibration to the target ECU, thus bypassing the ECU-calculated output values (Numbers 4 and 5 in Figure 2). A debugger, which is connected to the ECU, logs the calibrated and calculated duty cycle values and the calibrated temperature values. Both XCP measurement methods, DAQ and Polling, are employed in the tests; details about their working principles can be found in [11]. Furthermore, as in some cases RPSs are used to simply scale a signal value, this use case is also integrated into the test by scaling the calibrated duty cycle with a factor of 1.1.

B. The Intelligent Watchdog
This implementation enables the desired watchdog approach. It gathers data from a monitored ECU via XCP on CAN and evaluates it based on the user's needs. Furthermore, it can also influence the behaviour of the ECU via XCP calibration. The concept is shown in Figure 3. The watchdog, which is connected to an ECU via XCP on CAN during hardware-in-the-loop tests, gathers the data that are to be closely monitored. The watchdog can also be configured to act as a redundant system to an ECU by executing the same function. By gathering the function output values from the ECU, it can ensure proper execution of the function. If it detects abnormal behaviour, it can take over execution of that function, hence providing fail-operational performance. This feature can also be used for ensuring safe application limits of adaptive or AI-based systems. Especially in the context of such systems, safety strategies can be built on the intelligent watchdog feature and the safety frame it establishes.

1) The Implementation: To access the ECU to be monitored, the XCP-Master Controller is used in the same way as in the RPS implementation. The initial step is to connect to the ECU and set up the measurement configuration. For watchdog purposes, only DAQ is viable as the measurement method, since it guarantees that all measurements are from the same computation cycle and therefore correlate to each other. After receiving the data to be monitored, the watchdog proceeds with evaluation based on the implemented algorithm. In this work, the watchdog is configured to execute the same function as the monitored ECU and compare the output values of both executions. The watchdog detects when the two values deviate considerably from each other for too long.

2) The Test Environment: The RPS test setup is reused to evaluate the intelligent watchdog. The watchdog is configured to execute the same function as the monitored ECU, thus acting as a redundant system to that function. It connects to the ECU via XCP on CAN and gathers the data to be monitored, in this case the ECU-calculated duty cycle. It then compares the duty cycle calculated within the ECU with its own calculated counterpart. At a chosen point in time, new ECU measurements are disabled to simulate a fault. As deviations form between the watchdog-calculated values and the ECU-measured values, it is possible to observe the watchdog's reactions.

C. The Ethernet-CAN Interface
The implementation of the Ethernet and CAN transport layers for utilization of the XCP-Master and XCP-Slave drivers laid the foundation for the Ethernet-CAN interface. The payload of incoming messages over one layer can be extracted and embedded into the payload of a message for the other layer.

1) The Implementation: Incoming Ethernet packets trigger a function which extracts the message payload. The payload is forwarded to a structure defining CAN message payloads. Finally, the function for triggering CAN message transmission is executed and the payload of the Ethernet message is forwarded to the CAN bus, as in Figure 4. This process is reversible, i.e. an XCP-Master can connect to the platform via XCP on Ethernet and obtain measurement results from a device which is connected to the platform via XCP on CAN.

2) The Test Environment: The interface is evaluated by busload measurements. CANape is used as the XCP-Master, and the same test ECU as in the previous measurements serves as the XCP-Slave. The Ethernet-CAN interface is used for directly transferring the incoming messages from one layer to the other. The CAN baud rate is set to 1 Mb/s and the bus is loaded with an increasing number of signals until the maximum busload is reached. Measurements are performed with the DAQ and Polling measurement modes, and with 4-byte and 1-byte signals.
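A minimal one-directional sketch of the Ethernet-to-CAN forwarding path is given below, written with the python-can package and a UDP socket on a PC-class host rather than on the microcontroller itself; the port, channel and CAN identifier are illustrative assumptions, and the reverse direction would mirror this logic.

```python
import socket
import can  # python-can; assumed available on the gateway host

def run_gateway(udp_port=5555, can_channel="can0", can_id=0x554):
    """Forward the payload of each incoming Ethernet (UDP) packet
    onto the CAN bus, one frame per packet."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", udp_port))
    bus = can.Bus(channel=can_channel, interface="socketcan")
    while True:
        payload, _addr = sock.recvfrom(64)
        # Classic CAN frames carry at most 8 data bytes, which matches
        # the XCP on CAN message size.
        bus.send(can.Message(arbitration_id=can_id,
                             data=payload[:8],
                             is_extended_id=False))
```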
III. MEASUREMENTS AND EVALUATION
This section considers the measurement results obtained during the evaluation of all three use cases. Besides the visual representation, it also includes a discussion of the results.

A. Validating RPS
The results associated with the DAQ measurements during bypass of the function and scaling of the duty cycle are shown in Figure 5. The switching process is observed by enabling calibration while the measurements are being taken. Before the activation of the switch, the function output value and the ECU-calculated value are identical. As the switching takes place, there is an observable jump in the function output signal from the ECU-calculated value to the scaled RPS-calibrated value. The associated messages which travel over the CAN bus during one calibration cycle are documented in [11]. As this test is performed in DAQ mode, all signal values are sent to the RPS platform cyclically by the ECU itself. The CAN traffic shows 12 signal values being obtained within a time window of 0.9 ms. Furthermore, the calibration of the two signals takes 0.8 ms. Figure 6 shows the obtained signal values during the measurement with Polling. The test setup remains the same as for the DAQ measurements. The depicted signal behaviour is identical to that of the DAQ-based measurements. Hence, we conclude that the function bypass is implemented correctly. The CAN traffic during the testing process is documented in [11]. In this case, data gathering of 12 signals with Polling takes 2.7 ms. Calibration of 2 signals takes the same amount of time as with DAQ, 0.8 ms. As expected, the main limitation of the RPS implementation stems from the limited communication speed between the RPS and the target ECU. A proper implementation must consider the number of signals that should be measured, the time taken to execute the bypassing function and the number of signals that should be calibrated. In line with these observations, the DAQ measurement method provides improved performance over the Polling functionality and is the recommended measurement method.

B. Function execution monitoring
The measurement results obtained during the evaluation of the watchdog are shown in Figure 7. The watchdog executes the same function as the ECU, gathering the ECU-calculated values and checking their correlation with its own calculated values. The upper plot of Figure 7 shows the two duty cycle values. When the new ECU measurements are disabled, the watchdog error counter begins to increase, as shown in the lower plot. When this counter reaches a critical value (set to 10 in this case), the watchdog fault detection signal is triggered. Furthermore, to observe the watchdog recovery stage, the fault is disabled at a later point in time. Thereby, it can be observed that the watchdog error counter decreases until it returns to normal values and the fault detection signal is deactivated.
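The observed counter behaviour can be captured in a small sketch. The deviation tolerance is an illustrative assumption (the paper only specifies the critical value of 10), and tying fault deactivation to the counter reaching zero is likewise an assumed reading of "normal values".

```python
def watchdog_step(ecu_value, own_value, counter, fault,
                  tolerance=1.0, critical=10):
    """One evaluation step of the watchdog error counter: it rises
    while the ECU-calculated and watchdog-calculated values disagree,
    raises the fault detection signal at the critical level, and
    decays back (clearing the signal) once the values agree again."""
    if abs(ecu_value - own_value) > tolerance:
        counter += 1
    elif counter > 0:
        counter -= 1
    if counter >= critical:
        fault = True
    elif counter == 0:
        fault = False
    return counter, fault
```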
C. Gateway performance measurements
Figure 8 shows the busload measurements obtained during the evaluation of the Ethernet-CAN interface with the DAQ and Polling measurement methods. In the case of DAQ, it can be observed that the usage of 4-byte signals yields the maximum busload at 140 signals, while it takes 560 1-byte signals to saturate the bus. This is due to the fact that CANape configures four 1-byte signals to be transferred with one XCP message, while with 4-byte signals it can transfer only one signal per message. In the case of Polling, reduced performance is achieved in comparison to the DAQ measurement method. This is caused by the need to send a poll request from master to slave for every signal. Thereby, the average time delay between two successive poll requests is 0.5 ms, making it impossible to utilize the full potential of the CAN bus bandwidth. Regardless of the signal size, the maximum number of signals which could be reliably transferred is 20.

IV. LESSONS LEARNED
The measurement results confirm that the XCP protocol can be successfully utilized on a microcontroller platform for enabling RPS capabilities in the form of measurement, calibration and bypass. Furthermore, this implementation provides insight into how the tested functions perform on hardware close to a real ECU, possibly revealing resource limitation problems early on. However, this implementation is limited by the CAN communication speed. An adequate utilization demands that the measurement and calibration processes, as well as the execution of the function to be bypassed on the RPS, be completed within one computation cycle of the function to be tested. The performance is also heavily impacted by the chosen data gathering method: DAQ provides improved performance compared to Polling. Furthermore, the performance of the test ECU can also impact the performance of this implementation, because the RPS always waits for a response from the ECU before sending a new command, thus possibly causing communication delays. Evaluation of the watchdog implementation shows that XCP is also usable for monitoring purposes. The watchdog is a useful tool for closely monitoring certain signals during hardware-in-the-loop tests but can also act as a redundant system. Thereby, it is important to utilize the DAQ measurement method, since for watchdog purposes it is essential for the measurement data to be correlated. The Ethernet-CAN interface implementation successfully transfers messages from one layer to the other. If it is used for transferring measurement data, DAQ provides much better performance than Polling.

V. CONCLUSION
This work summarises the implementation of three different use cases, all of which offer highly exploitable value. The rapid prototyping system functionality enables calibration and bypass of functions within an ECU, though the limiting factor is the CAN communication speed: the whole bypass process must be shorter than the computation cycle of the tested function. The intelligent watchdog functionality provides the possibility to closely monitor specific signals and to implement specific fault detection and decision-making algorithms. Its potential usage includes serving as an additional safety net during hardware-in-the-loop tests, enabling close monitoring of parameters of the ECU under test. This ability to closely monitor the behaviour of new ECU functions turns the watchdog into a powerful tool for dependable AI testing in the scope of safety-critical applications. With the Ethernet-CAN gateway, XCP messages are easily communicated between the CAN and Ethernet transport layers, hence providing easy access to devices on a CAN bus for remote monitoring and control over the Internet. As all three use cases are implemented on a standard microcontroller platform, a minor investment returns high value to automotive developers.
The Falsification of Nuclear Forces
We review our work on the statistical uncertainty analysis of the NN force. This is based on the Granada-2013 database, where a statistically meaningful partial wave analysis comprising a total of 6713 published np and pp scattering data from 1950 to 2013 below the pion production threshold has been made. We stress the necessary conditions required for a correct and self-consistent statistical interpretation of the discrepancies between theory and experiment, which enable a subsequent statistical error propagation and correlation analysis.

Introduction
Error propagation and uncertainty quantification have recently become a central topic in nuclear physics [1-6]. In the particular field of phenomenological Nucleon-Nucleon (NN) interactions, uncertainties can be classified as statistical or systematic. (Numerical uncertainties are also present but can be made small enough to be dominated by the other two.) Statistical uncertainties are the result of unavoidable random fluctuations during the experimental process. Most NN scattering measurements consist of counting events, which corresponds to a Poisson distribution. If the number of events is large enough, the distribution can be safely approximated as a normal one. This makes it possible to fix the parameters of a phenomenological potential via the usual chi-square minimization process to reproduce the collection of NN scattering data. As a consequence, the statistical uncertainty of the experimental data propagates into the fitting parameters in the form of a confidence region in which the parameters are allowed to vary and still give an accurate description of the data. Such a confidence region can be easily determined from the parameters' covariance matrix if the assumption of normally distributed residuals is fulfilled. Systematic uncertainties are a consequence of our lack of knowledge of the actual form of the NN potential and the assumptions that have to be made in order to give a representation of the NN interaction. Some potentials are separable in momentum space while others are not, some are energy dependent, and others range from fully local to exhibiting different types of non-localities in coordinate space. Even though most of the potentials are fitted to the same type of experimental NN scattering data, their predictions of unmeasured scattering observables or nuclear structure properties can sometimes be incompatible. These residual incompatibilities, beyond statistical consistency and equivalence, are what we refer to as systematic uncertainties. In this contribution we review our determination of the NN interaction statistical uncertainties along with the necessary conditions for such an analysis.
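The covariance-based propagation mentioned above can be made concrete with a toy sketch: for an observable f(p) with gradient J at the fit minimum and parameter covariance matrix C, linear propagation gives sigma_f^2 = J C J^T. The numbers below are illustrative, not from any actual fit.

```python
import numpy as np

def propagate_uncertainty(jacobian, covariance):
    """Linear (first-order) error propagation of a parameter
    covariance matrix onto a derived observable: sigma_f^2 = J C J^T."""
    j = np.asarray(jacobian, dtype=float)
    return float(j @ covariance @ j)

# Toy two-parameter example with made-up numbers:
C = np.array([[0.04, 0.01],
              [0.01, 0.09]])    # fitted-parameter covariance
J = np.array([1.5, -0.7])       # sensitivity of the observable
print(f"sigma_f = {propagate_uncertainty(J, C) ** 0.5:.3f}")
```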
Coarse graining and the delta-shell potential
Coarse graining embodies the Wilsonian renormalization concept [7] and represents a very reliable tool to simplify the description of pp and np scattering data while still retaining all the relevant information on the interaction up to a certain energy range, set by the de Broglie wavelength of the most energetic particle considered. The V_lowk potentials in momentum space are a good example of an implementation of coarse graining by removing the high-momentum part of the interaction [8,9]. In 1973 Aviles introduced the delta-shell (DS) potential in the context of NN interactions [10]. In [11] the DS potential was used to implement coarse graining in coordinate space via a local potential that samples the np interaction at certain concentration radii $r_i$,

$$V(r) = \frac{1}{2\mu}\sum_i \lambda_i\,\delta(r - r_i),$$

where $\mu$ is the reduced mass and $\lambda_i$ are strength coefficients. After fitting the $\lambda_i$ parameters to np phase shifts, the properties and form factors of the deuteron were calculated. A variational method with harmonic oscillator wave functions was used to calculate upper bounds on the binding energies of the doubly closed shell nuclei ⁴He, ¹⁶O and ⁴⁰Ca [11].

Description of NN scattering data
In order to quantify the statistical uncertainties of the NN interaction, a fit to experimental data becomes necessary. The usual first step of fitting a phenomenological potential to reproduce scattering phase shifts is not sufficient to get an accurate description of the actual experimental data. As was recently shown in [6], the local chiral effective potential of [12] fitted to phase shifts yields a significantly large χ²/d.o.f. value when compared to experimental scattering data. However, it is possible that small readjustments of the potential parameters have a significant impact in lowering the total χ². Given the wide applicability of this type of interactions [13,14], a full-fledged fit to NN experimental scattering data would be of great interest. Historically, a successful description of the complete database with χ²/d.o.f. ≈ 1 had never been possible. The potentials and PWA were gradually improved over time by explicitly including different physical effects, like OPE in the long-range part, charge symmetry breaking in the central channel and electromagnetic interactions, among others. The first PWA with χ²/d.o.f. ≈ 1 was obtained in 1993, when the Nijmegen group introduced the 3σ criterion to exclude over 1000 inconsistent data [15]. The 3σ criterion deals with possible over- and underestimations of the statistical uncertainties by excluding data sets with improbably high or improbably low values of χ² (for a clear description of this process see [16]). However, this method identifies only inconsistencies between individual data sets and a model trying to describe the complete database. To improve on this method, so that inconsistencies between each data set and the rest of the database can be found, the 3σ criterion was applied iteratively to the complete database and the potential parameters were refitted to the accepted data sets until no more data were excluded or recovered [17]. The self-consistent database obtained with this procedure contains 6713 experimental points and recovers 300 of the data initially discarded with the usual 3σ criterion [18]. Although the 300 extra data do not significantly change the potential parameters, their inclusion can only improve the estimate of statistical errors. A simultaneous fit to pp and np scattering data was made, representing the short-range part of the interaction with a DS potential and OPE for the long-range part. The fit requires a total of 46 parameters and yields χ²/d.o.f. = 1.04 for the self-consistent database [17,19].

Description of NN scattering errors
On a more fundamental level, the chi-squared distribution can be used to test goodness of fit, provided that the experimental data can be assumed to have a normal distribution [20,21]. If the residuals are defined as $R_i = (O_i^{\rm exp} - O_i^{\rm th})/\Delta O_i^{\rm exp}$, a theoretical model correctly describes the data if they follow the standard normal distribution N(0, 1).
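As a generic stand-in for the battery of normality tests discussed next (the papers use several, including the Tail-Sensitive test), the following sketch builds the standardized residuals and applies a Kolmogorov-Smirnov test against N(0, 1); the significance level is an illustrative choice.

```python
import numpy as np
from scipy import stats

def residuals(obs, theo, err):
    """Standardized residuals R_i = (O_i^exp - O_i^th) / Delta O_i^exp."""
    return (np.asarray(obs) - np.asarray(theo)) / np.asarray(err)

def passes_normality(r, alpha=0.05):
    """Kolmogorov-Smirnov test against the standard normal; a p-value
    above alpha means normality (and hence a chi-squared goodness-of-fit
    reading) is not rejected."""
    statistic, p_value = stats.kstest(r, "norm")
    return p_value > alpha, statistic, p_value
```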
This self-consistency condition, which can only be checked a posteriori and which legitimizes the subsequent error propagation, has usually been overlooked in the NN literature. In [22] a few of these normality tests are reviewed, along with a recently proposed Tail-Sensitive test [23], and it was found that the three potentials DS-OPE [17], DS-χTPE [24] and Gauss-OPE [22] have standard normal residuals. The three more recent potentials DS-Born, Gauss-χTPE and Gauss-Born were also found to have normally distributed residuals [25]. A simple and straightforward recipe to apply the Tail-Sensitive test to any set of empirical data with a sample size up to N = 9000 was developed in [26]. Thus, these six potentials are the first to qualify for error estimation in nuclear physics. A direct application of normality is the re-sampling of experimental data via Monte Carlo techniques, as noted in [27], for a robust analysis of possible asymmetries in the distribution of the potential parameters. Most recently, a Monte Carlo method was used to calculate a realistic statistical uncertainty of the triton binding energy stemming from NN scattering data [28].

Conclusions
In this contribution we outline the two main requirements for a correct quantification of the NN statistical uncertainties: a correct description of the experimental data and the reproduction of the experimental errors. Although the first one may seem obvious, some widely used interactions in nuclear structure and nuclear reaction calculations are fitted to phase shifts instead of experimental data and, as shown in [6], those two descriptions are not entirely equivalent. The second requirement can easily be reformulated as positively testing for the normality of the residuals. Normality of residuals provides a criterion of "falsifiability" to distinguish those NN interactions that can come into conflict with observation from those that cannot. Interactions that fail the criterion [5,29] come into conflict with observation, even if they apparently give a good description of the data, because the chi-squared distribution cannot then be used to test goodness of fit, and they require more complex statistical techniques to analyze the data. Even though the description here concerns statistical uncertainties, a first glimpse of the systematic uncertainty can already be obtained by looking at the differences in the phase-shift and scattering-amplitude predictions given by the different realistic potentials with χ²_min/d.o.f. ∼ 1. In particular, the six interactions fitted to the Granada self-consistent database give similar statistical uncertainties but present inconsistent phase shifts at low angular momentum and high energy. The discrepancies between different potentials, accounting for the systematic uncertainty, are usually an order of magnitude larger than the statistical error bars.

More generally, if the sufficient goodness-of-fit condition χ²/ν = 1 ± √(2/ν) is not fulfilled, one can globally scale the errors ΔO_i^exp → αΔO_i^exp by a common Birge factor α, provided the residuals follow a scaled normal distribution N(0, α).
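The global rescaling of the closing remark can be sketched directly; the implementation below simply applies the stated condition χ²/ν = 1 ± √(2/ν) and the Birge factor α = √(χ²/ν), with synthetic inputs assumed.

```python
import numpy as np

def birge_rescale(obs, theo, err, n_params):
    """If chi^2/nu falls outside 1 +/- sqrt(2/nu), rescale all
    experimental errors by the Birge factor alpha = sqrt(chi^2/nu);
    the scaled residuals should then follow N(0, 1) whenever the
    original ones followed N(0, alpha)."""
    err = np.asarray(err, dtype=float)
    r = (np.asarray(obs) - np.asarray(theo)) / err
    nu = r.size - n_params                    # degrees of freedom
    chi2_per_nu = float(np.sum(r**2)) / nu
    if abs(chi2_per_nu - 1.0) > np.sqrt(2.0 / nu):
        alpha = np.sqrt(chi2_per_nu)
        return err * alpha, alpha
    return err, 1.0
```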
New Hermite-Hadamard type inequalities for exponentially convex functions and applications
1 School of Science, Hunan City University, Yiyang 413000, China 2 Department of Mathematics, Government College University Faisalabad, Pakistan 3 Department of Mathematics, COMSATS University, Islamabad, Pakistan 4 Department of Mathematics, SBK Women University, Quetta, Pakistan 5 Department of Mathematics, Huzhou University, Huzhou 313000, China 6 Hunan Provincial Key Laboratory of Mathematical Modeling and Analysis in Engineering, Changsha University of Science & Technology, Changsha 410114, P. R. China

To the best of our knowledge, the Hermite-Hadamard inequality is a well-known, paramount and extensively useful inequality in the applied literature [42-45]. This inequality is of pivotal significance because many other classical inequalities, such as the Hardy, Opial, Iyengar, Ostrowski, Minkowski, Hölder, Ky Fan, Beckenbach-Dresher, Levinson, arithmetic-geometric, Young, Olsen and Gagliardo-Nirenberg inequalities, are closely related to it [46]. It can be stated as follows: let ϕ : I ⊆ R → R be a convex function and c, d ∈ I with c < d. Then

$$\varphi\!\left(\frac{c+d}{2}\right)\le \frac{1}{d-c}\int_c^d \varphi(z)\,dz \le \frac{\varphi(c)+\varphi(d)}{2}. \tag{1.1}$$

In [47], Fejér established an important generalization, namely the weighted version of the Hermite-Hadamard inequality: let I ⊆ R and let ϕ : I → R be a convex function. Then the inequalities

$$\varphi\!\left(\frac{c+d}{2}\right)\int_c^d w(z)\,dz \le \int_c^d \varphi(z)w(z)\,dz \le \frac{\varphi(c)+\varphi(d)}{2}\int_c^d w(z)\,dz \tag{1.2}$$

hold, where w : I → R is non-negative, integrable and symmetric with respect to (c+d)/2. If we choose w(z) = 1, then (1.2) reduces to (1.1). Several classical inequalities can be obtained from inequality (1.1) by particular choices of the convex function ϕ. Moreover, these inequalities for convex functions play a very important role in both applied and pure mathematics.
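A quick numeric sanity check of (1.1) for one convex function (an illustrative choice, ϕ(z) = eᶻ on [0, 1]) confirms the chain of bounds:

```python
import numpy as np
from scipy.integrate import quad

c, d = 0.0, 1.0
phi = np.exp                               # a convex function on [c, d]
mean_value = quad(phi, c, d)[0] / (d - c)  # (1/(d-c)) * integral of phi
left = phi((c + d) / 2)                    # midpoint bound of (1.1)
right = (phi(c) + phi(d)) / 2              # endpoint bound of (1.1)
assert left <= mean_value <= right         # 1.6487 <= 1.7183 <= 1.8591
```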
In recent years, integral inequalities have also been derived via fractional analysis, which has emerged as another interesting technique. Owing to this advancement, the comprehensive investigation of exponentially convex functions via the K-conformable fractional integral carried out in the present paper is new. The class of exponentially convex functions was introduced by Dragomir and Gomm [48]. Bernstein [49] and Antczak [50] introduced these exponentially convex functions implicitly and discussed their role in mathematical programming. The proliferating research on big data analysis and deep learning has recently intensified the interest in information theory involving exponentially convex functions: the smoothness of exponentially convex functions is exploited for statistical learning, sequential prediction and stochastic optimization. Below we recall the concept of exponentially convex functions, which is mainly due to M. A. Noor and K. I. Noor [51,52]. Recently, fractional calculus has attracted the attention of many researchers [53,54], and its impact on both theoretical and applied science and engineering has grown substantially. Fractional integral operators sometimes provide a gateway to physical problems that cannot be expressed by classical integrals, and sometimes to the solution of problems formulated in fractional order. In recent years, many new operators have been defined; some of them are very close to the classical operators in terms of their characteristics and definitions. Various studies in the literature on distinct fractional operators, such as the classical Riemann-Liouville, Caputo, Katugampola, Hadamard and Marchaud versions, have shown their versatility in modeling and control applications across various disciplines. However, such forms of fractional derivatives may not always be able to describe the dynamic performance accurately; hence many authors have been seeking new fractional derivatives and integrals whose kernel depends on a function, which expands the range of the definition [55,56]. Now we recall the basic definitions and new notations of conformable fractional operators. Let ϕ ∈ L_1([c, d]). Then the Riemann-Liouville integrals $J^{\delta}_{c^+}\varphi$ and $J^{\delta}_{d^-}\varphi$ of order δ > 0 are defined by

$$J^{\delta}_{c^+}\varphi(x)=\frac{1}{\Gamma(\delta)}\int_c^x (x-t)^{\delta-1}\varphi(t)\,dt\ \ (x>c),\qquad J^{\delta}_{d^-}\varphi(x)=\frac{1}{\Gamma(\delta)}\int_x^d (t-x)^{\delta-1}\varphi(t)\,dt\ \ (x<d). \tag{1.5}$$

In [57], Jarad et al. defined a new fractional integral operator that has several special cases, among many other features:

$${}^{\delta}\!J^{\gamma}_{c^+}\varphi(x)=\frac{1}{\Gamma(\gamma)}\int_c^x\left(\frac{(x-c)^{\delta}-(t-c)^{\delta}}{\delta}\right)^{\gamma-1}\frac{\varphi(t)}{(t-c)^{1-\delta}}\,dt, \tag{1.7}$$

together with its right-sided analogue with (d − x) and (d − t) in place of (x − c) and (t − c). It is easy to see the following connections: (1) Let c = 0 and δ = 1. Then (1.7) reduces to the Riemann-Liouville operator given in (1.5), and likewise for the other. (2) If we set c = 0 and let δ → 0, then the new conformable fractional integral coincides with the generalized fractional integral of [58]. The generalized K-conformable fractional integrals are defined analogously to (1.7), with the Gamma function replaced by the K-Gamma function Γ_K(·) and the exponent γ − 1 replaced by γ/K − 1 (cf. the kernel used in the proof of Theorem 2.1 below). For Re(δ) > 0, the K-Gamma function in integral form is defined by

$$\Gamma_K(\delta)=\int_0^{\infty} t^{\delta-1}e^{-t^K/K}\,dt,$$

with δΓ_K(δ) = Γ_K(δ + K).
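The functional equation δΓ_K(δ) = Γ_K(δ + K) stated above is easy to verify numerically from the integral form; the parameter values below are arbitrary test choices.

```python
import numpy as np
from scipy.integrate import quad

def gamma_k(delta, K):
    """K-Gamma function: Gamma_K(delta) = int_0^inf t^(delta-1) e^(-t^K/K) dt."""
    return quad(lambda t: t**(delta - 1) * np.exp(-t**K / K), 0, np.inf)[0]

delta, K = 1.3, 2.0
assert abs(delta * gamma_k(delta, K) - gamma_k(delta + K, K)) < 1e-6
```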
This paper aims at establishing some new integral inequalities for exponential convexity via K-conformable fractional integrals linked with inequality (1.1). We present some inequalities for the class of mappings whose derivatives in absolute value are exponentially convex. In addition, we obtain some new inequalities linked with (1.2) and exponential convexity via classical integrals. Moreover, we apply the novel Hölder-İşcan and improved power-mean inequalities, which are sharper than the classical Hölder and power-mean inequalities. We also illustrate two examples to show the applicability and superiority of the proposed technique. As an application, inequalities for special means are derived.

Certain Hermite-Hadamard type inequalities
In this section, we demonstrate the Hermite-Hadamard type inequalities for exponentially convex functions via the K-conformable fractional integral operator.

Theorem 2.1. Let K > 0, δ > 0, γ > 0 and let ϕ : I → R be an exponentially convex function such that c, d ∈ I with c < d and e^ϕ ∈ L_1([c, d]). Then inequality (2.1) holds for K-conformable fractional integrals.

Proof. Since ϕ is exponentially convex on I, for ξ = 1/2 we have (2.2). Moreover, multiplying both sides of (2.2) by [(1 − (1 − ξ)^δ)/δ]^{γ/K−1}(1 − ξ)^{δ−1} and then integrating with respect to ξ over [0, 1], we can combine the resulting inequality with the definition of the integral operator; the first part of (2.1) follows. To prove the second part of inequality (2.1), by a similar discussion we start again from the exponential convexity of ϕ for ξ ∈ [0, 1], multiply by the same kernel, and integrate the resulting estimate with respect to ξ over [0, 1]. After simplification, we get the second part of (2.1). The proof is completed.

Throughout this investigation, we use I° to denote the interior of the interval I ⊆ R. In order to establish our main results, we need the following lemma.

Lemma 2.2. Let K > 0, δ, γ > 0 and let ϕ : I → R be a differentiable and exponentially convex function on I° such that c, d ∈ I° with c < d and (e^ϕ)′ ∈ L_1([c, d]). Then the following equality holds for K-conformable fractional integrals.

Proof. Integrating by parts and changing the variable of the definite integral yield (2.6). By a similar argument, we have (2.7). Subtracting these two equations leads to Lemma 2.2.

Now we are in a position to establish some new integral inequalities of Hermite-Hadamard type for differentiable convex functions. The first main result is Theorem 2.3, whose hypotheses are those of Lemma 2.2 together with the convexity of |(e^ϕ)′|^q on I; under them, inequality (2.8) holds for K-conformable fractional integrals.

Proof. It follows from Lemma 2.2 and the power-mean inequality that (2.10) holds. Utilizing the convexity of |(e^ϕ)′|^q on I, we bound the integrals on the right-hand side. Substituting the above two inequalities into inequality (2.10), we get the required inequality (2.8). This completes the proof.

Theorem 2.4. Let K > 0, δ, γ > 0 and let ϕ : I → R be a differentiable and exponentially convex function on I° such that c, d ∈ I° with c < d and (e^ϕ)′ ∈ L_1([c, d]). If |(e^ϕ)′|^q is convex on I for some fixed q > 1 with q^{−1} + p^{−1} = 1, then inequality (2.13) holds for K-conformable fractional integrals.

Proof. It follows from Lemma 2.2 and the well-known Hölder integral inequality that (2.14) holds. Utilizing the convexity of |(e^ϕ)′|^q on I, we bound the resulting integrals. Substituting the above two inequalities into inequality (2.14), we get the required inequality (2.13). This completes the proof.

Some better approaches to Hermite-Hadamard type inequalities
In this section, we derive new generalizations by employing the Hölder-İşcan [59] and improved power-mean [60] inequalities.

Theorem 3.1. Let K, δ, γ > 0 and let ϕ : I → R be a differentiable and exponentially convex function on I° such that c, d ∈ I° with c < d and (e^ϕ)′ ∈ L_1([c, d]). If |(e^ϕ)′|^q is convex on I for some fixed q > 1 with q^{−1} + p^{−1} = 1, then the corresponding inequality holds for K-conformable fractional integrals.

Proof. It follows from Lemma 2.2 and the Hölder-İşcan inequality [59] that a first estimate holds. Utilizing the convexity of |(e^ϕ)′|^q on I, we get the bounds (3.2)-(3.7), where we have used the stated identities. Combining (3.2)-(3.7) leads to the required inequality. This completes the proof.

Theorem 3.2. Let K, δ, γ > 0 and let ϕ : I → R be a differentiable and exponentially convex function on I° such that c, d ∈ I° with c < d and (e^ϕ)′ ∈ L_1([c, d]). If |(e^ϕ)′|^q is convex on I for some fixed p, q > 1 with q^{−1} + p^{−1} = 1, then inequality (3.8) holds for K-conformable fractional integrals.

Proof. Making use of Lemma 2.2 and the improved power-mean inequality [60], we obtain (3.10). Utilizing the convexity of |(e^ϕ)′|^q on I, we obtain (3.14), where we have used the stated auxiliary facts. Combining (3.10)-(3.15) gives the required inequality (3.8). This completes the proof.

Examples
In this section, we present some examples to demonstrate the applications of our proposed results to modified Bessel functions and σ-digamma functions.

Weighted Hermite-Hadamard type inequalities for differentiable functions
In order to prove our main results in this section, we need the following lemma (Lemma 5.1).

Proof. Making use of integration by parts, we obtain the first term; similarly, one obtains the other. Therefore, identity (5.1) follows by the change-of-variable technique and multiplying both sides by (d − c) in the above formula.

Theorem 5.2. Let ϕ : I → R be a differentiable and exponentially convex function on I° such that c, d ∈ I° with c < d. Also, let w : [c, d] → R be a differentiable mapping that is symmetric with respect to (c + d)/2, and let (e^ϕ)′ be convex on I. Then inequality (5.2) holds.

Proof.
It follows from Lemma 5.1 and the hypotheses of Theorem 5.2 that a first estimate holds. Exchanging the order of integration gives (5.3); similarly, we have (5.4). Since w(z) is symmetric with respect to z = (c + d)/2, i.e. w(z) = w(c + d − z), we get (5.5). Combining (5.3)-(5.5) leads to (5.2). This completes the proof.

Theorem 5.3. Let ϕ : I → R be a differentiable and exponentially convex function on I° such that c, d ∈ I° with c < d. Also, let w : [c, d] → R be a differentiable mapping that is symmetric with respect to (c + d)/2. If |(e^ϕ)′|^q is convex on I for some q > 1 with p^{−1} + q^{−1} = 1, then the inequality holds for all z ∈ [c, d], where Υ_1(c, d) is given in (2.9).

Proof. Making use of Lemma 5.1 and changing the order of integration, we get (5.7). It follows from the convexity of |(e^ϕ)′|^q that (5.8) holds for ξ ∈ [0, 1]; therefore, one has (5.9). The desired inequality then follows from (5.7)-(5.9). This completes the proof.

Proof. By integration by parts we obtain the first term; similarly, we obtain the other. Thus, the desired result follows by the change-of-variable technique and multiplying both sides by (d − c)/2. Since w(z) is symmetric with respect to z = (c + d)/2, we get the corresponding weighted form, and it follows from the convexity of |(e^ϕ)′|^q that the required bound holds, which completes the proof.

Applications
In this section, we establish some inequalities for the arithmetic and generalized logarithmic means by use of the results obtained in Section 5. The corresponding results for the means follow by taking ϕ(z) = n ln z in Theorem 5.2 and in Theorem 5.3, respectively.

Corollary 6.6. Let u, v > 0 with v > u and n ∈ Z with n ≥ 2. Then the corresponding inequality holds for all q > 1.
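The classical backbone of such special-means corollaries can be checked numerically: for the convex function z^n (n ≥ 2, z > 0), (1.1) yields A(u, v)^n ≤ L_n(u, v)^n ≤ A(u^n, v^n), with L_n the generalized logarithmic mean. The values below are illustrative.

```python
def gen_log_mean(u, v, n):
    """Generalized logarithmic mean L_n(u, v) for u != v."""
    return ((v**(n + 1) - u**(n + 1)) / ((n + 1) * (v - u))) ** (1.0 / n)

u, v, n = 1.0, 3.0, 2
A = (u + v) / 2                      # arithmetic mean
assert A**n <= gen_log_mean(u, v, n)**n <= (u**n + v**n) / 2
# 4.0 <= 4.333... <= 5.0
```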
Conclusions
Using the K-conformable fractional integrals, certain inequalities related to the Hermite-Hadamard inequality for exponentially convex functions are established. The inequalities are parameterized by the parameters δ, γ and K, and they generalize and extend parts of the results for the Riemann-Liouville and Hadamard fractional integrals. We have also derived weighted Hermite-Hadamard inequalities for exponentially convex functions in the classical sense. Some applications of the obtained results to special means are presented as well. With these contributions, we hope to motivate interested researchers to further explore this enchanting field of fractional integral inequalities and exponential convexity based on the techniques and ideas developed in the present paper.
A Method to Accelerate the Convergence of Satellite Clock Offset Estimation Considering the Time-Varying Code Biases
Continuous and stable precise satellite clock offsets are an important guarantee for real-time precise point positioning (PPP). However, in real-time PPP the estimation of satellite clocks is often interrupted for various reasons, such as network fluctuations, and it then takes a long time for the clocks to converge again. Typically, code biases are assumed to stay constant over time in clock estimation according to the current literature. In this contribution, it is shown that this assumption reduces the convergence speed of the estimation and that the satellite clocks are still unstable for several hours after convergence. For this reason, we study the influence on satellite clock estimation of different code bias extraction schemes, that is, taking code biases as constants, extracting satellite code biases (SCBs), extracting receiver code biases (RCBs), and simultaneously extracting SCBs and RCBs. Results show that the time-varying SCBs are the main factors leading to the instability of satellite clocks, and that considering SCBs in the estimation can significantly accelerate the filter convergence and improve the stability of the clocks. Then, the products generated by introducing SCBs in the clock estimation based on undifferenced observations are applied to PPP experiments. Compared with the original undifferenced model, clocks estimated using the new method significantly accelerate the convergence of PPP and improve the positioning accuracy, which illustrates that the estimated clocks are effective and superior.

Introduction
In order to implement precise point positioning (PPP), satellite clock offsets need to be estimated and disseminated to users [1,2]. Normally, there are two kinds of methods for estimating satellite clocks: one is based on undifferenced observations and the other on epoch-differenced observations [3,4]. Obviously, the latter is more efficient; therefore, it is more popular to use differenced observations to estimate clocks [5-7]. The difference between the two methods is whether an epoch-differenced strategy is used to eliminate the ambiguity parameters, in order to reduce the number of parameters to be estimated and improve the efficiency of the calculation. Based on the idea of differencing, a mixed-differenced method was developed in [6], which makes proper use of the combination of epoch-differenced phases and undifferenced codes instead of processing them separately. Different from reducing the parameters to be estimated by differencing, an undifferenced estimation method that combines full-parameter and high-rate models to reduce the number of ambiguities to be estimated has been proposed [8]. In addition to estimating high-rate satellite clocks, many studies have focused on multi-global navigation satellite system (GNSS) clock estimation [9-11] and on the evaluation of satellite clock offset accuracy and reliability [12-16]. Initialization of the filter in the estimation is a time-consuming process, and research shows that it usually takes 1 to 1.5 hours of initialization to generate stable satellite clocks [6,7]. Even if the initial convergence process is completed, the time-varying parameters still disturb the clocks within a few hours, causing instability of the clocks and a reduction in the positioning accuracy at the user end.
At present, many companies and research institutions have begun to build real-time clock estimation platforms in order to meet the needs of real-time PPP applications. However, due to the influence of network anomalies and other factors, interruptions of the estimation often occur, after which it takes a long time to converge again. Unfortunately, we have not found a solution to the above problem in the current literature. In this context, it is of great significance to study how to accelerate the convergence of the estimation and improve the stability of the clocks. In the implementation of PPP, we usually assume that satellite code biases (SCBs) and receiver code biases (RCBs) are constants and are assimilated by other parameters to obtain a full-rank model [17]. This assumption is also used in clock estimation to ensure consistency. However, more and more studies show that code biases are not stable constants, and their time-varying part leads to the deviation of other parameters [18-25]. In this study, we propose a new satellite clock estimation method based on undifferenced observations which takes into account the time-varying code biases. This paper proceeds as follows. Section 2 introduces the models of satellite clock estimation considering code biases and the strategies for eliminating rank deficiency and for data processing. Section 3 presents the results of the clock estimation and PPP experiments that investigate the validity of the proposed method. Section 4 discusses the necessity of the proposed method. Finally, Section 5 gives the conclusions and outlook of this study.

Methodology
In this section, we start with the GPS dual-frequency ionosphere-free (IF) undifferenced observation equations. Then, the satellite clock estimation models considering code biases are presented. Finally, we give the detailed strategies for satellite clock estimation and PPP.

Undifferenced Model
For a satellite s observed by receiver r, the GPS dual-frequency IF undifferenced observation equations in units of length can be written as

$$P_3 = \rho + c\,dt_r - c\,dt^s + T + d_{r,3} - d^s_3,\qquad L_3 = \rho + c\,dt_r - c\,dt^s + T + A_3 + b_{r,3} - b^s_3, \tag{1}$$

where P_3 and L_3 denote the ionosphere-free code and phase observables, respectively; ρ is the receiver-satellite geometric distance; c is the speed of light in vacuum; dt_r and dt^s are the clock offsets of the receiver and satellite, respectively; T is the slant tropospheric delay; A_3 denotes the ambiguity parameter in units of length; d_{r,3} and d^s_3 are the ionosphere-free code biases at the receiver and satellite, respectively, while b_{r,3} and b^s_3 are the ionosphere-free phase biases. Phase center offsets (PCOs) and variations (PCVs), phase windup [26], the relativistic effect, earth rotation effects and tide loading are assumed to be precisely corrected, and random noise is ignored here for brevity.
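For reference, the ionosphere-free observables in Equation (1) are formed from the dual-frequency measurements with the standard frequency-squared weights; a minimal sketch using the GPS L1/L2 frequencies is given below.

```python
F1, F2 = 1575.42e6, 1227.60e6  # GPS L1 and L2 frequencies in Hz

def ionosphere_free(obs_f1, obs_f2):
    """Dual-frequency ionosphere-free combination: the first-order
    ionospheric delay cancels because it scales with 1/f^2."""
    a = F1**2 / (F1**2 - F2**2)    # ~ 2.546
    b = F2**2 / (F1**2 - F2**2)    # ~ 1.546
    return a * obs_f1 - b * obs_f2
```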
Generally, code and phase biases are assimilated by the clocks to eliminate the rank deficiency of Equation (1). Considering that the ambiguity is estimated as a constant, the full-rank PPP model after reparameterization, denoted Equation (2), can be expressed as in [27], where Δ and δ are the symbols representing the time-constant and time-varying portions of the biases, the reparameterized receiver clock absorbs the receiver biases, the satellite clock corresponds to the International GNSS Service (IGS) legacy satellite clock offset, and the reparameterized ambiguity parameter is expressed in units of length. Usually, a network is needed for satellite clock estimation. Additionally, Equation (2) represents a rank-deficient system owing to the linear correlation between receiver and satellite clocks. In this study, a zero-mean condition (ZMC) imposed on all satellite clocks has been adopted to define the clock datum, which can be expressed as [28]

$$\sum_{s=1}^{m} dt^s = 0,$$

where m is the number of satellites observed in the global network. We note that the nuisance term δd_{r,3} − δb_{r,3} − δd^s_3 + δb^s_3 of Equation (2) is part of the code residuals. In fact, this nuisance term is not entirely driven into the residuals, resulting in the deviation of other parameters [25]. The biased satellite clocks caused by the satellite time-varying code biases will be discussed in Section 3.

Models for Extracting Code Biases
In this section, we design three time-varying code bias extraction models to compare their effects on the satellite clocks: (1) a model extracting SCBs (Undifferenced-SCB), (2) a model extracting RCBs (Undifferenced-RCB), and (3) a model simultaneously extracting SCBs and RCBs (Undifferenced-SRCB). For brevity, we assume that, except for the code biases to be extracted, the other nuisance terms (such as the time-varying phase biases) are driven into the code residuals and ignored. Considering the time-varying characteristics of the code biases, i is used to represent the current epoch number, and Equation (1) is rewritten accordingly.

Undifferenced-SCB Model
When only SCBs are extracted, the RCBs are assimilated by other parameters, so the receiver clock becomes dt_{r,SCB}(i). Obviously, there will be a rank deficiency among the c·dt_{r,SCB}(i), d^s_3(i) and A_{3,SCB} parameters. The SCBs at the first epoch are chosen as a datum to eliminate this rank deficiency [25,29], and a full-rank model for extracting SCBs in clock estimation is obtained. It is worth mentioning that d^s_3(i) is then the variation of the SCB with respect to the first epoch.

Undifferenced-RCB Model
Following the previous section, a full-rank model for extracting RCBs in clock estimation can be formulated analogously.

Undifferenced-SRCB Model
When SCBs and RCBs are estimated at the same time, their values at the first epoch are still selected as the datum. It is not difficult to find that, from the second epoch onwards, a rank deficiency is caused by d_{r,3}(i) and d^s_3(i). A ZMC could be imposed on all SCBs to define the bias datum. Unfortunately, however, the robustness of the model is easily destroyed by the four parameters (dt_{r,RCB}(i), dt^s_{SCB}(i), d_{r,3}(i), d^s_3(i)) being estimated as white noise. We assume that the satellite clocks are disturbed only by SCBs; then the following RCB datum can be defined to enhance the stability of the model: supposing a satellite is observed by n receivers, the constraint

$$\sum_{r=1}^{n} d_{r,3}(i) = 0 \tag{15}$$

can be obtained. If there are m satellites in the network, then m such constraints can be added to the Undifferenced-SRCB model. It should be noted that although the datum defined in this way ensures the robustness of the model, it ignores the variations of the RCBs to a certain extent, and the clocks estimated using this model will be similar to those of the Undifferenced-SCB model.

Processing Strategies
With the strategies listed in Table 1, satellite clocks can be estimated using the above four models. In PPP, time-varying SCBs are not estimated because of the difficulty of eliminating the rank deficiency. Therefore, if the satellite clocks employed in PPP are estimated using a model extracting biases, the code equations will be disturbed. In this study, we reduce this adverse effect by amplifying the noise of the code observations.
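Before turning to the results, the effect of the ZMC datum can be illustrated with a small least-squares sketch: the zero-mean condition is appended as a pseudo-observation row, which removes the datum (rank) deficiency between receiver and satellite clocks. The design matrix layout (satellite clocks in the last columns) is an illustrative assumption, not the full estimator.

```python
import numpy as np

def solve_with_zmc(A, y, n_sat):
    """Least squares with the zero-mean condition over the satellite
    clock parameters, assumed to occupy the last n_sat columns of A:
    append the constraint sum(dt^s) = 0 as a pseudo-observation."""
    n_par = A.shape[1]
    constraint = np.zeros((1, n_par))
    constraint[0, -n_sat:] = 1.0       # ZMC over all satellite clocks
    A_aug = np.vstack([A, constraint])
    y_aug = np.append(y, 0.0)
    x, *_ = np.linalg.lstsq(A_aug, y_aug, rcond=None)
    return x
```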
Results
To validate the effectiveness and evaluate the performance of the proposed method, an observation period covering one week from day of the year (DOY) 031 to 037 in 2021 is selected for the experiments. Figure 1 shows the distribution of the stations; none of the stations used in the PPP experiment are used for deriving the satellite clocks. Thereafter, analyses of the residuals of the network processing and of the satellite and receiver code biases are conducted. Finally, the estimated satellite clocks are compared with the IGS final products and applied in a pseudo-kinematic PPP procedure to evaluate their performance.

Residual Analysis
Processing residuals are important indicators that reflect the consistency between the observations and the functional model [8]. In this study, the four models introduced in Section 2 are employed for satellite clock estimation; the difference between them is the processing strategy for the code biases. The time series of the root mean square (RMS) of the code residuals for the different models on DOY 031 are shown in Figure 2. Obviously, the RMS values of the code residuals of the four models are in a reasonable range and very close to each other, but there are still slight differences. In order to compare the residuals of each model, the mean RMS values are summarized in Table 2. As shown in Table 2, if code biases are extracted in the satellite clock estimation, the mean RMS values of the code residuals are reduced, which indicates that the time-varying part of the code biases is driven into the code residuals of the Undifferenced model. However, this does not mean that the code residuals contain all the time-varying code biases as in Equation (1), because some time-varying code biases may be absorbed by other parameters, such as the clocks. In addition, it can be seen from Table 2 that the mean RMS value decreases from 1.033 to 0.997 m after extracting RCBs, while it only decreases to 1.014 m after extracting SCBs, which means that the time-varying characteristics of the RCBs in the Undifferenced model code residuals are stronger than those of the SCBs.
Theoretically, the Undifferenced-SRCB model should have the smallest mean RMS value, but this is not the case, as shown in Table 2, which may be caused by too many constraints on the model. The time series of the RMS of the phase residuals for the different models on DOY 031 are shown in Figure 3. The phase residuals of the four models are approximately equal, which indicates that the code bias extraction does not affect the residuals of the phase equation. It should be noted that this does not mean that the code biases will not affect other parameters of the phase equation [25].

Code Bias Analysis
Compared with the residual analysis, a direct analysis of the code biases can better reflect their time-varying characteristics. Figures 4 and 5 show the SCB time series extracted from the Undifferenced-SCB and Undifferenced-SRCB models, respectively. For convenience of analysis, only six satellites are displayed, and the time series of each satellite is shifted by a different amount.
As shown in the two figures, the time-varying code biases of the same satellite extracted using the different models are consistent, which indicates that the excessive constraints imposed on the Undifferenced-SRCB model do not significantly damage the time-varying characteristics of the SCBs. Though the SCBs are contaminated by code noise due to the low accuracy of the code observables, it can be seen that the SCBs vary over a peak-to-peak range of about 3 ns, as shown in Figure 4. If those time-varying SCBs are driven into the estimated satellite clocks, they will significantly damage the stability of the clocks and reduce the accuracy of PPP at the user end. Additionally, the RCB time series extracted from the Undifferenced-RCB and Undifferenced-SRCB models are displayed in Figures 6 and 7. For convenience of analysis, only six receivers are displayed, and the time series of each receiver is shifted by a different amount. Compared with Figure 6, the white-noise characteristic of the RCBs in Figure 7 is destroyed, and the constraint of Equation (15) should be responsible for this. Different from the SCBs, the variation of the RCBs with time is more severe. Therefore, if those biases are not driven into the code residuals, as in Equation (2), they will bring adverse effects on the estimated parameters [25].

Satellite Clock Validation
In order to assess the influence of the different code bias extraction models on the stability of the satellite clocks after the start of the estimation, the satellite clocks are compared with the IGS final products. In this assessment, the RMS and standard deviation (STD) values for each satellite are calculated. The RMS values reflect the degree of deviation of the estimated clocks relative to the IGS products, and the STD values represent the quality and stability of the clocks. Figure 8 shows the RMS and STD values of the clock estimates compared with the IGS final products on DOY 031. Obviously, the RMS and STD values of the Undifferenced model are more consistent with those of the Undifferenced-RCB model, and the Undifferenced-SCB model is closer to the Undifferenced-SRCB model, which indicates that the RCBs do not damage the quality of the satellite clocks.
Additionally, the mean RMS and STD values of the clock estimates displayed in Figure 8 are given in Table 3, which also confirms the above conclusion. It can also be seen from Figure 8 and Table 3 that if the SCBs are extracted during the clock estimation process, the STD values of the satellite clocks decrease while the RMS values increase, which means that the quality and stability of the satellite clocks are improved but the relative biases between the estimated and IGS clocks become larger. Obviously, the SCBs contained in the IGS clocks are responsible for the increase in the RMS. It is worth noting that the stability of the IGS clocks is not destroyed by the time-varying SCBs; therefore, those SCBs are likely driven into the code residuals as in Equation (2). We can then infer that the instability of the clocks estimated using the Undifferenced model arises because the time-varying SCBs are assimilated by the satellite clocks. Moreover, driving the time-varying SCBs into the code residuals is a time-consuming process; therefore, we extract the SCBs directly to accelerate the convergence and stabilization of the clocks. Considering that the SCBs are the main factor damaging the stability of the satellite clocks, only the Undifferenced and Undifferenced-SCB models are compared in the following. Figure 9 shows the biases of the estimated satellite clocks with respect to the IGS final clocks. Compared with the Undifferenced model, the satellite clocks without SCBs converge faster and are more stable.
In particular, the satellite clock bias of the Undifferenced model, represented by the blue line, changes by about 0.5 ns between 12 h and 20 h, which is not the case in the Undifferenced-SCB model. The accuracy of the satellite clocks estimated using the above two models over one week is shown in Table 4. As expected, the RMS of the Undifferenced-SCB model is larger than that of the Undifferenced model. After SCB extraction, the mean STD of the clocks decreases from 0.15 to 0.07 ns.
Kinematic PPP Validation
Although extracting SCBs can improve the quality and stability of the satellite clocks at the beginning of estimation, if these biases are not considered in PPP, the positioning accuracy will be seriously damaged.
Unfortunately, as discussed in Section 3.2, the SCBs are contaminated by code noise; therefore, they cannot be used as corrections in PPP. In this study, we instead increase the noise assigned to the code observations so that satellite clocks estimated without SCBs can be applied in PPP. Here, we define the Ratio as the ratio of the prior noise of the code observations to that of the phase observations. Generally, the Ratio is set to 100, which means that if the prior noise of the phase observations is 0.003 m, the noise of the code observations is 0.3 m. The observation data of the 57 stations shown in Figure 1 on DOY 031 are used for PPP with different Ratio values. The positioning accuracy and convergence time are investigated to determine the appropriate Ratio. Figure 10 shows the statistical results, where the positioning error is a three-dimensional error and the convergence time is defined as the time required to attain a three-dimensional positioning error of less than 15 cm, maintained for at least 10 epochs [32]. The positioning accuracy is poor when the Ratio is 100; therefore, it is not plotted in the figure. It can be seen from Figure 10 that as the Ratio increases, the positioning accuracy improves and the convergence time decreases. When the Ratio is greater than 700, the positioning performance is not significantly improved; in addition, large observation noise is not conducive to the convergence of PPP. Therefore, we set the Ratio to 700 when the satellite clocks estimated using the Undifferenced-SCB model are applied in the PPP experiments, and to 100 when the satellite clocks estimated using the Undifferenced model or the IGS final clocks are used. The above three kinds of clocks are then used in PPP to evaluate the effectiveness of the Undifferenced-SCB clocks.
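For concreteness, the two definitions above can be written out directly; a minimal Python sketch, where the 30 s sampling interval and all names are assumptions (the text does not state the processing rate):

    import numpy as np

    def obs_sigmas(ratio, sigma_phase=0.003):
        """Prior noise from the Ratio: code sigma = Ratio * phase sigma (m)."""
        return sigma_phase, ratio * sigma_phase

    def convergence_time(err3d, dt_sec=30.0, threshold=0.15, hold=10):
        """First time (s) at which the 3-D positioning error stays below
        `threshold` meters for at least `hold` consecutive epochs."""
        run = 0
        for i, e in enumerate(err3d):
            run = run + 1 if e < threshold else 0
            if run >= hold:
                return (i - hold + 1) * dt_sec
        return float("nan")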
It should be noted that, in order to demonstrate the advantage of the Undifferenced-SCB model at the beginning of the estimation, the satellite clocks estimated using the Undifferenced and Undifferenced-SCB models are not converged in advance before being used in PPP; therefore, the PPP solutions obtained using the IGS clocks are expected to show an obvious positioning performance advantage. Figure 11 shows the time series of the positioning errors for station ABPO as an example. Compared with the Undifferenced solution, the Undifferenced-SCB solution offers better positioning and convergence performance. Moreover, Figure 12 shows the RMS of the positioning errors of the IGS solutions in contrast to those of the Undifferenced and Undifferenced-SCB solutions for each station from DOY 031 to 037. Each dot in the figure represents one station. As expected, the IGS solutions perform best. In addition, the positioning performance of the Undifferenced-SCB solutions is closer to that of the IGS solutions than that of the Undifferenced solutions. In addition to the positioning accuracy, the convergence speed of the different solutions has also been assessed. Figure 13 shows the mean convergence time of all the stations in a single day. Significant convergence improvements can be found in the Undifferenced-SCB solutions with respect to the Undifferenced solutions: the convergence time of the former is half that of the latter. Considering that the clock estimation starts from 0 h on each day, the convergence time gap between the Undifferenced-SCB and the IGS solutions is reasonable. Figure 14 shows the SCB time series extracted from the Undifferenced-SCB model. For the convenience of observation, the time series of each satellite is shifted by a different offset. As shown in the figure, although the SCBs are not stable constants, their time-varying characteristics are not dramatic. Therefore, the constant part of the SCB is usually absorbed by the satellite clock in the estimation, and the time-varying part is driven into the code residuals to generate stable clock products. On the one hand, the time-varying part of the SCB is not dramatic, and it can be driven into the code residuals after a long period of convergence; on the other hand, this treatment also eliminates the rank deficiency and yields a full-rank model. However, when the filtering system needs to converge again, the time-varying SCBs will seriously damage the convergence speed and stability of the clocks.
Discussion
The key to accelerating the convergence of the satellite clocks to a stable state is to remove the disturbance of the time-varying part of the SCBs. There are two ways to achieve this. The first is to construct a robust filtering model that accelerates the process of driving the time-varying part of the SCBs into the code residuals. The advantage of this method is that the estimated satellite clocks are consistent with those of the IGS, which avoids extra work at the user end; however, how to build such a robust filtering model needs more research. The second method is to extract the SCBs directly. The advantage of this method is that it directly avoids the adverse effects of the SCBs and other unmodeled satellite-related errors on the clocks, but the influence of the SCBs must then be considered in PPP at the user end. The latter method is adopted in this study. At the user end, the full-rank SCB extraction model cannot be constructed as in the network processing. In addition, due to the contamination by code noise, the SCBs extracted from the network processing cannot be used directly as corrections in PPP. In this study, the weight of the code equations is reduced to eliminate the negative effect of the SCBs, and the experiments show that this is a simple and feasible strategy.
Conclusions
It usually takes a long time for the clocks to converge when estimating satellite clocks based on an undifferenced IF model. Moreover, the clocks cannot reach stability in a short time after convergence. For this reason, we investigated the influence of time-varying code biases on the stability of satellite clocks and proposed a new clock estimation method considering SCBs. Clock estimation and PPP experiments were carried out to verify the effectiveness of the new method. By comparing the results of the Undifferenced and Undifferenced-SCB models, the following conclusions can be drawn. First, the time-varying characteristics of the SCBs can be determined using the Undifferenced-SCB model, and they are the main factor causing the instability of the satellite clocks. Second, unlike those of the original undifferenced method, the clocks estimated using the new method are free of the adverse effects of the SCBs and can converge and reach stability in a short time. Finally, the clocks estimated using the new method significantly improve the convergence speed and positioning accuracy of PPP: the convergence speed is doubled, and the positioning errors are reduced by 20-45%. Since the SCBs are not corrected in the code observables in PPP, the weight of the code equations is reduced to eliminate their negative effects. Although this is effective, it is only a compromise strategy. On the one hand, the code equations with large noise reduce the convergence speed; on the other hand, the filter may converge to wrong states due to the high weight of the phase equations, which leads to poor positioning solutions. Therefore, more attention should be paid to how to apply satellite clocks without SCBs to PPP in the future.
Author Contributions: S.L. provided the initial idea and wrote the manuscript; S.L. and Y.Y. designed and performed the research; Y.Y. helped with the writing and partially financed the research. All authors have read and agreed to the published version of the manuscript. Data Availability Statement: The datasets analyzed in this study can be made available by the corresponding author on request.
9,426.2
2021-07-09T00:00:00.000
[ "Engineering" ]
Quantifying the effect of magnetopause shadowing on electron radiation belt dropouts
Energetic radiation belt electron fluxes can undergo sudden dropouts in response to different solar wind drivers. Many physical processes contribute to the electron flux dropout, but their respective roles in the net electron depletion remain a fundamental puzzle. Some previous studies have qualitatively examined the importance of magnetopause shadowing in the sudden dropouts, either from observations or from simulations. While it is difficult to directly measure the electron flux loss into the solar wind, radial diffusion codes with a fixed boundary location (commonly utilized in the literature) are not able to explicitly account for magnetopause shadowing. The exact percentage of its contribution has therefore not yet been resolved. To overcome these limitations and to determine this contribution in percentage terms, we carry out radial diffusion simulations with the magnetopause shadowing effect explicitly accounted for during a superposed solar wind stream interface passage, and quantify the relative contribution of magnetopause shadowing coupled with outward radial diffusion by comparing with the GPS-observed total flux dropout. Results indicate that during high-speed solar wind stream events, which are typically preceded by enhanced dynamic pressure and hence a compressed magnetosphere, magnetopause shadowing coupled with outward radial diffusion can explain about 60-99 % of the main-phase radiation belt electron depletion near the geosynchronous orbit. While the dropout in the outer region (L > 5) can nearly be explained by the above coupled mechanism, additional loss mechanisms are needed to fully explain the energetic electron loss in the inner region (L ≤ 5). While this conclusion confirms earlier studies, our quantification demonstrates the relative importance of this mechanism with respect to other mechanisms at different locations.
Introduction
Reductions of energetic electron flux in the outer radiation belt can generally be attributed to (a) adiabatic motion (i.e., the Dst effect) (Kim and Chan, 1997), which radially transports particles adiabatically following a configuration change in the magnetosphere so as to conserve the three adiabatic invariants (µ, K, φ), and (b) nonadiabatic processes, such as the loss caused by pitch-angle scattering via various cyclotron wave-particle interactions, which leads to electron precipitation into the low-altitude atmosphere (e.g., Lyons et al., 1972; Thorne et al., 2005; Summers et al., 2007a, b; Millan et al., 2007), as well as the loss across the magnetopause (i.e., magnetopause shadowing) into interplanetary space (e.g., Desorgher et al., 2000; Ohtani et al., 2009; Ukhorskiy et al., 2006, 2011). Magnetopause shadowing is often caused either by an inward motion of the magnetopause that opens up previously closed particle drift shells and depletes the particles, or by an outward motion of the particles that subsequently encounter the magnetopause boundary. Magnetopause shadowing is usually coupled with outward radial diffusion (Schulz and Lanzerotti, 1974), as the sudden loss to the magnetopause generates a sharp gradient that further drives particles outwards and then through the magnetopause onto open drift shells. While adiabatic processes allow the electron flux depleted in the storm main phase to return to its pre-storm level in the recovery phase, many events associated with storm main-phase dropouts do not recover (Reeves et al., 2003). A statistical study by Li et al.
(2009) found that the development of a storm leads to a net decrease of relativistic electrons in the outer radiation belt. The flux dropout in such a case can only be a result of nonadiabatic processes that permanently remove the energetic electrons from the system. Several different mechanisms fall into this "nonadiabatic" category; however, they are likely to act together, and the relative role of each mechanism remains an open question (see Turner et al., 2013, for a review). Observational analysis, aided by a variety of spacecraft measurements, provides a useful means of investigating the mechanism(s) possibly responsible for the rapid dropout in the outer radiation belt. For example, analysis of a number of satellite measurements (e.g., Bortnik et al., 2006; Millan et al., 2010; Turner et al., 2012) has suggested that loss to the magnetopause plays a major role in depleting the energetic electrons near and outside the geosynchronous orbit during storm main phases, while atmospheric precipitation makes a much less significant contribution. Morley et al. (2010b) studied a rapid loss of energetic electrons observed by the GPS constellation and found that the loss at and beyond geosynchronous orbit is highly correlated with the motion of the magnetopause. Matsumura et al. (2011) presented the correlation between the outer radiation belt boundary location and the magnetopause location, suggesting that magnetopause shadowing plays a major role in the variation of the outer radiation belt. Besides the above case studies, Meredith et al. (2011) conducted a superposed epoch analysis of energetic electron dropouts during 42 high-speed solar-wind-stream (HSS)-driven storms, found no evidence for enhanced precipitation of MeV electrons during the main-phase dropout, and suggested that the decrease in the MeV electron flux is not caused by precipitation into the atmosphere. This result was confirmed using a different set of HSS drivers by Hendry et al. (2012). Some of the above studies qualitatively compared the precipitation signature and the dropout in the electron radiation belt, and since no significant precipitation was observed at high L shells, the authors concluded that magnetopause shadowing contributes most of the electron dropout. Other studies correlated the radiation belt variability with the magnetopause location and suggested a similar conclusion based on the high correlation. None of these studies, however, directly examined the loss to the solar wind, since the loss across the magnetopause boundary is difficult to measure. On the other hand, radiation belt numerical modeling (e.g., Desorgher et al., 2000; Brautigam and Albert, 2000; Kim et al., 2008, 2010, 2012; Su et al., 2011b; Subbotin et al., 2011) has the advantage that, by manipulating the code and switching the mechanisms of interest on and off, their relative importance in the flux dropout can be revealed. For example, Kim et al. (2008) numerically studied the extent of the drift loss/magnetopause shadowing under different solar wind pressure or IMF Bz conditions through drift path tracing, using the guiding-center method, and estimated the relative decrease of the MeV electron flux when the pressure or IMF Bz is enhanced. Using a variable boundary condition from the satellite CRRES, Shprits et al.
(2006) simulated the main-phase depletion and concluded that radial diffusion can effectively propagate the nonadiabatic boundary flux variations down to L* = 4 and that the possible magnetopause loss in the variable boundary conditions, together with the radial transport, can account for the main-phase flux dropout. Su et al. (2011b) investigated the relative contribution of different loss mechanisms by comparing a CRRES-observed dropout with 3-D electron radiation belt modeling, gradually including more processes in the code. They found that magnetopause shadowing together with adiabatic transport overestimates the electron flux outside L of 5 but underestimates it inside L of 5, and that a further introduction of plume wave-particle interactions remarkably captures the 1 MeV electron flux profile. These modeling studies shed some light on the relative importance of magnetopause shadowing in depleting the electron radiation belt, but again, none of these simulations quantitatively reported its contribution. Exactly how much of the loss is caused by this mechanism? What is the relative percentage of its contribution to the total dropout? This study tackles this issue by using GPS observations combined with radial diffusion simulations to quantify its contribution in an ensemble of HSS events. In radiation belt modeling, the outer boundary, with appropriate boundary conditions, can strongly control the dynamic evolution of the radiation belt. The outer boundary condition can propagate any loss or acceleration information inward, influencing the dynamics in the radiation belt, including an ultimate depletion of the electron population away from the radiation belt (e.g., Shprits et al., 2006). The outer boundary of the radiation belt is commonly placed at a fixed drift shell to model the trapped radiation belt electron population, with a boundary condition derived from measurements such as geosynchronous observations or CRRES and POLAR measurements (e.g., Brautigam and Albert, 2000; Shprits et al., 2006), or based on a kappa-type distribution function as used in Su et al. (2011a). Such a fixed drift shell is incapable of representing the time-varying magnetopause boundary, and the data-inferred boundary condition carries the net outcome of competing mechanisms. Therefore the real loss into the solar wind cannot be truly identified. To study the effect of magnetopause shadowing on the depletion of energetic electrons, the boundary of the radiation belt model must be explicitly set at the last closed drift shell, across which the ultimate electron loss occurs. This last closed drift shell, however, is not only time varying but also K (the bounce-motion-related adiabatic invariant) dependent, which is associated with drift shell splitting in a nondipolar magnetic field (Roederer, 1970). In this study, a K-dependent boundary is implemented in the radial diffusion model to be consistent with the simulation of the trapped electrons at a particular K coordinate. This boundary also varies with time according to the upstream solar wind conditions. Such a boundary can account for magnetopause shadowing as well as drift shell splitting, and the amount of electron loss into the solar wind can be explicitly quantified.
This paper primarily focuses on quantifying the relative contribution of magnetopause shadowing to the electron flux dropouts during 67 HSS events (from Morley et al., 2010a) by simulating the magnetopause shadowing effect in a 1-D radial diffusion model. Section 2 describes the GPS data that are used to obtain the total electron loss during HSS dropout events. Section 3 presents the methodology for carrying out radial diffusion simulations using a time-varying, K-dependent last closed drift shell boundary to explicitly capture the magnetopause shadowing. The quantitative comparison with GPS observations of the relative flux decrease during the superposed HSS event is shown in Sect. 4.
Data
In this study, we use the superposed epoch results of interplanetary and geomagnetic parameters from 67 HSS events spanning the years 2005 to 2008 studied by Morley et al. (2010a) (Fig. 1). The zero epoch time is taken at the time when the east-west solar wind flow (Vy) deflection reverses. The median Dst index reaches a minimum of about −22 nT, and the median Kp approaches 4 at maximum. Although over 75 % of these events have a minimum Dst above −35 nT, and would therefore typically not be identified as storms (Loewe and Prölss, 1997), they display a consistent response that is qualitatively storm-like. In the following, we refer to the superposed median parameters of these events as a "typical HSS event". The rapid increase in the solar wind speed and number density in these events leads to an enhancement of the solar wind dynamic pressure. This ensures an inward motion of the magnetopause and thereby a compressed magnetosphere during the HSS event, providing a good example for examining the real electron loss due to magnetopause shadowing. Note that the solar wind at epoch 0 is already in transition to a high dynamic pressure, so the last closed drift shell (shown later) starts to move inward before the zero epoch. The GPS data used here are obtained from the CXD (combined X-ray dosimeter) instrument package (Distel et al., 1999). Energetic electrons are measured by two subsystems: the low-energy particle (LEP) subsystem resolves 0.14 to > 1.25 MeV electrons into five energy channels; the high-energy X-ray and particle (HXP) subsystem resolves 1.3 to > 5.8 MeV electrons into six energy channels (see also Denton and Cayton, 2011). The CXD electron count rates are inverted to obtain the omni-directional differential number flux, j, by solving the spectral inversion function (Ginet et al., 2013)

y = δt ∫ G(E) j(E) dE + b,     (1)

where y is a vector of observed counts, δt is the integration time, G is a vector of energy-geometric factors (response functions), j(E) is the omni-directional differential particle flux at energy E, and b is a vector of expected background counts. The instrument response functions have been derived through extensive Monte Carlo modeling of the instrument package. Following previous work, and consistent with observation (Cayton et al., 1989; Varotsou et al., 2008; Denton et al., 2010), a relativistic Maxwellian energy spectrum is assumed in the inversion procedure, in which the constant E0 is the rest energy of the particle species (511 keV for electrons). The inversion of Eq. (1) is carried out using InvLib, a C inversion library developed by Paul O'Brien at Aerospace Corporation and used in the data preparation for the AE9/AP9/SPM radiation belt climatology model (see Ginet et al., 2013, and references therein).
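To make the inversion concrete, the sketch below fits a two-parameter spectrum through the forward model of Eq. (1). This is an illustrative reconstruction, not the InvLib implementation; the assumed spectral shape j(E) ∝ E(E + 2E0) exp(−E/T), the quadrature rule, and all names are assumptions.

    import numpy as np
    from scipy.optimize import least_squares

    E0 = 0.511  # electron rest energy, MeV

    def maxwellian_flux(E, n, T):
        """Assumed relativistic Maxwellian shape: j ~ (pc)^2 exp(-E/T)."""
        return n * E * (E + 2 * E0) * np.exp(-E / T)

    def forward_counts(params, E, G, dt, b):
        """Eq. (1): expected counts per channel, trapezoid-rule integral.
        G: response functions, shape (channels, len(E))."""
        n, T = params
        return dt * np.trapz(G * maxwellian_flux(E, n, T), E, axis=-1) + b

    def fit_spectrum(y, E, G, dt, b):
        """Least-squares fit of (n, T) to the observed counts y."""
        res = least_squares(lambda p: forward_counts(p, E, G, dt, b) - y,
                            x0=[1.0, 0.2], bounds=([0, 1e-3], [np.inf, 10]))
        return res.x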
For each time step we assess the goodness of fit of the relativistic Maxwellian and discard those data that cannot adequately be described by the chosen energy spectrum. In order to compare with the following simulation results, the GPS omni-directional differential flux is sorted in the drift shell L* coordinate, averaging over magnetic local time. The determination of the drift shell L* generally requires a global magnetospheric configuration, as the computation involves numerical tracing of global magnetic field lines (Roederer, 1970). Instead, in this study, the recently developed L* neural network (Koller et al., 2009; Koller and Zaharia, 2011; Yu et al., 2012) is employed to compute the drift shells where the GPS instruments were located during all of the 67 HSS events. The L* neural network has been proven to be efficient while preserving high accuracy, with a prediction efficiency of 99.7 % (Yu et al., 2012) when compared to the traditional, time-consuming numerical field-line tracing method. The L* neural network employed here was trained on the T89 empirical magnetic field model (Tsyganenko, 1989) and was run through the SpacePy software package (Morley et al., 2010c).
Methodology
The one-dimensional radial diffusion model, simplified from the Fokker-Planck equation with a loss term, is used to simulate the evolution of the phase space density (PSD) distribution f(L*, t) of the trapped radiation belt energetic electrons during the above superposed HSS event:

∂f/∂t = L*² ∂/∂L* (D_LL/L*² ∂f/∂L*) − f/τ,

where D_LL is the radial diffusion coefficient adapted from the empirical result in Brautigam and Albert (2000), D_LL = 10^(0.506 Kp − 9.325) L*¹⁰ day⁻¹, and the Kp index follows the superposed median value of the above 67 HSS events. The electron lifetime τ is set to one minute once an electron migrates across the magnetopause, implying a prompt loss of radiation belt electrons into the solar wind. Different timescales were tested, including 0.1, 5, and 10 min, and no significant difference was found in the PSD at L* ≤ 7.0, indicating that the selected lifetime does not considerably influence the results near geosynchronous orbit as long as it is on the timescale of drift periods.
Initial condition and outer boundary
The initial condition is determined from DREAM (the Dynamic Radiation Environment Assimilation Model) (Reeves et al., 2012) for different (µ, K) combinations. DREAM performs data assimilation using an ensemble Kalman filter technique (Koller et al., 2007) following the algorithms described in Chen et al. (2005). The PSD is assimilated at different µ and K coordinates, i.e., µ at 167.0, 462.0, 1051.0, or 2083.0 MeV G⁻¹ and K at 0.005, 0.01, 0.03, or 0.1 G^(1/2) RE. The quiet-time (Dst > −20 nT) assimilated PSD (µ, K, L*) during this half-year period is then averaged over time, which is subsequently applied as the initial condition in this study, as displayed in Fig. 2.
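To make the numerical scheme concrete, a minimal explicit finite-difference sketch of the model is given below; the uniform L* grid, the explicit (stability-limited) Euler stepping, and the treatment of the outer boundary (prompt loss outside the last closed drift shell described in the next subsection) are illustrative assumptions, not the authors' exact implementation.

    import numpy as np

    def step_radial_diffusion(f, L, dt, kp, lmax, tau=60.0):
        """One explicit Euler step of
        df/dt = L^2 d/dL (D_LL/L^2 df/dL) - f/tau (loss outside lmax),
        with D_LL = 10**(0.506*Kp - 9.325) * L**10 per day
        (Brautigam and Albert, 2000)."""
        dL = L[1] - L[0]
        dll = 10.0 ** (0.506 * kp - 9.325) * L**10 / 86400.0  # per second
        g = dll / L**2
        fnew = f.copy()
        flux = 0.5 * (g[1:] + g[:-1]) * np.diff(f) / dL  # g df/dL at midpoints
        fnew[1:-1] += dt * L[1:-1] ** 2 * np.diff(flux) / dL
        fnew[L >= lmax] *= np.exp(-dt / tau)             # magnetopause shadowing
        return fnew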
The outer boundary is a crucial element in radiation belt modeling owing to its strong modulation of the systematic variation. In this study, the outer boundary in the radial diffusion model represents the last closed drift shell, which is the magnetopause boundary. We conduct a group of radial diffusion simulations with different K values, so the outer boundary (i.e., the last closed drift shell) must be K consistent, i.e., the last closed drift shell is a function of the K value. Because of drift shell splitting (Roederer, 1970) and ever-changing magnetospheric configurations, the K parameter cannot simply be related to one pitch angle on the last closed drift shell. Therefore the following steps are carried out to obtain the outer boundary at a certain K parameter (K0): (1) a bisection iterative method is applied, tracing drift shells from the midnight meridian until the last closed drift shell L*max(α) is determined for a set of equatorial pitch angles α ranging from 20 to 90°; (2) this procedure also provides the corresponding K parameter on the last closed drift shell for a specific equatorial pitch angle, i.e., K(α); (3) these L*max(K(α)) are then interpolated to L*max(K0). Figure 3a shows the last closed drift shell obtained for different pitch angles (shown by traces along the time axis, with the top ones computed with smaller pitch angles). The last closed drift shell lies further out (darker red) when the pitch angle is intermediate (around 50°) than for smaller or larger pitch angles. The likely reason is that a particle with an intermediate pitch angle undergoes a Shabansky orbit (Shabansky, 1971; Öztürk and Wolf, 2007; Ukhorskiy et al., 2011), in which it does not cross the equator but bounces within one hemisphere, allowing a larger drift shell until it encounters the magnetopause boundary (see Kim et al., 2008, for an illustration). The quasi-periodic daily evolution of the last closed drift shell is caused by the warping of the tail current sheet across the magnetic equator in the T89 magnetic field model (Tsyganenko, 1989), which is used to account for the geodipole tilt angle. During the storm main phase of the superposed HSS event, the last closed drift shell clearly moves inwards (blue coloring) and the K parameter increases because of the stretching of the magnetotail. Figure 3b shows the interpolated last closed drift shell at the particular K values required for the simulations. For smaller K values (around 0.005 and 0.01 G^(1/2) RE), the last closed drift shells show only a small discrepancy; however, they can differ significantly (by up to 0.5) when K becomes larger.
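Step (3) above is a one-dimensional interpolation over K; a minimal sketch with assumed array names, noting that K(α) must be sorted monotonically before interpolating:

    import numpy as np

    def lmax_at_k(k0, k_alpha, lmax_alpha):
        """Interpolate the last closed drift shell to a target K value.
        k_alpha, lmax_alpha: K and L*max sampled over equatorial pitch
        angles (20-90 deg) at one time step."""
        order = np.argsort(k_alpha)
        return np.interp(k0, k_alpha[order], lmax_alpha[order])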
From simulated PSD to flux
Exhibited in Fig. 4a is the phase space density simulation result for (µ, K) of (462.0 MeV G⁻¹, 0.03 G^(1/2) RE) with drift loss to the corresponding boundary. The phase space density is significantly reduced near the epoch time, when the enhanced solar wind dynamic pressure compresses the magnetosphere, resulting in permanent electron loss to the solar wind. The goal of this work is to quantify the effect of magnetopause shadowing on the flux dropout of trapped radiation belt electrons via quantitative comparisons between the simulation results and the GPS observations. While GPS measures count rates, these are inverted to omni-directional differential flux as a function of energy (Sect. 2); in order to allow for a direct comparison with the GPS flux observations, the simulated PSD f(µ, K, L*) results are also converted to omni-directional differential flux j(E). The conversion procedure is summarized as follows (a code sketch follows the list):
1. "Fly" 24 stationary virtual satellites on the midnight equator from 4.0 to 10.0 RE with 0.25 RE separation and calculate L*(α) at these positions for 18 different pitch angles (from 5 to 90°) using the T89 L* neural network technique (Yu et al., 2012) (the K parameter is a by-product of this step).
2. Interpolate L*(α(K0)) onto the drift shell with the K value used in the simulations (i.e., L*(K)), and extract the simulated PSD f(µ, K, L*) for all virtual satellites, which together should reproduce the original simulated radiation belt PSD environment (Fig. 4b).
3. Convert the extracted PSD f(µ, K, L*) at each virtual satellite to flux j(α, E, r) using the relation j = p²f, where p is the relativistic momentum, the differential flux j is in units of cm⁻² s⁻¹ sr⁻¹ keV⁻¹, and the PSD f is in units of c³/(MeV cm)³ (Chen et al., 2005). The differential flux is then integrated over the solid angle to obtain the omni-directional flux j(E; r).
4. Sort the flux output from all virtual satellites by L* (taking the average on the L* grid if results from multiple satellites overlap), which is shown in Fig. 4c.
The relative change in the omni-directional flux from the pre-dropout time to the minimum dropout time will be compared with the observed flux change as described below.
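A minimal sketch of the conversion in step 3; the unit factor follows from the stated units (a single power of c converts c³/(MeV cm)³ × (MeV/c)² to per-second flux), but readers should verify the constant for their own unit conventions:

    import numpy as np

    C_CM_PER_S = 2.998e10  # speed of light in cm/s

    def psd_to_flux(f_psd, E_mev, E0=0.511):
        """j = p^2 f: PSD in c^3/(MeV cm)^3 to flux in cm^-2 s^-1 sr^-1 keV^-1.
        (pc)^2 = E(E + 2*E0) in MeV^2 for kinetic energy E."""
        pc2 = E_mev * (E_mev + 2.0 * E0)
        j_per_mev = f_psd * pc2 * C_CM_PER_S  # cm^-2 s^-1 sr^-1 MeV^-1
        return j_per_mev / 1.0e3              # per keV

    def omnidirectional(j_alpha, alpha):
        """Integrate directional flux over solid angle: 2*pi*int j sin(a) da."""
        return 2.0 * np.pi * np.trapz(j_alpha * np.sin(alpha), alpha)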
Quantitative results
Figure 4c shows the simulated omni-directional flux (converted from the simulated phase space density) at 0.65 MeV after carrying out the above procedures. The lack of data at larger drift shells arises because the conversion from PSD to flux invokes interpolation over a particular energy grid (here 0.65 MeV) that falls outside the available energy range associated with the prescribed (µ, K) combinations, and no extrapolation is done in this study in order to preserve the overall accuracy. Figure 4d displays the GPS-observed omni-directional flux superposed from the 67 HSS events. Both the simulated and observed fluxes show a rapid decrease across a wide L* range within a few hours immediately after the magnetopause boundary moves inward. Although the magnitude of the simulated flux is higher than the observation within L* of 4.5-6.0, the simulation nearly captures the rapid flux decrease on the same timescale during the dropout. Note that no source mechanism is introduced in the entire simulation, which is a reasonably valid assumption during the HSS dropout period because generally no large competing acceleration (source) process takes place during that time (a similar assumption was made by Kim et al. (2010)). Such an exclusion of the source term in the model also explains why the observed flux returns to a higher level in the recovery phase than the simulated flux. Since the simulation reproduces the rapid flux dropout to a large degree despite the difference in flux magnitude from the observation, only the relative change during the dropout (from the pre-dropout time to the minimum dropout time) is examined in this study, to investigate how much of the total radiation belt electron loss (as observed by the GPS) can be explained by drift loss to the magnetopause boundary coupled with outward radial diffusion (as implemented in the simulation). Figure 5 shows the flux at different L* locations from the simulation and the GPS observations. We use the shaded regions spanning the time axis to obtain the averaged flux for two "instances" (i.e., the pre-dropout time and the minimum dropout time). The relative flux change is parameterized by

Δj/j = (j_pre − j_min)/j_pre,     (6)

where "min" stands for the time of minimum flux dropout (i.e., 5 h after the zero epoch time, with a ±5 h span for averaging to improve the statistics), and "pre" means the pre-dropout time (12 h before the epoch, with a ±3 h span). By taking the ratio between the relative flux change in the simulation, resulting from magnetopause shadowing coupled with outward radial diffusion, and the relative flux change in the observation, resulting from all loss mechanisms, we can quantify the percentage contribution of magnetopause shadowing plus radial diffusion to the electron flux depletion during the dropout period. Figure 6a shows the simulated radial profile of the flux at the pre-dropout and minimum dropout times. After the sudden dropout, the flux peak (at L* ≈ 5) shifts inward, diffusing in both directions. The inner region flux is enhanced by the inward diffused flux, while the outer region flux is significantly decreased. This decrease is a result of, as implemented in the radiation belt model, the drift loss to the open drift shells outside the magnetopause caused by the inward motion of the magnetopause, and the outward radial diffusion that is further enhanced by the sharp gradient at the magnetopause boundary. Since the goal of this study is to quantify how much of the total loss of radiation belt electrons is caused by the drift loss coupled with outward radial diffusion, the relative reduction in the flux from the pre-dropout to the minimum dropout time is calculated using Eq. (6) and depicted in Fig. 6b. Shown for comparison is the observed relative flux change during the same time period (dotted line). The total electron flux observed by the GPS decreases by about 67 to 95 % for L* of 5.0 to 6.0 during the dropout, with a larger drop in the outer region. The simulation demonstrates that the flux decreases by about 40 to 90 % in the same drift shell region, roughly following the same decreasing tendency while moving outward. This suggests that the loss mechanisms specified in the simulation, i.e., magnetopause shadowing plus outward radial diffusion, can explain approximately 60 to 99 % of the sudden electron loss during the superposed HSS event near the geosynchronous orbit (Fig. 6c). Outside L* of 5.0, the above loss mechanism can explain more than 93 % of the total dropout, but its contribution in the inner region (L* ≤ 5) is much smaller (60 %).
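The bookkeeping of Eq. (6) and the percentage attribution then reduce to a few lines; a sketch with assumed variable names:

    def relative_change(flux_pre, flux_min):
        """Eq. (6): fractional flux drop from pre-dropout to minimum."""
        return (flux_pre - flux_min) / flux_pre

    def shadowing_contribution(sim_pre, sim_min, obs_pre, obs_min):
        """Percent of the observed dropout explained by the simulated
        magnetopause shadowing + outward radial diffusion."""
        return 100.0 * (relative_change(sim_pre, sim_min)
                        / relative_change(obs_pre, obs_min))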
Discussion
Note that when comparing the flux at a fixed energy, the adiabatic effect cannot be completely neglected. It can lead to a change in the electron population observed at a particular spatial position when the magnetosphere varies. Nevertheless, the phase space density in the adiabatic coordinate system in Fig. 4a clearly demonstrates the decrease of the electron content in the outer radiation belt, indicating that the electrons are truly lost due to drift loss and outward radial diffusion. In fact, the adiabatic contribution to the flux variation during the dropout event is already minimized by studying the flux in the drift shell L* coordinate. Furthermore, even if some adiabatic effect remains because of the changing magnetospheric configuration, it makes the same contribution in both the observation and the simulation, since the same magnetospheric magnetic field model (T89) is employed; comparing the relative flux change therefore rules out the common adiabatic effect in both. Earlier observational works (Bortnik et al., 2006; Millan et al., 2010; Loto'aniu et al., 2010; Turner et al., 2012) conducted case studies of radiation belt dropouts during storm events and suggested that the combination of magnetopause shadowing and outward radial transport can explain the observed dropout near and outside the geosynchronous orbit. Numerical studies using radial diffusion models with fixed boundary locations and time-dependent boundary conditions inferred from satellite measurements (e.g., Brautigam and Albert, 2000; Miyoshi et al., 2003; Shprits et al., 2006) were able to reproduce main-phase dropouts in the outer radiation belt energetic electron content. The study presented here, unlike the above case studies and fixed-boundary simulations, simulated an ensemble of HSS events (67 superposed HSS events from 2005 to 2008) with an explicit time-dependent, K-specific magnetopause boundary (i.e., the last closed drift shell) in the radial diffusion model to capture the real loss across the boundary. This study has made a further significant step by quantifying the relative contribution of magnetopause shadowing coupled with outward radial transport through a comparison of the simulation results with the relative flux change observed by the GPS spacecraft. While previous studies only qualitatively suggested that the above combined effect could largely explain the electron radiation belt dropouts, without reporting any percentage contribution to the observed dropout, this work is able to determine that percentage. A contribution of 93-99 % is found, suggesting that the above coupled loss mechanism can be primarily responsible for the total electron loss near the geosynchronous position (L* > 5). Nevertheless, some additional loss mechanisms are still needed to fully explain the electron loss in the inner region (L* ≤ 5). This quantitative finding is consistent with the qualitative conclusions of previous studies. Note that the diffusion coefficient D_LL used in the 1-D radial diffusion model after Brautigam and Albert (2000) merely represents the diffusion contributed by magnetic field perturbations; however, studies have shown that the contribution from electric field perturbations can be significant (e.g., Ozeke et al., 2012; Tu et al., 2012), and the relative importance of the two is still under study. Therefore, we carried out sensitivity tests by decreasing/increasing the diffusion coefficient D_LL by factors of 2, 5, and 10 and found that a larger diffusion coefficient results in slightly more loss through
the magnetopause boundary during the storm main phase, and hence the relative contribution of magnetopause shadowing coupled with radial diffusion increases by a few percent. This indicates that our quantitative study carries some uncertainty; however, this does not change the conclusion that magnetopause shadowing coupled with outward radial diffusion is largely responsible for the radiation belt electron dropout near the geosynchronous orbit.
Conclusions
Since the fundamental question regarding the primary mechanism responsible for the energetic electron flux depletion in the outer radiation belt remains controversial, this study has made significant steps toward quantifying, rather than qualitatively determining, the effect of magnetopause shadowing on the energetic electron dropouts during 67 HSS events by comparing with GPS dropout observations. Unlike previous radial diffusion simulations with a fixed boundary location, this study utilized a time-varying, K-dependent boundary to represent the magnetopause boundary, i.e., the last closed drift shell. Such a boundary setting allows for the explicit identification of the flux loss due to magnetopause shadowing. Results indicate that during these small but representative storm events, the drift loss to the magnetopause (i.e., magnetopause shadowing) together with outward radial diffusion is mainly responsible for the electron loss, contributing approximately 93-99 % of the total loss near the geosynchronous orbit (L* > 5.0), while the inner region (L* ≤ 5.0) requires some additional loss mechanisms (only 60 % can be explained by the above coupled mechanism). Future studies will be directed at quantifying the relative contributions of other individual or combined loss mechanisms, such as pitch angle and energy diffusion, which can be included within multidimensional models to represent wave-particle interactions (e.g., Beutier and Boscher, 1995; Albert et al., 2009; Su et al., 2010; Subbotin et al., 2010; Tu et al., 2013).
Fig. 1. Superposed solar wind and magnetospheric conditions of the 67 high-speed solar-wind-stream (HSS) events that occurred during the period 2005-2008, from Morley et al. (2010a). The zero epoch time is taken at the time when the east-west solar wind flow (Vy) deflection reverses. The median Dst index reaches a minimum of about −22 nT and the Kp approaches 4 at maximum. The increased solar wind velocity and density during the HSS events result in a compressed magnetosphere, and thus a good example for studying magnetopause shadowing.
Fig. 2. The initial condition used in the radial diffusion simulations for different µ and K combinations. The initial conditions are obtained by averaging quiet-time radiation belt data assimilation results in 2002 after running DREAM (the Dynamic Radiation Environment Assimilation Model; Reeves et al., 2012) using data from three LANL-GEO, one GPS, and the POLAR spacecraft.
Fig. 3.
(a) The color contour of the last closed drift shell (L*max) as a function of the K parameter (y axis). The L*max is obtained by numerically tracing magnetic field lines with the bisection technique on the midnight equator at a particular pitch angle using the T89 magnetic field model. Different pitch angles spanning from 20° (top trace) to 90° (bottom trace) are used to calculate the last closed drift shell L*max and the corresponding K parameter. The traces represent the results for the different pitch angles. (b) The interpolated last closed drift shell for specific K values. This represents the magnetopause location in the radiation belt radial diffusion model.
Fig. 4. (a) One example of the simulated phase space density at µ of 462.0 MeV G⁻¹ and K of 0.03 G^(1/2) RE in the superposed HSS event, with the magnetopause shadowing explicitly accounted for. (b) The reconstructed PSD from 24 stationary "satellites" in the midnight equatorial plane. (c) The converted flux in L* space at an energy of 0.649 MeV, following the procedure described in Sect. 3.2. (d) The superposed GPS flux observations from the 67 HSS events at an energy of 0.649 MeV.
Fig. 5. The flux at different L* locations from (a) the simulation and (b) the GPS observations. The two shaded regions represent two "instances" in an average sense: the pre-dropout time (12 h before the zero epoch time, with a ±3 h span) and the minimum dropout time (5 h after the zero epoch time, with a ±5 h span), respectively.
Fig. 6. (a) The radial profile of the 0.649 MeV simulated flux at the two "instances": the pre-dropout and minimum dropout times. (b) The relative flux change in the simulation and in the GPS observation during the storm main phase (i.e., the flux change from the pre-dropout time to the minimum dropout time). (c) The percentage contributed by magnetopause shadowing (MS) plus radial diffusion (RD) to the relative flux change.
7,569.8
2013-11-15T00:00:00.000
[ "Physics" ]
Enhanced THz extinction in arrays of resonant semiconductor particles
We demonstrate experimentally the enhanced THz extinction by periodic arrays of resonant semiconductor particles. This phenomenon is explained in terms of the radiative coupling of localized resonances with diffraction orders in the plane of the array (Rayleigh anomalies). The experimental results are described by numerical calculations using a coupled dipole model and by Finite-Difference Time-Domain simulations. An optimum particle size for enhancing the extinction efficiency of the array is found. This optimum is determined by the frequency detuning between the localized resonances in the individual particles and the Rayleigh anomaly. The extinction calculations and measurements are also compared to near-field simulations, illustrating the optimum particle size for the enhancement of the near field. © 2015 Optical Society of America
OCIS codes: (230.1950) Diffraction gratings; (240.6680) Surface plasmons; (250.5403) Plasmonics; (300.6470) Spectroscopy, semiconductors; (300.6495) Spectroscopy, terahertz.
References and links
1. J. Krenn, A. Dereux, J. Weeber, E. Bourillot, Y. Lacroute, J. Goudonnet, G. Schider, W. Gotschy, A. Leitner, F. Aussenegg, and C. Girard, "Squeezing the optical near-field zone by plasmon coupling of metallic nanoparticles," Phys. Rev. Lett. 82, 2590–2593 (1999).
2. S. Zou and G. C. Schatz, "Silver nanoparticle array structures that produce giant enhancements in electromagnetic fields," Chem. Phys. Lett. 403, 62–67 (2005).
3. J. Aizpurua, G. W. Bryant, L. J. Richter, F. J. García de Abajo, B. K. Kelley, and T. Mallouk, "Optical properties of coupled metallic nanorods for field-enhanced spectroscopy," Phys. Rev. B 71, 235420 (2005).
4. H. Fischer and O. J. F. Martin, "Engineering the optical response of plasmonic nanoantennas," Opt. Express 16, 9144–9154 (2008).
5. S. Zou, N. Janel, and G. C. Schatz, "Silver nanoparticle array structures that produce remarkably narrow plasmon lineshapes," J. Chem. Phys. 120, 10871–10875 (2004).
6. F. J. García de Abajo, "Colloquium: Light scattering by particle and hole arrays," Rev. Mod. Phys. 79, 1267 (2007).
7. F. J. García de Abajo, R. Gómez-Medina, and J. J. Sáenz, "Full transmission through perfect-conductor subwavelength hole arrays," Phys. Rev. E 72, 016608 (2005).
8. T. W. Ebbesen, H. Lezec, H. Ghaemi, T. Thio, and P. Wolff, "Extraordinary optical transmission through subwavelength hole arrays," Nature (London) 391, 667–669 (1998).
9. L. Martín-Moreno, F. J. García-Vidal, H. J. Lezec, K. M. Pellerin, T. Thio, J. B. Pendry, and T. W. Ebbesen, "Theory of extraordinary optical transmission through subwavelength hole arrays," Phys. Rev. Lett. 86, 1114 (2001).
10. J. Bravo-Abad, L. Martín-Moreno, F. J. García-Vidal, E. Hendry, and J. Gómez Rivas, "Transmission of light through periodic arrays of square holes: from a metallic wire mesh to an array of tiny holes," Phys. Rev. B 76, 241102(R) (2007).
11. E. M. Hicks, S. Zou, G. C. Schatz, K. G. Spears, R. P. Van Duyne, L. Gunnarsson, T. Rindzevicius, B. Kasemo, and M. Käll, "Controlling plasmon line shapes through diffractive coupling in linear arrays of cylindrical nanoparticles fabricated by electron beam lithography," Nano Lett. 5, 1065–1070 (2005).
12. Y. Chu, E. Schonbrun, T. Yang, and K. B.
Crozier, “Experimental observation of narrow surface plasmon resonances in gold nanoparticle arrays,” Appl. Phys. Lett. 93, 181108 (2008). 13. B. Auguié and W. Barnes, “Collective resonances in gold nanoparticle arrays,” Phys. Rev. Lett. 101, 1 (2008). 14. V. Kravets, F. Schedin, and A. Grigorenko, “Extremely narrow plasmon resonances based on diffraction coupling of localized plasmons in arrays of metallic nanoparticles,” Phys. Rev. Lett. 101, 087403 (2008). 15. G. Vecchi, V. Giannini, and J. Gómez Rivas, “Surface modes in plasmonic crystals induced by diffractive coupling of nanoantennas,” Phys. Rev. B 80, 201401 (2009). 16. S. R. K. Rodriguez, A. Abass, B. Maes, O. T. A. Janssen, G. Vecchi, and J. Gómez Rivas, “Coupling bright and dark plasmonic lattice resonances,” Phys. Rev. X 1, 021019 (2011). 17. W. Zhou and T. W. Odom, “Tunable subradiant lattice plasmons by out-of-plane dipolar interactions,” Nat. Nanotech. 6, 423–427 (2011). 18. T. V. Teperik and A. Degiron, “Design strategies to tailor the narrow plasmon-photonic resonances in arrays of metallic nanoparticles,” Phys. Rev. B 86, 245425 (2012). 19. G. Pellegrini, G. Mattei, and P. Mazzoldi, “Nanoantenna arrays for large-area emission enhancement,” J. Chem. Phys. C 115, 24662–24665 (2011). 20. A. B. Evlykhin, C. Reinhart, U. Zywietz, and B. N. Chichkov, “Collective resonances in metal nanoparticle arrays with dipole-quadrupole interactions,” Phys. Rev. B 85, 245411 (2012). 21. V. G. Kravets, F. Schedin, A. V. Kabashin, and A. N. Grigorenko, “Sensitivity of collective plasmon modes of gold nanoresonators to local environment,” Opt. Lett. 35, 956–598 (2010). 22. P. Offermans, M. C. Schaafsma, S. R. K. Rodriguez, Y. Zhang, M. Crego-Calama, S. Brongersma, and. J. Gómez Rivas, “Universal scaling of the figure of merit of plasmonic sensors,” ACS Nano 5, 5151–5157 (2011). 23. W. Zhou, M. Dridi, J. Y. Suh, C. H. Kim, D. T. Co, M. R. Wasielewski, G. C. Schatz, and T. W. Odom, “Lasing action in strongly coupled plasmonic nanocavity arrays,” Nat. Nanotech. 8, 506–511 (2013). 24. A. H. Schokker and A. F. Koenderink, “Lasing at the band edges of plasmonic lattices,” Phys. Rev. B 90, 155452 (2014). 25. A. Bitzer, J. Wallauer, H. Helm, H. Merbold, T. Feurer, and M. Walther, “Lattice modes mediate radiative coupling in metamaterial arrays,” Opt. Express 17, 22108–22113 (2009). 26. B. Ng, S. M. Hanham, V. Giannini, Z. C. Chen, M. Tang, Y. F. Liew, N. Klein, M. H. Hong, and S. A. Maier, “Lattice resonances in antenna arrays for liquid sensing in the terahertz regime,” Opt. Express 19, 14653–14661 (2011). 27. J. Wallauer, A. Bitzer, S. Waselikowski, and M. Walther, “Near-field signature of electromagnetic coupling in metamaterial arrays: a terahertz microscopy study,” Opt. Express 19, 17283–17292 (2011). 28. A. Berrier, R. Ulbricht, M. Bonn, and J. G. Rivas, “Ultrafast active control of localized surface plasmon resonances in silicon bowtie antennas,” Opt. Express 18, 23226–23235 (2010). 29. A. Berrier, P. Albella, M. A. Poyli, R. Ulbricht, M. Bonn, J. Aizpurua, and J. Gómez Rivas, “Detection of deep-subwavelength dielectric layers at terahertz frequencies using semiconductor plasmonic resonators,” Opt. Express 20, 5052–5060 (2012). 30. IOFFE, http://www.ioffe.ru/SVA/NSM/Semicond/Si/basic.html. 31. Y.-S. Lee, Principles of Terahertz Science and Technology (Springer US, 2009). 32. T. Jensen, L. Kelly, A. Lazarides, and G. C. Schatz, “Electrodynamics of noble metal nanoparticles and nanoparticle clusters,” J. Clust. Sci. 10, 295–317 (1999). 33. H. C. 
van de Hulst, Light Scattering by Small Particles (Dover Publications, Inc., 1981). 34. V. Giannini, A. Berrier, S. A. Maier, J. A. Sanchez-Gil, and J. Gómez Rivas, “Scattering efficiency and near field enhancement of active semiconductor plasmonic antennas at terahertz frequencies,” Opt. Lett. 18, 2798–2807 (2010). 35. M. C. Troparevsky, A. S. Sabau, A. R. Lupini, and Z. Zhang, “Transfer-matrix formalism for the calculation of optical response in multilayer systems: from coherent to incoherent interference,” Opt. Express 18, 24715–24721 (2010). 36. L. Duvillaret, F. Garet, and J.-L. Coutaz, “A reliable method for extraction of material parameters in terahertz time-domain spectroscopy,” IEEE J. Sel. Top. Quant. 2, 739–746 (1996). 37. N. Ashcroft and N. Mermin, Solid State Physics (Holt-Saunders, 1976). 38. C. Jacoboni, C. Canali, G. Ottaviani and A. Alberigi Quaranta, “A review of some charge transport properties of silicon,” Solid State Electron. 20, 77–89 (1977). Introduction Conducting particles can resonantly absorb and scatter electromagnetic waves by the so-called localized resonances (LRs) when their dimensions are comparable to the wavelength. The ab-sorption and scattering efficiencies depend on properties like the size, shape, material and orientation relative to the polarization of the wave. The local electric field can be increased in the vicinity of a particle due to such LRs [1][2][3][4]. When these scattering particles are separated by less than a few wavelengths they can couple radiatively. The radiative coupling can be enhanced in periodic arrays of particles through Rayleigh anomalies (RAs), which are diffraction orders in the plane of the array [5]. The resonances in periodic arrays of particles arising from the enhanced radiative coupling of LRs through diffraction are termed Surface Lattice Resonances (SLRs) [6]. SLRs have a significant influence on macroscopic optical properties of the array such as the optical extinction, and offer a new range of design parameters for tuning the spectral response of scattering structures. The extinction of light by periodic arrays is enhanced at the SLR condition as a consequence of the in-plane scattered wave by the particles in the array. This enhanced extinction has been associated through Babinet's principle to the extraordinary transmission through the complementary system, i.e. hole arrays [6,7], where surface waves on the continuous metal film (surface plasmon polaritons) are responsible of the enhanced optical transmission [8][9][10]. Aside from reshaping the extinction spectrum, the electromagnetic fields are enhanced over large volumes at specific frequencies due to SLRs. SLRs are currently under intense investigation at optical frequencies in arrays of plasmonic nanoparticles because their controllable properties that can be exploited in numerous applications such as for surface enhanced Raman scattering, bulk refractive index sensing, solid state lighting and lasing [11][12][13][14][15][16][17][18][19][20][21][22][23][24]. SLRs have been also described at THz frequencies using metallic structures [25][26][27]. In this manuscript we demonstrate experimentally, and verify theoretically the diffractive coupling at THz frequencies of LRs in semiconductor scatterers when they are ordered in a 2dimensional periodic array. In particular, we investigate arrays of silicon particles and show how the THz extinction of these arrays correlate with the frequency detuning between the LRs of the particles and the RA of the array. 
The extinction efficiency is maximum for a detuning close to zero, and not for the largest particles as might incorrectly be expected. We also correlate the far-field enhanced extinction with numerical simulations of the local field enhancement, illustrating an enhanced local field for a small detuning. Semiconductors are very promising materials for THz applications due to their versatile nature, which can be controlled through the doping and free charge carrier concentration. Therefore, this demonstration of the diffractive coupling of LRs in Si structures defines new methods to efficiently modulate the THz extinction and near field. The experiments and the subsequent theoretical and numerical analysis presented in this manuscript consider samples in which the Si structures are homogeneously surrounded by quartz, resulting in pronounced effects due to diffractive coupling and a better support of Rayleigh anomalies. For the design of optical elements based on diffractive coupling it might be desirable that the particles are accessible and placed at an interface of quartz and air. Under these conditions diffractive coupling occurs in both media, and not necessarily in phase. Although this weakens both the reduced extinction at the RA condition and the enhanced extinction at the SLR frequency, a SLR will still be excited and the main mechanism of diffractive coupling will be similar to that of a sample symmetrically embedded in quartz.

Samples description
The investigated semiconductor structures consist of 2D square arrays of 1.5 µm thick single-crystalline Si square tiles. The length d of the tiles varies from d = 25 µm to d = 85 µm for the different samples, while the lattice constant Γ is kept constant at 100 µm. The fabrication was done using standard optical lithography, as described in more detail in Refs. [28, 29]. The layer containing the Si structures is bonded at both sides to an amorphous quartz substrate, which is transparent at THz frequencies. The bonding layer is benzocyclobutene (BCB) and has an estimated thickness of 3 µm. Optical microscope images of the samples are shown in Figs. 1(a)-1(e) for arrays with squares of 25 µm, 35 µm, 45 µm, 55 µm, and 65 µm in length, respectively. The last panel (Fig. 1(f)) shows an unstructured Si reference layer. The images were taken after the lithography but before the last bonding step, in which the Si structures were bonded to the superstrate to form a symmetric sample. Intrinsic single-crystalline Si has a bandgap of 1.12 eV. At room temperature this energy greatly exceeds the thermal energy k_B T of the electrons. As a consequence most of the electrons are confined to the valence band, and the material is transparent at THz frequencies. Under these circumstances Si behaves as a dielectric with a refractive index of n = 3.4 and is practically lossless [30]. A permanent metallic behavior has been introduced by implanting the Si with arsenic (As) atoms, giving a free-carrier concentration on the order of N ≈ 10^19 cm^-3. The free charge carriers and the dimensions of the Si tiles are responsible for the LRs at THz frequencies in the structures.
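To make the connection between the quoted carrier concentration and the metallic THz response concrete, the permittivity of the implanted Si can be estimated with a simple Drude model, in the spirit of the solid-state references cited here [37, 38]. The sketch below is illustrative only: the effective mass and mobility are typical literature values for n-type Si at this doping level, not parameters reported in this work.

import numpy as np

# Drude estimate of the permittivity of As-implanted Si at THz frequencies.
# Assumed values (typical for n-type Si, NOT taken from this paper):
e    = 1.602176634e-19      # elementary charge, C
eps0 = 8.8541878128e-12     # vacuum permittivity, F/m
m_e  = 9.1093837015e-31     # electron mass, kg

N     = 1e19 * 1e6          # carrier concentration, m^-3 (1e19 cm^-3 as quoted)
m_eff = 0.26 * m_e          # conductivity effective mass (assumed)
mu    = 100e-4              # electron mobility, m^2/(V s) (assumed)

eps_inf = 11.7                                 # high-frequency permittivity of Si
tau     = mu * m_eff / e                       # Drude scattering time
omega_p = np.sqrt(N * e**2 / (eps0 * m_eff))   # plasma frequency, rad/s

omega = 2 * np.pi * 1.5e12                     # angular frequency at 1.5 THz
eps = eps_inf - omega_p**2 / (omega**2 + 1j * omega / tau)

print(f"plasma frequency: {omega_p / (2 * np.pi) / 1e12:.1f} THz")
print(f"eps(1.5 THz) = {eps:.1f}")  # large negative real part -> metallic

With these assumed values the plasma frequency lies far above 1.5 THz, so the real part of the permittivity is large and negative in the measurement band, which is the regime needed for the LRs discussed in the text.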
Experimental results
The experiments have been carried out using a THz time-domain spectrometer (THz-TDS), which uses optical rectification in ZnTe of 100 fs pulses from a Ti:sapphire oscillator for the generation of broadband THz radiation. This setup uses electro-optic sampling of the fs pulses from the same oscillator in another ZnTe crystal for the detection of THz radiation [31]. THz-TDS measures THz electric field transients by varying the optical path length between the generation and detection optical pulses. In our measurements, we detect the THz transmission through the array of Si particles. The complex transmission t(ν) is obtained by Fourier transforming the THz transients, which provides the intensity spectra I(ν) = |t(ν)|^2 and the accumulated phase φ(ν) = arg(t(ν)). A sample without any silicon, i.e., only quartz and BCB, was used as a reference. For each array the transmitted intensity is normalized to this reference and corrected for the filling fraction f of the Si in each unit cell (f = d^2/Γ^2) to determine the extinction efficiency Q_ext(ν) = [1 − I(ν)/I_ref(ν)]/f. The extinction efficiencies are shown in Fig. 2(a) as solid curves, for the arrays with particles ranging in length from 85 µm (black curve) to 25 µm (blue curve). The vertical dotted line indicates the RA condition at 1.6 THz, which corresponds to the wave vector associated with the lattice constant of the samples. Three regimes can be identified. For large particles, 85 µm (black curve) to 65 µm (magenta curve), the response of the arrays shows mainly a very broad resonance. For particles in the range between 55 µm (cyan curve) and 35 µm (green curve) a characteristic surface lattice resonance appears close to the RA condition. The resonance becomes broad again for the smallest particles of 25 µm (blue curve). The extinction of the arrays is a result of the induced electromagnetic field from the scatterers interfering destructively with the incident field at the position of the detector. The phase of this field that is reradiated by the scatterers, relative to the phase of the incident field, is shown in Fig. 2(b). This differential phase delay is calculated from the complex Fourier transforms of the time-domain transients as Δφ(ν) = arg[t(ν)] − arg[t_ref(ν)]. The phase spectra show a trend consistent with the extinction spectra. For the smallest and largest particles the curves are smooth, while for the particles with lengths in the range 55 µm (cyan curve) to 35 µm (green curve) there is a pronounced local maximum in the phase. This maximum in phase appears at slightly higher frequencies than the corresponding maximum in extinction, and is consistently close to the RA condition. In the next section we compare these measurements with numerical calculations using the coupled dipole model.

Coupled dipole model
The scattering by small particles in an ensemble can be described by a coupled dipole model (CDM) calculation. In this model the scattering particles are approximated as dipoles, which can interact radiatively with each other. The properties of a scatterer are described by the polarizability α, which relates the polarization and the local field as p = αE_loc. For each particle in the ensemble this local field is the sum of the incident field E_inc and the field E_ens scattered by all the other particles in the ensemble. For each particle i we can write p_i = α[E_inc(r_i) + E_ens(r_i)]. The field scattered by the ensemble at particle i is calculated from the product of the polarization vector of each particle and the dipole-dipole interaction tensor G(r) [6], via a summation over all particles j different from particle i: E_ens(r_i) = Σ_{j≠i} G(r_i − r_j) p_j. The tensor G(r) describes the electric field radiated at a displacement r by a point dipole p; it can be written as G(r)p = k^2 (r̂ × p) × r̂ e^{ikr}/r + [3r̂(r̂·p) − p](1/r^3 − ik/r^2) e^{ikr}. The above equations can be rewritten as a linear set of self-consistent equations, (α^{-1} I − S) P = E_inc, i.e., A P = E_inc, where I is the identity matrix and the matrix S contains all the dipolar interactions. For finite arrays these equations can be solved for P by calculating the inverse of the matrix A.
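The self-consistent system above lends itself to a direct numerical implementation. The following is a minimal, illustrative sketch of such a coupled dipole calculation for a finite square lattice, assuming a scalar (isotropic) polarizability per particle and normal-incidence, x-polarized plane-wave illumination; the variable names and the placeholder polarizability value are ours, not taken from the original code.

import numpy as np

def green_tensor(k, r):
    """Field E(r) = G(r) @ p of a point dipole p at the origin
    (exp(+ikr) convention)."""
    R = np.linalg.norm(r)
    n = r / R
    nn = np.outer(n, n)
    I3 = np.eye(3)
    return np.exp(1j * k * R) * (
        k**2 * (I3 - nn) / R + (3 * nn - I3) * (1 / R**3 - 1j * k / R**2)
    )

def solve_cdm(k, alpha, positions, E0=np.array([1.0, 0.0, 0.0])):
    """Solve (alpha^-1 I - S) P = E_inc for the dipole moments of all particles."""
    N = len(positions)
    A = np.zeros((3 * N, 3 * N), dtype=complex)
    b = np.tile(E0.astype(complex), N)   # equal phase at z = 0, normal incidence
    for i in range(N):
        A[3*i:3*i+3, 3*i:3*i+3] = np.eye(3) / alpha
        for j in range(N):
            if j != i:
                A[3*i:3*i+3, 3*j:3*j+3] = -green_tensor(k, positions[i] - positions[j])
    return np.linalg.solve(A, b).reshape(N, 3)

# 20 x 20 lattice with 100 um pitch, as in the calculations of this section.
pitch = 100.0                            # um
xy = np.arange(20) * pitch
positions = np.array([[x, y, 0.0] for x in xy for y in xy])
k = 2 * np.pi / 100.0                    # ~1.5 THz in quartz (n ~ 2): lambda = c/(n nu) ~ 100 um
alpha = 500.0 + 200.0j                   # illustrative polarizability, um^3 (placeholder)
p = solve_cdm(k, alpha, positions)

# Extinction cross section from the optical theorem form used in Ref. [32]:
# C_ext = 4 pi k sum_i Im(E_inc*(r_i) . p_i) for a unit-amplitude incident field.
C_ext = 4 * np.pi * k * np.imag(np.sum(p[:, 0]))
print(f"C_ext = {C_ext:.1f} um^2")

In practice the polarizability would come from the modified long wavelength approximation introduced next, and the extinction cross section would be normalized by the number of particles and their geometric cross section to obtain the extinction efficiency.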
The extinction efficiency is given by [32] Q_ext = C_ext/[nπ(d/2)^2], with C_ext = (4πk/|E_inc|^2) Σ_i Im[E*_inc(r_i)·p_i], where d is the diameter of the scattering particles, k the wavenumber, and n the number of particles in the lattice. For the polarizability α we use the modified long wavelength approximation (MLWA), which holds for ellipsoidal particles of subwavelength dimensions. This polarizability can be written as [32] α_m = α_m^static / [1 − (2/3)ik^3 α_m^static − (k^2/d_m) α_m^static], where m = 1, 2, 3 represents the principal axes of the ellipsoid with corresponding lengths d_m, and α_m^static represents the static polarizability of the particle. The term (2/3)ik^3 α_m^static accounts for the radiative damping, and the term (k^2/d_m) α_m^static for the dynamic depolarization. The static polarizability for an ellipsoidal particle of permittivity ε_p surrounded by a homogeneous medium of permittivity ε_s is given by [33] α_m^static = (V/4π)(ε_p − ε_s)/[ε_s + L_m(ε_p − ε_s)], with V the volume of the particle and L_m the shape factor taking into account the geometry of the particle. With semi-axes a_m = d_m/2, this factor is given by L_m = (a_1 a_2 a_3/2) ∫_0^∞ ds / {(s + a_m^2) √[(s + a_1^2)(s + a_2^2)(s + a_3^2)]}. For spherical particles the shape factor is 1/3 for all three axes. In the calculations presented in this manuscript, we consider square arrays of 20 × 20 oblate spheroids representing disk-shaped particles, with a lattice constant of 100 µm. The oblate spheroids have two equal principal axes defining their diameter and a shorter one defining the height of the disk (i.e., 1.5 µm). For the long axes of the particles, the shape factors are in the range 0.01 < L_m < 0.03. The resulting resonance in the spectrum of these particles will be referred to as a localized resonance (LR). As we show next, the approximation of the Si tiles as oblate spheroids in the CDM is sufficient to describe the measurements qualitatively.
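For completeness, the shape factor and the MLWA polarizability as written above can be evaluated numerically. The routine below is a sketch under exactly those expressions; the oblate-spheroid dimensions correspond to the 35 µm disks of this section, while the Si permittivity is a placeholder value of the kind a Drude model would give near 1.5 THz.

import numpy as np
from scipy.integrate import quad

def shape_factor(a, m):
    """Depolarization factor L_m of an ellipsoid with semi-axes a = (a1, a2, a3)."""
    f = lambda s: 1.0 / ((s + a[m]**2) * np.sqrt(np.prod([s + ai**2 for ai in a])))
    integral, _ = quad(f, 0.0, np.inf)
    return 0.5 * np.prod(a) * integral

def mlwa_polarizability(eps_p, eps_s, a, m, k):
    """MLWA-corrected polarizability along axis m for wavenumber k in the medium."""
    L = shape_factor(a, m)
    V = 4.0 / 3.0 * np.pi * np.prod(a)
    alpha_static = (V / (4 * np.pi)) * (eps_p - eps_s) / (eps_s + L * (eps_p - eps_s))
    d_m = 2 * a[m]   # full length of axis m
    return alpha_static / (1 - (2.0 / 3.0) * 1j * k**3 * alpha_static
                             - (k**2 / d_m) * alpha_static)

# Oblate spheroid standing in for a 35 um x 35 um x 1.5 um Si tile:
a = (17.5, 17.5, 0.75)                   # semi-axes, um
print(shape_factor(a, 0))                # long axis: should fall roughly in 0.01-0.03
eps_si = -30.0 + 20.0j                   # placeholder permittivity at ~1.5 THz
print(mlwa_polarizability(eps_si, 4.0, a, 0, k=2 * np.pi / 100.0))

A quick sanity check on the sphere limit, a = (1, 1, 1), returns L_m = 1/3 as stated in the text.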
CDM extinction efficiency
The results of the CDM calculations are shown in Fig. 3. Figure 3(a) displays the extinction efficiencies of the arrays with solid curves, and the extinction of single particles with dashed curves. The spectra are vertically displaced for clarity, as indicated by the horizontal dotted lines. The vertical dotted line indicates the Rayleigh anomaly condition at 1.5 THz. This frequency differs slightly from that of the experimental results; the difference can be attributed to the omission of the polymer bonding layer in the calculations. Results are shown for particles with lengths from 85 µm (black curve) to 25 µm (blue curve). The resonant extinction of the single particles can be ascribed to LRs resulting from the coherent oscillation of the free charge carriers at the surface of the Si particles. The localized resonances red-shift as the length of the particle increases, while the shape of the resonance remains similar. The extinction efficiency is to first order independent of the particle length and is slightly larger than 2. The extinction spectra of the arrays are much richer. For the largest particles (black to magenta curves) the resonance is blue-shifted from the LR towards the RA condition, but remains broad. As the LR approaches the RA condition (cyan to green curves), the extinction is enhanced and the resonance becomes narrower. The extinction efficiency is enhanced the most when the LR of the individual particles overlaps with the RA condition of the array, which corresponds to particles with dimensions between 35 µm and 45 µm (green and red curves in Fig. 3(a), respectively). These trends of enhanced extinction match the experimental results shown in Fig. 2(a). For the smallest particles (blue curve), whose LR lies at much higher frequencies than the RA, the extinction enhancement at the RA condition is not very pronounced and the overall extinction of the array is not very different from that of the individual particles.

CDM phase
The phase of the average polarization is shown in Fig. 3(b) and is calculated as φ = (1/n) Σ_i arg(p_i). Solid curves represent the array including interaction, while dashed curves represent single particles. The phase of the latter undergoes a smooth transition centered around the LR frequency, as expected for a damped Lorentz oscillator representing the oscillation of the free charges. The phase of the arrays follows a similar trend, but has an additional modulation slightly red-shifted from the RA condition. As in the case of the experimental results (Fig. 2), the maximum in phase appears at higher frequencies than the maximum in extinction, and remains close to the RA condition. The mechanism behind this behavior of the phase is directly related to the diffractive coupling of the particles in the lattice, and can be understood in more detail by breaking down the total scattering into its contributing components. The results are shown in Fig. 4, where each arrow represents a field component in the complex plane. The moduli of the vectors represent the field amplitudes normalized to the incident field, whose amplitude is given by the radius of the circle; the angles in Fig. 4 represent their phase φ relative to the incident field. Figure 4(a) is the calculation for the single oblate spheroid with a length of 35 µm at 1.44 THz. This frequency corresponds to the maximum extinction efficiency in the case of the array (see green curve in Fig. 3(a)). The incident field (gray arrow) drives the conducting electrons in the scatterer, which scatter a fraction of this field in the forward direction (red arrow). The magnitude and relative phase of this scattered field are determined by the complex value of the particle's polarizability. The coherent sum of the incident field and the scattered field results in a reduction and a change in phase of the transmitted field, which is indicated with the blue arrow. Figure 4(b) represents a calculation similar to that of Fig. 4(a) but at 1.5 THz, the frequency corresponding to the RA condition in the case of the array. As can be appreciated in Fig. 4(b), there are no significant changes in the fields compared to those calculated at 1.44 THz. This similar behavior is due to the broad resonance and the similar value of the polarizability at both frequencies. For the periodic lattice, an additional term is present to account for the diffractive coupling. This term corresponds to the contribution of the field scattered by the ensemble of particles to the local field, E_ens, and it is indicated by the green arrows in Figs. 4(c) and 4(d); the fields in Fig. 4(c) are calculated at the frequency of the SLR, i.e., maximum extinction by the array, while those in Fig. 4(d) correspond to the frequency of the RA (see green curve in Fig. 3). For an array that extends to infinity the total scattered field at each particle is given by E_ens = Sp, where S = Σ_i G(r_i) and p = α[E_inc + Sp]. The magnitude and phase of the field resulting from this diffractive coupling are related to the periodicity of the lattice and therefore to the detuning of the LR from the RA condition.
The superposition of this field, which is diffracted in the plane of the lattice, with the incident field constitutes the local field (black arrows in Figs. 4(c) and 4(d)) at the position of the scatterers. The field scattered by the array depends on the local field at the scatterers and on their polarizability, E_sca ∝ p = α(E_inc + E_ens). The calculated E_sca is represented by the red arrows in Figs. 4(c) and 4(d) for the SLR and RA, respectively. In both cases, the phase difference between the local field and the scattered field is determined by the phase of the polarizability. The amplitude and phase of the diffracted field at the SLR condition (green arrow in Fig. 4(c)) change the local field such that the field scattered by the particles (red arrow) is in anti-phase with the incident field (gray arrow), the two interfering destructively. As a result, the extinction is maximized. The phase that is obtained from the extinction measurements is the phase of the scattered field. At the RA condition (Fig. 4(d)) the wavelength equals the pitch (Γ = 2π/k). At this condition the diffracted field (green arrow in Fig. 4(d)) at each scatterer has been delayed by an integer number of wavelengths. As a consequence, the diffracted field has approximately the same phase as the scattered field of the single scatterer of Fig. 4(b). The amplitude is such that, when the scattered field (red arrow in Fig. 4(d)) is added to the incident field, the resulting total field (blue arrow) still lies close to the unit circle. This can be understood as a far-field amplitude that has not changed, i.e., low extinction, but has only picked up a finite additional phase. The system as described above is characterized by a LR on the high-energy side of the RA, resulting in a sharp SLR. Such behavior is not observed when the LR resides on the low-energy side of the RA. In the latter case the phase of E_sca for the isolated particle will exceed π, and as a consequence the phase and magnitude of E_ens will change as well. This change of phase and magnitude prevents E_sca in the periodic array from being equal in magnitude and opposite in phase to E_inc, therefore preventing the formation of a sharp SLR [13, 18].

FDTD simulations
To gain better insight into the interpretation of the experiments, simulations have been performed using the finite-difference time-domain (FDTD) method. The layout of the unit cell and the dimensions of the particles are taken from the microscope images shown in Fig. 1. The permittivity of the Si is the same as for the CDM calculations. The surrounding medium is considered to be quartz with minor losses, ε_s = 4 + 0.04i. For single particles, the simulation volume is surrounded by perfectly matched layers (PMLs), and the extinction cross sections simulated with FDTD have been normalized to the geometric cross sections of the Si particles, resulting in an extinction efficiency. The results for the periodic arrays are simulated using periodic boundary conditions (PBCs) in the directions perpendicular to the propagation direction of the incident field, and the extinction has been normalized to the filling fraction of the particles. The resulting extinction efficiencies for the arrays are shown in Fig. 5(a), while those for the single particles are shown in Fig. 5(b). The extinction spectra of the single particles show a pronounced red-shift as the particle length increases. The magnitude of the extinction efficiency is nearly independent of the particle length.
The finite extinction at high frequencies is attributed to higher-order resonances. This contribution to the extinction from multipolar resonances is obviously not present in the dipole calculations of the previous section. For the periodic structures the results agree very well with the experimental data. For the large particles (85 µm-65 µm) the spectral response is broad, and there is no pronounced resonance around the RA condition. The resonance is also weak for the smallest particles. For the arrays with particle lengths in the range 55 µm-35 µm a pronounced enhancement of the extinction is visible, slightly red-shifted from the Rayleigh anomaly. The FDTD simulations overestimate the magnitude and underestimate the width of the resonances compared to the measurements. This occurs mostly for the smaller particles and can be attributed to the finite bandwidth and cross section of the experimental THz beam, whereas the FDTD simulations were carried out with plane-wave illumination.

Discussion
A resonance is characterized by its central frequency, magnitude, and width. These parameters have been extracted from the experimental data (Fig. 2) and the FDTD simulations (Fig. 5) and are shown in Fig. 6. The red triangles in this figure correspond to the experimental data of the SLRs in the arrays, while the blue circles are the simulated SLRs and the crosses the simulated LRs. The SLR frequency, defined as the frequency of maximum extinction and shown in Fig. 6(a), changes only slightly when the size of the particles is varied. This small change of the resonance frequency, compared to that of the single particles (represented in the figure inset), indicates that the diffractive coupling strongly modifies the collective response of the localized resonances. The extinction efficiency of the arrays (Fig. 6(b)) has a maximum value of 4 at a particle length of 35 µm, decreasing rapidly for larger and smaller particle lengths. The extinction efficiency of the single particles does not change significantly with the particle length. It is not practical to assign a full width at half maximum to the resonances, due to the asymmetric line shape of the extinction spectra, with an extinction that remains significant at high frequencies in both simulation and experiment. The extinction does, however, drop to zero at low frequencies, and therefore the lower half width at half maximum (HWHM) is used to quantify the widths of the resonances. As can be seen in Fig. 6(c), for both the FDTD and the experimental results the width of the SLRs increases as the particle length is increased. The width of the LRs of the single particles shows the opposite behavior. In order to describe these dependencies of the SLRs on the particle length and their physical origin, we introduce a dimensionless parameter δ that quantifies the frequency detuning between the RA and the LRs, δ = (ν_LR − ν_RA)/ν_RA. For the FDTD simulations this detuning has been calculated for each particle size using the data presented in Fig. 6(a). The calculated frequency detunings are displayed for clarity on the upper horizontal axes of Fig. 6. As can be appreciated in Fig. 6(b), the maximum extinction efficiency of the arrays occurs for a detuning close to zero, i.e., δ ≈ 0. When this condition is satisfied, the HWHM of the SLRs is also at its minimum. These results illustrate that the SLRs leading to the enhanced extinction can be regarded as the result of the coupling of LRs in the individual particles to the RAs, or diffracted orders, in the plane of the array.
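The detuning axis can be checked directly against the lattice geometry: for first-order in-plane diffraction at normal incidence in a homogeneous surrounding, ν_RA = c/(nΓ), which for Γ = 100 µm and a quartz index of n ≈ 2 reproduces the 1.5 THz used above. A short numerical sketch (the LR frequency below is an assumed value for illustration):

c = 299792458.0            # speed of light, m/s
n_quartz = 2.0             # refractive index of the surrounding quartz (approx.)
pitch = 100e-6             # lattice constant, m

nu_RA = c / (n_quartz * pitch)     # first-order Rayleigh anomaly frequency
nu_LR = 1.55e12                    # assumed LR frequency of a given particle size

delta = (nu_LR - nu_RA) / nu_RA    # dimensionless detuning as defined above
print(f"nu_RA = {nu_RA / 1e12:.2f} THz, delta = {delta:+.3f}")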
The coupling of the LRs to the RAs is optimal for small detuning, leading to the maximum in the SLR extinction. SLRs are also characterized by narrow linewidths, which are mainly the result of the reduced radiative damping due to destructive interference of the scattered fields [5]. The same analysis has been done for the SLRs calculated with the coupled dipole model. This is shown in Fig. 7, where the results for a 20 × 20 lattice (solid curves) are compared with those for single scatterers (dashed curves). Even though the CDM approximates the rectangular tiles by the dipolar response of oblate spheroids and ignores the presence of the bonding layer, there is good qualitative agreement between the experimental and numerical results. The CDM predicts in Fig. 7(a) a frequency shift of the SLR that scales with the particle length (solid curve). This shift is much more pronounced for the resonance frequency of the single particles (dashed curve). For small particles the LR is at high frequencies and the RA condition asymptotically keeps the SLR at 1.5 THz. In the experiments and the FDTD simulations the larger particles are no longer in the limit of truly dipolar behavior; the CDM neglects any contribution from multipolar modes in these larger particles that couple through diffraction at the RA condition at 1.5 THz. The dependence of the extinction of the SLR on the particle length, shown in Fig. 7(b) with the solid curve, agrees very well with the experiments, reproducing the maximum around the particle length of 35 µm at which δ ≈ 0. The extinction efficiency of the single particles is practically independent of the particle length (dashed curve). The small increase of the extinction efficiency for smaller particles can be attributed to the material dispersion. The HWHM of the LR (dashed curve in Fig. 7(c)) scales inversely with the particle length. The solid curve in Fig. 7(c), describing the array, displays a completely different behavior. A reduction in the length of the particles reduces the width of the resonance, which is a direct result of the SLR approaching the RA condition, reaching a minimum HWHM when δ ≈ 0. This is in agreement with the experimental results.

Near fields
The enhanced extinction efficiency in the particle arrays is closely related to a redistribution of energy in the near field in the proximity of the particles. This is illustrated in this section with FDTD simulations. We compare single particles to periodic lattices of particles at their respective resonance frequencies. These frequencies are indicated in Fig. 5 with the colored circles for each of the particle lengths under consideration. Cross sections of the near fields are shown in Fig. 8 for particles of 25 µm, 35 µm, 45 µm, and 55 µm in length. The cuts are taken through the center of each particle in the plane defined by the wave vector and the polarization vector of the incident field. Panels 8(a)-8(d) of Fig. 8 show the near-field intensity enhancements in one unit cell of the periodic lattices for increasing particle length. The horizontal dark bars at z = 0 µm represent the particles. We see that the fields are enhanced by up to two orders of magnitude at the edges of the particles, as a result of the accumulation of the charges that are driven by the incident THz electric field. The fields are, however, enhanced not only in the proximity of the particles, but also extend into the region below the plane of the array. This effect is largest in panel 8(b), which shows the local field intensity of the particle of 35 µm in length.
This is also in agreement with the extinction measurements and simulations discussed in the preceding sections, where the largest extinction efficiency was observed for the 35 µm particles. The regions of reduced intensity enhancement in the upper halves of the figures are a result of destructive interference between the incident field and the fraction of the THz radiation that is backscattered from the particles. For the single particles at their respective LR frequencies (panels 8(e)-8(h) of Fig. 8) the field is enhanced mostly at the edges of the particles, but this enhancement is not as pronounced as for the particles in the array. Figure 9 shows the averaged intensity enhancement (AIE), defined as the simulated intensity enhancement at the resonance frequencies integrated and averaged over a full unit cell (dimensions 100 µm × 100 µm × 100 µm), as a function of the length of the particles. Figure 9(a) shows the AIE for the arrays of particles at each SLR frequency. The AIE of the arrays reproduces the behavior of the far-field extinction shown in Fig. 6(b), with a maximum AIE exceeding 2.5 for the array of particles with a length of 35 µm. The simulation for the single particles, for which the results are shown in Fig. 9(b), does not show an optimum particle size. There is only a slight increase of the AIE as the particle length increases, which can be explained by the larger volume over which the field is enhanced. For comparison, the FDTD simulations discussed above were repeated for particles made of a perfect electric conductor (PEC). Results for the AIE at the SLR and LR frequencies are shown in Figs. 9(c) and 9(d), respectively. Due to the suppression of Ohmic losses in PECs and their larger scattering efficiency compared to the Si particles, the AIE is larger for the PEC particles. Using semiconductors with a larger carrier mobility than Si, such as GaAs or InSb, should provide a larger AIE, performing even better than PECs due to the plasmonic behavior of these semiconductors in the THz frequency range [34].

Conclusions
We have experimentally investigated surface lattice resonances in periodic arrays of intentionally doped Si particles at THz frequencies. The mechanism of the coupling of localized resonances in the particles to diffractive orders in the plane of the arrays (Rayleigh anomalies) has been studied as a function of the size of the particles. A maximum extinction efficiency is found for a small frequency detuning between the localized resonances and the Rayleigh anomaly. The experimental results are reproduced by numerical calculations using the coupled dipole model and by electrodynamic simulations using the finite-difference time-domain method. The simulations also show that the optimum extinction efficiency is correlated with a maximum near-field intensity enhancement.
Detecting structural information of scatterers using spatial frequency domain imaging

We demonstrate optical phantom experiments on the phase function parameter γ using spatial frequency domain imaging. The incorporation of two different types of scattering particles allows for control of the optical phantoms' microscopic scattering properties. By laterally structuring areas with either TiO2 or Al2O3 scattering particles, we were able to obtain almost pure subdiffusive scattering contrast in a single optical phantom. Optical parameter mapping was then achieved using an analytical radiative transfer model, revealing the microscopic structural contrast over a macroscopic field of view. As part of our study, we explain several correction and referencing techniques for high spatial frequency analysis and experimentally study the sampling depth of the subdiffusive parameter γ.

Introduction
The analysis of backscattered light from biological tissue can reveal extensive and valuable information on tissue metabolism and structure. More precisely, diffuse optical techniques enable determination of tissue oxygenation and perfusion states [1-3] and are used for tissue differentiation with respect to the physical state [4], structural alignment [5], and malignancy [6]. All of this information is accessible by measurement of the absorption coefficient μa and the reduced scattering coefficient μs′. With the aim of also quantifying microscopic structural tissue properties, there are growing efforts to measure subdiffusive light scattering, which is detected at very short distances from the point of incidence [7-18]. Owing to the small number of scattering interactions, subdiffusive scattering is strongly influenced by the scattering phase function of the scattering particles. The phase function states the probability distribution for light to be redirected into the different scattering angles at each scattering interaction. Generally, this function depends heavily on the size, shape, and relative refractive index of the scattering particles. Therefore, subdiffusive reflectance contains additional information on the micro- and ultrastructure of refractive index fluctuations in tissue and may provide insights into the underlying size distribution of scattering particles [10,19]. As microstructural properties are often linked to the dysplastic state of tissue [20], there is the aspiration to also use diffuse optical techniques for noninvasive and possibly intraoperative histopathologic differentiation and margin assessment. In our subsequent phantom study, we apply spatial frequency domain imaging (SFDI) to well-defined optical scattering phantoms to retrieve their microscopic scattering contrast [7-9]. Similar to a previous study by Kanick et al. [7], we make use of high spatial frequency analysis to image this microscopic and subdiffusive scattering contrast on a macroscopic scale through quantification of the phase function parameter γ [19,21,22]. In contrast to that previous study, we demonstrate subdiffusive imaging on a single heterogeneous yet well-defined tissue phantom and also reveal microscopic contrast for the case of closely matched macroscopic absorption and scattering properties. After a description of our experimental setup in Sec. 2, we present our theoretical evaluation approach in Sec. 3. In this context, we also provide useful instructions for experimental referencing and correction mechanisms for spatial frequency domain (SFD) reflectance data.
In Sec. 4, we present our subdiffusive imaging experiments and demonstrate both mapping and depth sensitivity of the phase function parameter γ. Section 5 presents our conclusions.

Experimental Setup
A sketch of our SFDI setup is given in Fig. 1. The projection unit is composed of a halogen light source, which couples into a flexible light guide for subsequent wavelength selection using a filter wheel. The wheel has a selection of eight coated bandpass interference filters with peak transmission at 500, 550, 600, 650, 700, 750, 800, and 900 nm and a FWHM of 10 nm (Edmund Optics). Sinusoidal intensity modulation at spatial frequencies 0 mm−1 ≤ f ≤ 0.85 mm−1 is achieved by a digital micromirror device (0.7 XGA VIS, Discovery 4100 Development Kit, Vialux, Germany) and obliquely projected onto the sample at an angle of 35 deg with digital precorrection for distortion effects. Diffuse reflectance is captured from a 38 mm × 38 mm imaging area by a cooled CCD camera (QSI 640i, Quantum Scientific Imaging) at a numerical detection aperture of 0.09. Our tissue phantoms are made of transparent epoxy resin with suspended titanium dioxide (TiO2, Larit reinweiß, Lange & Ritter, Germany) or aluminum oxide (Al2O3, T530SG, Bassermann Minerals GmbH, Germany) scattering particles. A detailed description of the phantom fabrication process and of the optical and physical properties of the epoxy material, as well as further analysis of the employed scattering particles, can be found in Ref. 23. To probe the subdiffusive differences of the two particle types, two pure phantoms were prepared with either TiO2 or Al2O3 scattering particles. In order to closely match the reduced scattering value in both phantoms, the weight concentration of Al2O3 particles in the first phantom was ∼40 times larger than that of the TiO2 particles in the second phantom. This large difference in weight concentration results from the much lower scattering efficiency and higher anisotropy of the Al2O3 scatterers, owing to their larger diameter d and lower refractive index n. According to data from the distributing company, the median diameter of the Al2O3 particles is d50,Al2O3 = 1.5 ± 0.4 μm, while we determined the mean diameter of the TiO2 particles to be dTiO2 = 250 ± 30 nm using scanning electron microscopy. A similarly large difference exists for the refractive index values n, with nTiO2 = 2.7 ± 0.2 [24] and nAl2O3 ≈ 1.77 [25]. As is discussed further in Ref. 23, both particle types exhibit variations in shape and often come in angled and elongated nonspherical forms. Phase function modeling using Mie theory is, therefore, inappropriate. After suspension of the scattering particles in the liquid epoxy resin and subsequent solidification, all phantoms were polished using a series of increasingly fine sandpapers (grit P160-P2500). The smoothness of the phantoms' surfaces was required to reduce the effect of specular surface reflections. While correcting for this effect, as described in Sec. 3, we chose not to use crossed linear polarizers in order for the scalar light propagation model to fully agree with our measurement setting [26].

Theory and Evaluation Procedure
The model we apply to our experimental data is a solution of the radiative transfer equation for oblique projection of sinusoidal intensity patterns. This solution was derived by Liemert and Kienle assuming a semi-infinite scattering medium and can be found in Ref. 27. A recent improvement of the solution approach is given in Ref. 28.
For reasons of computation time, we limit the computational precision to the order N = 17. The presence of two very distinct scattering particle types (TiO2 and Al2O3) with very different sizes and refractive indices renders precise yet universal phase function modeling difficult. Using forward calculations with goniometrically measured phase function data of the two scattering particles at λ = 750 nm [23], we find that the Reynolds-McCormick scattering phase function may serve as an acceptable yet well-established model function to approximate the γ-reflectance characteristic of both particle types [29]. The employed Reynolds-McCormick model phase function takes the two input parameters g0 and α, and allows for more variability than the related Henyey-Greenstein function, which is a special case of the former for α = 0.5. Owing to both the approximate nature of our phase function model and uncertainties in our experimental phase function data, our evaluation remains subject to a potential absolute error in γ of ∼0.1. This error was estimated by numerous forward calculations using both measured and model-based phase function data. The interested reader may also find a comprehensive analysis of the influence of the model phase function on the quantification of the subdiffusive parameter γ in our recent study in Ref. 22. To best approximate the subdiffusive scattering properties of our scattering particles, we set the input parameter g0 = 0.9 in our Reynolds-McCormick model function and use an empirical relationship for the input parameter α to allow for fitting of γ independently of any other phase function parameter. This relationship is based on a large set of forward calculations and a high-accuracy fit (R² = 0.9999) of the α(γ) dependence.
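To make the role of γ concrete, the sketch below evaluates the Reynolds-McCormick (Gegenbauer-kernel) phase function and computes γ from its first two Legendre moments, using the standard definition γ = (1 − g2)/(1 − g1), where g1 and g2 are the first and second Legendre moments of the phase function. The normalization constant and the quadrature choice are our own illustrative choices, not taken from the paper's evaluation code.

import numpy as np

def reynolds_mccormick(cos_t, g0, alpha):
    """Reynolds-McCormick phase function p(cos theta), normalized so that
    its integral over all solid angles equals 1."""
    K = (alpha * g0 * (1 - g0**2)**(2 * alpha)
         / (np.pi * ((1 + g0)**(2 * alpha) - (1 - g0)**(2 * alpha))))
    return K * (1 + g0**2 - 2 * g0 * cos_t)**(-(alpha + 1))

def gamma_parameter(g0, alpha, n=5000):
    """gamma = (1 - g2)/(1 - g1) from the first two Legendre moments."""
    x, w = np.polynomial.legendre.leggauss(n)          # nodes/weights on [-1, 1]
    p = 2 * np.pi * reynolds_mccormick(x, g0, alpha)   # azimuthally integrated
    g1 = np.sum(w * p * x)
    g2 = np.sum(w * p * 0.5 * (3 * x**2 - 1))
    return (1 - g2) / (1 - g1)

# alpha = 0.5 reduces to Henyey-Greenstein, for which g_n = g0**n,
# so gamma should equal (1 - g0**2)/(1 - g0) = 1 + g0:
print(gamma_parameter(0.9, 0.5))   # ~1.9
print(gamma_parameter(0.9, 1.5))   # a different gamma at the same g0

The second call illustrates the point made in the text: varying α at fixed g0 changes γ, which is what makes the Reynolds-McCormick function suitable for fitting γ independently of the anisotropy.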
To enable rapid derivation of the absorption, the reduced scattering, and the subdiffusive phase function parameter γ, a precomputed look-up table of SFD reflectance values is used by the trust-region-reflective fit algorithm (MATLAB, MathWorks). The error introduced by the use of the discretized look-up table is negligibly small, as could be confirmed by selective evaluation without precomputed values. The four-dimensional look-up table contains values for the 12 employed spatial frequencies in the range 0 mm−1 ≤ f ≤ 0.85 mm−1 for many possible combinations of μa, μs′, and γ. Each of these dimensions is discretized into 40 parameter steps, with a logarithmic spacing of μa in the range 5 × 10−4 mm−1 ≤ μa ≤ 5 × 10−2 mm−1 and linear spacing of both μs′ and γ, with 0.5 mm−1 ≤ μs′ ≤ 2.5 mm−1. The derivation of the phantoms' optical properties is not based on raw experimental imaging data, but follows a series of computational postprocessing steps. Some of these steps are of special importance for high spatial frequency analysis. In the following, we elaborate on three postprocessing steps that account for absolute referencing of the reflectance intensity and for correction of the imaging data for system-related defocusing and specular surface reflections. The first step is the common and well-known demodulation principle. In this step, the three evenly phase-shifted images R1(x,y), R2(x,y), and R3(x,y) are turned into the actual SFD reflectance image, RAC,raw(x,y) = (√2/3) √[(R1 − R2)² + (R2 − R3)² + (R3 − R1)²]. RAC,raw still represents a raw image with pixel values measured in counts. In the next step, these raw values need to be transferred into actual reflectance intensities, which are independent of both the intensity and spatial heterogeneity of the projection system and the spatial heterogeneity of the detection system. This step is often approached by measurement of a spatially homogeneous reference phantom with very well-known optical properties or reflectance intensities. In so doing, measurement inaccuracies from other imaging modalities may be transferred, and the underlying scattering phase function of the reference phantom has to be known precisely. In order not to rely on other imaging modalities and possible phase function uncertainties, we follow a slightly different approach by measuring the optical properties of a very homogeneous and intensely scattering epoxy phantom (μs′ ≈ 15 mm−1). The only assumptions we make about this phantom are the spatial homogeneity of its optical properties along with its predetermined refractive index values [23]. We then determine the phantom's optical properties (i.e., μa, μs′, and γ) at every imaging point using the product of light source intensity and detection sensitivity as an additional multiplicative fit variable a. In a next evaluation step, the derived optical properties are averaged and assumed constant for all imaging points, which leaves the factor a(x,y) as a single and robust fit variable in a second reevaluation of the phantom. This fit variable a(x,y) quantifies the ratio between RAC,raw and the actual reflectance value RAC as predicted by the theoretical model: RAC(x,y,f) = a(x,y) RAC,raw(x,y,f). All subsequent measurements are multiplied by the obtained map a(x,y), which serves as an intensity reference quantifying both the light source intensity and the detection efficiency at the same time. Note that a(x,y) is independent of the spatial frequency. A further postprocessing step corrects for optical defocusing in the spatial frequency projections, which is especially required for high spatial frequencies and oblique projection geometries. We achieve this by measuring the SFD reflectance from an anodized plate of aluminum. Due to the absence of volume scattering, the reflectance from the aluminum originates solely from the metal surface and the very thin anodization layer, which is typically <25 μm thick. Using this reflectance measurement, the amount of defocusing at a spatial frequency f can be quantified by the ratio b(x,y,f) = RDC,Al(x,y)/RAC,Al(x,y,f), where RDC,Al corresponds to the aluminum reflectance at spatial frequency zero, which is commonly obtained by averaging R1, R2, and R3 at any spatial frequency. For very sharp projections and low spatial frequencies, b ≈ 1 and no correction is required. For high spatial frequencies, b may become much larger owing to a decrease in RAC,Al. We perform postcorrection for projection defocusing by multiplying RAC(x,y,f) of all subsequent measurements by b(x,y,f).
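The demodulation, referencing, and defocus steps described above chain together into a short per-frequency pipeline. The following sketch assumes the maps a(x,y) and b(x,y,f) have already been determined as described; the function and array names are illustrative, and the specular subtraction of the next paragraph would be applied separately.

import numpy as np

def demodulate(R1, R2, R3):
    """AC amplitude from three projections phase-shifted by 0, 2pi/3, 4pi/3."""
    return (np.sqrt(2.0) / 3.0) * np.sqrt(
        (R1 - R2)**2 + (R2 - R3)**2 + (R3 - R1)**2)

def corrected_reflectance(R1, R2, R3, a_map, b_map):
    """Apply the intensity reference a(x,y) and defocus correction b(x,y,f)."""
    R_AC_raw = demodulate(R1, R2, R3)
    return a_map * b_map * R_AC_raw

def bin_image(img, bs=8):
    """8 x 8 binning; applied only AFTER demodulation (see text)."""
    h, w = (s - s % bs for s in img.shape)
    return img[:h, :w].reshape(h // bs, bs, w // bs, bs).mean(axis=(1, 3))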
In a last correction step, we take into account specular surface reflections. Surface-roughness-related specular reflections have a potentially very strong impact on high spatial frequency analysis. The use of crossed linear polarizers can diminish this effect, but complicates the theoretical modeling of the underlying light propagation [26]. Through the use of a nonscattering transparent epoxy phantom with closely matched surface roughness, we were able to quantify the specular reflections and to subtract them from our actual phase measurements. In addition to this correction step, we chose a rather large projection angle in our measurement setup to further reduce the influence of specular surface reflections. At the end of the above postprocessing steps, every computed reflectance image with a resolution of 2048 × 2048 pixels was binned with bin sizes of 8 × 8 pixels to reduce the overall data volume and to enhance the signal-to-noise ratio. Especially for high spatial frequency analysis, it is recommended to perform this binning step only after computation of the AC reflectance value, as binning may otherwise add to the demodulation of the phase data. Both our experimental setup and our evaluation approach have been repeatedly verified by comparative studies employing various other measurement modalities, such as spatially and time-resolved reflectance spectroscopy, on a large set of well-defined tissue phantoms.

Experimental Results
Our initial experiments relate to two pure scattering phantoms with either TiO2 or Al2O3 as the suspended scattering particles. These two homogeneous phantoms are hardly distinguishable to the naked eye, yet have some differences in their reduced scattering and absorption properties. As previously mentioned, the two particle types differ strongly in their scattering phase function, based on the differences in particle diameter and refractive index. These two phantoms and all further phantoms were characterized with respect to μa, μs′, and γ using SFDI at 12 linearly spaced frequencies in the range 0 mm−1 ≤ f ≤ 0.85 mm−1. Figures 2(a)-2(c) show the results of our measurements on the pure scattering phantoms, averaged over five phantom positions, versus wavelength λ. The given error bars correspond to the standard deviation of the positional average. Figure 2(a) reveals similar absorption values in the visible to near-infrared wavelength range for both tissue phantoms. The absorption is found to be very small in both phantoms and is only slightly increased for the Al2O3 phantom, based on the intrinsic absorption of the Al2O3 scatterers and their higher weight concentration. Figure 2(b) shows the different wavelength dependences of μs′ for the two phantoms. The two dashed curves in Fig. 2(b) are fits of the power law μs′(λ) ∝ λ−α to the reduced scattering values, yielding αAl2O3 = 0.58 and αTiO2 = 1.15. The stronger decline of μs′(λ) for the TiO2 phantom relates to the smaller size of the TiO2 scattering particles and their stronger refractive index wavelength dependence. Owing to the different wavelength dependences of the reduced scattering values of the two pure scattering phantoms, an almost matched reduced scattering value can be found at the wavelength λ = 750 nm in Fig. 2(b). In addition to the overall small differences in absorption, this wavelength is thus well suited for our next step, in which we aim to perform subdiffusive imaging in the near absence of macroscopic absorption and scattering contrast in a combined heterogeneous phantom. Relating to the wavelength dependence of scattering, the microstructural differences of the two pure phantoms become especially apparent in Fig. 2(c), which shows the obtained γ values for both phantoms.
The larger size of the Al2O3 particles relates to both more forward-peaked scattering and reduced high-angle scattering, leading to less subdiffusive backscattering and a larger γ value. We compare our results for γ (dashed curves) to those obtained by goniometric measurements on the same scattering particles in a medium of similar refractive index. These comparative values are given by the two solid curves in Fig. 2(c), with the shaded areas representing their measurement errors [23,30]. In spite of common features in the compared data curves, we find absolute deviations for both particle types. For TiO2, we attribute the systematic deviation of Δγ ≈ 0.1 to the approximate nature of our phase function model and to residual specular surface reflections, which lower the measured γ values. The larger deviation for the Al2O3 particles may have its cause in differences in the effective particle size distributions, due to slow sedimentation of the bigger Al2O3 particles during phantom curing, and in the surface insensitivity of the goniometric measurement approach. Using the derived optical properties of the two pure scattering phantoms from Figs. 2(a)-2(c), it is possible to create a laterally heterogeneous phantom composed of both phantom materials with only microscopic subdiffusive scattering contrast at λ = 750 nm. We achieved this by engraving the name of our institute into the back side of the Al2O3 phantom with a milling machine and a rounded milling head of 4 mm diameter. Using a syringe, the on average 2-mm-deep inscription was then filled with TiO2-suspended epoxy resin with optical properties and TiO2 particle concentration equal to those of the TiO2 reference phantom (see Fig. 2(d)). After hardening and repeated surface polishing, the lateral structuring of the phantom so obtained is hardly recognizable with the eye, owing to the small macroscopic scattering and absorption contrast in the visible range. Indeed, according to Fig. 2(b), the composed phantom has hardly any macroscopic scattering contrast at λ = 750 nm. We used this wavelength to perform SFDI on the heterogeneous phantom in an almost complete absence of μa and μs′ contrast. Figures 3(a)-3(e) give SFD reflectance images (i.e., gray-scale maps of RAC(f)) at various spatial frequencies for λ = 750 nm and illustrate the increasing microscopic scattering contrast with spatial frequency. These reflectance images were scaled by our intensity reference and corrected for image blurring and specular surface effects, as described previously. The increased subdiffusive backscattering of the TiO2 scatterers gives rise to a strong contrast at the spatial frequency f = 0.35 mm−1. Radiative transfer model based analysis finds the largest absolute derivative of RAC with respect to γ near this spatial frequency and reveals that only five percent of the SFD reflectance signal stems from photons traveling deeper than 1 mm for the given phantom optical properties [22]. We denote this penetration depth by d5 and provide all corresponding depth values in the caption of Fig. 3. To best compare the achieved microscopic imaging contrast in Figs. 3(a)-3(e), the gray-scale coloring was adjusted for every image such that the ratio of the pixel values corresponding to the brightest and darkest colors is constant among all reflectance images. A very interesting effect can be observed in Figs. 3(c) and 3(d). In these images, the contrast of the engraved letters mostly stems from a bright contour at the inner border of the engraving.
This contour is ∼0.5 mm wide and is explicable by a slight bias for subdiffusive light in the boundary area to reach the phantom surface on the TiO2 side. By fitting our radiative transfer model to the experimental data, optical parameter maps for μa, μs′, and γ were obtained; they are given in the top row of Fig. 4. As expected from the optical properties of the pure reference phantoms (Figs. 2(a)-2(c)), the laterally structured phantom has very little absorption and reduced scattering contrast at λ = 750 nm, as can be seen in the left and central parameter maps of Fig. 4. The apparent change in optical properties in the lower and upper left corners of the images is a boundary artifact caused by the finite projection area, which extends the imaging field of view by only 10 to 20 mm to each side [31]. The third parameter map in Fig. 4 depicts the strong γ contrast relating to the different types of scattering particles and thus demonstrates the capability of high-frequency analysis for macroscopic imaging of quantified microscopic scattering properties. Three line profiles in the bottom row of Fig. 4 further quantify the obtained contrast in the three parameter maps. The line profiles correspond to the single pixel lines indicated by the dashed lines in the parameter images above. The quantification of γ is noticeably impaired by surface roughness, as can be seen from the scratches in the γ map. The apparent increase in γ at the left end of the imaging area is an artifact caused by residual specular surface reflections in connection with the oblique projection geometry. The optical properties displayed in Fig. 4 agree well with those measured for the pure reference phantoms (Fig. 2), even though our model-based assumption of a semi-infinite turbid medium does not reflect the phantom's lateral and axial structuring. The absorption map, together with its line profile in Fig. 4, depicts an almost complete absence of absorption contrast. This corresponds to the relatively long optical path lengths required for such small absorption values to have a meaningful impact on the reflectance signal. The parameter sensitivity toward μs′ is mostly given by intermediate optical path lengths (i.e., intermediate spatial frequencies), which have dimensions similar to the depth and width of our engraving. Therefore, we observe the obtained μs′ contrast at wavelengths other than λ = 750 nm to be diminished (not shown) compared to the contrast expected from the homogeneous phantoms. This also reflects the model's assumption of a semi-infinite and homogeneous sample geometry and its inability to correctly describe the remaining macroscopic scattering heterogeneity. On the contrary, we always find the obtained γ maps, also at wavelengths other than λ = 750 nm, to retain most of the contrast anticipated from our reference measurements. This is because the corresponding short photon paths are mostly limited to either the inside or the outside of the engraved region. When using γ maps for the study and assessment of microscopic tissue structure, it is important to know the corresponding sampling depth of the obtained γ values. To answer this question, we took an experimental approach by adding additional scattering layers on top of the engraved phantom (see Fig. 5(a)). These layers are made of the same epoxy phantom material and incorporate TiO2 scatterers in concentrations very similar to that of the phantom engraving.
By iteratively adding layers of increasing thickness onto the phantom, the γ contrast obtained from further reflectance measurements is more and more reduced and eventually disappears. One thereby finds the depth regime that mostly accounts for the quantification of γ. Figure 5 shows the obtained γ maps at λ = 700 nm for no added layer [Fig. 5(a)] and for cover layers of increasing thickness [Figs. 5(b)-5(e)]. Phantom and cover layer are coated with a microscopy immersion index-matching fluid to reduce Fresnel reflections both on the surface and between the layer and phantom. The fluid greatly reduces the noise in the γ maps but gives rise to a small interference effect, as seen by the bow pattern in Figs. 5(b)-5(e). As expected, the γ contrast of the inscription diminishes with increasing effective layer thickness and is even canceled at the layer thickness of d·μ_s′ = 1.18, as can be seen in Fig. 5(e). When comparing the coloring (i.e., the γ values) within the "ilm" inscription of Figs. 5(b) and 5(c), a slight decrease in γ value can be observed, which we attribute to the added interface between the cover layer and phantom. In spite of the used index-matching fluid, the added interface and the finite thickness of the oil layer have a small influence on the backscattering intensity. However, it can be observed in Figs. 5(c)-5(e) that γ remains almost constant in the engraved area independent of the surface layer thickness, as the surface layer is composed of the same scattering particles. By contrast, the lower subdiffusive backscattering of the outer Al2O3 particles gets more and more masked by the TiO2 cover layer and is even hidden at the effective thickness of d·μ_s′ = 1.18 [Fig. 5(e)]. As a conclusion from this cover layer experiment, we find the sampling depth of γ to lie in the range of 0.78 < d·μ_s′ < 1.18. This finding is consistent with the basic conception that ballistic light loses its directionality after a propagation distance of roughly d = 1/μ_s′. An even smaller sampling depth of γ images may be achieved by sole consideration of very high spatial frequencies.

Conclusions

Through our phantom study, we have not only demonstrated the feasibility of fabricating laterally structured microscopic scattering contrast in tissue-simulating phantoms, but also described its measurement using high spatial frequency analysis in SFDI. The use of high spatial frequency analysis allows for mapping of the phase function parameter γ and can, thus, provide macroscopic images of the underlying microscopic structure. In addition to the control of the phantoms' macroscopic absorption and reduced scattering values, the use of two different scattering agents allowed us to design distinct and well-defined subdiffusive scattering properties. By measurement of γ, these subdiffusive scattering properties, which ultimately relate to the size and the refractive index of the scattering particles, could be quantified. The presented measurement approach has potential application for in vivo tissue analysis, where the quantification of subdiffusive scattering may provide important insights into the tissue microstructure, which is otherwise only accessible through histology. Dysplastic tissue alterations often come along with major changes in the cellular structure, and it is highly desirable to be able to detect these changes without the need for tissue biopsies. Subdiffusive reflectance imaging represents one promising approach toward such in situ measurements.
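To translate the dimensionless bounds 0.78 < d·μ_s′ < 1.18 into physical units, one simply divides by the reduced scattering coefficient. A two-line illustration with an assumed tissue-like μ_s′ (the value below is not from the phantom study):

```python
# Express the dimensionless sampling-depth bounds for gamma in millimeters
# for an assumed reduced scattering coefficient (illustrative value).
mu_s_prime = 2.0  # mm^-1, assumed tissue-like value
d_min, d_max = 0.78 / mu_s_prime, 1.18 / mu_s_prime
print(f"gamma is sampled from roughly {d_min:.2f} to {d_max:.2f} mm depth")
```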
In view of its potential biomedical application, the strong influence of specular surface reflections even for our almost ideally flat phantoms underscores the need to find strategies to correct for these imaging artifacts in nonideal imaging situations. In a previous study, we reported on the importance of correct phase function modeling for precise γ quantification. 22 For this phantom study, the Reynolds-McCormick phase function was used to model the subdiffusive reflectance characteristics of the employed scattering particles. In view of similar studies on biological tissue, a fractal size distribution of Mie scatterers may provide a useful model for microscopic tissue scattering. In this case, the retrieved γ values could favorably relate to the fractal dimension and, thus, to the size distribution of the underlying scatterers. 19,32-34 With respect to tissue histology, this information may allow for valuable insights into the microscopic tissue structure averaged over the tissue surface to a depth of about 1/μ_s′.
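In this literature, γ is defined from the first two Legendre moments of the phase function as γ = (1 − g_2)/(1 − g_1). As a small sketch of how γ follows from a given phase function, the code below computes the moments numerically; a Henyey-Greenstein phase function is used purely for checking (the paper itself uses the Reynolds-McCormick model), since for Henyey-Greenstein g_n = g^n and thus γ = 1 + g.

```python
import numpy as np
from numpy.polynomial import legendre

def legendre_moment(phase, n, num=20001):
    """n-th Legendre moment g_n of a phase function p(cos theta),
    normalized so that g_0 = 1."""
    mu = np.linspace(-1.0, 1.0, num)
    p = phase(mu)
    norm = np.trapz(p, mu)
    Pn = legendre.Legendre.basis(n)(mu)
    return np.trapz(p * Pn, mu) / norm

def gamma_parameter(phase):
    """gamma = (1 - g2) / (1 - g1), the subdiffusive phase function
    parameter mapped in the study."""
    g1 = legendre_moment(phase, 1)
    g2 = legendre_moment(phase, 2)
    return (1.0 - g2) / (1.0 - g1)

# Henyey-Greenstein check: g_n = g**n, so gamma should equal 1 + g.
g = 0.8
hg = lambda mu: (1 - g**2) / (1 + g**2 - 2 * g * mu) ** 1.5
print(gamma_parameter(hg))  # ~1.8
```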
6,675.2
2015-04-12T00:00:00.000
[ "Physics" ]
5G planar branch line coupler design based on the analysis of dielectric constant, loss tangent and quality factor at high frequency

This study focuses on the effect of different dielectric properties in the design of a 3-dB planar branch line coupler (BLC) using RT5880, RO4350, TMM4 and RT6010, particularly at the high frequency of 26 GHz, a fifth generation (5G) operating frequency. The analysis conducted in this study is based on the dielectric constant, loss tangent and quality factor (Q-factor) associated with the dielectric properties of the substrate materials. Accordingly, the substrate that displayed the best performance for high frequency application had the lowest dielectric constant, lowest loss tangent and highest Q-factor (i.e., RT5880), and it was chosen to enhance our proposed 3-dB BLC. This enhanced 3-dB BLC was designed with the inclusion of microstrip-slot stub impedance at each port for bandwidth enhancement, and the proposed prototype had dimensions of 29.9 mm × 19.9 mm. The design and analysis of the proposed 3-dB BLC were accomplished by employing CST Microwave Studio. The performance of the scattering parameters and the phase difference of the proposed BLC were then assessed and verified through laboratory measurement.

Methods

Selecting the best substrate to be incorporated into the design is crucial, especially at the higher operating frequencies used in 5G wireless communication applications. Therefore, in this study, the analysis of a single-section planar BLC with different substrates was conducted. Questions arise regarding which substrate among the available high-performance substrates offers the best performance for the 3-dB BLC design at the designated frequency. The analysis in this study is based on the dielectric constant, loss tangent, Q-factor and their relationship to BLC performance.

Analysis of different substrates. The characteristics of different substrates can affect the overall performance of the design. Four different substrates were selected for analysis, namely RT5880, RO4350, TMM4 and RT6010, which were chosen due to their excellent performance at higher frequencies. Each of the four substrates has a different dielectric constant and loss tangent, while the thickness of the substrate was fixed at 0.254 mm. The properties of each substrate are summarized in Table 1. RT5880, which is made of a glass microfiber reinforced polytetrafluoroethylene (PTFE) composite, displayed the lowest dielectric constant (2.2) among all the chosen high-frequency laminate substrates, the lowest loss tangent (0.0009), and a negative thermal coefficient of −125 ppm/°C 19 . Since RT5880 has a low dielectric constant, it is suitable for high frequency applications because the electrical losses and dispersion are considered to be minimal 19 . Meanwhile, the RO4350 substrate, which is a woven glass reinforced hydrocarbon ceramic laminate, displayed the second-lowest dielectric constant (3.66), the highest loss tangent (0.0037), and the highest positive thermal coefficient of +50 ppm/°C 20 . This substrate provided tight control of the dielectric constant and displayed low loss 19 . The third substrate was TMM4, which is composed of a ceramic thermoset polymer composite with a dielectric constant of 4.7, the second-lowest loss tangent (0.0020), and a low thermal coefficient of +15 ppm/°C 21 . The electrical and mechanical properties of TMM4 laminates combine many of the benefits of both ceramic and traditional (PTFE) microwave circuit laminates 21 . The fourth substrate, RT6010, is a ceramic-filled PTFE composite with the highest dielectric constant (10.2) among the four substrates 18 .
Furthermore, this substrate also displayed low loss, with a loss tangent of 0.0023 22 . Generally, when selecting a dielectric material during the design process, two parameters are considered: the dielectric constant and the loss tangent. The loss tangent, tan δ, defines the measure of signal loss as the signal propagates through the transmission line and can be expressed as (1) 23,24 :

tan δ = (ωε″_r + σ)/(ωε′_r),  (1)

where ε′_r and ε″_r are the real and imaginary parts of the complex relative permittivity, ε*_r, respectively. Meanwhile, ω and σ are the angular frequency and conductivity, respectively, with the conditions ε″_r ≥ 0 and ε′_r ≫ ε″_r. The real part of ε*_r, namely ε′_r, is associated with the ability of a material to store the incident electromagnetic (EM) energy through wave propagation, while the imaginary part of ε*_r, denoted by ε″_r, is related to the degree of EM energy losses in the material. Thus, ε′_r and ε″_r are also known as the dielectric constant, ε_r, and the loss factor, respectively. At the high frequencies considered in this proposed work, the substrate's loss tangent, tan δ, can be simplified to ε″_r/ε′_r. It is also known as the dissipation factor that describes the angle difference between the capacitive current and the voltage. Hence, a lower loss tangent is required in the substrate to ensure low dielectric loss and low dielectric absorption 25 . These two parameters, ε_r and tan δ, are directly related to the Q-factor due to the dielectric, Q_d, which can be expressed as (2) 26 :

Q_d = 27.3/(α_d λ_g) = 27.3 √ε_eff/(α_d λ_0),  (2)

where ε_eff, λ_0 and α_d are the effective dielectric constant, the wavelength in air and the dielectric loss, respectively. The ε_eff can be defined by (3) 23 :

ε_eff = (ε_r + 1)/2 + ((ε_r − 1)/2)(1 + 12h/W_m)^(−1/2),  (3)

where h and W_m are the thickness of the substrate and the width of the microstrip transmission line, respectively, while the dielectric loss, α_d, can be expressed as (4) 26 :

α_d = 27.3 (ε_r/√ε_eff)((ε_eff − 1)/(ε_r − 1))(tan δ/λ_0).  (4)

Thereafter, an analysis of the Q-factor associated with the material's dielectric properties, Q_d, was performed through calculation implementing (2)-(4) to observe the effect of the different substrates, RT5880, RO4350, TMM4 and RT6010, which have different dielectric constants, ε_r, and loss tangents, tan δ. The width of the microstrip transmission line, W_m, corresponding to 50 Ω was used in this analysis. The relationship between Q_d and the different ε_r of the RT5880, RO4350, TMM4 and RT6010 substrates, and the relationship between Q_d and tan δ, are depicted in Figs. 1 and 2. As presented in Fig. 1, Q_d generally decreases as the value of ε_r increases, as reflected by the correlation between Q_d and ε_eff calculated using Eq. (2), where RT5880 has the highest Q_d and the lowest ε_r. However, this trend does not apply to RO4350, which had the second-lowest ε_r and yet the lowest Q_d performance. Hence, referring back to Eqs. (2) and (4), and setting aside the dielectric constant ε_r, Q_d is inversely proportional to the dielectric loss, α_d. This α_d is in turn determined by ε_r and tan δ, as expressed in Eq. (4). Since α_d is proportional to tan δ, and Q_d is inversely proportional to α_d, Q_d is inversely proportional to tan δ. Meanwhile, although α_d is also a function of ε_r, the influence of tan δ on Q_d is greater than that of ε_r, which can be observed from the plot in Fig. 2. Referring to Fig. 2, the value of Q_d decreases as the value of tan δ increases.
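The Q_d comparison can be reproduced with a few lines implementing Eqs. (2)-(4). This is only a sketch: the 50 Ω line widths below are our own estimates from the standard microstrip synthesis equations, not values quoted in the paper. For RT5880 it yields Q_d ≈ 1302, matching the 1302.79 quoted later in the Conclusion.

```python
import math

def eps_eff(eps_r, h, w):
    """Effective dielectric constant of a microstrip line, Eq. (3)."""
    return (eps_r + 1) / 2 + (eps_r - 1) / 2 / math.sqrt(1 + 12 * h / w)

def q_dielectric(eps_r, tan_d, h, w):
    """Q-factor due to dielectric loss, Q_d = 27.3 / (alpha_d * lambda_g).
    Combining Eqs. (2) and (4), the wavelength cancels and Q_d reduces to
    eps_eff * (eps_r - 1) / (eps_r * (eps_eff - 1) * tan_d)."""
    ee = eps_eff(eps_r, h, w)
    return ee * (eps_r - 1) / (eps_r * (ee - 1) * tan_d)

# (eps_r, tan_delta, assumed 50-ohm width in mm) on h = 0.254 mm substrates.
substrates = {
    "RT5880": (2.2, 0.0009, 0.78),
    "RO4350": (3.66, 0.0037, 0.55),
    "TMM4":   (4.7, 0.0020, 0.45),
    "RT6010": (10.2, 0.0023, 0.24),
}
for name, (er, td, w) in substrates.items():
    print(name, round(q_dielectric(er, td, 0.254, w), 1))
# RT5880 comes out highest (~1302) and RO4350 lowest, as in Figs. 1 and 2.
```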
Therefore, even though RO4350 has the second-lowest ε_r, it has the highest tan δ among the substrates, which led to the lowest Q_d performance. Generally, this behavior can be explained by the characteristics of ε_r, which is influenced by ionic or electronic polarization that generates α_d in the presence of an electromagnetic wave 27 . An increasing value of ε_r produces a higher α_d value, as the electric field intensity inside the dielectric layer increases 23 . RT5880 and RT6010 have polytetrafluoroethylene (PTFE) in their composition, while TMM4 has a polymer with low thermal conductivity, an excellent coefficient of thermal expansion (CTE), and a low processing temperature, which results in low α_d 28 , as presented in Table 1. While glass is a good thermal insulator with good homogeneity, it also displays a high dielectric loss 28 . To obtain a lower α_d value while maintaining the advantages of glass, glass-reinforced ceramics can be used, as displayed by RO4350 28 . In any event, the dielectric loss of RO4350 is still higher than that of RT5880, TMM4 and RT6010.

Analysis of BLC using different substrates. Following this analysis of the four substrates, further analysis was performed by employing a single-section planar 3-dB BLC design. Figure 3 presents the design of the single-section 3-dB BLC. The common microstrip synthesis equation, denoted as (5), was used to compute W_o, W_m1 and W_m2, where W_o and W_m2 correspond to a characteristic impedance Z_0 of 50 Ω, while W_m1 corresponds to 35 Ω 23 . The guide wavelength, λ_g, was then determined by (7) 29 :

λ_g = c/(f √ε_eff),  (7)

where c and f are the speed of light and the design frequency, respectively. The properties of each substrate were used to design the 3-dB BLC, and the dimensions of the designed couplers were computed and optimized, as summarized in Table 2. The performance of each BLC was then assessed based on S-parameters, phase difference and bandwidth via simulation in Computer Simulation Technology (CST) Microwave Studio software. The Transient Solver tool was utilized with a frequency range setting between 20 and 30 GHz and an open boundary condition to calculate the energy transmission between the various ports of the design structure. Figure 4 illustrates the reflection coefficient performance, S_11, of the BLC designed with the different substrates, which revealed that the S_11 of the BLC designed with the RT5880 substrate was less than −10 dB within a frequency range of 20.54-30 GHz. Meanwhile, the BLC design that employed the RO4350 substrate showed S_11 below −10 dB across 21-30 GHz. In addition, the use of TMM4 and RT6010 offered S_11 values that were less than −10 dB in the ranges of 21.1-30 GHz and 22.55-30 GHz, respectively. Hence, the best S_11 performance, with the relatively broadest bandwidth and the lowest S_11 at 26 GHz, was shown by the design that employed RT5880, which has the lowest ε_r and lowest tan δ among all four substrates and was thus expected to have the lowest loss. Figure 5 shows the transmission coefficient, S_21, when the different substrates were used in the design of the BLC. Similar S_21 performances of −3 dB with ±1 dB deviation were obtained for RT5880, RO4350, TMM4 and RT6010 over the slightly different frequency ranges of 21.14-30 GHz, 21.9-30 GHz, 23.18-30 GHz and 24.28-30 GHz, respectively.
Compared to the S_11 performance, the BLC design with RT5880 displayed the widest frequency range of 8.86 GHz with an S_21 performance of −3 dB ± 1 dB. Meanwhile, Fig. 6 depicts the coupling output, S_31, which specifies the ratio of the input power, P_1, to the coupled power, P_3, for the BLC designs that utilized the different substrates. The results of our analysis indicated that the performance of S_31 was −3 dB ± 0.9 dB within a frequency range of 20-30 GHz when the RT5880 substrate was used in the design, while the coupling performance was −3 dB ± 1 dB when the RO4350 substrate was used, over a range of 20-28.74 GHz. Furthermore, similar performances of S_31 were achieved when the design utilized the TMM4 and RT6010 substrates, which were −3 dB ± 1 dB in frequency ranges of 20.34-28.62 GHz and 21.28-27.07 GHz, respectively. Hence, a coupling coefficient of 3 dB with the lowest deviation across the widest frequency range was achieved by the BLC design on the RT5880 substrate. The next important analysis is associated with the S_41 performance, which involves the responses obtained from the BLC designs with the different substrates, as depicted in Fig. 7. In this design, the lowest isolation performance was set to be 10 dB. As shown in Fig. 7, the performance of S_41 was less than −10 dB within a frequency range of 20-30 GHz for the designs employing all substrates. In this analysis, the lowest S_41 performance at 26 GHz was shown by the design that employed RT5880. Therefore, the analysis proceeded to consider the phase difference between the output ports. In this design, the deviation of the phase difference between the output ports was set to ±2° from the ideal of 90°. Based on the phase difference analysis shown in Fig. 8, a BLC phase difference of 90° ± 2° was demonstrated by the designs that employed the RT5880, RO4350, TMM4 and RT6010 substrates across the slightly different frequency ranges of 24.52-30 GHz, 25.52-29.17 GHz, 25.5-28 GHz and 24.81-27.73 GHz, accordingly. Similarly, as in the analyses of S_11, S_21, S_31 and S_41, the design with RT5880 displayed the best phase performance across the widest frequency range, which is likely because it has the lowest ε_r and lowest tan δ. The performances of S_11, S_21, S_31, S_41 and the phase difference between the output ports are summarized in Table 3. The Q-factor associated with the material's dielectric properties, Q_d, obtained through the analysis of those dielectric properties, is also presented in Table 3 for further comparison and analysis. Table 3 shows that the widest bandwidth performance of 5.48 GHz (21.1%) was achieved when the RT5880 substrate was used in the BLC design, whereas the narrowest fractional bandwidth (12.4%) was obtained with the highest-ε_r substrate, RT6010. Even though a lower tan δ contributes significantly more to a higher Q_d than ε_r does, the results indicated that ε_r is the primary factor in the determination of optimal bandwidth performance, with an inversely proportional relationship. The ε_r of a material represents the ability of that material to store electrical energy in the presence of an electric field, whereas, when the frequency increases, the losses in the substrate begin to reduce the ability of the dielectric material to store energy. Therefore, it can be concluded that the bandwidth performance increases as the dielectric constant decreases, while a high dielectric constant substrate may lose its ability to store energy.
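The fractional bandwidths quoted above follow directly from the absolute bandwidth and the 26 GHz design frequency; a one-line check using the RT5880 value from the text:

```python
def fractional_bandwidth_percent(bw_ghz: float, f_center_ghz: float) -> float:
    """Fractional bandwidth as a percentage of the design frequency."""
    return bw_ghz / f_center_ghz * 100.0

print(round(fractional_bandwidth_percent(5.48, 26.0), 1))  # -> 21.1 (RT5880)
```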
Thus, based on the results of our analysis, the substrate with a low dielectric constant and a low tan δ, which contribute to a high bandwidth and a high Q-factor, respectively, is the most suitable for 5G applications at high frequencies; in this case, for a design frequency of 26 GHz, the RT5880 substrate was selected.

Design of 3-dB BLC with microstrip-slot stub. This section discusses the proposed wideband 3-dB BLC design, as depicted in Fig. 9, with the implementation of a microstrip-slot stub for bandwidth improvement over that of the conventional BLC design shown in Fig. 3, using CST Microwave Studio with the Transient Solver tool, a frequency range setting between 20 and 30 GHz and an open boundary condition. RT5880 was the best substrate based on the analysis of its dielectric properties and was thus chosen for the design. The proposed microstrip-slot stub impedance was placed at each port at a distance L_1 from the BLC. By tuning these microstrip-slot stub impedances, better matching can be achieved to ensure that maximum power is transferred from the source and a minimum signal is reflected from the load, which consequently enhances the bandwidth 30,34 . Generally, the input admittance of the open stub, Y_in, can be written as (8) 31 :

Y_in = jY_0 tan θ_stub,  (8)

where Y_0 and θ_stub are the stub admittance and the electrical length of the stub, respectively, and θ_stub can be expressed as (9) 31 :

θ_stub = ω_0 L_s/V_pstub,  (9)

where ω_0, L_s and V_pstub are the angular frequency, the length of the stub and the phase velocity of the stub, respectively. By comparing Y = ωC to Eq. (8), the length of the stub, L_s, can be obtained as (10) 31 :

L_s = (V_pstub/ω_0) tan⁻¹(ω_0 C Z_stub),  (10)

where Z_stub is the characteristic impedance of the stub. It has been stated that junction discontinuities can be avoided when the length of the stub impedance is half the wavelength 28 . However, the parameters still need to be optimized, and to achieve optimal performance, a stub with a higher impedance is required 32 . Furthermore, the stub impedance can form reflection zeroes at equal distances on both sides of the ports 30 . The distance of the stub impedance of the proposed BLC design is defined as L_1. Referring back to the common matching technique that employs a stub 23 , the load impedance Z_L representing the BLC can be expressed as (11):

Z_L = 1/Y_L = R_L + jX_L,  (11)

where Y_L, R_L and X_L are the load admittance, the real part of the load impedance and the imaginary part of the load impedance, respectively. The impedance at a distance L_1 from the load (BLC) is then given by (12) and (13). Let the admittance of the stub impedance at a distance L_1 be expressed as (14):

Y = G + jB,  (14)

where the parameters G and B can be defined by (15) and (16), respectively, by using (13) and (14). The value of t can then be expressed as (18). Thereafter, by assuming R_L = Z_0 and by using t = tan βL_1 = tan(2πL_1/λ), the distance of the stub impedance from the BLC, L_1, can be determined using (19). A narrow slot line is then employed in the ground plane underneath the microstrip stub to form the microstrip-slot stub impedance, because the use of the slot line can improve the bandwidth performance due to its slow-wave characteristic. The implementation of the slot line on the ground plane disturbs the current distribution, and this disturbance changes the characteristics of the transmission line, such as its capacitance and inductance, to produce the slow-wave characteristics, which increase the phase velocity delay.
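Equations (11)-(19) follow the standard single shunt-stub matching derivation found in common transmission-line texts. As a sketch of how L_1 would be computed from them, the code below implements the usual closed forms for t and for the stub distance; the branch selection and the example load are our own illustrative choices, not values from the paper.

```python
import math

def shunt_stub_distance(z0: float, zl: complex, wavelength: float) -> float:
    """Distance L1 from the load at which the real part of the line
    admittance equals Y0 (single shunt-stub matching). Uses
    t = (X_L + sqrt(R_L * ((Z0 - R_L)**2 + X_L**2) / Z0)) / (R_L - Z0)
    for R_L != Z0, and t = -X_L / (2 * Z0) for R_L == Z0, then
    L1/lambda = atan(t) / (2*pi), shifted by pi when t < 0."""
    rl, xl = zl.real, zl.imag
    if abs(rl - z0) < 1e-9:
        t = -xl / (2.0 * z0)
    else:
        root = math.sqrt(rl * ((z0 - rl) ** 2 + xl ** 2) / z0)
        t = (xl + root) / (rl - z0)
    phase = math.atan(t)
    if t < 0:
        phase += math.pi
    return wavelength * phase / (2.0 * math.pi)

# Illustrative use: slightly mismatched 50-ohm load at 26 GHz on RT5880.
lam_g = 8.4e-3  # assumed guide wavelength in meters
print(shunt_stub_distance(50.0, complex(50.0, 15.0), lam_g))
```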
The characteristic impedance of the microstrip-slot stub can be determined through the use of the microstrip-slot line impedance equation, Z_m−s, as expressed in (20) 33 . Obtaining the initial dimensions through calculation, the proposed BLC was simulated and optimized accordingly. The optimized dimensions of the coupler, as depicted in Fig. 9, were W_o = 0.8 mm, W_m1 = 1.09 mm, W_m2 = 0.8 mm, W_m3 = 0.7 mm, W_stub = 0.18 mm, W_slot = 0.15 mm, L_1 = 0.65 mm, L_s = 0.85 mm, with a length of each branch of λ/4 = 2.12 mm. The dimensions of the proposed BLC are summarized in Table 4. The next objective was to verify the performance of the proposed BLC. The proposed design was realized by employing the Rogers RT5880 substrate with a dielectric constant, ε_r, of 2.2, a substrate thickness, h, of 0.254 mm, and an overall size of 29.9 mm × 19.9 mm. Figure 10 shows the fabricated prototype of the proposed BLC with slotted-stub impedance.

Measurement of 3-dB BLC with microstrip-slot stub. The measurement of the fabricated prototype of the proposed 3-dB BLC with microstrip-slot stub was conducted using a vector network analyzer (VNA) to verify its performance. Prior to the measurement, the two-port network calibration procedure of the VNA is necessary to remove its errors. The calibration was performed using the calibration standards involving open, short, match, and through 35 . Following the completed calibration procedure, the measurement of the proposed BLC prototype was carried out with the setup depicted in Fig. 11. Referring to the measurement setup, the selected ports were connected directly to the VNA, while the unused ports were terminated with 50 Ω SMA terminations. Thereafter, a comparison was made in terms of the simulated and measured S-parameters and phase characteristics. Figures 12, 13 and 14 depict the simulated and measured performance of the proposed BLC, which operated well from 20 to 28.7 GHz (simulated) and from 22 to 30 GHz (measured), respectively. As shown in Fig. 12, the simulated and measured S_11 and S_41 values were less than −12 dB and −11 dB, respectively. A value of −10 dB and below was used as the specification to indicate a good transmission signal from the input port to the output port, where almost 90% of the signal is being transmitted. Meanwhile, based on the results presented in Fig. 13, the simulated and measured transmission coefficients at the coupling port (S_31) displayed a ±1 dB deviation from the ideal value of −3 dB, while the simulated and measured transmission coefficients S_21 showed performances of −3 dB ± 0.8 dB and −3 dB ± 0.9 dB, respectively. Meanwhile, the plotted responses in Fig. 14 indicate that the simulated and measured phase differences between the output ports were 90° ± 3° and 90° ± 4°, respectively. These S-parameter and phase difference performances are summarized in Table 5 to provide a clear comparison. Based on the data in Table 5, the proposed BLC with microstrip-slot stub impedance at the ports' transmission lines resulted in better S_11 and S_41 performance, with a bandwidth improved by approximately 2.52 GHz compared to the initial single-section BLC design. Comparable transmission coefficients S_21 and S_31 were observed between the proposed BLC and the initial BLC design.
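As a quick numerical sanity check of the branch length quoted above, the quarter guide wavelength can be recomputed from Eq. (7). The ε_eff value below is our own estimate for the 50 Ω line on RT5880 with h = 0.254 mm, not a value stated in the paper:

```python
import math

c = 299_792_458.0   # speed of light, m/s
f = 26e9            # design frequency, Hz
eps_eff = 1.87      # assumed effective permittivity (our estimate)

lam_g = c / (f * math.sqrt(eps_eff))  # Eq. (7)
print(f"lambda_g = {lam_g*1e3:.2f} mm, lambda_g/4 = {lam_g/4*1e3:.2f} mm")
# -> roughly 8.4 mm and 2.11 mm, consistent with the 2.12 mm branch length
```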
However, the phase difference between the output ports of the proposed BLC deviated slightly more (±1°) than that of the initial BLC design, but it was still within a reasonable performance range with respect to phase difference. Furthermore, the performances of the simulated and measured BLC with microstrip-slot stub impedance were consistent with one another, apart from an operating frequency that was slightly shifted. One of the main reasons found to affect the measurement results was a small discrepancy in the width of the microstrip-slot stub impedance. To prove this, simulations with different widths of the microstrip-slot stub impedance were performed, analyzed, and discussed in the next subsection.

Parametric analysis on different widths of the microstrip-slot stub impedance. A parametric analysis on different widths of the microstrip-slot stub impedance, concerning its microstrip stub width, W_stub, and slot line width, W_slot, was performed using CST Microwave Studio with settings similar to those used for the analysis and design in the Methods section. Initially, the parametric analysis was started by fixing W_stub at its optimal dimension of 0.18 mm and varying W_slot between 0.15 mm and 0.55 mm. The effect of this varied W_slot was observed through the S-parameters and phase difference, as depicted in Fig. 15. The function of the slot implementation is to broaden the bandwidth due to its slow-wave characteristics. Concerning the bandwidth performance, the plotted graphs in Fig. 15 show that the broadest bandwidth was provided by the smallest value of W_slot (0.15 mm) compared to the largest value of W_slot (0.55 mm). Besides, the smallest amplitude imbalance and phase imbalance were offered by a W_slot of 0.15 mm. Thus, the optimal W_slot dimension is 0.15 mm for this proposed coupler design. Any discrepancy from this optimal width will lead to deviations in the S-parameters and phase difference, and the deviation trends can be observed in the plotted graphs. By comparing the plotted graphs for varied W_slot to the summarized measured results in Table 5, under the assumption of a fixed W_stub of 0.18 mm, it can be estimated that the fabricated coupler could have a W_slot of 0.35 mm instead of 0.15 mm. Afterward, the next concern was the effect of a varied W_stub on the performance of the proposed BLC, with W_slot fixed at its optimal dimension of 0.15 mm. W_stub was varied from 0.18 mm to 0.30 mm in this parametric analysis, and the effects on the S-parameters and phase difference are shown in Fig. 16. The addition of stub impedance in the design improves the matching, which consequently enhances the bandwidth performance compared to a design without stub impedance. Hence, the W_stub increment from 0.18 mm to 0.30 mm can be seen to affect the matching and isolation of the coupler, as noted in the plotted S_11 and S_41 in Figs. 16(a) and (c), respectively. Meanwhile, degradation can also be noticed for S_31 and the phase difference between the output ports, presented in Figs. 16(b) and (d), correspondingly. In contrast, a minimal effect due to the W_stub variation can be observed for S_21. Thus, a smaller W_stub is better than a larger W_stub, with the optimal dimension being 0.18 mm. Then, with W_slot fixed at 0.15 mm, the plotted graphs for varied W_stub were compared to the summarized measured results in Table 5. Thus, it can be concluded that the fabricated coupler is unlikely to have a W_stub discrepancy from its optimal 0.18 mm.
Hence, from this analysis, the deviation observed in the measurement results of the proposed coupler compared to the simulation can be attributed to the fabricated coupler having a slightly wider W_slot (0.35 mm) than its optimal width of 0.15 mm.

Comparison of the proposed 3-dB BLC with microstrip-slot stub to other designs. Concerning its amplitude deviation, phase deviation, and operating frequency, the proposed design was compared to other coupler designs 37-39 using different techniques. Referring to Table 6, the proposed design has an amplitude imbalance, phase imbalance, and bandwidth comparable to the design based on lumped elements and fabricated using integrated passive device (IPD) technology on a glass substrate proposed by Cayron et al. 37 . Another coupler 38 , fabricated using IPD chip-level technology on a gallium arsenide (GaAs) based substrate, has a higher amplitude imbalance but a better phase imbalance compared to the proposed design. Meanwhile, two coupler designs based on substrate integrated waveguide (SIW) and stripline, respectively, demonstrated higher amplitude and phase imbalances with narrower bandwidths compared to the proposed design. Hence, this comparison shows that a good planar microstrip coupler design with a well-chosen substrate such as RT5880, which has a low dielectric constant, a very low tan δ, and a high Q-factor, can offer very good wideband performance even though planar technology faces significant losses at high frequency.

Conclusion

Based on the analysis of dielectric materials, a lower loss tangent, tan δ, contributes to a higher Q-factor due to the dielectric properties, Q_d, while a lower dielectric constant, ε_r, results in greater bandwidth performance. Thus, to ensure that a device designed at high frequency for 5G applications performs well, the substrate must be selected based on it having a low dielectric constant, ε_r, a low loss tangent, tan δ, and a high Q-factor due to the dielectric properties, Q_d. Hence, the substrate that displayed the best performance, RT5880, with the lowest ε_r of 2.2, the lowest tan δ of 0.0009 and the highest Q_d of 1302.79, was selected. Its use was demonstrated in a proposed wideband 3-dB BLC with microstrip-slot stub impedance and overall dimensions of 29.9 mm × 19.9 mm. The design and optimization were conducted using CST Microwave Studio, an electromagnetic (EM) simulator. The performances of the transmission coefficients, reflection coefficients and phase characteristics of the designed coupler were obtained and analyzed. Its wideband performance at the design frequency of 26 GHz was proven via laboratory measurements of the fabricated prototype.
6,379.8
2020-09-30T00:00:00.000
[ "Engineering", "Physics" ]
Virtual environments for the treatment of claustrophobia

Virtual Reality (VR) is a new technology that is having a great impact on various areas of health science. The use of VR is of special interest in the treatment of psychological disorders. Reasonable data exist on the effectiveness of this technology in the treatment of different anxiety disorders. The present paper explains in detail a psychological program for the treatment of claustrophobia using VR, whose effectiveness has already been demonstrated [Botella et al., 1998]. In addition, we describe the technical aspects of the software and point out several limitations VR still presents in the area of psychological treatment. Color images and video clips are included with the CD-ROM version.

Introduction

Since the beginning of the 1960s the international scientific community has been working in the virtual reality (VR) field, and has recognized this tool as a highly powerful human-computer interface with broad technical and scientific applications. The clinical field also recognizes the enormous potential presented by the technology. Several research groups have already shown its benefits and possible applications. According to Greenleaf [1995], a pioneer in the application of advanced technologies in the health field, VR currently presents applications in numerous areas, including simulation and planning of surgical procedures, therapy, diagnosis, education, telemedicine, rehabilitation, architectural design of sanitary devices, and so on. The potential of VR in the clinical disciplines is clear. For instance, in August 1997 an entire issue of the journal Communications of the ACM was devoted to current developments and future possibilities of VR in the health and psychological treatment and rehabilitation fields.

Virtual Reality and Psychological Treatments

The applications of VR in the psychological treatment field are currently in their infancy. The following represents an overview of pioneering studies: 1. Antecedents. An antecedent is the case of special glasses which alter depth perception, used by Schneider [1982] in order to magnify the height sensation during the process of in vivo exposure. Antecedent works that recommend the future use of VR for psychological treatments include [Tart, 1991] in general, and [Knox, Schacht and Turner, 1993] for performance anxiety. 2. Spider Phobia. A group at the University of Nottingham has developed a VR system for the treatment of a specific phobia, spider phobia. The patients wear a Head Mounted Display (HMD) by means of which a virtual spider can be "visualized". Its realism is gradually increased until the patient's tolerance level allows him/her to cope with it in the real world [Vince, 1995]. The group of Hoffman [Carlin, Hoffman and Weghorst, 1997] reveals, in a case study, the utility of a VR application consisting of software for the treatment of spider phobia used in conjunction with augmented reality techniques. These authors report on the success achieved with a patient whose phobia was extremely limiting. 3. Acrophobia. The Kaiser-Permanente Medical Group of California has developed an experimental system to assess the utility of VR for the treatment of acrophobia. In this system the patient must pass through a deep gully by crossing a suspension bridge and a narrow board. This system was used with 32 patients and resulted in a 90% success rate. Dr.
Lamson, responsible for this project, states that the patients have the sensation of having coped with this fear and overcome it. In his own words: "it is an excellent tool which allows the patient to build a strong sensation of confidence" [Vince, 1995]. The group of Rothbaum and North of Clark Atlanta University has published the first reports (a case study and a controlled study) on the utility of software they designed for the treatment of acrophobia [Rothbaum, Hodges, Kooper, Opdyke, Williford and North, 1995a, b; North and North, 1994; North and North, 1996]. In 1992 they developed the VREAM (virtual reality development software package and libraries) software to generate a virtual environment for the treatment of acrophobia. They created a setting with an external lift which is adjusted to different heights, with the patient being placed on a balcony of each floor. The patient pointed out that he felt a high degree of immersion in the virtual environment, and within 8 sessions he felt relaxed at a height level corresponding to the fifteenth floor. 4. Fear of flying. This same group has designed software for the treatment of fear of flying. In a case study [Rothbaum, Hodges, Watson, Kessler and Opdyke, 1996] they report on the successful utilization of the system. The treatment was carried out during 6 sessions of approximately 35-45 minutes. The patient was a 42 year old woman suffering from a serious fear of flying. 5. Agoraphobia. The North group [Rothbaum et al., 1995a] has carried out work on agoraphobia by creating agoraphobic environments and testing them with students. The subjects exposed to the virtual exposure improved in a significant way, whereas changes were not observed in the control group. 6. Claustrophobia. Our own group has designed software for the treatment of claustrophobia, and we can already affirm that the claustrophobic context is able to activate a high degree of anxiety in the patients. Moreover, the patients can overcome the phobia by means of the virtual exposure [Botella, Baños, Perpiñá, Villa, Alcañiz and Rey, 1998; Botella, Perpiñá, Baños, Ballester and Quero, in press]. To complete the information regarding the work being carried out on the treatment of anxiety disorders by means of VR, we refer to the recent paper of Strickland, Hodges, North and Weghorst [1997]. The first book focused solely on VR psychotherapeutic treatments has recently been published: "Virtual Reality Therapy: An Innovative Paradigm" [North, North, and Coble, 1997]. In it, additional programs for the treatment of agoraphobia and fear of flying in combat are described. The first edition of the book has rapidly gone through a first printing, and a second volume is about to appear. In summary, even though the number of studies is small, there exists clear evidence of the utility of VR for the treatment of specific phobias. However, it is also clear to us that some problems still exist, the most important being the following [Botella et al., 1998]: 1. It is necessary to extend the application fields even further and confirm whether VR is effective in other, more general psychological disorders. 2. Many of the research reports are focused on mere case studies, which follow only a simple AB design, with consequent generalization problems deriving from this kind of design. 3. Most of the treated patients are subclinical subjects, that is, people whose problem is not so serious or so troublesome as to make them ask for psychological treatment. 4.
As far as we know, there exists only one published controlled study [Rothbaum et al., 1995a], and although a waiting list control group was used, there was no placebo group. 5. The works published so far only offer pre-treatment versus post-treatment data; they do not report anything regarding recovery, stability, post-therapy advances, relapses and so on. It is, therefore, absolutely necessary to obtain follow-up data. 6. In some works a "pure" VR treatment has not been applied. For example, in the study of Rothbaum et al. [1996] on fear of flying, techniques for the management of anxiety were previously used (cognitive restructuring, thought stopping, and active relaxation), with the subsequent pulling effect of the treatments causing difficulty in the interpretation of the results. 7. Given that the works published up to now are focused on anxiety disorders, it is also necessary to point out the lack of comparison between the effectiveness achieved with VR and that achieved by current non-VR treatments. Specifically, it would be well to know to what degree VR is effective if we compare it to "in vivo" exposure or to imagination exposure.

VR Treatment for Claustrophobia

As concluded from the works reported up to now, evidence on the utility of VR for the treatment of specific phobias has started to appear. However, it is necessary to continue obtaining data regarding the effectiveness of this new tool and to confirm whether it is also effective in other psychological disorders. In this work we shall focus on claustrophobia. The choice of claustrophobia is justified by the special characteristics of this phobia. Such characteristics have made it be considered as functionally equivalent to agoraphobia, but with a smaller range of avoidance [Barlow, 1988; Öst, 1987; Booth and Rachman, 1992]. Moreover, claustrophobia, by itself, may be very limiting and cause a notable impairment in life, since closed places are commonly experienced in daily life (lifts, planes, tunnels, cabins, medical diagnostic procedures such as CAT scan, MRI, etc.).

PSYCHOLOGICAL ASPECTS

In the design of the software special attention was paid not only to the general guidelines which should be followed in the treatment of this problem but also, and in a very special way, to the different elements that might facilitate the immersion and the "real" interaction of the patient in the virtual scenery. To graduate the levels of difficulty of the "claustrophobic" environment, two different settings were created: the first was a house and the second was a lift. In each of them several different environments existed which permitted the construction of performance hierarchies with degrees of increasing difficulty. We now describe them in some detail.

Setting one: the house

1) A room of 4 x 5 m. This room has an exit door to an external terrace or a little garden of 2 x 4 m. In this room there is also a large window with a blind. When the door and the window are open, a blue sky can be seen and the sound of the birds can be heard (see Figure 1 on the CD-ROM 1 ). The door, the window and the blind can be opened and closed in three stages (see Figure 2, Figure 3, and Video clip 1). In this same room is another door to a second room or second claustrophobic setting (see Figures 4 and 5).
2) This second room measures 3 x 3 m. There is neither furniture nor windows, and the ceiling and floor are darker, with a wood texture, to give a greater sensation of closure. In the center of the room there is a lectern with several buttons by means of which the patient can interact with the virtual environment. The door of this room can be closed by the patient in three stages and, once closed, it can be blocked if he/she decides to do so by means of one of the buttons situated on the lectern (see Figures 6 and 7). Finally, there is a wall in the room which can be displaced by the patient, making a loud noise (also with different possibilities of advancing or closing by means of one of the buttons situated on the lectern) and keeping the patient enclosed in a 1 m wide space (see Figures 8 and 9, and Video clip 2).

Setting two: the lift

1) In this setting there is a wide entry with a large window. This large window cannot be opened, but through it one can see plants outside. From this entry the patient has access to the lift by pressing a button (see Figure 10). 2) The lift is designed to offer different possibilities of the claustrophobic threat by taking into account various parameters (size, placement and the possibility of blocking the lift): 2.1) The lift is situated on the ground floor, large in size (1 m x 2 m) and with the door open. The patient can be inside the lift with the door open, looking at the space of the entry. He can enter and exit whenever he wants (see Figure 11). 2.2) The lift is closed and working. The patient presses a button and the lift closes and starts working (see Figure 12 and Figure 13). The lift can move to different floors and the patient can get off on any floor. 2.3) The lift is blocked. The patient can block the lift by pressing a button and, from that moment, he will not be able to get out in any way during a period of time, which is random and predetermined by the system. The aim is to simulate a breakdown. 2.4) A small-sized lift in which one of the walls advances with a loud noise and encloses the patient in a space of 1 m x 1 m (see Figure 14). All the possibilities allowed in the big lift can also be realized in the small lift.

TECHNICAL ASPECTS

Initially we used a Silicon Graphics Indigo High Impact computer graphics workstation as hardware. Then, in order to extend this technology to wide commercial use, we created a virtual reality system for clinical psychology adaptable to an inexpensive PC. For the present study a Pentium Pro computer with 128 MB of RAM and an AccelEclipse graphics card with 32 MB was used as the hardware platform. The operating system was Windows NT and the virtual reality program was World Up (Sense8). VR peripherals included a V6 HMD (Virtual Research) and a 3D mouse. Virtual reality technology provides a new interactive form between user and computer where the user is not a mere outside observer of images on a monitor, but an active participant inside the 3D images, called virtual scenes or virtual environments (VE). He/she can interact with the objects of the scene. We think that this interaction can be more effective due to the incorporation of technical methods (algorithms) which allow images to have a greater quality of visual realism. These algorithms are not included in typical virtual reality application software. The use of global illumination techniques (radiosity) in the development of virtual environments also improves the level of realism.
To obtain realistic solutions, radiosity solutions of the VE were calculated by using Lightscape v3.0 software (Lightscape Technologies), and the textures generated by the radiosity solutions were mapped onto the geometry of the VE.

RADIOSITY AND VIRTUAL REALITY

With the introduction of raster display technology, several simple local illumination models were developed for displaying computer generated images of surfaces, in particular the Gouraud [Gouraud, 1971] and Phong [Phong, 1975] models. These algorithms use local illumination models which depend only on the characteristics of the surface in question and the light sources; information regarding other objects in the environment is ignored. The Phong and Gouraud models assume all light sources to be infinitely small, with isotropic intensity distributions. When computing illumination effects using local shading models, no information about the position and characteristics of other surfaces in the environment is taken into account: objects appear shaded as if they were floating in empty space. Complex (but often necessary) illumination effects such as shadows and accurate interreflections are not modeled. Many early works using local illumination were directed towards interactivity, and the benefits of these models are demonstrated by their numerous modern hardware implementations, enabling the display of hundreds of thousands of shaded polygons per second. This hardware speed-up is used in virtual reality applications, resulting in real-time image refresh rates (20-25 images/second). Unfortunately, because they are not based on concrete physical processes and quantities, these models do not produce realistic results. The appearance of images can be improved by using techniques such as texture mapping and bump mapping, which modify the reflectivity and the normal of a surface, respectively, but even so, local illumination models fall far short of being visually realistic. To model complex illumination effects such as shadowing and interreflections, global illumination models are needed. The two most common global illumination models are ray tracing and radiosity. Complex illumination effects are best simulated by the radiosity method. This method is capable of modeling the interreflections of light between diffuse surfaces, and can generate effects such as color bleeding (light reflecting off a colored surface) and penumbral shadows. The visual realism of radiosity images has been tested by Meyer et al. [Meyer, Rushmeier, Cohen, Greenberg and Torrance, 1986] on static images, showing that in most cases observers detected no difference between real images and radiosity images. Unfortunately, this visual realism comes at a price. Radiosity is a notoriously computation-intensive task, with solutions for complex environments often taking many hours or even days to generate. Recent research has focused on algorithmic improvements with the aim of generating fast radiosity solutions without compromising accuracy. The most beneficial such improvement was the introduction of hierarchical algorithms [Hanrahan, Salzman and Aupperle, 1991]. Recent proposals involve clustering techniques applied to radiosity solutions [Smits, Arvo and Greenberg, 1994]. These approximate radiosity solutions enable complex environments to be calculated in a more reasonable time.
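At its core, the radiosity method solves the discrete linear system B_i = E_i + ρ_i Σ_j F_ij B_j over surface patches, where B is radiosity, E is emission, ρ is diffuse reflectance, and F_ij is the form factor from patch i to patch j. The toy sketch below shows the classical iterative (Jacobi-style gathering) solution; the patch data are invented, and this is far simpler than the hierarchical and clustering schemes cited above or than what Lightscape implements.

```python
import numpy as np

def solve_radiosity(emission, reflectance, form_factors, iters=100):
    """Iteratively solve B = E + diag(rho) @ F @ B (gathering iteration).
    emission E and radiosity B are per-patch vectors; form_factors[i, j]
    is the fraction of energy leaving patch i that arrives at patch j."""
    b = emission.copy()
    for _ in range(iters):
        b = emission + reflectance * (form_factors @ b)
    return b

# Toy two-patch enclosure: patch 0 is a light source facing patch 1.
E = np.array([1.0, 0.0])
rho = np.array([0.5, 0.8])
F = np.array([[0.0, 0.4],
              [0.4, 0.0]])
print(solve_radiosity(E, rho, F))  # converges to ~[1.07, 0.34]
```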
Radiosity is a view-independent technique because illumination is calculated over the entire environment, rather than only for the surfaces in direct view of the camera. Thus, after calculating a radiosity solution, images from various viewpoints can be displayed interactively on a high performance graphics workstation. This brings the possibility of using recent virtual reality techniques to immerse the user in a very realistically illuminated virtual environment [Airey, Rohlf and Brooks, 1990]. To introduce radiosity into the virtual environments, no special algorithm was used; instead, a standard radiosity program, Lightscape v3.0, was employed for its efficiency in capturing scenery textures with radiosity. Later, these textures were organized into the geometry of the virtual environment. The following steps were completed: a) Given an environment, one applies the radiosity program Lightscape v3.0 (Lightscape Technologies). b) Taking the result of the previous application of the radiosity program, one captures the desired textures. Lightscape v3.0 has the "Mesh To Texture" option. With this option the user can select, within the environment resulting from the application of the radiosity algorithm, the surfaces to which he/she wants to apply texture. The process takes place through a succession of pages within a dialog, which lets the user make choices to determine how the conversion will be done. c) The extracted textures are put into the corresponding geometries of the virtual environments using WorldUp (Sense8). The advances obtained with this method should then be apparent in the visual realism of the virtual environments. In the images of Figures 1 through 14 the visual appearance is rather poor, since virtual environment programs like WorldUp (Sense8) have low image quality. Nevertheless, it is possible to make use of certain technologies to obtain shadows (Figures 10 and 11) in order to improve visual realism. But as we can see in the image of Figure 15, the realism obtained with the preceding method is better.

Conclusions

As we have pointed out previously, we have already tested the effectiveness of this software, and we can state that the claustrophobic context is able to activate a high degree of anxiety in the patients and that virtual exposure is effective in overcoming the phobia. Moreover, this result was achieved by using virtual exposure as the sole technique, that is, with no combination of other psychological treatment techniques. Therefore VR appears to be very useful from a therapeutic perspective [Botella, Baños, Perpiñá, Villa, Alcañiz and Rey, 1998; Botella, Perpiñá, Baños, Ballester and Quero, in press]. We would not like to finish without "going back into reality". We have defended the possible contributions that VR may have in the field of psychological treatments. However, as we pointed out at the beginning of this work, we should not confuse facts and fiction, and the available facts point out the convenience of taking into account the clear limitations which, up to now, the new tool has been presenting. 1. The virtual world is still rudimentary. It reminds us of the cinema at its beginnings. Those first films, jerky and without sound, have a certain similarity to these simple and still quite artificial textures. More work is necessary to achieve a higher degree of realism and to determine which psychological or other kinds of factors influence the "reality judgement" which the person makes. 2.
As pointed out before, we think that it is fundamental to continue working in order to achieve more effective ways to introduce the "self" into the virtual world. 3. VR also has clear limits. Although you can "live it" many times, it is only "an adventure" from which you always come back into the "real world", and it may be hard "to come back". Taking this into account, it also appears necessary to limit the possible harmful effects of VR misuse. 4. Associated with the previous point, it has been pointed out that virtual walks may produce secondary effects, basically disorientation and neurovegetative symptoms (sickness, nausea, etc.), and, to a lesser extent, some oculomotor perturbation. However, methods for minimizing and controlling these alterations have been developed [Stanney and Kennedy, 1997]. The problems do not always appear. The lack of negative collateral effects has been reported in an explicit way [Riva et al., 1997]. This is a topic that is being submitted to controlled research at this moment [Viire, 1997]. We hope that these efforts, in a short time, will provide a way for future systems to be considered completely "safe". 5. It is necessary to collect further data regarding its effectiveness as a sole tool versus its effectiveness used in conjunction with other common therapeutic procedures, such as "in vivo" exposure or imagination exposure. 6. It is still an expensive tool, although if it proves itself and the resulting demand is high, the price might drop. 7. It is a new tool and practically all the work has yet to be done. A fundamental aspect of future work is to construct a theoretical frame that permits prediction and organization of the results. In our opinion VR has a great future, and the applications that have appeared up to now are only the beginning of an enormous development. As we previously mentioned, it is difficult to think of an application which cannot be created. It is only a matter of time and money. The important points are then: (1) In which fields should we work? (2) Which applications make more sense? and (3) Which applications are more useful, have more impact, or benefit more people? [Inman et al., 1997]. We are all affected by the challenge of what kind of psychological cyberspace it is better to create.

Dr. Cristina Botella Arbona, the lead author, has been a psychologist since 1978 (University of Valencia). She received her PhD at the University of Valencia in 1983. She became a Senior Lecturer at the University of Valencia in 1986. Then in 1992 she became Professor (Psychological Treatments) at Jaume I University. She has been head of the Psychological Assistance Service at Jaume I University since 1993. Her research has focused on psychological treatments for emotional disorders. She currently heads the "PREVI" research group dedicated to the study of VR applications to the treatment of psychological problems such as claustrophobia, agoraphobia, fear of flying, and body image disturbances in eating disorders. @psb.uji.es
5,135.2
1998-01-01T00:00:00.000
[ "Computer Science", "Psychology" ]
Improved ECG-Derived Respiration Using Empirical Wavelet Transform and Kernel Principal Component Analysis

Many methods have been developed to derive respiration signals from electrocardiograms (ECGs). However, traditional methods have two main issues: (1) focusing on certain specific morphological characteristics and (2) not considering the nonlinear relationship between ECGs and respiration. In this paper, an improved ECG-derived respiration (EDR) method based on the empirical wavelet transform (EWT) and kernel principal component analysis (KPCA) is proposed. To tackle the first problem, EWT is introduced to decompose the ECG signal and extract its low-frequency part. To tackle the second issue, KPCA and preimaging are introduced to capture the nonlinear relationship between ECGs and respiration. The parameter selection of the radial basis function kernel in KPCA is also improved, ensuring accuracy and a reduction in computational cost. The correlation coefficient and the amplitude squared coherence coefficient are used as metrics to carry out quantitative and qualitative comparisons with three traditional EDR algorithms. The results show that the proposed method performs better than the traditional EDR algorithms in obtaining single-lead EDR signals.

Introduction

Respiratory signals are important physiological signals commonly used in clinical monitoring. They are used in the detection of sleep apnoea and in stress tests; moreover, they play an important role in the clinical diagnosis of diseases [1]. Respiratory signal detection methods can be divided into two main categories. The first is to detect the air flow from the human nose, and the second is to detect thoracic deformation or the change in thoracic impedance caused by respiration [2]. Both methods require additional sensors and may interfere with natural breathing. The idea of soft sensors is one of the solutions to overcome the issues of detecting respiratory signals. A soft sensor is an inferential model that uses easily accessible variables to estimate variables which are difficult to obtain. At present, soft sensors have been widely adopted in industrial processes [3]. The Luenberger observer [4] used state differential equations, with which the dynamic behaviour of a bioprocess is described with a mechanistic model. Yan et al. [5] proposed a framework of data-driven soft sensor modeling based on semisupervised regression to estimate the total Kjeldahl nitrogen in a wastewater treatment process. Obtaining respiratory signals from the ECG is a typical application of soft sensors in the medical field. The ECG signal is obtained noninvasively using a few electrodes and recorded conveniently without interfering with natural breathing. Respiration affects ECG signals mainly through mechanical interactions and respiratory sinus arrhythmia (RSA) [6]. Mechanical interaction is caused by the displacement of the electrodes relative to the heart and the change in thoracic impedance caused by variations in lung volume [7]. RSA is caused by breath-induced changes in the autonomic nervous system, which in turn cause changes in the heart rate. The heart rate increases during inspiration and decreases during expiration [8]. Respiration affects the heart rate and the ECG in the aforementioned two ways, and such a signal modulation phenomenon forms the theoretical basis for obtaining respiratory signals from ECGs, called ECG-derived respiration (EDR) signals. Owing to the advantages of the EDR approach, scientists have conducted multiple studies in this field.
Most EDR methods fall into two categories [9]. One category is based on the morphological characteristics of the ECG signal; the other processes the ECG signal directly. Vargas-Luna et al. [10] obtained EDR signals from the R-peak amplitude of ECG signals. Bailón et al. [11] proposed an EDR method based on singular value decomposition of the intervals between the R peaks of ECG signals. Chazal et al. [12] obtained EDR signals by calculating the area under the QRS complexes. EDR methods based on a single morphological characteristic provide rather unsatisfactory accuracy and robustness. Nemati et al. [13] proposed a data fusion method for estimating respiratory frequency based on Kalman filtering, which involves many other physiological signals and with which only the respiratory rate can be obtained. Widjaja et al. [14] applied kernel principal component analysis to the QRS complexes in the ECG signal and considered the resulting eigenvector as the EDR signal; this method performs well but requires manual deletion of ectopic QRS complexes and involves considerable computation. To resolve the limitations of the existing methods and realize an accurate and fully automatic way of obtaining EDR signals, an improved EDR algorithm based on EWT and KPCA is proposed. The ECG signal is decomposed to obtain its low-frequency component. Multiple signal decomposition methods, such as wavelet approaches or empirical mode decomposition (EMD) [15], are available at present, but their disadvantages cannot be ignored. Traditional wavelet approaches often use a prescribed scale subdivision scheme, which makes ideal adaptability hard to achieve; for example, wavelet packets use a constant prescribed ratio, leading to limited adaptability. The Brushlet method [16] decomposes the signal on the Fourier spectrum and is likewise based on prescribed subdivisions. EMD shows ideal adaptability, but its main issue is the lack of a mathematical theory. EWT combines the advantages of the two approaches: it has a rigorous mathematical basis and can also decompose a signal adaptively. After EWT decomposes the ECG signal into five modes, the three modes with the lowest frequencies are selected to form a new signal. Meanwhile, the R-peak positions are determined using the Pan-Tompkins algorithm to help locate the QRS complexes. Then, the new signal is sampled based on the positions of the QRS complexes. However, a few ectopic samples are captured during sampling; to address this, a variance-based method is developed to delete ectopic samples automatically. Finally, to capture the nonlinear relationship between respiration and the ECG, the processed samples are handled by KPCA and preimaging to obtain the EDR signal. The radial basis function (RBF) is adopted as the kernel function of KPCA; hence, considerable computation is required when selecting the parameters of the RBF kernel [17]. Therefore, the parameter selection algorithm is improved in this study to reduce the computational load. Our contributions in this paper are as follows. (1) An EDR algorithm framework of EWT + KPCA is proposed that overcomes the disadvantages of traditional EDR algorithms based on morphological characteristics of ECG signals and also captures the nonlinear relationship between respiration and the ECG. (2) A new variance-based method to automatically delete abnormal samples is introduced in the sampling procedure. (3) The selection of the RBF kernel parameters in the KPCA algorithm is improved to reduce the computational requirement.
The remaining sections of this paper are organized as follows. In Section 2, the EDR algorithm based on EWT and KPCA is described in detail. In Section 3, the proposed method is compared with three traditional EDR algorithms, and the qualitative and quantitative experimental results are presented. The results are discussed in Section 4, and conclusions are presented in Section 5. Methodology The EDR method proposed in this study is divided into two parts, as shown in Figure 1. Part 1 involves the decomposition of the ECG signal based on EWT: the ECG signal is decomposed into five modes with different spectral supports, and the three modes with the lowest frequencies are selected to form a new signal. Part 2 describes the steps for obtaining the EDR signal based on KPCA. First, the Pan-Tompkins algorithm is used to find the R peaks of the ECG signal, and then the QRS complexes are located. Second, the new signal formed by the three modes is sampled based on the locations of the QRS complexes, while ectopic samples are deleted automatically; these samples serve as the input matrix for KPCA. Third, the input matrix is mapped to a higher-dimensional space through KPCA, and the eigenvalues and eigenvectors of the kernel matrix are calculated. Finally, the eigenvector corresponding to the maximum eigenvalue is selected for preimaging to obtain the EDR signal. Decomposition of ECG Signal Based on EWT. In general, the human respiratory rate is approximately 0.1-0.5 Hz. To extract the low-frequency modes of the ECG signal completely and adaptively, EWT is used to decompose the ECG signal, and the low-frequency modes are reconstructed to form a new signal. EWT is a mode decomposition algorithm proposed by Gilles [18]. The main concept is to extract the different modes of a signal by designing an appropriate wavelet filter bank consisting of a low-pass filter and several band-pass filters: the low-pass filter extracts the approximate component, and the band-pass filters extract the detail components. In the traditional EWT algorithm the number of decomposed modes is selected adaptively, so different ECG signals may be decomposed into different numbers of modes, which affects the subsequent calculation. To unify the number of decomposed modes while preserving the performance of EWT, the number of modes is set to five based on experiments. The specific steps of the EWT algorithm are as follows: (1) The local maxima in the spectrum of the ECG signal f(t) are obtained and sorted in decreasing order after normalization. The first six local maxima are selected, and the boundary of each mode ω_n (n = 1, 2, ..., 5) is defined as the center of two consecutive maxima. (2) After determining the boundaries, the empirical scaling function φ_n(ω) and empirical wavelet ψ_n(ω) are constructed using the Littlewood-Paley-Meyer wavelet [19], with a transition half-width τ_n = cω_n (0 < c < 1). The transition function β(x) must satisfy β(x) = 0 for x ≤ 0, β(x) = 1 for x ≥ 1, and β(x) + β(1 − x) = 1; following [18], we choose β(x) = x^4(35 − 84x + 70x^2 − 20x^3). (3) The different modes of f(t) are obtained from φ_n(ω) and ψ_n(ω): the detail coefficients W_f(n, t) are given by inner products of f with the empirical wavelets, and the approximation coefficient W_f(0, t) by the inner product with the scaling function, where the Fourier transform and its inverse are used in the computation. From these relations, the empirical modes f_k(t) are obtained by inverse transformation. After these three steps, the ECG signal is decomposed into five modes.
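To make steps (1) and (2) concrete, the following Python sketch (an illustrative reimplementation under our own naming, not the authors' code) locates the mode boundaries from the six largest spectral maxima and evaluates the transition polynomial β(x):

    import numpy as np

    def beta(x):
        # Transition polynomial used in the Meyer-type filters (Gilles, 2013):
        # beta(0) = 0, beta(1) = 1, and beta(x) + beta(1 - x) = 1.
        return x**4 * (35 - 84*x + 70*x**2 - 20*x**3)

    def ewt_boundaries(signal, n_modes=5):
        # Normalized magnitude spectrum of the ECG segment.
        spec = np.abs(np.fft.rfft(signal))
        spec = spec / spec.max()
        # Indices of local maxima of the spectrum.
        peaks = [k for k in range(1, len(spec) - 1)
                 if spec[k] > spec[k - 1] and spec[k] > spec[k + 1]]
        # Keep the n_modes + 1 largest maxima, then restore frequency order.
        top = sorted(sorted(peaks, key=lambda k: spec[k], reverse=True)[:n_modes + 1])
        omega = np.array(top) * np.pi / (len(spec) - 1)  # bin index -> rad/sample
        # Each boundary is the center of two consecutive retained maxima.
        return (omega[:-1] + omega[1:]) / 2.0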
Figure 2 shows the time-domain and frequency-domain results of a 10 s ECG signal after EWT. As shown in Figure 2(b), the spectra of the five modes are sorted in increasing order. To extract the low-frequency part of f(t) completely and adaptively, the first three modes are selected to form a new signal f_s(t), shown in Figure 3. Figure 3 shows that f_s(t) preserves only the low-frequency part and abandons the high-frequency part of f(t), which prevents high-frequency noise from influencing subsequent calculations. Here, f_s(t) serves as the input for the following KPCA algorithm. EDR Signal Acquisition Based on KPCA. KPCA, proposed by Scholkopf et al. [20], is a generalization of principal component analysis to a high-dimensional feature space. In KPCA, the data are mapped to a high-dimensional feature space that is nonlinear with respect to the input space. Using KPCA, the EDR acquisition algorithm can describe the nonlinear interaction between ECG signals and respiratory signals. The steps of KPCA in the proposed method are described in detail in this section. Before performing the KPCA algorithm, the input matrix X must be determined. The construction of X consists of the following steps: (1) Detection of R peaks: the positions of all R peaks in f(t) are obtained using the Pan-Tompkins algorithm [21] and denoted x_1, ..., x_n, where n is the number of R peaks in f(t). The Pan-Tompkins results are shown in Figure 4. (2) Sampling of the signal f_s(t): after the detection of the R peaks, a fixed window is used to sample f_s(t). In this study, each x_i is taken as the window center, and f_s(t) is sampled in the range of 40 ms before and after x_i. The samples are used to construct the matrix X' with dimensions m × n, where m is the length of the fixed window and n is the number of R peaks. Because the sampling interval of the ECG signal in this study is 4 ms, the value of m is fixed at 21. (3) Deletion of ectopic samples: as shown in Figure 5(a), there may be ectopic samples in X' that affect the accuracy of subsequent calculations. Therefore, an adaptive variance-based method is proposed to delete ectopic samples automatically. The specific steps are as follows: (1) Let {α_i}, i = 1, ..., n, denote the sampling results, so that X' = [α_1, α_2, ..., α_k, ..., α_n]. The average sample is defined as α_mean = (1/n) Σ_{i=1}^{n} α_i. (2) Let Y = [α_1 − α_mean, α_2 − α_mean, ..., α_n − α_mean], and calculate the variance of each column vector of Y; the results are collected in the vector v = [v_1, v_2, ..., v_n]. (3) If α_p is an ectopic sample and α_q is a normal sample, then the condition v_p ≫ v_q is satisfied. The ectopic samples are removed according to this property, and an input matrix X without ectopic samples is obtained. The outline of the matrix X is shown in Figure 5(b).
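A minimal Python sketch of this variance-based deletion (illustrative, under our own naming; the cutoff below is our assumption, since the paper only requires v_p ≫ v_q):

    import numpy as np

    def remove_ectopic(X_prime, k=3.0):
        # X_prime: m-by-n matrix whose columns alpha_i are windowed samples of
        # f_s(t) centered on the R peaks. Columns with extreme variance around
        # the mean beat are treated as ectopic and dropped (the cutoff k is our
        # assumption; the paper only states v_p >> v_q for ectopic p, normal q).
        alpha_mean = X_prime.mean(axis=1, keepdims=True)  # average sample
        Y = X_prime - alpha_mean
        v = Y.var(axis=0)                                 # variance of each column of Y
        keep = v < v.mean() + k * v.std()
        return X_prime[:, keep]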
After the input matrix X is determined, KPCA is introduced. The essence of KPCA is to solve the eigenvalue problem Cv = λv (8), where λ and v are the eigenvalues and eigenvectors of the matrix C, respectively. Here, X = [x_1, x_2, ..., x_k, ..., x_r], where r is the number of samples in X, and an implicit nonlinear mapping φ is defined, so that the image of x_k in the high-dimensional feature space F is φ(x_k). In equation (8), C is the covariance matrix of the mapped data, C = (1/r) Σ_{k=1}^{r} φ(x_k)φ(x_k)^T. Writing v = Σ_{i=1}^{r} α_i φ(x_i) and introducing the RBF as the kernel function k(x, y), equation (8) is equivalent to the kernel eigenvalue problem Kα = rλα (10), where α is the vector of the coefficients α_i and K is the kernel matrix corresponding to k(x, y). To extract a principal component, the projection of a test point φ(x) onto the eigenvector V^k is calculated. The aforementioned computation is carried out in the high-dimensional feature space F, whereas the construction of the EDR signal is based on the first eigenvector in the input space; the eigenvalues and eigenvectors obtained in F cannot be used directly for constructing the EDR signal. To solve this problem, a limited number of eigenvectors can be used to find approximations of the data in the input space, a process called 'preimaging' [22]. Therefore, the first eigenvector in the input space is reconstructed by preimaging the first eigenvector of F, and the EDR signal is obtained by performing cubic spline interpolation on the reconstructed eigenvector. During KPCA, the parameter σ² must be carefully selected for good performance. First, σ² is roughly set to var(z), where z is the vector obtained by flattening X. Then σ² is tuned in the range (0, σ² × 100) with a step of σ²/10. The steps are as follows [14]: (1) KPCA is applied for each candidate σ² in the range (0, σ² × 100) to obtain the eigenvalues c = (c_1, c_2, ..., c_i). (2) d = c_1 − (c_2 + ... + c_i) is calculated for each σ², and the σ² that achieves the maximum d is selected. Although this method can determine an appropriate σ², it requires high computational effort because the eigenvalues of the kernel matrix are calculated for every candidate σ². However, d reaches its maximum early and decreases monotonically thereafter, as shown in Figure 6(a); thus, the calculation over the monotonically decreasing part is redundant. Therefore, in this study, d is monitored as the search proceeds: when d reaches its maximum, the selection process is terminated, as shown in Figure 6(b), and the σ² corresponding to the maximum of d is used for subsequent calculations. Figure 6 shows an example of the curve of d. If the search is not terminated at the maximum of d, the algorithm computes the eigenvalues of the kernel matrix 1000 times; if it is terminated at the maximum, only 180 computations are required. In this way, the accuracy of σ² is ensured and the computational effort is reduced.
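The early-terminating σ² search can be sketched as follows (illustrative Python under our own naming; the normalization of the RBF kernel by 2σ² is a common convention and is our assumption, as the text does not spell out the kernel constant):

    import numpy as np

    def rbf_kernel_matrix(X, sigma2):
        # Gram matrix of k(x, y) = exp(-||x - y||^2 / (2 * sigma2)),
        # computed over the columns of X.
        sq = ((X[:, :, None] - X[:, None, :]) ** 2).sum(axis=0)
        return np.exp(-sq / (2.0 * sigma2))

    def center_kernel(K):
        n = K.shape[0]
        one = np.ones((n, n)) / n
        return K - one @ K - K @ one + one @ K @ one

    def select_sigma2(X):
        # Search (0, 100 * var] in steps of var / 10, stopping at the first
        # decrease of d = c1 - (c2 + ... + ci), as described in the text.
        base = np.var(X)  # rough initial estimate var(z)
        best_d, best_s = -np.inf, base
        for step in range(1, 1001):
            s = step * base / 10.0
            K = center_kernel(rbf_kernel_matrix(X, s))
            c = np.sort(np.linalg.eigvalsh(K))[::-1]
            d = c[0] - c[1:].sum()
            if d <= best_d:  # d has passed its maximum: terminate early
                break
            best_d, best_s = d, s
        return best_s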
Results In this section, the proposed EDR method is compared with three traditional EDR methods: the KPCA-based [14], R-peak-interval-based [23], and R-peak-amplitude-based [24] EDR methods. The experimental results and the morphological similarity metrics are presented to evaluate the performance of these EDR methods. Material. The ECG signals and reference RESP signals were provided by the Fantasia database and by Shantou Institute of Ultrasonic Instruments Co., Ltd. (SIUI). The Fantasia database [25] was collected from healthy subjects in a supine posture at a sampling rate of 250 Hz. Morphological Similarity Metrics. To quantitatively measure the morphological similarity between the EDR signal and the reference respiratory signal, the correlation coefficient (C) and the magnitude squared coherence coefficient (MSC) were introduced [24]. C is the Pearson correlation coefficient computed over the m samples of the two signals, C = cov(R_ref, R_EDR) / (σ_ref · σ_EDR), where m is the length of the EDR signal and R_ref and R_EDR represent the reference RESP and EDR signals, respectively. The MSC is defined as MSC(f) = |P_xy(f)|² / (P_xx(f) · P_yy(f)), where P_xx(f) and P_yy(f) represent the power spectral densities of x and y, respectively, and P_xy(f) is their cross-power spectral density. The spectra were calculated using Welch's method with a periodic Hamming window and an overlap of 50%.
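Both metrics are available through standard NumPy/SciPy calls; a minimal sketch (the segment length and the 0.1-0.5 Hz summary band are our assumptions, while Welch's method, the periodic Hamming window and the 50% overlap follow the text):

    import numpy as np
    from scipy.signal import coherence, get_window

    def similarity_metrics(r_ref, r_edr, fs, nperseg=256):
        # Pearson correlation coefficient C between reference RESP and EDR.
        C = np.corrcoef(r_ref, r_edr)[0, 1]
        # Magnitude squared coherence |Pxy|^2 / (Pxx * Pyy) via Welch's method
        # with a periodic Hamming window and 50% overlap.
        win = get_window("hamming", nperseg, fftbins=True)
        f, msc = coherence(r_ref, r_edr, fs=fs, window=win, noverlap=nperseg // 2)
        band = (f >= 0.1) & (f <= 0.5)  # respiratory band; summary choice is ours
        return C, msc[band].mean()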
Experimental Results. To compare the proposed method with the three traditional EDR methods in an intuitive manner, some of the experimental results are presented in this section. As shown in Figure 7 (in each panel, the upper subplot is the EDR signal and the lower subplot is the reference RESP signal), the EDR signal obtained by the proposed algorithm has a competitive similarity to the reference respiratory signal. As Figures 7(f) and 7(g) show, the proposed method also performs well in extracting some RESP signals of poor quality. Figure 8 shows that the proposed method performs better than the three traditional EDR methods, including on poor-quality RESP signals: Figure 8(d) shows that the proposed method maintains a relatively high morphological similarity with the poor-quality RESP signals, whereas the EDR signals obtained by the three traditional methods differ significantly from the reference RESP signals in morphology. In addition to this qualitative comparison, the performance of the proposed method and of the three traditional EDR methods was evaluated using C and MSC. Figure 9 shows the experimental results of the four EDR methods on the Fantasia database and the SIUI data; the box plots in Figure 9 show the median values and interquartile ranges (IQRs). As shown in Figure 8 and Table 1, the EDR methods based on a single morphological characteristic of the ECG signal (the RR-interval-based and RPA-based EDR algorithms) show poor results for C and MSC and also suffer from poor robustness. Although the KPCA-based EDR method is good, there is still a gap between it and the proposed method in terms of accuracy and robustness. In general, the proposed method is superior to the three traditional EDR methods. To measure the performance of the proposed method for different age groups, young (21-34 years old) and elderly (68-85 years old) subjects were chosen for an experiment whose results are shown in Figure 10. Across the age groups, the methods based on a single morphological characteristic of the ECG signal show poor robustness and low accuracy, while the proposed method performs better on subjects of different ages. Preprocessing. In this study, the ECG signal was decomposed into five different modes, and the three low-frequency modes were selected to construct a new signal for EDR signal acquisition. No denoising algorithm is introduced in the proposed method. This is because the RESP signal, having a relatively low frequency, causes a baseline drift in the ECG signal, and the correction of baseline drift by a denoising algorithm would affect the extraction of the RESP signal to a certain extent. The influence of high-frequency noise is also mitigated, because the three modes with the lowest frequencies are chosen for the subsequent calculation, as mentioned in Section 2.1. The Effect of the Number of Extracted Modes. In this paper, EWT is introduced to adaptively decompose the ECG signal into five modes, and the low-frequency modes are reconstructed to form a new signal. In this section, we examine the effect of the number of modes extracted by EWT. Thirty ECG signals with a length of 10 s were randomly selected and decomposed into 4, 5, and 6 modes, respectively, with C and MSC used to evaluate the performance. The results, presented in Figure 11, show the best performance when the number of modes extracted by EWT is 5. The data also reveal that the EDR methods based on the morphological characteristics of ECG signals have a higher computational speed than the EDR methods using KPCA, but at the expense of accuracy and robustness. There are two main reasons for this difference in computational complexity; the first is that the EDR algorithms based on the morphological characteristics of ECG signals only process specific characteristics (such as the R-wave amplitude). Conclusions The proposed EDR method based on EWT and KPCA shows better performance than the traditional EDR methods in the extraction of EDR signals from single-lead ECG signals. The ECG signal is decomposed into five different modes through EWT, and a new signal is formed from the three low-frequency components. Then, the new signal is sampled to form the input matrix based on the locations of the QRS complexes, and an ectopic-sample removal method is used to delete the ectopic samples. Subsequently, KPCA is used to obtain the eigenvectors and eigenvalues. Finally, the EDR signal is obtained by processing the results with preimaging and cubic spline interpolation. With the improved selection method for the parameters of the RBF kernel in KPCA, the computation time is significantly reduced, alleviating to a certain extent the problem of high computational effort in KPCA-based EDR methods. Experimental results show that the proposed method performs better than the three traditional EDR methods and is suitable for monitoring respiration through single-lead ECG signals without additional sensors. Data Availability Part of the data used in this paper can be found at https://physionet.org/about/database/; the other part was provided by SIUI and is not open to the public because it involves privacy. Conflicts of Interest The authors declare that there are no conflicts of interest regarding the publication of this study.
5,562.2
2021-10-15T00:00:00.000
[ "Computer Science" ]
Crowdsourcing for NLP Crowdsourcing applied to scientific problems is a hot research area, with over 10,000 publications in the past five years. Platforms such as Amazon's Mechanical Turk and CrowdFlower provide researchers with easy access to large numbers of workers. The crowd's vast supply of inexpensive, intelligent labor allows people to attack problems that were previously impractical and gives potential for detailed scientific inquiry of social, psychological, economic, and linguistic phenomena via massive sample sizes of human-annotated data. We introduce crowdsourcing and describe how it is being used in both industry and academia. Crowdsourcing is valuable to computational linguists both (a) as a source of labeled training data for use in machine learning and (b) as a means of collecting computational social science data that link language use to underlying beliefs and behavior. We present case studies for both categories: (a) collecting labeled data for use in natural language processing tasks such as word sense disambiguation and machine translation and (b) collecting experimental data in the context of psychology, e.g., finding how word use varies with age, sex, personality, health, and happiness. We will also cover tools and techniques for crowdsourcing. Effectively collecting crowdsourced data requires careful attention to the collection process, through selection of appropriately qualified workers, giving clear instructions that are understandable to non-experts, and performing quality control on the results to eliminate spammers who complete tasks randomly or carelessly in order to collect the small financial reward. We will introduce different crowdsourcing platforms, review privacy and institutional review board issues, and provide rules of thumb for cost and time estimates. Crowdsourced data also have a particular structure that raises issues in statistical analysis; we describe some of the key methods to address these issues. No prior exposure to the area is required. Introduction Crowdsourcing applied to scientific problems is a hot research area, with over 10,000 publications in the past five years. Platforms such as Amazon's Mechanical Turk and CrowdFlower provide researchers with easy access to large numbers of workers. The crowd's vast supply of inexpensive, intelligent labor allows people to attack problems that were previously impractical and gives potential for detailed scientific inquiry of social, psychological, economic, and linguistic phenomena via massive sample sizes of human-annotated data. We introduce crowdsourcing and describe how it is being used in both industry and academia. Crowdsourcing is valuable to computational linguists both (a) as a source of labeled training data for use in machine learning and (b) as a means of collecting computational social science data that link language use to underlying beliefs and behavior. We present case studies for both categories: (a) collecting labeled data for use in natural language processing tasks such as word sense disambiguation and machine translation and (b) collecting experimental data in the context of psychology, e.g., finding how word use varies with age, sex, personality, health, and happiness. We will also cover tools and techniques for crowdsourcing.
Effectively collecting crowdsourced data requires careful attention to the collection process, through selection of appropriately qualified workers, giving clear instructions that are understandable to non-experts, and performing quality control on the results to eliminate spammers who complete tasks randomly or carelessly in order to collect the small financial reward. We will introduce different crowdsourcing platforms, review privacy and institutional review board issues, and provide rules of thumb for cost and time estimates. Crowdsourced data also have a particular structure that raises issues in statistical analysis; we describe some of the key methods to address these issues. No prior exposure to the area is required. His current research includes machine learning, data mining, and text mining, and uses social media to better understand the drivers of physical and mental well-being. Lyle's research group collects MTurk crowdsourced labels on natural language data such as Facebook posts and tweets, which they use for a variety of NLP and psychology studies. Lyle (with collaborators) has given highly successful tutorials on information extraction, sentiment analysis, and spectral methods for NLP at conferences including NAACL, KDD, SIGIR, ICWSM, CIKM, and AAAI. He and his student gave a tutorial on crowdsourcing last year at the Joint Statistical Meetings (JSM). Ellie Pavlick is a Ph.D. student at the University of Pennsylvania. Ellie received her B.A. in economics from Johns Hopkins University, where she began working with Dr. Chris Callison-Burch on using crowdsourcing to create low-cost training data for statistical machine translation by hiring nonprofessional translators and post-editors. Her current research interests include entailment and paraphrase recognition, for which she has looked at using MTurk to provide more difficult linguistic annotations, such as discriminating between fine-grained lexical entailment relations and identifying missing lexical triggers in FrameNet. Ellie TAed and helped design the curriculum for the Crowdsourcing and Human Computation course at Penn. Learning Objectives Participants will learn to: • identify where crowdsourcing is and is not useful • use best practices to design MTurk applications for creating training sets and for conducting natural language experiments • analyze data collected using MTurk and similar sources • critically read research that uses crowdsourcing Topics • Taxonomy of crowdsourcing and human computation. Categorization system: motivation, quality control, aggregation, human skill, process flow. Overview of uses of crowdsourcing. • The human computation process. Design of experiments, selection of software, cost estimation, privacy/IRB considerations. • Designing HITs. Writing clear instructions, using qualifications, pricing HITs, approving/rejecting work. • Quality control. Agreement-based methods, embedded quality control questions, applying the EM algorithm to find the correct label (see the sketch after this list). When to invest extra funds in quality control versus when to collect more singly labeled data. • Statistical analysis of MTurk results. Accounting for the block structure and nonrandom sampling of the data. • Case Studies in NLP. Word sense disambiguation, machine translation, information extraction, computational social science.
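To make the EM-based quality control concrete, here is a minimal Python sketch of a one-coin, Dawid-Skene-style aggregator for binary labels (our own illustrative code, not tied to any platform's API; names such as em_aggregate are assumptions). It alternates between estimating each item's label posterior and each worker's accuracy:

    import numpy as np

    def em_aggregate(votes, n_iter=50):
        # votes: dict mapping (item, worker) -> label in {0, 1}.
        items = sorted({i for i, _ in votes})
        workers = sorted({w for _, w in votes})
        # Initialize item posteriors with the majority vote.
        post = {i: np.mean([l for (it, _), l in votes.items() if it == i]) for i in items}
        acc = {w: 0.8 for w in workers}  # initial guess at worker accuracy
        for _ in range(n_iter):
            # M-step: re-estimate each worker's accuracy from current posteriors.
            for w in workers:
                agree, total = 0.0, 0.0
                for (i, w2), l in votes.items():
                    if w2 == w:
                        agree += post[i] if l == 1 else 1.0 - post[i]
                        total += 1.0
                acc[w] = min(max(agree / total, 0.01), 0.99)
            # E-step: recompute label posteriors assuming workers err independently.
            for i in items:
                log1 = log0 = 0.0
                for (i2, w), l in votes.items():
                    if i2 == i:
                        a = acc[w]
                        log1 += np.log(a if l == 1 else 1.0 - a)
                        log0 += np.log(a if l == 0 else 1.0 - a)
                post[i] = 1.0 / (1.0 + np.exp(log0 - log1))
        return post, acc

    # Example: three workers label two items; careless workers are down-weighted.
    votes = {(0, "w1"): 1, (0, "w2"): 1, (0, "w3"): 0,
             (1, "w1"): 0, (1, "w2"): 0, (1, "w3"): 1}
    posteriors, accuracies = em_aggregate(votes)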
1,338.4
2015-01-01T00:00:00.000
[ "Computer Science", "Economics", "Linguistics" ]
Intelligent IDS: Venus Fly-Trap Optimization with Honeypot Approach for Intrusion Detection and Prevention Intrusion Detection Systems and Intrusion Prevention Systems are used to detect and prevent attacks/malware from entering a network/system. A honeypot is a type of intrusion detection system that is used to find the intruder, study the intruder, and prevent the intruder from accessing the original system. It is necessary to build a strong honeypot because, if it is compromised, the original system can easily be targeted by the attacker. To overcome such challenges, an efficient honeypot is needed that can shut out the attacker after extracting his attack techniques and tools. In this paper, a Venus flytrap optimization algorithm is used to implement the honeypot system along with an intrusion detection system. Venus plants are carnivorous plants that catch their prey intelligently; by adopting this feature, we build an effective honeypot system that interacts intelligently with the attacker. A new fitness function is proposed to identify the size of an attacker. The effectiveness of the proposed fitness function has been evaluated by comparing it with the state of the art: remote-to-local attacks, probing attacks and DOS attacks were performed on both the proposed and the existing models. The proposed model catches/blocks all the intruders that were caught by the state of the art, and it also reduces the time of interaction between the attacker and the honeypot system, thereby giving minimum information to the attacker. Introduction Security against cyber-criminals has become a very important issue as more and more new technologies are invented. Attackers keep finding new vulnerabilities to exploit data or cause harm to systems. A vulnerability in a system/network can arise from a faulty design, a coding error, an improper protocol or a backdoor function. To prevent attacks [1] on a system/network, there is a need to understand and improve the security of the network/system. Hence, security tools such as intrusion detection systems (IDS) [3] and firewalls [2] help us prevent most malicious activity from entering the network/system. A firewall [2] is the most widely used security tool for safeguarding against attackers on the internet. It is a physical device or software installed in the network/system that checks the incoming and outgoing traffic against blacklisted and white-listed IP addresses and acts accordingly: all blacklisted IP addresses are blocked by the firewall, and all white-listed IP addresses are allowed to make connections. However, the firewall can easily be bypassed by IP spoofing or by sending malicious content in the data part of the packet; such attacks are not identified by the firewall, as it only checks the header part of the packet. On the other hand, a network Intrusion Detection/Prevention System [3] is also a device or software used to identify/prevent malicious activity from entering the network; it scans the header as well as the data part of the packets. Intrusion prevention systems are an extension of IDS [18]: they can block the attacker or drop the malicious packets without alerting the administrator. Based on its deployment in the network and its design structure, an IDS [3] is divided into different types.
Since we want to analyze all the network traffic entering the LAN, we use a Network Intrusion Detection System (NIDS). Based on the detection technique, there are two types of intrusion detection systems: -Signature-based detection: a signature-based detection system searches the packets for previously defined signatures, using rules generated from those signatures. The limitations of this kind of detection system are that it takes a lot of storage space and that the database must always be kept updated with all possible permutations of each signature. -Anomaly-based detection: an anomaly-based detection system has a previously defined normal behaviour of data packets, and if a packet deviates from this behaviour it is identified as an attack. Since many types of protocols are involved in data transmission, it is really hard to classify malicious and non-malicious data, and such systems cannot give 100% accuracy due to design issues. Even with a false positive rate of only 2%, the IDS will generate 200 false alarms for every 10,000 packets scanned, and around 12,000 packets enter per second through a 10 Mbps connection; it is not easy to handle all these false alarms. Though we can achieve 100% detection accuracy for known attacks using a signature-based IDS [3], it has its own drawbacks: if the signature of an attack deviates even slightly from the defined one, this type of IDS cannot detect it, and it also increases delay because every packet must be checked against the signatures in all the rules before being classified as good or bad. For this purpose, we intend to use a honeypot. A honeypot [4] is another security tool, kept as bait to lure attackers away from the original systems towards the honeypot system by providing dummy information, in order to trap them and learn their techniques. Many models have been proposed that combine an IDS and a honeypot to improve the security strength of a network [5][6][7][8]. Kulkarni et al. [14] created a new honeypot system called Honeydoop. Honeydoop is a honeypot that uses an IDS to identify the IP address in which the attacker is interested, creates a virtual honeypot with that IP address, and redirects the attacker to the newly created honeypot. The basis of their model is that on-demand allocation of honeypots at the right time and in the right place makes the network more secure and harder to sneak into. The problems with Honeydoop are that unknown attacks are not identified at all, that it requires many virtual machines when many attacks target different IP addresses, and that the false positives of the IDS are also redirected to the honeypot, which might cause the loss of important connections. Babak et al. [15] gave a similar model that redirects the attacker towards a honeypot using routers for further analysis of the attacker. Their main aim was to reduce the false positive rate of the IDS: if an alarm turns out to be false, the traffic is sent back to its original destination. However, some packets might be lost while the original user is redirected to the honeypot, and though the traffic at the honeypot is reduced, the traffic at the IDS is not. Georgios et al. [16] created SweetBait, which uses SweetSpot (a low-interaction honeypot), Argos (a high-interaction honeypot), HIDS, NIDS and NIPS systems for intrusion capture and containment.
The main aim of their project is to automatically identify the signatures of zero-day worms without human intervention, which reduces the damage caused by zero-day worms, reduces the false alarms of the IDS, and continuously refines the worm signatures to provide automated signature revision. A worm's aggressiveness is predicted by continuously monitoring its activity level, which helps to sort the signatures in the IDS by urgency level. Bio-inspired algorithms such as genetic algorithms and particle swarm optimization have been used to improve the performance of IDS [17]. Vajiheh et al. [18] proposed a new hybrid classification algorithm using the Artificial Bee Colony and Artificial Fish Swarm algorithms for anomaly detection. Their model improved the performance of the IDS by decreasing the false positive rate, but its computational overhead and time complexity are similar to those of other approaches. Li [19] described an approach for using genetic algorithms in IDS; to identify complex anomalous behaviours, he used both temporal and spatial information of the network connections for generating IDS rules. Although there are many models in the literature [20], here a new model using Venus flytrap optimization [10] is proposed, with a new fitness function for identifying the size of an attacker. The Venus flytrap, scientifically known as Dionaea muscipula, is a carnivorous plant that captures insects. Its leaves consist of two heart-shaped lobes, each carrying 2-3 trigger hairs on its surface, as shown in Fig. 1 [23]. On the surface of these lobes the plant secretes a honey-like enzyme to attract insects. When prey touches the hairs on the lobes, the trap goes into a semi-closed state; if the prey moves and stimulates the hairs again, the trap tightens and goes into a completely closed state, where the prey is digested. Semi Lehtinen [9] provided the first mathematical cost-benefit model of the carnivorous behaviour of Venus flytrap plants, analyzing the dynamics of prey capture and the costs and benefits of catching and digesting prey. Ruoting et al. [16] carried out mathematical modeling of the opening and closing behaviour of Venus plants, analyzing the time taken by the trap to open and close and the time taken to transition between states (open, semi-closed, closed); they have also mathematically explained the opening and closing mechanism itself [13]. The behaviour of Venus plants has also been used as an optimization technique by Gowri et al. [10], who mimicked the rapid closure behaviour of the Venus flytrap in capturing prey and proposed a Venus flytrap optimization algorithm that was applied in [11][12]. Venus plants enter a semi-closed state when the trigger hairs are touched once; when they are triggered again within 30 s of the first touch, the trap enters a completely closed state. This is called the rapid closure behaviour of Venus plants. In their model, each touch of the hair generates some charge, and the trap enters the completely closed state when the sum of the charges generated by the first touch and the second touch exceeds a threshold; the threshold is met only when the hair is touched twice within a certain period (30 s in the case of the Venus flytrap).
In this paper, we improve the Venus flytrap optimization algorithm [10] by proposing a new fitness function that can be used in network security to identify the attackers who are worth catching with the honeypot [4]. The rest of the paper is structured as follows: Sect. 2 presents related work, Sect. 3 the preliminary details, and Sect. 4 the proposed method; Sect. 5 reports the experimental results, and Sect. 6 gives the conclusion and future scope. Intelligent Intrusion Detection System The prey selection of the Venus flytrap is mimicked in the proposed algorithm. The algorithm is designed so that the honeypot catches only those attackers who seem promising, that is, attackers whose interaction might give us new information about vulnerabilities or attack tools. The proposed network architecture is shown in Fig. 2. The process is divided into three phases. IP Blacklisting and White-Listing Phase First, the data packets coming from the internet are checked at the firewall against the blacklisted and white-listed addresses. If a packet is blacklisted, it is either dropped or blocked by the firewall; otherwise it is allowed into the network/system. By using the firewall we block all unwanted connections from the internet. The firewall contains a rule set of blacklisted (to be blocked) and white-listed (to be allowed) IP addresses; whenever a packet arrives, the firewall checks the IP header of the packet against the rules and acts accordingly. After the firewall, based on the destination IP address of the packet, the packet goes either to the intrusion detection phase or to the honeypot interaction phase, as in the example situation shown in Fig. 3. Intrusion Detection Phase In the intrusion detection phase, each incoming packet is checked for malicious content in both the header and the payload. If no malicious content is found, the packet is forwarded to its destination IP address in the local area network. If a packet contains malicious content, its fitness is calculated with the fitness function f(x) shown in Eq. (1). If the calculated fitness f(x) is greater than the lower bound X1 and less than the upper bound X2, the connection to which the packet belongs is redirected to the honeypot interaction phase. If the fitness is greater than or equal to X2 (a big attack), the administrator is alerted about the attack and the connection is blocked after the packet details are entered into the log file. If the fitness is less than or equal to X1 (a small attack), the connection is blocked after logging, without alerting the administrator. The IDS contains five components. The fitness scores are assigned based on the network's vulnerabilities and can be changed as per the network requirements; here the scores are set for our network. We give higher priority to U2R and R2L attacks than to DOS and probing attacks because in DOS and probing the attacker generally does not interact with the system: it only sends unlimited requests in the case of DOS, or gathers system information in the case of probing. So they are not preferred over U2R or R2L attacks. Most packets use the TCP or UDP protocol for normal message transmission, while ICMP is mostly used by network devices such as routers to send error messages.
So, ICMP is given the lowest score. The score of the source IP is obtained by searching the log file: if the attacker's IP address is already present in the log file, he is a known attacker, and since he might know something about the security of our network from the previous attack, we give him higher priority than a new attacker. If the destination IP is an admin/office system, it is given higher priority than a normal user's system, since admin systems might contain valuable information. For the location of the intruder, if the attacker is an insider (i.e., a local user), we give a higher score than for an external attacker, as an insider might know some vulnerabilities and does not need to go through the firewall. The fitness function is f(x) = (score of type of attack) + (score of destination IP) + (score of protocol) + (score of source IP) + (score of location of intruder) (1). For example, consider the situation in Fig. 4, which illustrates the intrusion detection phase. An attacker with source address 192.168.100.199 (assumed unknown) performs a remote-to-local attack on destination address 192.168.43.199 (assumed to be a normal user) using the TCP protocol through destination port 21. At the IDS (192.168.43.1), the fitness value of this connection is calculated as follows: score of type of attack = 3; score of protocol = 3; score of destination IP address = 1; score of source IP address = 1; score of location of intruder = 1; fitness f(x) = 3 + 1 + 3 + 1 + 1 = 9. With X1 = 7 and X2 = 14, the connection from the attacker's system to 192.168.43.199 is blocked and the attacker is redirected towards the honeypot system with IP address 192.168.43.213 (as X1 < f(x) < X2). If f(x) were less than or equal to 7, we would just block the connection from the attacker to the local user; if f(x) were greater than or equal to X2, we would block the connection from the attacker to the local user and also alert the administrator.
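The decision logic of this phase can be summarized in a short Python sketch. The score dictionaries below are illustrative placeholders for the paper's Tables 1 and 2 (only the values used in the worked example are grounded in the text; the 'high' score of 3 for a logged source, an admin destination or an insider is our assumption):

    # Illustrative score tables standing in for Tables 1 and 2.
    ATTACK_SCORE = {"U2R": 3, "R2L": 3, "DOS": 2, "PROBE": 2}
    PROTO_SCORE = {"TCP": 3, "UDP": 2, "ICMP": 1}

    def ids_fitness(attack, proto, dst_is_admin, src_in_log, is_insider):
        # Eq. (1): sum of the five per-feature scores.
        return (ATTACK_SCORE[attack] + (3 if dst_is_admin else 1)
                + PROTO_SCORE[proto] + (3 if src_in_log else 1)
                + (3 if is_insider else 1))

    def ids_decision(fx, x1=7, x2=14):
        if fx <= x1:
            return "block and log (small attack)"
        if fx >= x2:
            return "block, log and alert admin (big attack)"
        return "redirect connection to the honeypot"

    # Worked example from the text: unknown external attacker, R2L over TCP
    # to a normal user gives f(x) = 3 + 1 + 3 + 1 + 1 = 9, hence redirection.
    assert ids_fitness("R2L", "TCP", False, False, False) == 9
    assert ids_decision(9) == "redirect connection to the honeypot"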
Honeypot Interaction Phase In the honeypot interaction phase, every packet is considered malicious. The fitness of a packet is calculated with the fitness function g(x) shown in Eq. (2): g(x) = (score of destination port) + (score of destination IP) + (score of protocol) + (score of source IP) + (score of location of intruder) + (number of packets sent and received) + (duration of attack in seconds) (2). If the calculated fitness g(x) is greater than the lower bound X1 and less than the upper bound X2, the connection to which the packet belongs is allowed to interact with the honeypot. If the fitness is greater than or equal to X2 (a big attack), the administrator is alerted and the connection is blocked after the packet details are entered into the log file. If the fitness is less than or equal to X1 (a small attack), the connection is blocked after logging, without alerting the administrator. The honeypot contains five components, among them: -Packet Decoder: the incoming packets are decoded into a readable format and sent to the next component. The flow chart of the process is given in Fig. 6. The fitness of the attacker at the honeypot is calculated using Eq. (2), with scores obtained from Tables 1 and 2; here the type of the attack is not checked. The attacker interacts with the honeypot through the open ports; in our system we keep the FTP (21), HTTP (80) and IRC (6667) ports open to lure the attacker. More services such as SSH or TELNET could also be provided, but for now we use these three. As we interact with the attacker, the number of packets sent and received and the duration of the interaction keep increasing, and the attacker might compromise the honeypot if he keeps interacting with the system; we therefore use parameters such as the number of packets sent and received and the duration of the attack to know when to stop interacting. For example, consider the situation in Fig. 5. An attacker with source address 192.168.100.199 (assumed unknown) performs an attack on destination address 192.168.43.213 (the honeypot system) using the TCP protocol through destination port 21. At the honeypot (192.168.43.213), the fitness value of this connection is calculated as follows: score of destination port = 3; score of protocol = 3; score of destination IP address = 1; score of source IP address = 1; score of location of intruder = 1; number of packets sent and received = 2; duration of attack in seconds = 0; fitness g(x) = 3 + 1 + 3 + 1 + 1 + 2 + 0 = 11. With X1 = 10 and X2 = 200, the connection from the attacker's system to the honeypot system 192.168.43.213 is allowed (as X1 < g(x) < X2), and the honeypot interacts with the attacker. If g(x) were less than or equal to X1, we would just block the connection from the attacker; if g(x) were greater than or equal to X2, we would block the connection and also alert the administrator. As time passes and the interaction goes on, the duration of the attack and the number of packets sent and received increase, which increases the fitness of the attacker. Once the fitness of the attacker reaches X2, the connection with the attacker is blocked and the admin is alerted. The connection can also be terminated by the attacker or by the honeypot before the fitness reaches X2; in that case we just log the attacker's details without alerting the admin. Components Used All the experiments are performed using the following components. Honeypot System: "HoneyRJ", a low-interaction honeypot, has been used for the experiment. It requires the Eclipse IDE (release version 4.11) to run; a system with Kali Linux OS and a pre-installed Eclipse IDE is utilized as the honeypot system. NIDS: "Snort", a signature-based IDS, has been used on a system running Ubuntu Linux. Snort can be installed on any Linux machine with the command line "sudo apt-get install snort"; a typical command for running Snort in NIDS mode with packet logging is shown after this section. Local Area Network: a virtual box on the IDS system is used to simulate a LAN connected to a switch. Firewall: iptables, the firewall built into Linux machines, is utilized; the syntax for appending an iptables rule to block an incoming connection is likewise shown after this section. Attacker: malicious pcap files are used for testing the IDS, and for testing HoneyRJ different attacks are performed from a system with Kali Linux OS (as it contains all the penetration testing tools). Snort is used at the switch, listening on a mirror port in NIDS mode; whenever Snort identifies an attacker with fitness greater than X1 and less than X2, we redirect that attacker to HoneyRJ using iptables port forwarding.
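For concreteness, the commands referenced above typically look as follows; the interface name, configuration path and addresses here are illustrative assumptions, not the authors' exact setup. To run Snort in NIDS mode and log packets: "sudo snort -q -A console -i eth0 -c /etc/snort/snort.conf -l /var/log/snort". To append an iptables rule blocking an incoming connection from a blacklisted address: "sudo iptables -A INPUT -s 192.168.100.199 -j DROP". To redirect a flagged attacker to the honeypot by destination NAT: "sudo iptables -t nat -A PREROUTING -s 192.168.100.199 -p tcp -j DNAT --to-destination 192.168.43.213".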
Testing IDS The graph in Fig. 7 shows the range of the fitness values versus the type of attack when the priority order is DOS attacks, U2R attacks, sniffing attacks, probing attacks, unspecified attacks; Fig. 8 shows the same with the priority order changed to U2R attacks, R2L attacks, DOS attacks, probing attacks, unspecified attacks. At Snort we set X1 to 7 and X2 to 14 for redirecting attackers, which captures most of the harmful attacks; again, these values can be changed according to administrator preference. The scores for the type of attack are given based on the vulnerability of the system/network that we want to protect. The pcap files of MACCDC [21] are used to test the fitness scores; the resulting attack-versus-fitness graphs for the sample data are shown in Figs. 9 and 10. The U.S. National CyberWatch Mid-Atlantic Collegiate Cyber Defense Competition (MACCDC) is a unique experience for college and university students to test their cybersecurity knowledge and skills in a competitive environment. Testing Honeypot HoneyRJ is an open-source low-interaction honeypot written in Java, used here to implement the proposed honeypot algorithm. It originally provides only two services, FTP and IRC; we have added HTTP, Sample Client First protocol and Sample Server First protocol services so that it can provide more services. It has a built-in GUI which makes it more user-friendly; we can start, stop or pause individual services or all services using the GUI, as shown in Fig. 11. When HoneyRJ starts, it opens ports 21 (FTP), 6667 (IRC), 80 (HTTP), 65000 (Sample Client First Protocol) and 65001 (Sample Server First Protocol) and starts listening on these ports for attacks. When an attacker tries to make a connection, HoneyRJ calculates the fitness: if the fitness is greater than X1 (10) and less than X2 (150), a reply message is sent based on the interaction module in HoneyRJ, and if its fitness is not in the range (X1, X2), the connection is rejected/blocked. The following interaction modules are present in HoneyRJ. -FTP service interaction: the FTP service runs on dedicated port 21, so when a user connects to HoneyRJ through port 21 this module starts to interact with the attacker. The interaction process is shown in Fig. 12; the attacker is 192.168.43.232, and his fitness after the connection ended is 47, having increased from 14 to 47. The interaction stopped because the attacker entered the quit-connection state of the interaction module. -IRC service interaction: the IRC service runs on dedicated port 6667, so when a user connects to HoneyRJ through port 6667 this module starts to interact with the attacker. The interaction process is shown in Fig. 13; the attacker is 192.168.43.232, and his fitness after the connection ended is 27, having increased from 13 to 27. The interaction stopped because the attacker entered the quit-connection state of the interaction module. -HTTP service interaction: the HTTP service runs on dedicated port 80, so when a user connects to HoneyRJ through port 80 this module starts to interact with the attacker. The interaction process is shown in Fig. 14; the attacker is 192.168.43.232, and his fitness after the connection ended is 24, having increased from 12 to 24. The interaction stopped because the attacker entered the quit-connection state of the interaction module. -Sample Client First Protocol interaction: the Sample Client First protocol service is given port 65000, so when a user connects to HoneyRJ through port 65000 this module starts to interact with the attacker. The interaction process is shown in Fig. 15. -Sample Server First Protocol interaction: the Sample Server First protocol service is given port 65001,
so when a user connects to HoneyRJ through port 65001 this module starts to interact with the attacker. The interaction process is shown in Fig. 16. The Sample Server First and Sample Client First protocols are testing protocols included in the HoneyRJ software for testing the interaction modules. As can be seen in the interaction modules, HoneyRJ interacts with the attacker while the fitness is being calculated, so that we know whether to continue the interaction or block it. The following attacks were performed on the honeypot system to test the working of the proposed model: -Nmap scanning: among other things, this scan performs a traceroute, which allows you to find out precisely how a data transmission (like a Google search) occurred from your computer to another. Quite simply, the traceroute outputs a list of the systems on the network that are involved in a specific internet activity; using this attack, the attacker can discover a route to another host. Terminal command: Nmap -A 192.168.100.199 (IP address of the honeypot machine). -Remote system access: remote system access is used to operate your system remotely from another system, using your login credentials, from any location. This feature is exploited by attackers to access your computer by guessing the login credentials or by using a brute-force attack, which succeeds because of weak/common login credentials. Table 3 shows the number of packets captured when testing HoneyRJ with and without our fitness function; a graphical representation of the same is shown in Fig. 17. As is clear from the figure, there is less interaction with the attacker when the proposed optimization algorithm is used than with the existing model. When an attack is performed whose fitness is below X1, the attacker is allowed to interact with the honeypot system under the existing models, but with the proposed optimization technique the attacker is stopped immediately, as shown in Fig. 18 (proposed model in red, existing model in blue). Here, a script scanning attack is performed: it does not harm the system, but it gives the attacker information about the vulnerabilities of the system/network that might be useful for a later active attack. Interacting with this type of attack does not provide us with any useful information, as in this attack only empty/request/acknowledge packets are sent to identify the vulnerabilities. When an attack is performed whose fitness is in the range between X1 and X2, the interaction process of the proposed optimization technique and the existing model is similar until the fitness value reaches X2; at that point our proposed model stops the attacker and alerts the administrator, while the existing model keeps interacting with the attacker until the attacker stops the interaction, as shown in Fig. 19 (proposed model in red, existing model in blue). Here, a remote system access attack was performed through open port 21. In the proposed model, the interaction was stopped because further interaction might damage the honeypot system, or the attacker might compromise the honeypot and use it as a bot to attack other systems.
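The fitness-driven interaction shown in Figs. 18 and 19 amounts to the following loop (a minimal Python sketch under our own naming; reply_to_attacker is a hypothetical stand-in for HoneyRJ's per-service interaction modules):

    import time

    def reply_to_attacker():
        # Hypothetical stub for a HoneyRJ interaction module (FTP/IRC/HTTP);
        # returns False once the attacker quits the connection.
        return False

    def honeypot_session(static_score, x1=10, x2=200):
        # static_score: the fixed part of Eq. (2) (port, protocol, IP and
        # location scores); packets and elapsed time grow during interaction.
        packets, start = 0, time.time()
        g = static_score
        if g <= x1:
            return "rejected and logged silently (small attack)"
        while g < x2:
            if not reply_to_attacker():
                return "attacker quit; details logged without alerting admin"
            packets += 2  # one packet sent, one received
            g = static_score + packets + int(time.time() - start)
        return "blocked and admin alerted (fitness reached X2)"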
Conclusion In this paper, the Venus flytrap optimization technique has been adopted for the honeypot system. To this end, a new fitness function is proposed that uses features such as the destination IP address, source IP address, destination port number, protocol, type of attack, number of packets sent and received, duration of the attack, and location of the intruder. Interaction is established only with the worthwhile attackers, skipping the small and the large attacks. Several experiments show that the proposed model performs better than the existing model. In the proposed model, attacks such as nmap scanning and script scanning were blocked, and attacks like remote system access were allowed to interact for some time, whereas in the existing model all attacks were allowed to interact with the honeypot system until the attacker manually disconnected. When we compared the number of packets exchanged between the honeypot and the attacker, the proposed model obtained information about the attacker with less data exchange than the existing models. The interaction process is improved, and the honeypot system is used effectively without wasting time on small prey; it is able to protect itself before an attacker causes serious damage to it. By redirecting the attacks to the honeypot system, we are able to safeguard the original system and also learn more details about the attacker. Though the proposed model shows good results, it can be improved further by adding more features to determine the size of an attacker more accurately. Author Contributions All the authors contributed equally to this research. Funding Not applicable. Data Availability Not applicable. Conflict of interest The authors declare that they have no conflict of interest. Informed consent Informed consent was obtained from all individual participants included in the study. Code availability Not applicable. Hanumanthu Bhukya is an Assistant Professor in the Department of Computer Science and Engineering, Kakatiya Institute of Technology & Science, Warangal. He has published several research articles in peer-reviewed international journals and conferences. His research focuses on data analysis, web security and information security. He is a member of ISTE.
6,944
2021-07-24T00:00:00.000
[ "Computer Science" ]
Book review. E. ZWIRNER (Schapdetten bei Münster) und K. ZWIRNER (Köln), Grundfragen der Phonometrischen Linguistik. 3. stark erweiterte und ergänzte Auflage, bis 2. Aufl. unter dem Titel: Grundfragen der Phonometrie. viii + 320 S., 14 Abb., 3 Tab., gebunden, 1982, SFr./DM 120.-, US $72.00, ISBN 3-8055-2370-X.

The appearance of the third edition of a scientific book after 46 years is a useful opportunity to speculate both on its role in the history of its discipline and on its merits relative to the present state of the science. We do not want to treat its contents in detail since it has been reviewed several times; rather we want to discuss some methodological problems.

Phonometry was established in the thirties by E. Zwirner, who, until recently, was its most prominent exponent. At the time of the birth of phonometry the promotion of statistics in our discipline was almost sacrilegious, but nowhere in linguistics can one find better arguments for the justification of one's own approach than in the third and the fourth chapter of the book under review. The principal ideas became common property in linguistics, and we can sum them up as follows: the wave properties of speech sounds are linguistically uninteresting, such sounds being relevant for linguistics only as empirical correlates of theoretical linguistic concepts (phonemes). Though phonemes are theoretically established fixed entities, in reality their correlates show strong variability (shared by all things in the world). There are three sources of sound variability: production errors, measurement errors (173-176) and sampling errors. Production errors are caused by the vagueness of articulation points, which are rather regions that can be ascertained only as centre points (150). A sound cannot be determined in terms of the position of organs, but merely by means of the statistical variation of this position (16). Measurement errors depend on instruments and researchers; in the book one finds detailed instructions on how to reduce them. But there is no mention at all of sampling procedures, which is all the more remarkable as one of the authors is a mathematician and at the time of the second edition (1966) the sampling theory was already very well developed.

The argument in favour of statistics and measurement was very necessary in the thirties, and, even today, it is not superfluous because researchers must clean two ditches in their careers: a jump from non-mathematics to mathematics and, what is even more difficult, a jump from deterministic to probabilistic thinking. (Unfortunately, unlike lengthy passages from the articles in the appendix, it is precisely H. Paul's image of the marksman taking aim, used to represent the variability of pronunciation, that is not included in the extensive historical chapter 2.) These problems were clarified at the very birth of phonometry, but over the years no essential deepening has been made. Both in theory and in methodology, the discipline has remained at its initial level and has yielded no contribution to statistics, to measurement theory or to the theory of language.

What are the aims of this discipline; what is its theoretical and methodological background; what are its methods? According to Z & Z the aim is "die Festlegung derjenigen zähl- und messbaren Verhältnisse, die die Beschreibung und Unterscheidung von Sprachen zu fördern geeignet sind" (21) and "die Phonometrie kann und will nur der Inbegriff aller technischen Untersuchungsmittel des Sprechens im Dienst der Sprachwissenschaft sein: sie stellt die der Sprache und dem Sprechen angemessenen Mess- und Zählverfahren bereit" (23). These statements show that the discipline wants to remain descriptive while, at the same time, yielding methods (not instruments) for linguistics. As can easily be seen, measurement and description are re-

... many diseases. Dr. Blake does not profess that this brochure touches much on the hand as a diagnostic factor in diseases of the nervous system, although he gives plates of the hand in acromegaly in the adult, in congenital cretinism, and in Mongol idiots, and he mentions the main en griffe of Duchenne. The shortening of the second phalanx of the little finger, with lateral displacement of the terminal phalanx, is interesting as being associated with certain forms of cretinism. Although fault may be found with the way in which this booklet is issued, it shows acute power of observation and cannot fail to be of use in clinical work.

Practical Uranalysis and Urinary Diagnosis. By Charles W. Purdy, M.D. Fourth Edition. Pp. xvi., 365. Philadelphia: The F. A. Davis Company. 1898. The fact that is stated in the preface to this book, that three large editions of it have been exhausted in three years, and that it has been extensively adopted as a text-book in the United States, speaks for its value as a practical guide to the study of the urine. This issue retains the essential features of the former editions, but has been thoroughly revised, and the chapters on the chemistry of the urine have been largely re-written, whilst a number of new illustrations have been added. It may be regarded as a trustworthy guide to the subject. We differ, however, from Dr. Purdy in his appreciation of the relative value of the clinical tests for albumin, and think the salicylsulphonic test might have been given. The book is well printed in excellent type, and the illustrations are good.

Observations on Cardio-Vascular Repair. By W. Bezly Thorne, M.D. Pp. 26. London: J. & A. Churchill. 1898. This paper, which was written for the 1898 meeting of the British Medical Association, deals with heart failure associated with arterial disease, and the therapeutic value of saline baths and exercises in such cases. Dr. Thorne adopts the view that defective metabolism of the body may cause disease of the arteries, and upon the vascular changes cardiac derangement may follow. Two skiagrams are given to show that the heart diminishes in size after resisted exercises. Although we believe, in some measure, in the therapeutic value of saline baths and exercises, the rapid diminution in the size of the heart does not appear to us to have been proved. We cannot see further evidence in these skiagrams. It appears to us that in the photograph in which the heart appears the smaller, there has been longer exposure to the X-rays.

Pp. 18. London: Henry J. Glaisher. 1899. The author tells us that he is "engaged exclusively in the treatment of heart affections," and that he is "Physician in Ordinary to the Imperial Ottoman Embassy." We therefore conclude that the only maladies of the Sultan's official representatives in this country are cardiac, and this, to say the least, is extraordinary. Dr. Kingscote informs us that the effects of asthma "consist essentially in spasmodic contractions of the voluntary muscles of inspiration, and of the involuntary muscles which surround the bronchioles." Both these statements are misleading if not inaccurate.
The next astonishing statement is that "It is pretty generally conceded that the origin of asthma is to be found in the irritation of one or . . . many ramifications of the vagi; whether it be from . . . Meckel's ganglion . . . through the recurrent laryngeal ... or through irritation of the . . . abdominal and pelvic viscera." From which it may be gathered that the author's knowledge of the anatomy and physiology of the vagus is not great. The author's "consideration of the amount of space taken up ... by the enlarged heart" leads him to the conclusion that "the following may be instanced as direct effects of intermittent pressure." Of "the following" we may quote "brachial pain, due to perineuritis," "pain from perineuritis of the intercostals," "interference . . . through pressure on the thoracic duct," as not having a tittle of evidence in their favour. The main thesis of the paper is that a largely dilated heart, "say the size of a small football (not at all an unknown condition) . . . flops in the direction of gravity," and "can hammer" the vagus nerves on the bony spine and so cause asthma! It has rarely been our misfortune to read such a tissue of unwarrantable inferences from clinical phenomena. The progress of medicine is retarded by such publications.

Golden Rules of Surgical Practice. By E. Hurry Fenwick. Fifth Edition. Pp. 71. Bristol: John Wright & Co. [1898.] Many of these rules are undoubtedly valuable, and some are not to be found in ordinary text-books; but every now and again we find one of very doubtful worth, or which conveys very imperfect information. For instance, all the directions we find given for dealing with hemorrhage are, "Always tie both ends of a divided artery in a wound." This is most important; but, surely, we might have a few rules about venous bleeding and secondary hemorrhage. Again, we are told never to hesitate or delay in opening an abscess in the loin, due to rupture or injury of the kidney; but why hesitate or delay in opening any abscess in the loin? Then the author lays it down as a rule that we are not to open a collection of pus anywhere near a large artery without first using a stethoscope. The meaning of this rule is obscure. Probably the idea is that by excluding a bruit we may exclude the possibility that we have, not an abscess, but a ruptured or traumatic aneurysm to deal with; but we should not like to exclude this possibility because no bruit was present. Before we gave the book to a student we should like to have some rules removed and others more fully elaborated. But probably no surgeon would be quite satisfied with such a series of rules drawn up by any other surgeon, and there is certainly much useful information in this waistcoat-pocket booklet.

Drs. … 1898. This is a new edition of the author's work entitled Lectures on Massage and Electricity in the Treatment of Disease, and contains a fair account of the methods of massage and the use of electricity in the treatment of various diseases, but it lacks conciseness. Its bulk is due in large measure to mere padding; for instance, there is an account of Latham's theories of the constitution of proteids, and a long verbatim report of several of Charcot's cases of hysteria. The book is full of such superfluous matter. One does not need sketchy and incomplete descriptions of every disease in which massage may be useful. By reason of this the work is far inferior to such a treatise as Murrell's on the same subject.
Life is too short for one to read such a watered account of what after all is only one method of treatment.

The Erection of a Consumptive Sanatorium for the People. By Dr. Nahm. Translated by William Calwell, M.D. Pp. 52. Belfast: Mayne & Boyd. 1898. The interest which has recently been aroused in England in the question of the open-air treatment of consumption makes it all the more needful that English physicians should be familiar with the practices which have existed in many health-resorts in Germany since the establishment of Brehmer's Curhaus at Görbersdorf in 1859. It seems now to be a fashionable doctrine that "we can create a local climate through the arrangement of the buildings," and hence that we should endeavour to carry out a climatic treatment of consumption in all countries, at any elevation and at any temperature. Surely the climatic question in the treatment of phthisis is dwindling down to something like vanishing point.

The Properties and Uses of Pure Glycerin. By M.D., M.R.C.S. Pp. 49. [1898.] Some years ago, at a time when pure glycerine was not such a common commodity as at present, Price's Patent Candle Company issued a pamphlet intended to draw attention to the many applications of this useful substance to medicine and pharmacy. This has been re-written, and the origin, properties, and uses of glycerine are well reviewed in half-a-dozen brief chapters, concluding with a bibliography embracing the period 1854-1897.

Reprinted from the Smithsonian Report, this will serve a good purpose, as Dr. de Schweinitz gives a clear and popular account of the germ theory of disease and of the system of serum-therapeutics. Attention is first drawn to the living micro-organisms, and then to those chemical products known as ptomains, enzymes, and toxins, and the specific differences of these bodies are clearly explained. The nature and method of action of antitoxins are discussed, and the paper concludes with an enumeration of the useful purposes to which germ activity is now applied.

This is not a book to be recommended. Without the safeguard of experience, it is likely to be productive of more harm than good. Works of this kind are not needed. Students should not be allowed to acquire their knowledge after the methods here set forth, and to the expert and operating gynaecologist the book is an impertinence.

The Pocket Formulary for the Treatment of Disease in Children. By Ludwig Freyberger, M.D. Pp. xv., 208. London: The Rebman Publishing Company, Limited. 1898. There are included in this work many useful formulae, but why the author has put the equivalents of the solid and not the liquid measures in the metric system does not appear to be very clear. The work is not merely a collection of prescriptions, but a sort of materia medica in brief, each drug having its properties, use, therapeutics, dose, and incompatibles clearly put. Useful hints are given how the taste of nauseous drugs may be disguised. It will be found a useful book.

Informes Rendidos por los Inspectores Sanitarios de Cuartel y por los de los Distritos al Consejo Superior de Salubridad, [1896], 1897. 2 vols. Mexico: Imprenta del Gobierno, en el ex-Arzobispado. 1898. Previously to the year 1898 the city of Mexico, having a population of 350,000, was in a deplorably insanitary condition. There was no drainage, unless the open sewers in the town could be called so; the streets were polluted with every description of filth and garbage, and there was but little potable water.
The natural consequence was that zymotic affections flourished unchecked, and the mortality from all causes reached the high figure of 58.6 per thousand. Of these the most formidable was typhus fever, which has been endemic in the city and neighbourhood since 1545, and in 1895 was responsible for nearly 2,000 deaths. Other diseases of the same class were proportionately fatal, notably smallpox and measles; but the former affection has not been so bad of late because vaccination has been most effectually carried out, there being no conscientious objectors in Mexico. At last this heavy mortality attracted the attention of the authorities, and they determined to grapple with the difficulty: for this purpose the city was divided into eight wards, and a medical man and a sanitary inspector were appointed to make a house-to-house visitation in each and report to the Council of Health on the state of things in every district. They recommended inter alia that a system of underground drainage should be carried out in all parts, that a number of workmen should be regularly employed in keeping the streets in a proper sanitary condition, and that water should be brought from a distance, filtered, and distributed in pipes throughout the city.

Medico-Chirurgical Transactions. Vol. LXXXI. London: Longmans, Green and Co. 1898. Such a deservedly high reputation belongs to the work of this society, that it is hardly necessary to do more than to say that this present volume contains many papers of great interest. Perhaps their chief characteristic is that they all deal with subjects of direct practical importance in medicine and surgery.

Transactions of the Medical Society of London. Vol. XXI. London: Harrison and Sons. 1898. The present volume contains some interesting and important papers, among which we would especially notice Cases of Operation on Pancreatic Cysts, by Mr. Alban Doran, Dr. H. G. Rolleston, Mr. G. R. Turner, and Dr. J. D. Malcolm; also a paper on "Rectal Surgery," illustrated with numerous drawings, by Mr. Thomas Bryant. The reports on the discussions at the Society's meetings are, as usual, full and instructive.

Transactions of the Association of American Physicians. Vol. XIII. Philadelphia: Printed for the Association. 1898. Medical subjects of present interest are dealt with in these papers, which uniformly maintain a high level of excellence, and represent a large amount of original work and observation. Exigencies of space forbid our discussion of individual papers, and we must be content with commending the book to the careful perusal of our readers. Many of the contributions are very well illustrated.

Transactions of the Michigan State Medical Society. Vol. XXII. Grand Rapids: Published by the Society. 1898. The volume is an account of the thirty-third annual meeting of the Society, held at Detroit, and consequently contains papers and discussions on most branches of "medicine"; papers differing in merit, of necessity, but many of which can be read with interest and profit. The printing has not been carefully corrected. "Program" is an Americanism we do not mind accepting, as we already have "diagram," but not so with "French capitol" (p. 24), "hair lip" (p. 13), and "preperation" (p. 192). An amusing lapsus occurs on p. 13, where it is said of Celsus (who was not an American, or we should have thought less of it) that "in wounds of the intestines, he performed gastrorrhaphy."

Transactions of the South Carolina Medical Association.
Charleston: Lucas & Richardson Co. 1898. In the opinion of the President the general appearance of the Transactions could be greatly improved. That is so, and the proof-reading also; we noticed "Opthalmology" (pp. 3 and 7), "Diptheria" (p. 25), and "parisites" (p. 187). The book should have a cloth cover; and if the sheets must be wired, it should not be done through their sides. Among the many good papers in this record of the forty-eighth meeting we were specially attracted to one on "Serum Diagnosis of Typhoid Fever," by Dr. Robert Wilson, jun., who reflects the general opinion when he says that the Widal test is one the usefulness of which "no physician who lays claim to scientific attainment can afford to ignore."

The Transactions of the Edinburgh Obstetrical Society. Vol. XXIII. Edinburgh: Oliver and Boyd. 1898. The valedictory address of Dr. Alexander Ballantyne, contained in this volume, makes a pleasing reference to Spencer Wells and to the general work of the Society during the present year. It is perhaps not unnatural that some remarks should have been made on professional secrecy, as the case of Kitson v. Playfair and Wife had recently been before the public, and the retiring president's words may be read with profit. Among the papers is one by Dr. John Moir, a veteran practitioner of 90 years, on the induction of premature labour, another by Dr. J. W. Ballantyne on a vitelline placenta in the human subject, and a description of a sireniform foetus. The other communications are of great value.

Saint Thomas's Hospital Reports. Vol. XXVI. London: J. & A. Churchill. 1898. Besides a number of papers dealing chiefly with cases of interest that have occurred in the hospital, there are the usual full reports of the medical and surgical sides and of the various special departments. A full abstract of cases of special importance is also given. Dr. Payne has a very interesting contribution on some old physicians of St. Thomas's Hospital, with portraits of Dr. Wharton and Dr. Mead. A new department for physical exercises has been added to the hospital, and should prove a very excellent departure, which might well be copied by other hospitals.

Tuberculosis. Vol. I., No. 1. October, 1899. London: 20 Hanover Square. This modest-looking new journal, issued by the "National Association for the Prevention of Consumption and other forms of Tuberculosis," is destined to perform good service in disseminating the proceedings of the association. Sir Hermann Weber's views on the production and prevention of tuberculosis would alone give the journal every title to a friendly welcome.

Last year Sir Henry strongly advocated the payment of medical men for their services at hospitals: this year the matter is consigned to oblivion, and possibly wisely dropped; but it is replaced by the author's views on payments by patients, to which he has been converted. This is a feather in the cap of Mr. Sidney Holland, of the London Hospital, who has always supported this system; but we should not be surprised if this was next year replaced by some other plan for establishing our hospitals on a more sound financial basis. With regard to local matters, we find no reference to the Bristol Hospital Sunday Fund, which has been two years in existence on its present basis. We suggest to Sir Henry that he should apply for a properly-audited balance-sheet of any institution included in his annual, and that he should not be led away by the names of titled people as patrons.
The International Directory of Booksellers and Bibliophile's Manual. Rochdale: James Clegg. 1899. This is a carefully-compiled work, containing in alphabetical order the names and addresses of booksellers, publishers, book-collectors, etc., all over the world. There is other information, such as articles on book-plates, copyright registry, sizes of books, etc. The need of such a book must be limited, but it will be found useful to librarians and lovers of books. The only relief from the "seriousness" of the work is the "Book-Lover's Lexicon," wherein a few newly-coined words will be found. The "book borrower who carries off your choicest treasures and resents any suggestion as to their return as the deadliest insult" may be known to many of us, but there must be few who know that such an one is correctly described as a "bibliopokomist."
5,108.6
1899-12-01T00:00:00.000
[ "Linguistics" ]
Realizing unconventional quantum magnetism with symmetric top molecules We demonstrate that ultracold symmetric top molecules loaded into an optical lattice can realize highly tunable and unconventional models of quantum magnetism, such as an XYZ Heisenberg spin model. We show that anisotropic dipole-dipole interactions between molecules can lead to effective spin-spin interactions which exchange spin and orbital angular momentum. This exchange produces effective spin models which do not conserve magnetization and feature tunable degrees of spatial and spin-coupling anisotropy. In addition to deriving pure spin models when molecules are pinned in a deep optical lattice, we show that models of itinerant magnetism are possible when molecules can tunnel through the lattice. Additionally, we demonstrate rich tunability of the effective models' parameters using only a single microwave frequency, in contrast to proposals with $^1\Sigma$ diatomic molecules, which often require many microwave frequencies. Our results are germane not only for experiments with polyatomic symmetric top molecules, such as methyl fluoride (CH$_3$F), but also diatomic molecules with an effective symmetric top structure, such as the hydroxyl radical OH.

I. INTRODUCTION

Lattice models of exchange-coupled quantum mechanical spins such as the Heisenberg model have long served as paradigmatic examples of strongly correlated many-body systems [1,2]. The exquisite tunability and precise microscopic characterization of ultracold gases makes them promising candidates for exploring quantum magnetism. However, the most prominent platform for ultracold gas quantum simulation, neutral atoms loaded into optical lattices, has difficulty reaching the regime where quantum magnetism is manifest [3-5]. The reason for the difficulty is that the short-range interactions experienced by neutral atoms require two atoms to occupy the same lattice site in order to significantly interact. For two-component (effective spin-1/2) atoms, effective models of quantum magnetism emerge when on-site interactions $U$ are significantly larger than the tunneling amplitude $t$ between neighboring lattice sites, pinning the atoms in a Mott insulator phase with one atom in each lattice site. Effective spin interactions are then mediated by a superexchange process [1] which requires virtual tunneling to doubly occupied sites. Because the resulting effective spin couplings scale as $t^2/U$ with $t \ll U$, the temperature scales required to see the onset of magnetism are extraordinarily small. Systems which feature long-range interactions can generate effective spin-spin interactions which are not mediated by tunneling, and so can display coherent internal-state many-body dynamics even without quantum degeneracy in the motional degrees of freedom. Such long-range effective spin couplings have been realized using trapped ions [6-8], Rydberg atoms [9], and magnetic atoms [10], and have been proposed for other platforms, such as atoms in optical cavities [11]. In this work, we focus on the realization of long-range effective spin interactions with polar molecules, as has been recently demonstrated experimentally [12,13]. A unique feature of dipolar realizations of quantum magnetism, as polar molecules in optical lattices provide, compared to nondipolar systems (e.g., trapped ions), is that dipolar interactions are anisotropic.
Anisotropic interactions do not conserve the internal (e.g., rotational) or the spatial angular momentum separately, but only their sum. By mapping the internal angular momentum of a molecule, in particular its rotational angular momentum, to an effective spin, the dipole-dipole interaction hence generates the possibility of unconventional models of quantum magnetism which do not conserve the total magnetization. As we will show in this paper, such models feature tunable degrees of both spin and spatial anisotropy. The exchange of internal and external angular momentum projection by dipole-dipole interactions requires two pairs of internal states which are nearly degenerate in energy (on the scale of dipole-dipole interactions) and also have dipole-allowed transitions between them. We show two such scenarios in Fig. 1. The first scenario is that we have two pairs of internal states, call them $(n, m)$ and $(n', m')$, with energies $E_n + E_m \approx E_{n'} + E_{m'}$ nearly degenerate. Further, we assume that at least one of the latter states is not a member of the former pair of states [49], see Fig. 1(a). Such a two-particle near-degeneracy with nonradiative dipole coupling is generally called a Förster resonance, and such resonances have been fruitfully applied to control the interactions in Rydberg atoms [14,15]. Additionally, such resonances may occur at isolated points in the spectra of $^1\Sigma$ polar molecules, those with no orbital or spin angular momentum [16,17]. In this work, we instead exploit a resonant process such as is shown in Fig. 1(b). Here, two particles in the same internal state $n$ are transferred to a different internal state $n'$ which is dipole-coupled to the first and brought into resonance by external fields. In contrast to the Förster resonance, this latter type of resonance involves only two single-particle states, and so naturally leads to a description in terms of a spin-1/2 system.

FIG. 1: Resonant dipolar processes which exchange internal and external angular momentum. (a) An example of a Förster resonance which involves four different internal states satisfying the resonance condition $E_n + E_m \approx E_{n'} + E_{m'}$. Single-particle dipole-allowed transitions $|n\rangle \to |n'\rangle$ and $|m\rangle \to |m'\rangle$ drive the interaction-induced two-particle transition $|nm\rangle \to |n'm'\rangle$. (b) The resonances utilized in this work involve single-particle levels $|n\rangle$ and $|n'\rangle$ which are nearly degenerate and possess a single-particle dipole-allowed transition $|n\rangle \to |n'\rangle$ that may vanish upon time averaging. Interactions cause a two-particle transition $|nn\rangle \to |n'n'\rangle$, changing the net rotational projection of the molecules.

Such resonances are a generic feature of magnetic dipoles, and lead to phenomena such as spontaneous demagnetization of spinor Bose gases [18]. In contrast, electric dipoles have parity and time-reversal selection rules which would appear to preclude a dipole-coupled resonance such as is shown in Fig. 1(b). A key finding of this work is that resonances like Fig. 1(b) with an electric dipole transition are also a generic feature of symmetric top molecules (STMs) with microwave and static field dressing, even in the presence of hyperfine or other detailed molecular structure. Polyatomic STMs, such as the methyl fluoride molecule, CH$_3$F, shown in Fig. 2(a), have a high degree of symmetry in their rotational structure which makes them behave as "electric analogs" of pure magnetic dipoles [19]. Further details on STMs and their interactions with external fields are provided in Sec. III.
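The geometric origin of this exchange of internal and external angular momentum can be made concrete with a small numerical sketch. Assuming the standard point-dipole interaction, its spherical components q = 0, ±1, ±2 (introduced formally in Sec. II) carry angular factors proportional to the rank-2 spherical harmonics; the q = ±2 components are the ones that change the net internal projection by two units. Overall prefactors and dipole matrix elements are omitted here, so only the angular dependence is illustrated.

import numpy as np

def angular_factor(q, theta, phi):
    """Angular weight of the spherical component q of the dipole-dipole
    interaction (proportional to C^2_q(theta, phi); prefactors omitted)."""
    if q == 0:
        return 1.0 - 3.0 * np.cos(theta) ** 2            # conserves total M
    if abs(q) == 1:
        return np.sin(theta) * np.cos(theta) * np.exp(-1j * q * phi)
    if abs(q) == 2:
        return np.sin(theta) ** 2 * np.exp(-1j * q * phi)  # transfers 2 units of M
    raise ValueError("q must be 0, +/-1 or +/-2")

# In a 2D geometry with the quantization axis perpendicular to the plane,
# theta = pi/2 for all molecule pairs:
theta = np.pi / 2
for q in (0, 1, 2):
    print(q, angular_factor(q, theta, phi=0.3))
# The q = +/-1 factor is ~0 up to floating point (sin*cos vanishes), so
# single-molecule transfers are suppressed, while q = 0 and q = +/-2 survive.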
In the present work, we show that two isolated internal states of an STM tuned near a resonance of the form shown in Fig. 1(b) form an effective spin-1/2 which is governed by a model with tunable anisotropy in both the spatial and spin-component dependence of the effective spin couplings. In contrast to related proposals, such as the realization of spin-component anisotropic XYZ Heisenberg models using bosonic atoms in excited p-orbitals of an optical lattice [20] or in a synthetic gauge field [21,22], our spin couplings are nonperturbative in the particle-particle interaction strength and are not mediated by tunneling through the lattice. Hence, magnetic phenomena in our proposal can be realized even in the absence of motional quantum degeneracy for the molecules. The ability to observe coherent many-body dynamics in a non-degenerate sample of ultracold molecules is important, as cooling of molecules is difficult and the fully quantum degenerate regime has not yet been reached [3,23]. Additionally, our microwave dressing proposal applies for present ~500 nm optical lattice spacings, in distinction to proposals for $^2\Sigma$ molecules, where appreciable couplings require trapping at quite small lattice spacings [24]. Finally, our proposal requires only a single microwave frequency, in contrast to many proposals for $^1\Sigma$ molecules where multiple frequencies are required [17,25-27].

This paper is organized as follows. Sec. II provides a phenomenological analysis of how the resonances shown in Fig. 1(b) lead to effective quantum spin systems, e.g. XYZ Heisenberg spin models, which do not conserve magnetization. In Sec. III we present an overview of symmetric top molecules and their coupling to external fields. In particular, Sec. III B discusses the interaction of symmetric top molecules with microwave radiation that is near-resonant with a rotational transition. We then focus on how to engineer the external fields to obtain tunable resonances like those in Fig. 1(b), and analyze the dipole-dipole interactions between microwave-dressed states. Section IV derives effective many-body models of quantum magnetism which are applicable near the field-induced level crossings. Finally, in Sec. V, we conclude.

II. PHENOMENOLOGICAL REALIZATION OF AN XYZ SPIN MODEL

In this section, we provide a phenomenological analysis of how a resonance such as that shown in Fig. 1(b) leads to an effective XYZ spin model for a collection of STMs pinned in a quasi-2D lattice geometry, as in Fig. 2(b). A more detailed analysis will be provided in Sec. IV, which also considers the general case in which molecules are not pinned. We will label the two resonant internal states of the molecule as $|0\rangle$ and $|1\rangle$ [50], and assume that all other states are far-detuned on the scale of interactions and so may be neglected. Further, let us assume that the states $|0\rangle$ and $|1\rangle$ have well-defined internal angular momentum projections $M = 0$ and $1$, respectively, for simplicity, though we stress that the assumption of well-defined angular momentum is not essential.

FIG. 2: Symmetric top molecules (STMs) in optical lattices. (a) Rotational angular momentum geometry of the STM CH$_3$F. (b) Schematic of field and lattice geometry; purple and green denote two internal states.
Two molecules, call them molecules $i$ and $j$, interact through the dipole-dipole interaction

$\hat H_{DDI} = \frac{1}{R_{ij}^3}\left[\hat{\mathbf d}^{(i)}\cdot\hat{\mathbf d}^{(j)} - 3\big(\hat{\mathbf d}^{(i)}\cdot \mathbf e_{ij}\big)\big(\hat{\mathbf d}^{(j)}\cdot \mathbf e_{ij}\big)\right], \qquad (1)$

where $\mathbf R_{ij}$ is the vector connecting two molecules, $\mathbf e_{ij}$ is a unit vector along $\mathbf R_{ij}$, and $\hat{\mathbf d}^{(i)}$ is the molecular dipole operator for molecule $i$. For our purposes, it is useful to recast the dipole-dipole interaction as a sum over spherical components,

$\hat H_{DDI} = -\frac{\sqrt 6}{R_{ij}^3}\sum_{q=-2}^{2}(-1)^q\, C^2_{-q}(\theta_{ij},\phi_{ij})\,\big[\hat{\mathbf d}^{(i)}\otimes\hat{\mathbf d}^{(j)}\big]^2_q, \qquad (3)$

where $C^2_q(\theta,\phi)$ are unnormalized spherical harmonics. Here, $\theta_{ij}$ is the polar angle between $\mathbf R_{ij}$ and the quantization axis and $\phi_{ij}$ is the azimuthal angle in the XY plane. For the 2D geometry of Fig. 2(b), $\theta_{ij} = \pi/2$, and so the $q = \pm 1$ interactions of Eq. (5), which are proportional to $\sin\theta_{ij}\cos\theta_{ij}$, vanish identically. The $q = 0$ term, Eq. (4), conserves the internal and orbital angular momenta separately. Hence, when projected into our basis $\{|0\rangle, |1\rangle\}$ of states with well-defined internal angular momentum, the most generic spin-spin coupling that can result is

$\hat H^{(q=0)}_{ij} \propto \frac{1-3\cos^2\theta_{ij}}{R_{ij}^3}\Big[J_z\,\hat S^z_i\hat S^z_j + \frac{J_\perp}{2}\big(\hat S^+_i\hat S^-_j + \hat S^-_i\hat S^+_j\big)\Big],$

where the spin-1/2 operators act in the basis $\{|0\rangle, |1\rangle\}$ and we have ignored constant terms and single-spin terms [3]. The coupling constants $J_\perp$ and $J_z$ are set by dipole matrix elements, and in general are affected by external confinement. The terms proportional to $J_\perp$ are responsible for the "spin exchange" or "state swapping" [28] dipolar interactions which were observed recently in the KRb experiment at JILA [12], and the $J_z$ terms account for the fact that interactions between molecules in the $|0\rangle$ state may be different from interactions between molecules in the $|1\rangle$ state. In contrast to the $q = 0$ term, the $q = \pm 2$ terms of Eq. (6) do not conserve internal and external angular momentum separately, but transfer two units of angular momentum from the molecular rotation to the orbital motion or vice versa. Projected into our two-state basis, these read

$\hat H^{(q=\pm2)}_{ij} \propto \frac{\sin^2\theta_{ij}}{R_{ij}^3}\Big[J_\Delta\, e^{-2i\phi_{ij}}\,\hat S^+_i\hat S^+_j + \mathrm{h.c.}\Big].$

Hence, the complete dipole-dipole interaction, Eq. (3), projected into these two states allows for vast control over the X, Y, and Z components of the Heisenberg spin couplings via geometry and dipole matrix elements. Models with unequal X and Y coupling strengths do not conserve the total magnetization. Quantum spin models which do not conserve magnetization are of interest because they can generate quantum phases with no counterpart in magnetization-conserving systems, and also for their connection to Majorana fermions and other topological phenomena, see Sec. IV. We stress that the $q = \pm 2$ components, Eq. (6), which are responsible for the terms which do not conserve magnetization, Eq. (9), only contribute near a resonance such as in Fig. 1(b). In the remainder of this work, we will show how to engineer such resonances for symmetric top molecules, and also how to tune the effective spin-spin couplings $J_\perp$, $J_z$, and $J_\Delta$ (see Eqs. (20) and (21) for the final spin model results). Also, we relax many of the simplifying assumptions made in this section, such as the restriction that the molecules are pinned and that the molecular states have well-defined internal angular momentum projection.

III. SYMMETRIC TOP MOLECULES AND THEIR INTERACTION WITH EXTERNAL FIELDS

In this section, we review the basic properties and energy scales of symmetric top molecules (STMs) and their interactions with both static and dynamic external fields. A key result of this section is that STMs display a linear Stark effect, which is to say the energy varies linearly with the applied electric field strength at moderate fields. A linear Stark effect has the consequence that a large portion of the dipole moment of an STM can be accessed with very modest electric fields. We also show that the linear Stark effect can be used together with microwave dressing of low-lying rotational states to engineer level crossings in the single-molecule energy spectrum.
Such level crossings enable the realization of the resonances shown in Fig. 1(b) that are key for the unconventional magnetism described in this work.

A. Rotational structure and interaction with static electric fields

Polyatomic symmetric top molecules (STMs), of which methyl fluoride, CH$_3$F, is a canonical example, are defined by a doubly degenerate eigenvalue of the inertia tensor. Such a doubly degenerate eigenvalue corresponds to a cylindrical symmetry of the molecule, see Fig. 2(a). In the rigid-rotor approximation the rotational energies are $E_{JK} = B_0 J(J+1) + (A_0 - B_0)K^2$, where the rotational constants $B_0 \approx 25$ GHz and $A_0 \approx 155$ GHz for CH$_3$F, $J$ is the rotational principal quantum number, and $K$ is the projection of $\mathbf J$ on the body axis. Diatomic $^1\Sigma$ molecules, such as the alkali dimers, cannot have a projection of $\mathbf J$ on the body axis, and so $K = 0$ identically. Just as the isotropy of space requires that the states with differing projections $M$ of $\mathbf J$ onto a space-fixed axis are degenerate in the absence of external fields, the cylindrical symmetry of STMs requires that states with opposite projection $\pm K$ of $\mathbf J$ onto a molecule-fixed axis are degenerate. Corrections to the rigid rotor approximation in the vibration-rotation Hamiltonian, such as the well-known inversion of ammonia, can cause mixing of the $K$ levels and result in a splitting of this degeneracy. For simplicity of discussion we will focus on molecules such as CH$_3$F which do not have an inversion splitting in the body of this paper, though we will revisit this issue at the end of the next subsection. The presence of a nonzero molecule-frame projection of rotational angular momentum $K$ in STMs means that STMs can display a linear response to an externally applied static electric field. This is in stark contrast to the quadratic response exhibited by Σ-state molecules such as the alkali metal dimers [19]. In particular, in a static electric field of strength $E_{DC} \ll B_0/d$ defining the quantization axis, with $d$ the permanent dipole moment, the matrix elements of the dipole operator along space-fixed spherical direction $p$, $\hat d_p$, take the form of a spherical tensor with reduced matrix element $\langle J,K\|\hat d\|J,K\rangle = dK/\sqrt{J(J+1)}$. The strong coupling of STMs to external fields enables them to be effectively decelerated by electric fields [30], and is the basis of opto-electrical cooling, a novel route to bring generic STMs to quantum degeneracy [31,32]. Furthermore, the nonzero reduced matrix element of the dipole operator within a rotational state manifold enables STMs to simulate the physics of magnetic dipoles and quantum magnetism with greatly enhanced dipolar interaction energies [19]. In Ref. [19], we showed how this correspondence between STMs with rotational quantum number $J$ and an elemental quantum magnet with spin $J$ gives rise to long-range and anisotropic spin models. In what follows, we introduce microwave dressing of rotational states as an additional handle with which to modify the forms and relative strengths of interactions that appear in such effective spin models.

B. Microwave dressing of symmetric top molecules

Microwave radiation couples together neighboring rotational states of a molecule when the frequency of the radiation is near-resonant with the rotational energy level difference. For simplicity, we first consider applying a microwave field $E_{AC}$ with linear polarization along the space-fixed quantization axis, $\varepsilon_{AC} = \mathbf e_Z$, see Fig. 2(b), which is red-detuned an amount $\Delta \ll B_0$ [51] from resonance with the $|J, K, 0\rangle \to |J+1, K, 0\rangle$ transition, as shown in Fig. 3(a).
While the frequency of this transition in CH$_3$F is larger than the corresponding rotational transition in the alkali dimers, the wavenumber of the transition $k \approx 2(J+1)B_0/(hc)$ is much less than $1/a$, with $a$ the average separation between molecules, of order a few hundred nanometers for typical optical lattices. Hence, we neglect the spatial dependence of the microwave field. Applying the rotating wave approximation and transforming to the Floquet picture [33], the quasienergies are obtained by solving the Schrödinger equation for fixed $M$ with 2 × 2 Hamiltonians [52] coupling $|J,K,M\rangle$ and $|J+1,K,M\rangle$, whose off-diagonal elements $\Omega_{JKM}$ are set by the Rabi frequency $\Omega \equiv dE_{AC}$. Single-particle eigenstates of Eq. (11) in the rotating frame will be denoted by an overbar, e.g., $|\bar 0\rangle$. In the perturbative regime where $\Omega, dE_{DC} \ll \Delta$, the quasienergies are split into manifolds $\tilde E_{JKM;\pm}$ separated by roughly $\Delta$, see Fig. 3(b). The $M$ dependence of the off-diagonal components $\Omega_{JKM}$ introduces an effective tensor shift between states of different $M$ which is proportional to $\Omega^2$, similar to the microwave-induced quadratic Zeeman effect in spinor Bose gases [34,35]. Including the static field $E_{DC}$ can cause two such quasienergy levels with different $M$ to cross as the static field energy $dE_{DC}$ becomes of the order of the effective tensor shift, as shown for the case of the $(J, K) = (1, 1) \to (2, 1)$ transition in Fig. 3(a). The ability to engineer generic quasienergy level crossings by tuning the static electric field strength is a consequence of the linear Stark effect exhibited by STMs. The Stark effect in $^1\Sigma$ molecules is quadratic, and so shifts all levels with identical $J$ and $|M|$ in the same fashion. Level crossings can also be engineered outside of the perturbative regime, as well as for arbitrary polarization and rotational quantum number $J$. As an example, we consider the transition $(J, K) = (2, 2) \to (3, 2)$ with right-circularly polarized light in Fig. 4. Panel (b) of Fig. 4 shows two levels which cross outside of the perturbative regime. Here, the linear Stark energy must overcome not only the effective tensor shift, but also the detuning $\Delta$ of the microwave field from resonance. In what follows, we will denote the parametric relationship of the Rabi frequency $\Omega$ and the electric field at a quasienergy level crossing as $\bar\Omega(E_{DC})$. In our analysis of the field dressing of STMs we have neglected hyperfine structure. Though the hyperfine structure of STMs is complicated [19], a single hyperfine component may be selected via a strong magnetic field, similarly to the alkali dimers [36]. Alternatively, working at microwave detuning large compared to the typical hyperfine splittings, $\Delta \gg E_{hfs} \approx 10$ kHz for CH$_3$F, one can address all hyperfine states equally with a readily achievable microwave power on the order of tens of W/cm$^2$. While we have focused on polyatomic STMs in which all states with a given $J$ and $K$ are degenerate in zero DC field, we expect similar level crossings in other systems with a linear Stark effect but no zero-field degeneracy, such as the Lambda doublet of OH [37], its fermionic analog OD [38], or other species with non-zero projection of orbital angular momentum along the symmetry axis of the molecule, $|\Lambda| > 0$. Generally, one can take the detuning $\Delta$ much larger than any fine energy scale which is not to be resolved and simply rescale the static field energy $dE_{DC}$ and the Rabi frequency $\Omega$ accordingly.
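Since these quasienergy crossings are central to everything that follows, a minimal numerical sketch may help. It assumes a generic two-level rotating-wave Hamiltonian per projection M and the textbook first-order symmetric-top Stark shift −d E K M / [J(J+1)]; the coupling ratios c_M are placeholders standing in for the actual dipole matrix elements, and all parameter values are illustrative rather than the paper's.

import numpy as np

d, K, J = 1.0, 1.0, 1.0       # dipole moment and quantum numbers (arbitrary units)
Delta, Omega = 5.0, 2.0       # microwave detuning and bare Rabi frequency

def stark(J, K, M, E):
    """First-order Stark shift of a rigid symmetric top (standard result)."""
    return -d * E * K * M / (J * (J + 1))

def quasienergy(M, E, c_M):
    """Lower dressed quasienergy for projection M; c_M is an assumed
    coupling ratio multiplying the bare Rabi frequency."""
    h11 = stark(J, K, M, E)                 # |J, K, M> component
    h22 = -Delta + stark(J + 1, K, M, E)    # |J+1, K, M> component (rotating frame)
    h12 = 0.5 * Omega * c_M
    H = np.array([[h11, h12], [h12, h22]])
    return np.linalg.eigvalsh(H)[0]

# Scan the DC field and watch the M = 0 and M = 1 dressed levels cross:
for E in np.linspace(0.0, 20.0, 9):
    e0 = quasienergy(0, E, c_M=0.8)   # placeholder coupling ratios
    e1 = quasienergy(1, E, c_M=0.6)
    print(f"E_dc = {E:5.1f}   E(M=0) - E(M=1) = {e0 - e1:+.3f}")

The sign change of the printed energy difference marks the field-induced crossing; in the perturbative regime the M dependence of the dressing produces the tensor shift proportional to Omega**2, and the linear Stark term eventually overcomes it, as described above.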
C. Dipole-dipole interactions in microwave-dressed states

We now turn to the effective dipole-dipole interactions (Eq. (3)) in the microwave-dressed states. The components of the dressed states $|\bar 0\rangle$ and $|\bar 1\rangle$ in the $|J+1, K, M\rangle$ manifold oscillate in time with frequency $\omega$. Hence, the dipole moments of the dressed states contain both static and time-oscillating pieces. While the oscillating terms time-average to zero for a single molecule, the dipole-allowed exchange of rotational quanta for two molecules can be resonant due to the two dipoles oscillating in phase [12,39]. In more detail, let us consider the Floquet-picture eigenstates in the presence of a microwave with spherical polarization $p = 0$, which couples states $|JKM\rangle$ to $|(J+1)KM\rangle$, see Fig. 3(a). Consider two such eigenstates. Because we are interested in levels which cross, we assume that $M \neq M'$. From Eq. (11), the dipole moments of the dressed states follow from the dipole matrix elements in pure rotational STM states. Here, we recall that $J$ is the rotational principal quantum number, $M$ is the projection of rotation on a space-fixed quantization axis, and $K$ is the projection of the rotational angular momentum on the symmetry axis of the molecule. In order to find the effective dipole-dipole interactions, we take the matrix elements of the dipole operators in Eqs. (4)-(6) using the matrix elements of Eqs. (15)-(17) and then perform the long-time average. Here, "long" time refers to a time which is long compared to the period of the microwave field. The long-time average is justified by the fact that the characteristic timescales of the translational motion of molecules are orders of magnitude longer than the period of the dressing field. The resulting time-averaged interactions for our two example polarization schemes are discussed in Appendix A, and specific numerical examples of interactions for these two polarizations are given in the next section. The only assumption we use in this work is that the dipole moments of two states near a level crossing only have static components along a single space-fixed spherical direction. Practically, the microwave field can contain either $p = \pm 1$ components or $p = 0$ components, but not both. This is equivalent to the statement that terms which transfer only a single molecule between dressed-state components are all proportional to $\sin\theta\cos\theta$, and so vanish in the geometry of Fig. 2. The requirement of only a single microwave frequency is in contrast to proposals with $^1\Sigma$ molecules, which often require precise frequency and polarization control of multiple microwaves [17,25-27].

IV. UNCONVENTIONAL HUBBARD AND SPIN MODELS WITH SYMMETRIC TOP MOLECULES

In this section we incorporate the single-particle physics discussed in the previous section into an interacting many-body description in second quantization. Following a translation of the many-body problem to a Hubbard-type lattice model for the lowest lattice band, we then show how limiting cases of this description, for example when the molecules are pinned to lattice sites, lead to spin models with unconventional magnetic couplings. Our main results are Eq. (19), the most complete Hubbard-type description of the physics of STMs near a quasienergy level crossing, and Eqs. (21) and (23), which are the reductions to the Heisenberg XYZ and XY models, respectively.
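Schematically, the Hubbard-type model discussed in the next subsection contains the following terms; the precise coefficients, operator orderings, and normalization below are assumptions rather than the paper's exact expression, and are meant only to fix the structure that the term-by-term description refers to:

$$\hat H = -\sum_{\langle i,j\rangle,\sigma} t_\sigma\big(\hat a^\dagger_{i\sigma}\hat a_{j\sigma}+\mathrm{h.c.}\big) + \delta\sum_i \hat n_{i\bar 1} + \frac{1}{2}\sum_{i\neq j}\bigg[\sum_{\sigma\sigma'}U^{\sigma\sigma'}_{i,j}\,\hat n_{i\sigma}\hat n_{j\sigma'} + \Big(E_{i,j}\,\hat a^\dagger_{i\bar 1}\hat a^\dagger_{j\bar 0}\hat a_{j\bar 1}\hat a_{i\bar 0} + W_{i,j}\,\hat a^\dagger_{i\bar 1}\hat a^\dagger_{j\bar 1}\hat a_{j\bar 0}\hat a_{i\bar 0} + \mathrm{h.c.}\Big)\bigg]$$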
A. Second-quantized description of physics near a quasienergy level crossing

In order to derive an effective model for the microwave-dressed STMs trapped in an optical lattice, we use the standard prescription [40] of expanding the field operator in the second-quantized representation of the Hamiltonian in a basis of Wannier functions and keep only the terms corresponding to the lowest band of the lattice. Additionally, in what follows we will assume hard-core particles, which can be either bosons or fermions. By hard-core we specifically mean that no more than one molecule can simultaneously occupy a given lattice site, irrespective of internal state considerations. Such a constraint can arise either from a large positive elastic interaction energy, or from rapid inelastic losses via the quantum Zeno effect. The quantum Zeno effect has been shown to enforce a hard-core constraint for KRb, where two-body losses are due to chemical reactions, and gives rise to lifetimes which are long compared to the typical time scales of interactions [12,41]. Because our scheme populates multiple dressed states consisting of different rotational levels, molecules undergo possibly rapid rotationally inelastic processes at short range which will cause a loss of molecules from the trap even if the molecules themselves are chemically stable. The numerical examples given in this work have sufficiently large elastic on-site interactions that we do not need to worry about the nature of short-range inelastic collisions, and we can attribute the hard-core constraint to elastic interactions alone. For two dressed states $\sigma \in \{\bar 0, \bar 1\}$ which are separated from all others by an energy large compared to the characteristic dipole-dipole energy scale, an expansion of the full many-body description in terms of the lowest-band Wannier functions yields the lattice model Eq. (19). Here, $\hat a_{i\sigma}$ destroys an STM in Wannier state $w_{i\sigma}(\mathbf r)$ centered at site $i$. In order, the terms in Eq. (19) are: state-dependent tunneling $t_\sigma$ of molecules between neighboring lattice sites $i, j$ [53]; a single-particle energy offset $\delta$ of state $|\bar 1\rangle$ with respect to state $|\bar 0\rangle$; state-exchanging collisions $E_{i,j}$ of molecules at sites $i$ and $j$; state-transferring collisions $W_{i,j}$ which transform two molecules in state $|\bar 0\rangle$ at sites $i$ and $j$ into the state $|\bar 1\rangle$ and vice versa; and state-preserving collisions $U^{\sigma\sigma'}_{i,j}$ between molecules in states $\sigma$ and $\sigma'$ at lattice sites $i$ and $j$, respectively. A schematic view of the processes in Eq. (19) is given in Fig. 5(a)-(c). Note that Eq. (19) applies to any 2D lattice geometry.

FIG. 5: Interaction processes in the effective lattice Hamiltonian. The two internal states $|\bar 0\rangle$ and $|\bar 1\rangle$ may be viewed as a discrete spatial degree of freedom, e.g. a ladder. (a) Tunneling rates $t_\sigma$ depend on the internal state due to polarizability anisotropy [19,42]. (b) $E$ and $W$ interactions change the internal state of the molecules: $E$ processes preserve the number in each internal state, $W$ processes change it by ±2. (c) $U$ interactions preserve the internal state of the molecules.

The Hamiltonian Eq. (19) bears a strong resemblance to the molecular Hubbard Hamiltonian (MHH) which has been derived for $^1\Sigma$ molecules in optical lattices [43-45], and many of the terms here have the same meaning as in the MHH. In particular, the interaction terms $U^{\sigma\sigma'}_{i,j}$ correspond to the direct terms $\langle\sigma\sigma'|\hat H_{DDI}|\sigma\sigma'\rangle$, and the interaction terms $E_{i,j}$ correspond to the exchange terms $\langle\sigma\sigma'|\hat H_{DDI}|\sigma'\sigma\rangle$, where $\hat H_{DDI}$ is given in Eq. (1).
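In the spin language used in the next paragraphs, the U, E, and W processes correspond to Ising, exchange, and pair-transfer couplings, respectively. The following minimal sketch builds a small spin-1/2 chain with these three coupling types and an assumed 1/r^3 decay, and checks numerically that it is the W term that breaks conservation of the total magnetization while preserving its parity. All coupling values here are illustrative placeholders.

import numpy as np
from functools import reduce

# Spin-1/2 operators
sz = np.array([[0.5, 0.0], [0.0, -0.5]])
sp = np.array([[0.0, 1.0], [0.0, 0.0]])   # S^+
sm = sp.T                                  # S^-
I2 = np.eye(2)

def op(single, site, L):
    """Embed a single-site operator at `site` in an L-site chain."""
    mats = [I2] * L
    mats[site] = single
    return reduce(np.kron, mats)

L = 4
U, E, W = 1.0, 0.5, 0.3   # placeholder Ising, exchange, pair-transfer strengths

H = np.zeros((2**L, 2**L))
for i in range(L):
    for j in range(i + 1, L):
        r3 = abs(i - j) ** 3   # dipolar 1/r^3 decay
        H += (U / r3) * op(sz, i, L) @ op(sz, j, L)                 # S^z S^z (U-type)
        H += (E / (2 * r3)) * (op(sp, i, L) @ op(sm, j, L)
                               + op(sm, i, L) @ op(sp, j, L))       # exchange (E-type)
        H += (W / (2 * r3)) * (op(sp, i, L) @ op(sp, j, L)
                               + op(sm, i, L) @ op(sm, j, L))       # S^+ S^+ + h.c. (W-type)

Sz_tot = sum(op(sz, i, L) for i in range(L))
parity = reduce(np.kron, [np.diag([1.0, -1.0])] * L)  # exp(-i*pi*N_down) up to a phase

print("||[H, Sz_tot]|| =", np.linalg.norm(H @ Sz_tot - Sz_tot @ H))  # nonzero for W != 0
print("||[H, parity]|| =", np.linalg.norm(H @ parity - parity @ H))  # zero: Z_2 survives

Setting W = 0 makes the first commutator vanish as well, recovering the magnetization-conserving XXZ-type models familiar from alkali dimer experiments.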
Expressed in the language of quantum magnetism for a spin-1/2 encoded in the states $\{|\bar 0\rangle, |\bar 1\rangle\}$, the $U$ terms correspond to $\hat S^z\hat S^z$ or Ising-type interactions and the $E$ terms correspond to $\hat S^+\hat S^-$ or spin-exchange-type interactions [25]. The new terms here, which have no counterpart in the MHH description of $^1\Sigma$ molecules, are the $W_{i,j}$ terms, which correspond to $\langle\sigma\sigma|\hat H_{DDI}|\sigma'\sigma'\rangle$. These terms correspond to $\hat S^+\hat S^+$-type interactions in the spin language, and are absent from typical Heisenberg XXZ-type models of quantum magnetism, including those realized with $^1\Sigma$ polar molecules [25]. All of the interaction coefficients $U$, $E$, and $W$ may be tuned by adjusting the static and microwave field dressing strengths, ensuring that the two quasienergy levels remain near resonance. The magnitudes of the Hubbard parameters for the specific level crossings in Figs. 3 and 4 are displayed in Fig. 5(d)-(e) as a function of the $E_{DC}$-dependent Rabi frequency at the level crossing, $\bar\Omega(E_{DC})$, with analytic expressions given in the appendix. For these dressing schemes, the Hubbard parameters $U$ and $E$ are overlaps of the $q = 0$ component of the dipole-dipole potential, Eq. (4), in the basis of Wannier functions [46], while the $W$ terms involve overlaps of the $q = \pm 2$ components of the dipole-dipole potential, Eq. (6). All dipolar parameters $U$, $E$, and $W$ have an approximately $1/|i-j|^3$ decay between lattice sites, and the $W$ terms additionally feature a dependence on the azimuthal angle $\phi$. Other dressing schemes, for example those involving both $p = \pm 1$ polarizations, divide this angular dependence between $U$, $E$, and $W$. The Hamiltonian Eq. (19) has a U(1) symmetry generated by the total number operator $\hat N = \hat N_0 + \hat N_1$ with $\hat N_\sigma = \sum_i \hat n_{i\sigma}$. The $W$ term breaks number conservation within each internal state, but preserves the parities defined by $\hat P_\sigma = \exp(-i\pi\hat N_\sigma)$. Due to the U(1) symmetry, the two parities are redundant, both being proportional to $\hat P = \exp[-\frac{i\pi}{2}(\hat N_0 - \hat N_1)]$, which is the parity of the number difference between internal states. Hence, the internal symmetry of the model Eq. (19) is U(1)×Z$_2$ [47]. We can interpret the $W$ term as a hopping of pairs between two quantum wires or layers, where the wire indices correspond to the dressed states of the molecule, see Fig. 5. Because exchange of rotational quanta occurs only when the dipoles oscillate in phase, and because of the particular geometry, dipolar excitation of a single molecule is forbidden. Single-excitation processes which break the Z$_2$ symmetry can be included systematically by other choices of geometry or field polarization, see Sec. III C and the appendix.

B. Mapping to a pure spin model

In ultracold gases it is often easier to achieve low temperatures for the internal degrees of freedom even when the motional degrees of freedom remain hot. As an example, a collection of molecules all prepared in the same quantum state has zero effective spin temperature. Provided that the motional temperature of the molecules is lower than the rotational excitation energy (typically on the order of a few hundred millikelvin), the spin and motional temperatures are effectively decoupled, and only the former is important for the dynamics. Hence, a natural first step for many-body physics is to freeze the motional degrees of freedom by loading into a deep optical lattice and consider the dynamics of only the internal degrees of freedom [12]. In the limit in which the quasi-2D confinement is so deep that the tunneling is negligible,
Eq. (19) becomes a long-range and anisotropic spin model, Eq. (20), with effective magnetic field $h_i = \delta + \frac{1}{4}\sum_{j\neq i}\big(U^{00}_{i,j} - U^{11}_{i,j}\big)\sum_\sigma n_{j\sigma}$ at site $i$; we have ignored a constant term, and $W^R_{i,j}$ ($W^I_{i,j}$) denotes the real (imaginary) part of $W_{i,j}$, the $\hat S^+_i\hat S^+_j$ coupling. Again, we would like to stress that Eq. (20) is defined on any 2D lattice geometry. The Hamiltonian Eq. (20) does not conserve magnetization due to the non-zero $W^R$ and $W^I$ terms easily accessible in our scheme, in contrast to the XXZ models realized with alkali dimer molecules [12,25]. In deep optical lattices, where the Wannier functions become well-localized [46], the dipolar coupling constants can be approximated as $U^{\sigma\sigma'}_{i,j} \approx U^{\sigma\sigma'}/|i-j|^3$, $E_{i,j} \approx E/|i-j|^3$, and $W_{i,j} \approx W e^{-2i\phi_{i,j}}/|i-j|^3$, where $\phi_{i,j}$ is the angle between the vector connecting sites $i$ and $j$ and the $x$ axis, and $U$, $E$, and $W$ are related to geometrical factors and expected dipole moments. With these approximations, we can rewrite Eq. (20) in a form which makes the spatial anisotropy of the model more explicit. Some simplification of Eq. (20) occurs in one spatial dimension (1D), which corresponds, e.g., to taking a single row of the 2D square lattice shown in Fig. 2(b). Such a reduction can be performed in experiment by applying electric field gradients to select a single row of a 2D optical lattice. In a 1D geometry, we can always choose coordinates such that the $x$ axis lies along the lattice direction, and so $W^I_{i,j}$ vanishes and $W^R_{i,j}$ is a monotonically decreasing function of $|i-j|$. Here, Eq. (20) reduces to a spin-1/2 XYZ Heisenberg model in a longitudinal field,

$\hat H = \sum_{i<j}\big[J^X_{i,j}\hat S^x_i\hat S^x_j + J^Y_{i,j}\hat S^y_i\hat S^y_j + J^Z_{i,j}\hat S^z_i\hat S^z_j\big] + \sum_i h_i\hat S^z_i,$

where $J^X_{i,j} = (E+W)/|i-j|^3$, $J^Y_{i,j} = (E-W)/|i-j|^3$, and $J^Z_{i,j} = (U^{00}+U^{11}-2U^{01})/|i-j|^3$. Hence, the degree of spin anisotropy is tunable by changing the ratio between $E$ and $W$, see Fig. 5. The phase diagram of the nearest-neighbor version of this model has been investigated recently in Ref. [20], displaying Berezinsky-Kosterlitz-Thouless, Ising, first-order, and commensurate-incommensurate phase transitions. Further, considering the case in which the coefficient of $\hat S^z_i\hat S^z_j$ vanishes [54], Eq. (20) becomes a long-ranged version of the XY model in a longitudinal field. The nearest-neighbor XY model is equivalent to the Kitaev wire Hamiltonian [48], which has connections to topological phases and Majorana fermions. It was also pointed out that long-range interactions may not qualitatively change the nature of topological phases [26]. Finally, we note that in the limit of motionally quenched molecules, the quantum statistics of the underlying molecules are unimportant; one can also realize Eq. (20) with bosonic or fermionic STMs.

V. CONCLUSION

We have identified a general mechanism for generating level crossings between internal states with a finite transition dipole matrix element in symmetric top molecules by a combination of microwave dressing and the linear Stark effect. Such a pair of near-degenerate dressed states forms an effective spin-1/2. The dipole-dipole interaction generates resonant pair transitions between such nearly degenerate levels. By appropriate choices of geometry and field polarization, transfer of a single molecule between internal states can be forbidden, and the resulting many-body system features tunable degrees of spatial and spin-component anisotropy. Using only a single microwave frequency, we show rich tunability of the effective model parameters over a wide range.
As special cases of our general many-body description, we show that Heisenberg XYZ and XY spin models arise when molecules are confined to a one-dimensional line in a deep optical lattice. Our results provide a new route towards the study of unconventional quantum magnetic phenomena by harnessing the rich internal structure of molecules.

We acknowledge useful conversations with Christina Kraus and Ryan Mishmash during initial development and exploration of the ideas in this work, and thank Kaden Hazzard and Ana Maria Rey for their comments on the manuscript. This work was supported by the AFOSR under grants FA9550-11-1-0224 and FA9550-13-1-0086, ARO grant number 61841PH, ARO-DARPA-OLE, and the National Science Foundation under Grants PHY-1207881, PHY-1067973, PHY-0903457, PHY-1211914, PHY-1125844, and NSF PHY11-25915. We also acknowledge the Golden Energy Computing Organization at the Colorado School of Mines for the use of resources acquired with financial assistance from the National Science Foundation and the National Renewable Energy Laboratories. We thank the KITP for hospitality.

Note added: After this work was submitted, we learned of a related proposal by Glaetzle et al. for generating XYZ Heisenberg models using Rydberg atoms (arXiv:1410.3388). We believe Rydberg atoms offer an exciting alternate scenario for the realization of anisotropic XYZ models, complementary to those described in this paper.

(A9) In addition, terms of the form $\langle M'M|\hat H_{DDI}|MM\rangle$, which cause a transition $|M\rangle \to |M'\rangle$ for one molecule while the other molecule's state is unchanged, are also present. All such terms are proportional to $\sin\theta\cos\theta$ in the present dressing scheme, and so vanish for the 2D geometry of Fig. 2(b), in which the DC electric field is perpendicular to the plane. Note that this geometry only refers to the orientation of the DC electric field with respect to the plane, and makes no assumptions about the lattice structure in the plane. In Eqs. (A3)-(A8) we explicitly show the θ-dependent factors to provide clarity about the origins of each term. In what follows, we assume θ = π/2 as in Fig. 2(b). To see how these matrix elements can be modified by polarization, let us consider that we have polarization $p = 1$ and consider $M' = M + 1$, as shown in Fig. 4. Here, again neglecting terms which vanish in the 2D geometry of Fig. 2
8,388.8
2014-10-15T00:00:00.000
[ "Physics" ]
Class-Specific Interferometric Phase Estimation Using Patch-Based Importance Sampling Interferometric phase (InPhase) estimation, that is, the denoising of modulo-$2\pi$ phase images from sinusoidal $2\pi$-periodic and noisy observations, is a challenging inverse problem with wide applications in many coherent imaging techniques. This paper introduces a novel approach to InPhase restoration based on an external data set and importance sampling. In the proposed method, a class-specific data set of clean patches is clustered using a mixture of circular symmetric Gaussian (csMoG) distributions. For each noisy patch, a "home-cluster", i.e., the closest cluster in the external data set, is identified. An InPhase estimator, termed the Shift-invariant Importance Sampling (SIS) estimator, is developed using the principles of importance sampling. The SIS estimator uses samples from the home-cluster to perform the denoising operation. Both the clustering mechanism and the estimation technique are developed for complex-valued signals by taking into account patch shift invariance, which is an important property for an efficient InPhase denoiser. The effectiveness of the proposed algorithm is shown using experiments conducted on a semi-real InPhase data set constructed using human face images and medical imaging applications involving real magnetic resonance imaging (MRI) data. It is observed that, in most of the experiments, the SIS estimator shows better results compared to the state-of-the-art algorithms, yielding a minimum improvement of 1 dB in peak signal-to-noise ratio (PSNR) for low to high noise levels.

I. INTRODUCTION

Coherent imaging techniques, especially interferometry, play an important role in many present-day technologies. Interferometry is a family of techniques that measure the phase differences of two or more waves (electromagnetic or acoustic) to infer physical parameters, such as small displacements, refractive index, topography of an irregular surface, etc. It has many applications in remote sensing [1], [2], medical diagnostics [3]-[5], surveillance [6], [7], weather forecasting [8], [9], and photography [10]. Some of the very relevant technologies in which phase imaging is a crucial part include interferometric synthetic aperture radar and sonar (InSAR/InSAS) [1], [2], [11]-[14], magnetic resonance imaging (MRI) [15], [16], and optical interferometry.

Often, in a coherent imaging system, a physical quantity of interest is coded in a phase image. However, inference of phase is a challenging inverse problem due to the degradation in the sensing mechanism. Let $z \in \mathbb{C}$ be a complex-valued observation in a noiseless coherent imaging system, which is related to the phase $\phi \in \mathbb{R}$ via

$z = e^{j\phi}$,   (1)

where $j = \sqrt{-1}$. A major issue with this observation model is that the measured signal $z$ depends only on the principal (wrapped) value of the absolute phase $\phi$. Such a sensing mechanism provides only a wrapped version of $\phi$, usually defined in the interval $[-\pi, \pi)$. We denote the wrapping operation as

$\phi_{2\pi} = \mathcal{W}(\phi)$,   (2)

where $\mathcal{W}$ is the wrapping operator that performs modulo-$2\pi$ wrapping, given by

$\mathcal{W}(\phi) = \mathrm{mod}(\phi + \pi, 2\pi) - \pi$.   (3)

The wrapped phase is termed "interferometric phase", abbreviated as InPhase ($\phi_{2\pi}$) in this paper. Inference of the original phase ($\phi$) from the observed InPhase ($\phi_{2\pi}$) is known as phase unwrapping (PU).
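The wrapping operator (3) is straightforward to implement; the following minimal NumPy sketch applies it and checks it against the angle of $e^{j\phi}$, with which it agrees up to the boundary convention at $\pm\pi$.

    import numpy as np

    def wrap(phi):
        """Modulo-2*pi wrapping operator W of Eq. (3), mapping into [-pi, pi)."""
        return np.mod(phi + np.pi, 2 * np.pi) - np.pi

    phi = np.array([0.0, 3.5, -4.0, 7.0])
    print(wrap(phi))                    # wrapped phases in [-pi, pi)
    print(np.angle(np.exp(1j * phi)))   # same values; angle() uses (-pi, pi]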
In addition to the above-mentioned non-linear distortion, the measured phase in a practical acquisition system is further corrupted by the noise inherent to the acquisition mechanism and the electronic equipment. Let a noisy observation be denoted as

$z = e^{j\phi} + n$,   (4)

where $n \in \mathbb{C}$ is a complex-valued perturbation, often termed noise. Model (4) captures the essential features of most interferometric phase estimation problems, including magnetic resonance imaging (MRI) [13], [14], [19]-[21]. The observed noisy InPhase can be written as $\phi^{\mathrm{noisy}}_{2\pi} = \mathcal{W}(\phi^{\mathrm{noisy}}) = \mathrm{angle}(z)$. Here, the observed InPhase $\phi^{\mathrm{noisy}}_{2\pi}$ is not only wrapped but also noisy. Estimation of the noiseless wrapped phase ($\phi_{2\pi}$) from the noisy wrapped observation ($\phi^{\mathrm{noisy}}_{2\pi}$) is termed phase denoising (PD) or InPhase estimation. Though there have been a few attempts to address PD and PU jointly [14], [22]-[24], the common strategy is to follow a two-step approach in which PD is followed by PU. This paper focuses on PD, the first step of the two-step approach. We emphasize that PD is very different from natural image denoising due to the wrapping discontinuities present in InPhase images.

A. RELATED WORK

One of the simplest and most straightforward strategies in PD is to assume that the phase is constant in a small local neighbourhood. Local polynomial approximation (LPA) [25] and PEARLS [26] are two representative algorithms from this class. LPA approximates the phase in a rectangular window using a zero-order polynomial. The denoising performance of such algorithms depends crucially on the size of the window, and LPA's performance is considerably affected by its fixed-size windows. PEARLS is another polynomial approximation algorithm that provides an adaptive-window framework to address LPA's limitations. It uses a zero-order LPA for window size selection and a first-order LPA to perform the filtering operations. However, PEARLS fails to deal with sharp discontinuities, since first-order polynomials are not adequate to model such structures.

Many recent image denoising methodologies benefit from the study of self-similarity and sparsity, often using patch-based techniques [27]-[29]. Natural images exhibit non-local self-similarity, meaning that they contain many similar patches at different locations. Among the self-similarity-based algorithms, BM3D [30] is a famous representative example, in which image denoising is accomplished through collaborative filtering of groups of self-similar patches. Although BM3D is not directly applicable to PD, there are a few variants of it proposed exclusively for InPhase applications [31]-[33]; among them, CD-BM3D and ImRe-BM3D [32] are state-of-the-art. The CD-BM3D algorithm performs the collaborative filtering through a 3D high-order SVD (HOSVD); on the other hand, ImRe-BM3D incorporates a 4D HOSVD-based filter and treats the real and imaginary parts of the complex-valued signal as a pair. One major disadvantage of BM3D-based algorithms is that they do not incorporate a data-adaptive transform, which limits their performance for images with fine details, singularities, or sharp and curved edges. Although some data-adaptive versions of BM3D have been proposed [34], they are not developed for complex-valued signals. SpInPhase [21] and NL-MoGInPhase [35] represent another type of recent PD algorithms that exploit self-similarity in complex-valued images. SpInPhase is a dictionary-based denoising method that utilizes sparse coding in the complex domain.
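As a sketch of observation model (4), the snippet below generates a noisy wrapped observation from an absolute phase image; the noise level and image size are illustrative choices echoing the experiments reported later, not prescribed values.

    import numpy as np

    rng = np.random.default_rng(0)

    def noisy_inphase(phi, sigma):
        """Simulate model (4): z = exp(j*phi) + n with circular complex Gaussian
        noise of variance sigma^2, and return the noisy InPhase angle(z)."""
        n = (sigma / np.sqrt(2)) * (rng.standard_normal(phi.shape)
                                    + 1j * rng.standard_normal(phi.shape))
        z = np.exp(1j * phi) + n
        return np.angle(z)

    phi = np.linspace(0, 12, 90 * 65).reshape(90, 65)  # smooth ramp, wraps ~2 times
    obs = noisy_inphase(phi, sigma=0.9)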
In NL-MoGInPhase, PD is accomplished by modelling the complex-valued phase patches using mixtures of Gaussian (MoG) densities. The data-adaptive capabilities of these algorithms, obtained by learning a dictionary or a MoG from the noisy data, make their performance better than that of CD-BM3D and ImRe-BM3D [32]. Although data-driven representations like SpInPhase and NL-MoGInPhase yield very powerful PD algorithms, the fact that, in many applications, the phase is locally smooth makes fixed representations in the Fourier domain still powerful tools in PD. Such locally smooth phase images can be approximated by a few windowed Fourier transform (WFT) coefficients. Windowed Fourier filtering (WFF) [36] is a relevant work in this category. Although WFF yields state-of-the-art PD performance in many InPhase applications, it is limited by its fixed resolution due to the lack of an adaptive window. SURE-fuse WFF [37] is a recent PD algorithm based on a multi-resolution WFF analysis to overcome the fixed-resolution limitation of the original WFF. In that algorithm, WFF estimates having different resolutions are fused in a linear, pixel-wise, and data-adaptive manner by minimizing an unbiased estimate of the mean square error derived using Stein's lemma. However, SURE-fuse WFF suffers from high computational complexity, since it requires the computation and fusion of WFF estimates having different window sizes.

One common drawback of most of the PD algorithms discussed above is that they lack an algorithmic structure to make use of prior knowledge from an external database. In many practical applications, the image to be denoised is known to belong to a certain class. In such a situation, a large data set of clean images from the same class, termed an "external data set" [38]-[40], may be available, which can be used to tailor a statistical prior to enhance the denoising performance. Further development of this idea can be observed in [41]-[43], in which the principles of importance sampling (IS) are exploited to approximate the intractable minimum mean square error (MMSE) integral, which is a usual roadblock in image denoising. An important contribution of those works is that they bring flexibility to image denoising irrespective of the noise statistics. Recently proposed class-specific image denoising algorithms [41]-[43] were mainly developed for real-valued images, and they do not adapt well to PD, in which the underlying signals are complex-valued. Although SpInPhase [21] and NL-MoGInPhase [35] can be modified to exploit a class-specific external data set [44], they were not originally developed to learn the dictionary (or MoG) from external noise-free data. This limitation motivates the need for a PD algorithm that can make use of clean external data sets, often available in many practical applications. Figure 1 illustrates a motivational application of class-specific PD, aiming to reduce MRI scanning time, which is an active research topic in medical imaging [45], [46]. In Fig. 1, a noisy MRI interferogram, possibly obtained from a quick scan [47], of a person is denoised using clean MRI data from other people. A similar approach is applicable in other InPhase scenarios; for example, a noisy InSAR image of a mountain could be denoised using the InSAR data of other mountains. To the best of our knowledge, such class-specific strategies have not been explored in PD, and this paper is the first to advance in this direction.
B. PAPER CONTRIBUTIONS

In this paper, we investigate how to exploit an external data set to develop a class-specific PD algorithm. Inspired by [41]-[43], we propose a new estimation technique for InPhase images, which involves two major parts:
• A shift-invariant clustering mechanism, developed using csMoG densities, to cluster external class-specific InPhase patches;
• A patch-based shift-invariant InPhase estimator, developed using the IS technique, to denoise noisy InPhase images using the clustered external patches.

The remainder of this paper is organized as follows. Section II provides a brief introduction to importance sampling, necessary for the subsequent algorithm derivation. In Section III, the class-specific InPhase estimator is developed. The experiments and results are provided in Section IV, and the paper concludes in Section V.

II. INTRODUCTION TO IMPORTANCE SAMPLING

Let us consider a random variable $X$ distributed as $X \sim p_X$ and a function $f$ that is zero outside a set $A$. In many applications, it can happen that the set $A$ is "small" or lies in the tail of the distribution $p_X$, as shown in Fig. 2. In such cases, the probability $P(A) = \int_A p_X(x)\,dx$ is small. Now, to compute an estimate of any statistical characteristic of $f(X)$, say the expectation $E_{p_X}[f(X)]$, plain Monte Carlo sampling from the distribution $p_X$ is not an effective method. This is because plain sampling from $p_X$ could fail to provide even one sample from the set $A$. This is a typical problem that appears in many fields, namely, high energy physics, Bayesian inference, rare event simulation for finance, insurance, etc. [48, Chapter 9]. An intuitive approach is to sample from a different distribution, say $q$, that yields more samples from the "important" region, that is, the set $A$ in the above example. Such a sampling strategy is termed importance sampling (IS). Let us consider the expectation formula

$\mu = E_{p_X}[f(X)] = \int_D f(x)\, p_X(x)\, dx$,   (6)

where $D \subseteq \mathbb{R}^d$ is the domain of $p_X$. Let $q$ be another positive probability density function defined on $\mathbb{R}^d$, as shown in Fig. 2. IS is based on rewriting (6) by making $q$ the sampling distribution instead of $p_X$:

$\mu = \int_D f(x)\,\frac{p_X(x)}{q(x)}\, q(x)\, dx = E_q\!\left[f(X)\,\frac{p_X(X)}{q(X)}\right]$,   (7)

where $E_q$ denotes the expectation operator under the density $q$. Here $p_X$ is the nominal density and $q$ is the importance density. A Monte Carlo approximation based on (7) yields the IS estimate of $\mu = E_{p_X}[f(X)]$ given by

$\hat{\mu}_q = \frac{1}{n}\sum_{i=1}^{n} f(x_i)\,\frac{p_X(x_i)}{q(x_i)}$,   (8)

where the samples $x_1, \ldots, x_n \sim q$. By the strong law of large numbers, $P(\lim_{n\to\infty}\hat{\mu}_q = \mu) = 1$ [48, Chapter 9]. In the following section, we propose a new PD algorithm by adopting the IS strategy.

III. CLASS-SPECIFIC InPhase ESTIMATION USING PATCH-BASED IMPORTANCE SAMPLING

Our proposal is based on the assumption that an external clean database of the same class as the InPhase image to be denoised is available. Following the standard procedure in a patch-based approach, patches are extracted from both the noisy image and the external database. In this work, we extract square-shaped patches of size $\sqrt{m} \times \sqrt{m}$ from the images in an overlapped manner with unit stride. The reader may refer to [21], [44] for further details on the extraction and aggregation of patches. Let $\{z_i\}_{i=1}^{N_p}$ be the patches extracted from a noisy image $z_{\mathrm{image}} \in \mathbb{C}^{N_1 \times N_2}$, where $N_1 \times N_2$ is the pixel size of the noisy image and $N_p$ is the total number of patches extracted from it. A noisy patch $z_i$, using the vectorized form of (4), can be written as

$z_i = e^{j\phi_i} + n_i$,   (9)

where $z_i, n_i \in \mathbb{C}^m$ and $\phi_i \in \mathbb{R}^m$. The noise $n_i$ is assumed to be i.i.d. zero-mean circular Gaussian [49] with variance $\sigma^2$, i.e., $n_i \sim \mathcal{CN}(0, \sigma^2 I_m)$.
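The following self-contained sketch illustrates the IS estimate (8) on a toy rare-event problem, where the set $A$ is a Gaussian tail; the densities and the shift of the importance density are arbitrary illustrative choices, not quantities from this paper.

    import numpy as np
    from math import erfc, sqrt

    rng = np.random.default_rng(1)

    # Toy setup: p_X = N(0,1), f(x) = 1 on the rare set A = {x > 4.5}, else 0.
    # Plain MC almost never hits A; IS samples from q = N(5,1), shifted onto A.
    p = lambda x: np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)
    q = lambda x: np.exp(-(x - 5.0)**2 / 2) / np.sqrt(2 * np.pi)
    f = lambda x: (x > 4.5).astype(float)

    n = 100_000
    x_p = rng.standard_normal(n)            # samples from p_X
    x_q = 5.0 + rng.standard_normal(n)      # samples from q

    mu_mc = np.mean(f(x_p))                       # plain Monte Carlo
    mu_is = np.mean(f(x_q) * p(x_q) / q(x_q))     # IS estimate, Eq. (8)

    print(mu_mc, mu_is, 0.5 * erfc(4.5 / sqrt(2)))  # truth: P(X > 4.5) ~ 3.4e-06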
All the operations in (9) are to be understood component-wise. Let the set of external complex-valued patches be denoted as $\{x_i\}_{i=1}^{N_{\mathrm{ext}}}$. We propose an InPhase estimation algorithm which comprises the following steps: i) the external patches are clustered using a suitable clustering algorithm; ii) for each noisy patch, an external "home-cluster" is identified as the importance region; iii) InPhase estimation is accomplished by an estimator, developed using the IS strategy, which makes use of samples drawn from the home-cluster. These three steps are discussed in the following subsections.

A. CLUSTERING THE EXTERNAL DATABASE

The mixture of Gaussians (MoG) has been identified as a good model for multivariate complex-valued InPhase patches [35]. In this work, clustering is accomplished by fitting a MoG to the external patches. Following [35], we assume circular symmetry (cs) for the Gaussian components of the MoG, which is a reasonable assumption that yields state-of-the-art PD performance [35]. The cs Gaussian density for a signal $y \in \mathbb{C}^m$ is given by

$\mathcal{N}_c(y; \Sigma) = \dfrac{1}{\pi^m \det(\Sigma)}\exp\!\left(-y^H \Sigma^{-1} y\right)$,   (10)

where $\Sigma := E[yy^H]$, $E[yy^T] = 0$, and $E[y] = 0$. The cs assumption has the following advantages: i) it simplifies the expression for a Gaussian density involving complex-valued variables; ii) the cs Gaussian has a "shift-invariance" property, which is desirable in the context of InPhase patch modelling. To explain shift-invariance, let us consider a patch $x_i = e^{j\psi_i}$ and another patch $x'_i = e^{j(\psi_i + \mathbf{1}\theta)}$, where $\mathbf{1} \in \mathbb{R}^m$ is a vector with all its elements equal to 1. Here $x'_i$ is obtained by applying a common phase shift $\theta$ to the components of $x_i$, as shown in Fig. 3. An efficient representation should treat $x_i$ and $x'_i$ as similar patches irrespective of the common phase shift $\theta$. This, in turn, facilitates the exploitation of the self-similarity between $x_i$ and $x'_i$ in the InPhase estimation step. We term this property shift-invariance, and it is straightforward to verify that the expression (10) is shift-invariant.

In this work, we use the circular symmetric MoG (csMoG), i.e., a MoG whose Gaussian components are circularly symmetric, to model the InPhase patches. The csMoG density is given by

$p(y) = \sum_{k=1}^{K} \alpha_k\, \mathcal{N}_c(y; \Sigma_k)$,   (11)

where $K$ is the number of components of the mixture model, and $\Sigma_k$ and $\alpha_k$ are the covariance matrix and the weight of the $k$-th Gaussian component, respectively, with $0 \le \alpha_k \le 1$ and $\sum_{k=1}^{K}\alpha_k = 1$. The parameters of the csMoG are learned using the classical expectation maximization (EM) algorithm run on the external patches. The components of the learned csMoG are associated with $K$ different clusters. The cluster labels of the external patches are identified using the "posterior weights" ($\gamma_{ik}$) learned with the EM algorithm. We remark that $\gamma_{ik}$ indicates the "responsibility" of a particular MoG component $k$ in the generation of the patch $x_i$ and is defined as

$\gamma_{ik} = \dfrac{\alpha_k\, \mathcal{N}_c(x_i; \Sigma_k)}{\sum_{l=1}^{K}\alpha_l\, \mathcal{N}_c(x_i; \Sigma_l)}$.   (12)

After the EM algorithm converges, the cluster label of an external patch $x_i$, denoted $L(x_i)$, is assigned as the cluster that has the maximum value of $\gamma_{ik}$, i.e., $L(x_i) = \arg\max_k \gamma_{ik}$. The pseudo-code for the clustering is provided in Algorithm 1, whose main part is the EM algorithm. The algorithm can easily be derived by following a standard EM derivation adapted to the csMoG density (see [50, Chapter 9] for further details).

B. IDENTIFICATION OF THE HOME-CLUSTER FOR THE NOISY PATCHES

Section III-A discussed the clustering of the external database. Our strategy is to make use of these learned clusters in the denoising process.
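A minimal sketch of the csMoG machinery of (10) and (12) is given below; the two-component model and its parameters are made up purely for illustration, and a full EM loop (Algorithm 1) would iterate these responsibility computations with weight and covariance updates.

    import numpy as np

    def cs_gauss_logpdf(y, Sigma):
        """Log-density of the circular symmetric complex Gaussian, Eq. (10)."""
        m = y.shape[0]
        sign, logdet = np.linalg.slogdet(Sigma)
        quad = np.real(np.conj(y) @ np.linalg.solve(Sigma, y))
        return -m * np.log(np.pi) - logdet - quad

    def responsibilities(y, alphas, Sigmas):
        """Posterior weights gamma_k of Eq. (12) for one patch y."""
        logs = np.array([np.log(a) + cs_gauss_logpdf(y, S)
                         for a, S in zip(alphas, Sigmas)])
        logs -= logs.max()              # stabilize before exponentiating
        g = np.exp(logs)
        return g / g.sum()

    # Tiny illustration with a made-up 2-component model on patches of size m = 3
    m = 3
    rng = np.random.default_rng(2)
    Sigmas = [np.eye(m, dtype=complex), 2.0 * np.eye(m, dtype=complex)]
    alphas = [0.6, 0.4]
    y = np.exp(1j * rng.uniform(-np.pi, np.pi, m))
    print(responsibilities(y, alphas, Sigmas))   # cluster label = argmax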
For each noisy patch $z_i$, a home-cluster is identified, and $n$ samples from the home-cluster are randomly chosen to denoise $z_i$. Each cluster is related to one Gaussian component of the MoG. In order to identify the home-cluster for a patch $z_i$, the probability density values of $z_i$ under all the Gaussian components are calculated, and the one with maximum density value is considered the home-cluster:

$L(z_i) = \arg\max_k\; \alpha_k\, \mathcal{N}_c(z_i; \Sigma_k + \sigma^2 I_m)$,   (14)

where $\Sigma_k$ is the covariance matrix estimated from the clean external patches using Algorithm 1 (whose main steps are: E-step, computing the responsibilities; M-step, for $k = 1, 2, \ldots, K$, updating $\alpha_k$ and $\Sigma_k$; log-likelihood evaluation; and convergence check). In (14), the covariance matrix corresponding to the noisy data is obtained by adding the noise variance $\sigma^2$ to $\Sigma_k$.

C. InPhase ESTIMATION WITH SHIFT-INVARIANT PATCHES

Let the distributions of the clean and noisy patches be $e^{j\psi} \sim p_\psi$ and $z \sim p_z$, respectively. Given the prior $p_\psi$, our objective is to estimate $\phi^i_{2\pi} := \mathcal{W}(\phi^i)$, where $\mathcal{W}$ is the wrapping operator defined in (3). We adopt the minimum risk Bayesian criterion of (15), which leads to the risk function (16). As discussed in Section III-A, for InPhase images there can be many patches that are just phase-shifted versions of each other. These patches should be identified as self-similar, apart from a phase shift, to improve the efficiency of the estimator. This aspect was considered in Section III-A while developing the shift-invariant clustering. We now derive a shift-invariant InPhase estimator. Let $\theta \in \mathbb{R}$ be a common phase shift of a patch $\phi \in \mathbb{R}^m$ and let $\phi_0 \in \mathbb{R}^m$ be the remaining part, i.e.,

$\phi = \phi_0 + \theta\mathbf{1}$.   (17)

We assume that $\phi_0$ and $\theta$ are independent random variables, i.e.,

$p_\psi(\phi) = p_{\psi_0}(\phi_0)\, p_\theta(\theta)$.   (18)

We use the same risk function as in (16); for a given patch $\phi$, dropping the patch index $i$, the risk (19) is a function of the estimate $\hat\phi$ of $\phi$. Now (19) is rearranged by using (17) and (18) into (20). Since the model (9) assumes $n \sim \mathcal{CN}(0, \sigma^2 I_m)$, the probability $p_{z|\psi_0,\theta}(z|\phi_0, \theta)$ in (20) can be written as

$p_{z|\psi_0,\theta}(z|\phi_0,\theta) = \dfrac{1}{(\pi\sigma^2)^m}\exp\!\left(-\dfrac{\lVert z - e^{j(\phi_0 + \theta\mathbf{1})}\rVert^2}{\sigma^2}\right)$.   (21)

Also, we denote the angle of the noisy patch as $\phi^{\mathrm{noisy}}_{2\pi} = \mathrm{angle}(z)$ (22). Using (21) and (22), the risk function (20) is further expanded. In the following derivations, we use three constants $c_1$, $c_2$, and $c_3$ to account for the constants independent of $\hat\phi$. In (23), we have assumed that $p_\theta$ is approximately constant on any interval of length $2\pi$; the constants accumulated from every $2\pi$ period are accommodated in $c_2$. The required estimate is defined in (24); its $k$-th component $\hat\phi^k$ is computed in the Appendix by solving equation (26), where $I_1$ is the first-order modified Bessel function of the first kind. Computing $\hat\phi^k_{2\pi}$ for all values of $k$ and stacking them leads to the vector solution (28). We remark that the expression (28) is independent of the shift $\theta$. Assuming that the class-specific external samples $\psi^1_0, \psi^2_0, \ldots, \psi^n_0 \sim p_{\psi_0}$ are available, the estimate of $\phi_{2\pi}$ can be computed as in (29). In (29), $\zeta_i$ and $b_i$ depend on the external patch $\psi^i_0$ and the noisy patch $z$. In order to understand the shift-invariance property of the estimator, let us substitute a shifted patch $\psi^i = \psi^i_0 + \theta\mathbf{1}$ instead of $\psi^i_0$ in (29). In this case, it can easily be verified that the corresponding $\zeta_i$ becomes $\zeta_i - \theta$, so the corresponding exponent term of the estimator is unchanged, and the estimator automatically handles the shift contained in $\psi^i$. By applying this shift-invariance property, (29) can be modified into (30), which allows direct sampling from the external database without manipulating the shift, i.e., sampling from the density $p_\psi$.
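The sketch below illustrates two ingredients of this section in simplified form: home-cluster selection as in (14), and a posterior-weighted circular-mean patch estimate that only conveys the flavor of (29)-(30) rather than reproducing the exact Bessel-function expressions; the weighting scheme and all names here are simplifications, not the paper's exact estimator.

    import numpy as np

    def home_cluster(z, alphas, Sigmas, sigma):
        """Pick the component maximizing alpha_k * N_c(z; Sigma_k + sigma^2 I),
        as in Eq. (14); the constant -m*log(pi) is dropped from the comparison."""
        m = z.shape[0]
        best, best_val = 0, -np.inf
        for k, (a, S) in enumerate(zip(alphas, Sigmas)):
            Sn = S + sigma**2 * np.eye(m)
            sign, logdet = np.linalg.slogdet(Sn)
            val = np.log(a) - logdet - np.real(np.conj(z) @ np.linalg.solve(Sn, z))
            if val > best_val:
                best, best_val = k, val
        return best

    def simple_patch_estimate(z, psis, sigma):
        """Simplified posterior-weighted circular mean over external phase samples
        psis (an n x m array): weights are the Gaussian likelihoods of model (9).
        In practice a log-sum-exp formulation would be used for stability."""
        w = np.array([np.exp(-np.linalg.norm(z - np.exp(1j * p))**2 / sigma**2)
                      for p in psis])
        w /= w.sum()
        return np.angle(np.sum(w[:, None] * np.exp(1j * psis), axis=0))

    # Example usage with toy data
    m = 4
    rng = np.random.default_rng(4)
    psis = rng.uniform(-np.pi, np.pi, size=(50, m))    # 50 external phase patches
    z = np.exp(1j * psis[0]) + 0.3 * (rng.standard_normal(m)
                                      + 1j * rng.standard_normal(m))
    print(simple_patch_estimate(z, psis, sigma=0.3))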
D. SHIFT-INVARIANT InPhase ESTIMATION USING IMPORTANCE SAMPLING

The estimator (30) assumes that the external data samples $\{x_i = e^{j\psi_i}\}_{i=1}^{n}$ are available to denoise a patch $z$. Let us consider a large external database $\{x_i = e^{j\psi_i}\}_{i=1}^{N_{\mathrm{ext}}}$ having $N_{\mathrm{ext}}$ patches, where $N_{\mathrm{ext}} \gg n$. As discussed in Section II, a random sampling of $n$ patches from a large database may fail to yield even one patch for which $I_1(b_i)$ in (30) is significantly different from zero. In order to tackle this issue, we use the IS strategy. To develop an IS strategy, we make use of the learned clusters of the external patches. From (11), $p_\psi(e^{j\psi_i})$ can be written as in (31). Now, we choose an importance density $q_\psi$, defined in (32), based on the home-cluster of the noisy patch computed using (14). Let $k^* = L(z)$ be the home-cluster of patch $z$. Using (31) and (32), the estimate (30) is evaluated as in (33). The right-hand side of (34) is evaluated using (12), which yields (35). Inserting (35) into (33) leads to (36). We emphasize that the expectation operation in (36) is w.r.t. the importance density $q_\psi$. The IS estimator of $\phi_{2\pi}$ can be written as in (37), where $\alpha_{k^*}$ and $\gamma_{ik^*}$ are easily computed from Algorithm 1 and expression (14). The estimator (37) is termed the Shift-invariant Importance Sampling (SIS) estimator. The pseudo-code of the proposed SIS estimation algorithm, summarizing the discussions in Sections III-A to III-C, is shown in Algorithm 2, where the $e^{j\psi_l}$'s are drawn from the home-cluster $k^* = L(z_i)$.

IV. EXPERIMENTS AND RESULTS

In this section, we illustrate the effectiveness of the proposed SIS estimator. Two types of data sets are used: i) semi-real InPhase data constructed from human face images and ii) real MRI interferograms.

A. EXPERIMENTS USING THE SEMI-REAL DATA SET

Let us consider a complex-valued image $e^{j\phi_{\mathrm{image}}} \in \mathbb{C}^{N_1 \times N_2}$. Here, we consider human face images (from the FEI face data set [51]) as $\phi_{\mathrm{image}}$. We would like to emphasize that the data considered here are not phase images collected by means of any coherent imaging technique; they are real images of human faces captured using normal cameras. All the images considered here are of size 90 × 65. The pixel values are scaled to the range 0 to 9 to ensure a maximum number of wrappings equal to 1. Face images from 4 different persons, shown in Figs. 4a to 4d, are considered as the absolute phase $\phi_{\mathrm{image}}$. The InPhase images, i.e., $\mathcal{W}(\phi_{\mathrm{image}})$, are constructed from these four face images using the wrapping operator defined in (3). These images are shown in Figs. 4e to 4h. Since these InPhase images are synthetically created from real images, we term them "semi-real" data. For the external database, 81 face images of the same size and the same range of pixel values as the test images are considered. The external data set and the corresponding InPhase images are shown in Fig. 5a and Fig. 5b, respectively. The parameter settings are as follows: patch size 10 × 10, number of clusters K = 50, and number of patch samples used for the SIS estimator in (37) n = 1000. Observations are generated as per the model (9) for low to high levels of noise corresponding to σ ∈ {0.5, 0.6, 0.7, 0.8, 0.9, 1}. Observations at all these noise levels are denoised separately for each of the four test images in Figs. 4a to 4d. We choose 4 different algorithms against which to compare the performance of the SIS estimator: i) CD-BM3D [32], ii) WFF [36], iii) SpInPhase [21], and iv) SpInPhaseExt [21].
Although the first three algorithms do not make use of an external data set, they are selected for the comparison since they are state-of-the-art in InPhase denoising. The fourth algorithm is obtained by modifying SpInPhase [21], and we term it SpInPhaseExt. SpInPhaseExt learns a complex-valued sparse dictionary from the external data set and uses it for denoising the noisy images. To the best of our knowledge, there are no other recent algorithms developed for class-specific InPhase estimation. To evaluate the quality of the estimates, given that our main objective is InPhase denoising, we adopt the peak signal-to-noise ratio (PSNR), defined as

$\mathrm{PSNR} = 10\log_{10}\dfrac{(2\pi)^2 N}{\left\lVert \mathcal{W}\!\left(\hat{\phi}_{\mathrm{image},2\pi} - \phi_{\mathrm{image}}\right)\right\rVert^2}$ [dB],   (40)

where $\phi_{\mathrm{image}}$ is the true unwrapped phase image, $\hat{\phi}_{\mathrm{image},2\pi}$ is the estimated InPhase, $N$ is the number of pixels, and $\mathcal{W}$ is the wrapping operator defined in (3).

In Figs. 6a to 6e, the images estimated from the noisy face-1 (Fig. 4i, σ = 0.9) using the various algorithms are shown. From these estimates, it can be observed that the ability of SIS to preserve minute details is much better than that of the other estimators. For instance, the fine details associated with the eye region are either lost or only partially preserved in the other estimates, whereas SIS retains these details. This claim is well supported by the PSNR value of SIS, which shows more than 2 dB improvement over the others. Also, the estimation error, obtained by computing the difference between the true and the estimated InPhase, is shown in Figs. 6f to 6j. The estimation error of SIS is less structured, i.e., the features of the human face are less visible in the SIS error image compared to the errors of the other estimators. This, in turn, indicates that the SIS estimate is of better quality.

Next, we compare the performance of all algorithms using all the test images (Figs. 4a to 4d) at six different noise levels. The PSNR values are plotted in Fig. 7. For the low-noise case (σ = 0.5), SIS and SpInPhase show almost equal performance. This might be due to the fact that, at low noise, SpInPhase is able to learn a good-quality dictionary. But as the noise level increases, the performance of SpInPhase degrades in comparison with SIS. When the noise is very high (σ = 1), SIS shows at least 2 dB improvement in PSNR compared to all other algorithms, for all four test cases. One remarkable observation from Fig. 7 is that SpInPhaseExt, which exploits the external data sets by learning dictionaries from them, does not perform as well as its original version SpInPhase. This indicates that the state-of-the-art dictionary-learning-based technique for PD is not well designed to exploit a class-specific external data set, highlighting the significance of SIS, since such data sets are often available in many practical applications. From this comparative study, it can be concluded that the SIS estimator outperforms all the other methods, especially when the noise level is high.

In Fig. 8, we test the performance of the SIS estimator by varying the number of clusters (MoG components). The noisy InPhase data shown in Figs. 4i to 4l (σ = 0.9) are denoised using SIS estimators with cluster numbers K ∈ {10, 20, 30, . . . , 100}. The PSNR for each K, averaged over all four test cases, is shown in Fig. 8. It can be observed that the PSNR increases from K = 10 to K = 50, and thereafter it remains almost the same with a very slight increase. Similar observations were obtained with the InPhase data at other noise levels.
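A small sketch of the quality metric, assuming the reconstruction of (40) given above (peak value $2\pi$, error measured on the wrapped difference so that full $2\pi$ jumps do not count as errors):

    import numpy as np

    def wrap(phi):
        return np.mod(phi + np.pi, 2 * np.pi) - np.pi

    def inphase_psnr(phi_true, phi2pi_est):
        """PSNR of Eq. (40) as reconstructed above, in dB."""
        err = wrap(phi2pi_est - phi_true)
        mse = np.mean(err ** 2)
        return 10 * np.log10((2 * np.pi) ** 2 / mse)

    phi = np.linspace(0, 12, 1000)
    noisy = wrap(phi + 0.1 * np.random.default_rng(3).standard_normal(1000))
    print(inphase_psnr(phi, noisy))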
We note that the number of clusters for the proposed SIS estimator was heuristically chosen to be K = 50; further research on this aspect is left as future work. In Table 1, we also consider an additional algorithm, termed the "SIS full sampling" estimator, as a reference for the time comparisons. The SIS full sampling estimator is the SIS estimator in which the importance sampling step (clustering) is discarded and full sampling is used. This means that for the estimation of each noisy patch, all the external patches (nearly half a million patches) are used, resulting in a huge run time of 2306 seconds. From Table 1, it can be verified that the SIS estimator yields very close results (≈ 0.5 dB difference) compared to full sampling in a much shorter run time of 45.02 seconds. This clearly illustrates the role of importance sampling in SIS estimation. We would like to remark that although the WFF and SpInPhaseExt algorithms are much faster than the SIS algorithm, their PSNR values are considerably lower.

B. EXPERIMENTS USING REAL MRI INTERFEROGRAMS

Next, we present a challenging experiment in which a noisy medical image of a particular person is restored using clean images from completely different persons. Restoring medical images from a large external database of previously scanned images is a challenging task which may have useful applications. For instance, in an MRI scan, the noise level depends on the scanning time. A patient may not be able to endure very long scanning sessions for medical reasons. In such a case, if the noisy images from a quick scan could be denoised with the help of an external database, it would be a great leap. Here, we open the door to such research possibilities. The experiments presented here are conducted using MRI phase images from four different persons. These images are interferograms of the head region obtained by scanning along the front, side, and top orientations (see Fig. 9). The scanning was carried out on a 1.5 T GE Signa clinical scanner operating within the Western General Hospital (WGH), University of Edinburgh [52]. A particular scanning orientation, say the front view, is taken as a specific class. The front-view interferograms of persons 1, 2, and 3 are used to construct an external data set. Patches from this external database are used to denoise interferograms of the fourth person's front-view scan. The experiment is repeated for the other two scanning orientations, i.e., side view and top view. Figure 10 illustrates the restoration of a very noisy interferogram (σ = 0.9), where the estimates have PSNR values around 26 dB. The same parameter settings as described in Section IV-A are used. The images estimated using the different algorithms from a high-noise observation (σ = 0.9) are shown in Fig. 12. We emphasize that the MRI phase images considered here are very challenging to denoise due to the presence of abrupt discontinuities. WFF, being a Fourier-transform-based PD algorithm, usually performs better for images with smooth variations, since such images have better sparse representations in the Fourier domain. Also, CD-BM3D does not perform well for images with sharp discontinuities. It can be observed from Fig. 12 that the WFF and CD-BM3D estimates are over-smoothed, and hence many minute details are lost. The visual quality of the SpInPhaseExt estimate is better compared to WFF and CD-BM3D.
However, the SIS estimates are the best among all the estimates, having at least approximately 1 dB better PSNR. The same experiments are repeated for the noise levels σ ∈ {0.5, 0.6, 0.7, 0.8, 0.9, 1}, and the PSNR values are plotted in Fig. 11. As expected, the performance of WFF and CD-BM3D is not very competitive. Although SpInPhase shows good performance at very low noise levels, its PSNR degrades quickly, as the dictionary representation of non-smooth, highly noisy images is a difficult task. SpInPhaseExt is the only algorithm that shows competitive results with SIS, but at medium to high noise levels, SIS clearly outperforms SpInPhaseExt. It is to be noted that, in these experiments, the external database is constructed using images from just 3 persons. A larger external database with more images would further improve the restoration ability of the SIS estimator.

Limitations and future work: The present work does not discuss an optimal strategy for clustering the external database. The number of clusters (K) associated with the proposed SIS estimator was decided based on heuristic tuning, as shown in Fig. 8. However, the nature of the InPhase data has a vital role in the clustering strategy. As future work, we suggest further research on optimal data-driven clustering strategies. Also, the proposed algorithm uses fixed-size square patches irrespective of the nature of the data. A generalization of the current work incorporating size- and shape-adaptive patches is expected to bring improvements in algorithm performance, to which we will devote future work.

V. CONCLUSION

A novel patch-based method for restoring class-specific InPhase images has been proposed. The new algorithm uses patches from a clean external data set. The main part of the algorithm is an estimator, termed the SIS estimator, which is developed based on importance sampling. Also, a clustering mechanism is developed to cluster the external data set by fitting a mixture of complex Gaussians. For each noisy patch, the SIS estimator identifies a home-cluster, draws samples from it, and performs the InPhase estimation. Both the clustering and the estimation technique are developed for complex-valued patches by considering the shift-invariance property. The effectiveness of the SIS estimator is illustrated through a challenging medical imaging application in which the MRI interferogram of a particular person is denoised using scan images from different persons. Also, a comparative study among the state-of-the-art PD algorithms is conducted using semi-real human face InPhase images, which concludes that SIS is a robust PD algorithm over a wide range of noise levels, yielding at least 1 dB better PSNR than the previous state of the art.
8,024.8
2020-09-02T00:00:00.000
[ "Computer Science" ]
Deformations over non-commutative base We make some remarks on deformations over non-commutative base. We describe the base algebra of versal deformations using $T^1$ and $T^2$. We will consider deformation theory over non-commutative (NC) base algebras. Such a theory is interesting because there are more deformations than the usual deformations over commutative bases. The deformations over commutative base can possibly be regarded as the 'first order' approximation of more general 'higher order' deformations. The formal theories of deformations over commutative and non-commutative bases are parallel, and the extension to the non-commutative case is simple, but some new phenomena and invariants appear.

We make some remarks on NC deformations. The first remark is that deformations over NC base are natural. This is because the differential graded algebras (DGA) which govern the deformations of sheaves are naturally non-commutative. Hence it is natural to consider deformations parametrized by NC base algebras. We will also consider the problem of convergence of formal NC deformations and the moduli space. The second remark is that we obtain 'higher order invariants', because there are more NC deformations than commutative ones, by slightly generalizing results of [13] and [4]. The last remark is that a description of the base algebra using the tangent space $T^1$ and the obstruction space $T^2$ is possible.

We use the abbreviation NC for "not necessarily commutative". In §1, we recall the definition of NC deformations, and explain how the base algebra of semi-universal NC deformations is described by a minimal $A_\infty$-algebra arising from a DGA in the case of deformations of coherent sheaves. In §2, we consider the problem of convergence and the existence of the moduli space by taking an example of deformations of linear subspaces in a linear space. In §3, we consider another example, flopping contractions of 3-dimensional manifolds, and show how invariants appear beyond those obtained by commutative deformations. We will give a description of the base algebra of the semi-universal NC deformation by using the tangent space and the obstruction space in §4.

The author would like to thank Jungkai Alfred Chen and the NCTS of National Taiwan University, where the work was partly done while the author visited there. He would also like to thank the referee for the careful reading and suggestions for improvements. This work is partly supported by JSPS Kakenhi 21H00970.

multi-pointed non-commutative deformations

We recall the non-commutative deformation theory developed by [9] (see also [3], [6]). We use NC for "not necessarily commutative". This is a generalization of the formal commutative deformation theory of [10] to the case where the base algebras are allowed to be NC.

Let $k^r$ be the direct product ring of a field $k$, and let $(Art_r)$ be the category of augmented associative $k^r$-algebras $R$ which are finite dimensional as $k$-modules and such that the two-sided ideal $M = \mathrm{Ker}(R \to k^r)$ is nilpotent. We assume that the composition of the structure homomorphisms $k^r \to R \to k^r$ is the identity. $(Art_r)$ is the category of the base spaces for $r$-pointed NC deformations.

Let $k_i \cong k$ be the $i$-th direct factor of the product ring $k^r$ for $1 \le i \le r$. $k_i$ is generated by $e_i = (0, \ldots, 1, \ldots, 0) \in k^r$,
where 1 is placed at the $i$-th entry. A left $k^r$-module $F$ has a direct sum decomposition $F = \bigoplus_{i=1}^r F_i$ as $k$-modules, where $F_i = e_i F$, and a $k^r$-bimodule has a further decomposition $F = \bigoplus_{i,j=1}^r F_{ij}$, where $F_{ij} = e_i F e_j$. $R \in (Art_r)$ is an NC Artin semi-local algebra with maximal two-sided ideals $M_i = \mathrm{Ker}(R \to k_i)$. NC deformation theory is multi-pointed because an NC semi-local algebra is not necessarily a direct product of local algebras, unlike the case of a commutative algebra.

The model case is a deformation of a direct sum of coherent sheaves $F = \bigoplus_{i=1}^r F_i$ (an $r$-pointed sheaf). The sheaves $F_i$ interact with each other, and there are more NC deformations of $F$ than those of the individual sheaves $F_i$.

Let $F$ be something defined over $k^r$ which will be deformed over $R \in (Art_r)$. An NC deformation of $F$ over $R$ is a pair $(\tilde F, \phi)$, where $\tilde F$ is "flat" over $R$ and $\phi : \tilde F \to R/M \otimes_R \tilde F$ is an isomorphism. The precise definition depends on what kind of $F$ we are considering. The set of isomorphism classes of deformations of $F$ over $R$ gives an NC deformation functor $\Phi = \mathrm{Def}_F : (Art_r) \to (Set)$.

We define an object $R_e \in (Art_r)$ as a generalization of the ring of dual numbers $k[\epsilon]/(\epsilon^2)$. Let $R_e$ be the trivial extension $k^r \oplus \mathrm{End}(k^r)$, where $\mathrm{End}(k^r)$ is a square-zero two-sided ideal, and the multiplication of $k^r$ and $\mathrm{End}(k^r)$ is induced from the embedding into diagonal matrices $k^r \to \mathrm{End}(k^r)$. As a $k$-module, $R_e$ has a basis consisting of the $e_i$ and the matrix units $e_{ij}$. The multiplication is defined by $e_i e_{jk} = \delta_{ij} e_{jk}$, $e_{ij} e_k = \delta_{jk} e_{ij}$ and $e_{ij} e_{kl} = 0$ for all $i, j, k, l$. The augmentation $R_e \to k^r$ is given by $e_{ij} \mapsto 0$.

Now we state the conditions $(H_0)$, $(H_f)$, $(H_e)$, $(\hat H)$ for ring homomorphisms in $(Art_r)$, the analogues of the conditions of [10]. The tangent space $T^1$ of the functor $\Phi$ is defined by $T^1 = \Phi(R_e)$. The $k^r$-bimodule structure of the ideal $\mathrm{End}(k^r) \subset R_e$ induces a $k^r$-bimodule structure on $T^1$, so we can write $T^1 = \bigoplus_{i,j=1}^r T^1_{ij}$. An element $\xi \in \Phi(R)$ for $R \in (Art_r)$ is called an $r$-pointed NC deformation over $R$ of the unique element of $\Phi(k^r)$.

Let $T_R = (M/M^2)^*$ be the Zariski tangent space of $R$. It is a $k^r$-bimodule. The Kodaira-Spencer map $KS_\xi : T_R \to T^1$ associated to the deformation $\xi$ is defined in the usual way. Let $(\hat R, \hat M) = \varprojlim (R_n, M_n)$ be a pro-object of $(Art_r)$, and let $\hat\xi = \varprojlim \xi_n$, with $\xi_n \in \Phi(R_n)$, be an element of a projective limit. Then $\hat\xi$ is called a formal $r$-pointed NC deformation over $\hat R$. The Kodaira-Spencer map $KS_{\hat\xi} : T_{\hat R} \to T^1$ is similarly defined. A formal NC deformation is called versal if every NC deformation is induced from it by base change. A versal NC deformation is said to be semi-universal if the Kodaira-Spencer map is bijective. In this case, we have $\hat M/\hat M^2 \cong (T^1)^*$. We note that it is simply called "versal" in some of the literature. The existence of the semi-universal NC deformation is proved in a similar way to [10] from the conditions $(H_0)$, $(H_f)$, $(H_e)$, $(\hat H)$.

In the case $r = 1$, if we take the abelianization $\hat R^{ab} = \hat R/[\hat R, \hat R]$ of the base ring of the semi-universal deformation, then we obtain a usual semi-universal commutative deformation $\hat\xi^{ab}$ over $\hat R^{ab}$ given by $\hat\xi^{ab} = \Phi(q)(\hat\xi)$, where $q : \hat R \to \hat R^{ab}$ is the quotient map.
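The multiplication law of the generalized dual numbers $R_e$ can be checked mechanically. The following sketch represents an element $a \oplus A \in k^r \oplus \mathrm{End}(k^r)$ over $k = \mathbb{R}$ (a purely illustrative choice of base field) and verifies the stated relations, including that the ideal $\mathrm{End}(k^r)$ squares to zero.

    import numpy as np

    r = 3  # number of points

    class Re:
        """Element a (+) A of the trivial extension k^r (+) End(k^r):
        a is a length-r vector, A an r x r matrix, and the product is
        (a (+) A)(b (+) B) = ab (+) (diag(a) B + A diag(b)),
        using the diagonal embedding of k^r into End(k^r)."""
        def __init__(self, a, A):
            self.a, self.A = np.asarray(a, float), np.asarray(A, float)

        def __mul__(self, other):
            return Re(self.a * other.a,
                      np.diag(self.a) @ other.A + self.A @ np.diag(other.a))

    def e(i):          # idempotent e_i in k^r
        v = np.zeros(r); v[i] = 1.0
        return Re(v, np.zeros((r, r)))

    def eij(i, j):     # basis element e_ij of the ideal End(k^r)
        A = np.zeros((r, r)); A[i, j] = 1.0
        return Re(np.zeros(r), A)

    # Check: e_i e_jk = delta_ij e_jk, e_ij e_k = delta_jk e_ij, e_ij e_kl = 0
    print((e(0) * eij(0, 2)).A)       # equals e_{02}
    print((e(1) * eij(0, 2)).A)       # zero
    print((eij(0, 1) * eij(1, 2)).A)  # zero: the ideal squares to zero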
We recall a description of the semi-universal NC deformation in the case of deformations of a coherent sheaf using an $A_\infty$-algebra formalism ([8]). Let $X$ be an algebraic variety over $k$ and let $F = \bigoplus_{i=1}^r F_i$ be a coherent sheaf with proper support. Then the infinitesimal deformations of $F$ are controlled by the differential graded algebra (DGA) $R\mathrm{Hom}_X(F, F)$. The tangent space and the obstruction space are given by the $k^r$-bimodules $T^1 = \mathrm{Ext}^1_X(F, F)$ and $T^2 = \mathrm{Ext}^2_X(F, F)$. The deformations are also controlled by a minimal $A_\infty$-algebra whose higher multiplications $m_d$ have degree $2 - d$; the source of $m_d$ is a tensor product with $d$ factors over $k^r$, and the target carries a degree shift of $2 - d$. In particular, we have maps $m_d : (T^1)^{\otimes d} \to T^2$. In general, for a $k^r$-bimodule $E$, the completed tensor algebra $\hat T_{k^r}(E)$ has, in degree $d$, a tensor product with $d$ factors of $E$. We apply this construction to $(T^1)^*$ and let $m^* = \sum_d m_d^*$ be the formal sum of the dual maps of the $m_d$. Then the base algebra $\hat R$ of the semi-universal NC deformation $\hat F$ is determined as an augmented $k^r$-algebra to be $\hat R = \hat T_{k^r}(\mathrm{Ext}^1(F, F)^*)/(m^*(\mathrm{Ext}^2(F, F)^*))$ ([8]). Thus the Taylor coefficients of the equations of the formal NC moduli space are determined by the $A_\infty$-multiplications.

There is another way of describing a semi-universal $r$-pointed NC deformation of a direct sum of coherent sheaves with proper support $F = \bigoplus_{i=1}^r F_i$. The semi-universal NC deformation $\hat F$ of $F$ is given by a tower $\{F^{(n)}\}$ of universal extensions (cf. [6]). We have direct sum decompositions $F^{(n)} = \bigoplus_i F^{(n)}_i$ and can write the extensions componentwise. The deformation theory of a simple collection is particularly nice. In this case, $\mathrm{Hom}(F_i, F_j) \cong \delta_{ij}\, k$, and the parameter algebra $\hat R$ of the semi-universal deformation $\hat F$ is given by $\hat R = \varprojlim R_n$.

Remark 1.1. (1) We do not consider deformation theory of varieties over non-commutative base in this paper, because such a theory seems to be difficult for the following reason. Suppose that there is an infinitesimal deformation $X_R$ of a variety $X$ over an NC ring $R$. Then the structure sheaf $\mathcal{O}_{X_R}$ should be NC too. When we consider a base change along a ring homomorphism $R \to R'$, it seems necessary that the base rings be commutative in order for the tensor product $\mathcal{O}_{X_R} \otimes_R R'$ to have a ring structure. Indeed, the DG Lie algebra which controls the deformations of $X$ is NC, but its non-commutativity is restricted. But when $X$ is a subvariety of an ambient variety $Y$, then we can consider a deformation of $X$ inside $Y$ over an NC base as a deformation of the structure sheaf $\mathcal{O}_X$ as a sheaf on $Y$ (see §2).

(2) The deformation functor is pro-representable when there is a universal deformation. But a universal deformation does not exist in general (see [6] Remark 4.10).

convergence and moduli

The semi-universal NC deformation described above is a formal deformation, and the question of convergence is important. We will make some remarks on the convergence of formal NC deformations and their relationship with the moduli space of commutative deformations. We consider only 1-pointed NC deformations, and we take as an example the moduli space of linear subspaces in a fixed linear space. We consider NC deformations of the structure sheaves of the linear subspaces.

We would like to say that the formal semi-universal NC deformation is convergent if the corresponding semi-universal commutative deformation is convergent. This is because the numbers of commutative and non-commutative monomials in $n$ variables of degree $d$ grow similarly to $n^d$. Perhaps we should require that the growth of the Taylor coefficients of the non-commutative power series be bounded in a similar way as for the commutative power series.
Any $k$-algebra homomorphism $R \to k$, for any associative $k$-algebra $R$, factors through the abelianization $R \to R^{ab}$. Therefore we can think that the sets of closed points of the moduli spaces are the same for the commutative and NC deformation problems. In other words, when we observe points, the moduli space of NC deformations reduces to the usual moduli space. We can say that the NC deformations give an additional infinitesimal or formal structure at each point of the commutative moduli space, and this formal structure is usually convergent. However, a compactification is another problem, and it seems that it does not exist.

As an example, we consider NC deformations of linear subspaces in a finite dimensional vector space. As explained in Remark 1.1, we consider NC deformations of the structure sheaf of the subspace instead of the subspace as a variety. The following is a slight generalization of [8] Example 7.8. The commutative deformations are unobstructed and yield a compact moduli space, a Grassmann variety. But we will see that the NC deformations are obstructed.

Let $V \cong k^n$ be an $n$-dimensional linear space with coordinate linear functions $x_1, \ldots, x_n$, and let $W$ be an $m$-dimensional linear subspace defined by an ideal $I = (x_{m+1}, \ldots, x_n)$. The commutative moduli space $G(m, n)$ has an affine open subset $\mathrm{Hom}(W, V/W) \cong k^{m(n-m)}$ with coordinates $a_{i,j}$ ($1 \le i \le m$, $m+1 \le j \le n$). We consider NC deformations of $W$ as a linear subspace of $V$, i.e., NC deformations of the ideal sheaves generated by linear functions.

Proposition 2.1. Let $V \cong k^n$ with coordinate linear functions $x_1, \ldots, x_n$, and let $W \cong k^m$ be defined by $x_{m+1} = \cdots = x_n = 0$. Then the formal semi-universal NC deformation of $W$ as a linear subspace of $V$ has the parameter algebra $\hat R$ and the ideal $\hat I$ given as follows.

Proof. This is almost the same as [8] Example 7.8. Let $Y = \mathbb{P}(W^*) \subset X = \mathbb{P}(V^*)$ be the corresponding projective spaces. We consider NC deformations of the ideal sheaf of $Y \subset X$ generated by the homogeneous coordinates $x_{m+1}, \ldots, x_n$. By [8] Lemma 7.6, the semi-universal NC deformation of $F$ is given as a projective limit of deformations over algebras $R_n$ with $M_n^{n+1} = 0$. By flatness, the ideal sheaf $I'_n$ is generated by linear forms. Since the $x_i$ are commutative variables in $R_n \otimes \mathcal{O}_X$, we obtain relations among these generators. These relations are non-commutative polynomials which are linearly independent quadratic forms, and their number is equal to the dimension of the obstruction space. Therefore there are no more independent relations contained in $\hat J$.

The above deformation is "algebraizable". There is an NC deformation of ideals $\tilde I$ over a parameter algebra $\tilde R$ which is a quotient algebra of an NC polynomial algebra. The meaning of this statement is that it induces a semi-universal NC deformation at every closed point of an affine open subset $\mathrm{Spec}(\tilde R^{ab}) \subset G(m+1, n+1)$, for NC variables $a_{ij}, b_{kl}$ and $a^0_{ij}, b^0_{kl} \in k$. Hilbert schemes and Quot schemes are constructed from Grassmann varieties. We wonder if their NC deformations are also semi-globalizable.

(2) $n = 3$ and $m = 2$. We have $G(2, 3) \cong \mathbb{P}^2$. Then $\hat R = k\langle\langle a, b\rangle\rangle$ is not Noetherian. Indeed, the two-sided ideal $(ab^k a \mid k > 0)$ is not finitely generated.
$\hat R$ has the following quotient algebra, which corresponds to an NC deformation that is not semi-universal. $\hat R$ also has the following quotient algebra, where $\epsilon_i \in k$.

flopping contractions of 3-folds

As a typical example of multi-pointed NC deformations, we will consider NC deformations of the exceptional curves of a flopping contraction from a smooth 3-fold $f : Y \to X$ over $k = \mathbb{C}$. [2] observed that there are more NC deformations than commutative ones, and that the base algebra of the NC deformations gives an important invariant of the flopping contraction called the contraction algebra. Indeed, Donovan and Wemyss conjectured that the contraction algebra, which is a finite dimensional associative algebra, determines the complex analytic type of the singularity of $X$. [13] and [4] proved that a dimension count of the contraction algebra yields the Gopakumar-Vafa invariants of rational curves defined in [5]. We will consider slight generalizations where there is more than one exceptional curve.

Let $f : Y \to X = \mathrm{Spec}(B)$ be a projective birational morphism defined over $k = \mathbb{C}$ from a smooth 3-dimensional variety $Y$ whose exceptional locus $C$ is 1-dimensional. Let $C = \bigcup_{i=1}^r C_i$ be the decomposition into irreducible components. We assume that $f$ is crepant, i.e., $(K_Y, C_i) = 0$ for all $i$. It is known that $C_i \cong \mathbb{P}^1$, that the dual graph of the $C_i$ is a tree, and that $X$ has only isolated hypersurface singularities of multiplicity 2.

The contraction algebra $\hat R$ for $f$ is defined to be the base algebra of the semi-universal $r$-pointed NC deformation of the sheaf $F = \bigoplus_{i=1}^r \mathcal{O}_{C_i}(-1)$.

We consider commutative one-parameter deformations of the contraction morphism $f : Y \to X$, and investigate the behavior of the contraction algebras under deformation. Let $p : \mathcal{X} \to \Delta$ be a one-parameter flat deformation of $X$ over a disk $\Delta$, and assume that there is a flat deformation $f : \mathcal{Y} \to \mathcal{X}$ of the flopping contraction $f : Y \to X$. We assume that there are Cartier divisors $L_1, \ldots, L_r$ on $\mathcal{Y}$ such that $(L_i, C_j) = \delta_{i,j}$. This can always be achieved when we replace $X$ by its complex analytic germ containing $f(C)$ and $\Delta$ by a smaller disk.

Let $C^t = \bigcup_{j=1}^{s_t} C^t_j$ be the exceptional locus, with its decomposition into irreducible components, of the flopping contraction $f^t : Y^t \to X^t$ for $t \neq 0$, where $Y^t = (p \circ f)^{-1}(t)$ and $X^t = p^{-1}(t)$. It is not necessarily connected even if $C$ is connected. We may assume that $s = s_t$ is constant for $t \neq 0$.

We define integers $m_{j,i}$ by the degeneration of 1-cycles $C^t_j \to \sum_i m_{j,i} C_i$ as $t \to 0$. This means that $\mathcal{O}_{C^t_j}$ degenerates in a flat family to $\mathcal{O}_{\sum_i m_{j,i} C_i}$. We have $(L_i, C^t_j) = m_{j,i}$. If the deformation $f$ is generic, then $C^t$ is a disjoint union of $(-1,-1)$-curves, i.e., smooth rational curves whose normal bundles are isomorphic to $\mathcal{O}_{\mathbb{P}^1}(-1)^{\oplus 2}$. In this case, we denote by $n_d$ the number of curves $C^t_j$ whose degeneration multiplicities $(m_{j,i})_i$ equal $d$ (a multi-index when $r > 1$). The numbers $n_d$ should be called the Gopakumar-Vafa invariants ([5] for the case $r = 1$). In the case $r = 1$, [13] proved that $n_1$ is equal to the dimension of the abelianization of the contraction algebra, $n_1 = \dim \hat R^{ab}$, while the higher terms $n_d$ for $d \ge 2$ contribute to $\dim \hat R$ (see Theorem 3.1 (3)).

We consider NC deformations of $F = \bigoplus_{i=1}^r F_i$, for $F_i = \mathcal{O}_{C_i}(-1)$, on $Y$ and on $\mathcal{Y}$. The set $\{F_i\}$ is called a simple collection on $Y$ and on $\mathcal{Y}$, in the terminology of [6], in the sense that $\mathrm{Hom}_Y(F, F) \cong \mathrm{Hom}_{\mathcal{Y}}(F, F) \cong k^r$. The NC deformations of a simple collection behave particularly nicely.
Let $\hat{\mathcal F} = \bigoplus_{i=1}^r \hat{\mathcal F}_i$ and $F^0 = \bigoplus_{i=1}^r F^0_i$ be the semi-universal NC deformations of $F$ on $\hat{\mathcal Y}$ and on $Y$, respectively, and let $\hat{\mathcal R}$ and $\hat R$ be the base algebras of these semi-universal deformations. We note that $F^0$ is obtained by a finite number of extensions of the $F_i$, while $\hat{\mathcal F}$ may not be. This is because $C$ is isolated in $Y$ while $C$ may move inside $\mathcal Y$. Hence we have $\dim \hat R < \infty$ as $k$-modules. We will see that $\dim \hat{\mathcal R} = \infty$ (see Theorem 3.1 (1)).

$\hat{\mathcal F}$ is also a formal semi-universal NC deformation of $F$ on $\mathcal Y$. We will see that there is also a "convergent version" $\mathcal F$ on $\mathcal Y$, and $\hat{\mathcal F}$ is its completion. $\mathcal F$ and $F^0$ can be described explicitly in the following way ([2], [6], [7]). In particular, there exists a sheaf $\mathcal F$ on $\mathcal Y$ such that the semi-universal NC deformation is convergent when we replace $\Delta$ by a smaller disk if necessary.

By [14], we construct extensions of locally free sheaves on $\mathcal Y$. Since the dimensions of the fibers of $f$ are at most 1, we obtain $R^1 f_*(M^0)^* = 0$ from $R^1 f_* M_i^* = 0$. Then the semi-universal NC deformations $\mathcal F = \bigoplus \mathcal F_i$ and $F^0$ are given as the kernels of natural homomorphisms ([7] Theorem 1.2). We define $\mathcal F$ by an exact sequence, and we denote $\mathcal F^t = \mathcal F \otimes_{\mathcal O_\Delta} k_t$, where $Y^t = (p \circ f)^{-1}(t)$ and $k_t$ is the residue field at $t \in \Delta$.

The following is a slight generalization of results in [13] and [4].

Theorem 3.1. Let $\mathcal R = \mathrm{End}_{\mathcal Y}(\mathcal F)$. (1) $\mathcal F$ is flat over $\Delta$, and, by flat base change, $\hat{\mathcal R} \cong \mathcal R \otimes_{\mathcal O_{\mathcal Y}} \mathcal O_{\hat{\mathcal Y}}$ (3.1). (2) $\mathcal R$ is a flat $\mathcal O_\Delta$-module, and $\hat R \cong \mathcal R \otimes_{\mathcal O_\Delta} k$, where $k$ is the residue field of $\mathcal O_\Delta$ at $0$ ([4] Conjecture 4.3). (3) Assume in addition that $C^t$ is a disjoint union of $(-1,-1)$-curves $C^t_j$ for $t \neq 0$. Then the invariants $n_d$ are determined by the contraction algebras $\hat R$ and $\hat{\mathcal R}$.

Proof. (1) We have an exact sequence whose first arrow is multiplication by $t$. Because $R^1 f_* \mathcal M = 0$, there is a corresponding exact sequence of push-forwards. By the snake lemma, we obtain $0 \to \mathcal F \xrightarrow{t} \mathcal F \to F^0 \to 0$, hence the flatness.

(2) Since $t : \mathcal F \to \mathcal F$ is injective, $\mathcal R$ has no $t$-torsion. Thus it is sufficient to prove that the natural homomorphism $\mathrm{Hom}_{\mathcal Y}(\mathcal F, \mathcal F) \to \mathrm{Hom}_Y(F^0, F^0)$ is surjective. By flat base change, it is also sufficient to prove that $\mathrm{Hom}_{\hat{\mathcal Y}}(\hat{\mathcal F}, \hat{\mathcal F}) \to \mathrm{Hom}_Y(F^0, F^0)$ is surjective, i.e., that $\hat{\mathcal R} \to \hat R$ is surjective. Then the assertion follows from the fact that $\hat{\mathcal R}$ and $\hat R$ are the base algebras of NC semi-universal deformations of the same sheaf $F$, with $Y \subset \mathcal Y$.

(3) This is proved in [13] and [4] for the case $r = 1$. Since each $C^t_j$ is a $(-1,-1)$-curve, $x^t_j$ is an ordinary double point on a 3-fold. We take a small complex analytic neighborhood $x^t_j \in U^t_j \subset X^t$, and let $V^t_j = f^{-1}(U^t_j)$. Let $L^t_j$ be a Cartier divisor on $V^t_j$ such that $(L^t_j, C^t_j) = 1$. We know that $(L_i, C^t_j) = m_{j,i}$ and $R^1 f_* M_i^* = 0$. Since $C^t_j \cong \mathbb{P}^1$ and $M_i$ is relatively generated, $M_i|_{V^t_j}$ is a direct sum of line bundles whose degrees are non-negative and at most 1. Since the total degree is equal to $m_{j,i}$, it follows that exactly $m_{j,i}$ of the summands are isomorphic to $\mathcal O_{\mathbb P^1}(1)$ and the rest are trivial. We then prove the required statement for the kernel, and the assertion is proved.

abstract description using $T^1$ and $T^2$

We will describe the base algebra of the semi-universal NC deformation of a deformation functor $\Phi$ which has a tangent space $T^1$ and an obstruction space $T^2$, the latter defined below.

Let $\Phi : (Art_r) \to (Set)$ be an NC deformation functor which has a formal semi-universal deformation $\hat\xi \in \Phi(\hat R)$. A $k^r$-bimodule $T^2 = \bigoplus_{i,j=1}^r T^2_{ij}$ is said to be the obstruction space if the following condition is satisfied. Let $\xi \in \Phi(R)$ be an NC deformation over $(R, M) \in (Art_r)$, and let $(R', M') \in (Art_r)$ be an extension of $R$ by a two-sided ideal $J$ such that $M' J = 0$, so that $J$ is a left $k^r$-module. Then there is an obstruction class $o_\xi \in T^2 \otimes_{k^r} J$ such that $\xi$ extends to an NC deformation $\xi' \in \Phi(R')$ if and only if $o_\xi = 0$.
We assume that the obstruction class is functorial in the following sense: for a morphism of such extensions, the obstruction classes are compatible with the induced maps.

Theorem 4.1. Let $\Phi : (Art_r) \to (Set)$ be an NC deformation functor. Assume that the obstruction space $T^2$ is finite dimensional. Then there is a $k^r$-linear map $m : (T^2)^* \to \hat T_{k^r}((T^1)^*)$ such that $\hat R \cong \hat T_{k^r}((T^1)^*)/(m((T^2)^*))$, a quotient algebra of the completed tensor algebra by the two-sided ideal generated by the image of $m$.

Proof. Denote $\hat A = \hat T_{k^r}((T^1)^*) = k^r \oplus \hat M$. Then the base algebra of the semi-universal NC deformation is a quotient algebra $\hat R = \hat A/\hat I$ for some two-sided ideal $\hat I$. Let $\{z_l\}_{l=1}^N$ be a $k$-basis of $T^2$. Let $R_k = \hat A/(\hat I + \hat M^{k+1})$. We define a sequence of two-sided ideals $I_k \subset \hat A/\hat M^{k+1}$ by $R_k = \hat A/(I_k + \hat M^{k+1})$. By the definition of the semi-universal deformation, there is an NC deformation $\xi_k \in \Phi(R_k)$. We will prove inductively that $I_k$ is generated by elements $\{s_{k,l}\}_{l=1}^N \subset \hat A/\hat M^{k+1}$ such that $s_{k+1,l} \mapsto s_{k,l}$ under the natural map $\hat A/\hat M^{k+2} \to \hat A/\hat M^{k+1}$, as follows.
5,821.2
2023-03-11T00:00:00.000
[ "Mathematics" ]
Development of Fast Dispersible Aceclofenac Tablets: Effect of Functionality of Superdisintegrants Aceclofenac, a non-steroidal anti-inflammatory drug, is used for post-traumatic pain and rheumatoid arthritis. Aceclofenac fast-dispersible tablets have been prepared by the direct compression method. The effect of superdisintegrants (croscarmellose sodium, sodium starch glycolate and crospovidone) on wetting time, disintegration time, drug content, in vitro release and stability parameters has been studied. Disintegration time and dissolution parameters (t50% and t80%) decreased with an increase in the level of croscarmellose sodium, whereas disintegration time and dissolution parameters increased with an increase in the level of sodium starch glycolate in tablets. However, the disintegration time values were not reflected in the dissolution parameter values of crospovidone tablets, and release was dependent on the aggregate size in the dissolution medium. Stability studies indicated that tablets containing superdisintegrants were sensitive to high-humidity conditions. It is concluded that fast-dispersible aceclofenac tablets can be prepared by direct compression using superdisintegrants.

Aceclofenac, (2-[2-[2-(2,6-dichlorophenyl)aminophenyl]acetyl]oxyacetic acid), a non-steroidal anti-inflammatory drug (NSAID), has been indicated for various painful indications [1] and has proved as effective as other NSAIDs with a lower incidence of gastro-intestinal adverse effects, thus resulting in greater compliance with treatment [2]. Aceclofenac is practically insoluble. For poorly soluble orally administered drugs, the rate of absorption is often controlled by the rate of dissolution. Clear aceclofenac-loaded soft capsules have been prepared to accelerate absorption [3]. The rate of dissolution can be increased by increasing the surface area of available drug by various methods (micronization, complexation and solid dispersion) [4]. The dissolution of a drug can also be influenced by the disintegration time of the tablets. Faster disintegration of tablets delivers a fine suspension of drug particles, resulting in a higher surface area and faster dissolution.

Of all the orally administered dosage forms, the tablet is most preferred because of ease of administration, compactness and flexibility in manufacturing.
Because of changes in various physiological functions associated with aging, including difficulty in swallowing, administration of an intact tablet may lead to poor patient compliance and ineffective therapy. The paediatric and geriatric patients are of particular concern. To overcome this, dispersible tablets 5 and fast-disintegrating tablets 6 have been developed. The most commonly used methods to prepare these tablets are freeze-drying/lyophilization 7, tablet molding 8 and direct-compression methods 9. Lyophilized tablets show a very porous structure, which causes quick penetration of saliva into the pores when placed in the oral cavity 7,10. The main disadvantages of the tablets produced are, in addition to the cost-intensive production process, a lack of physical resistance in standard blister packs and their limited ability to incorporate higher concentrations of active drug 5. Moulded tablets dissolve completely and rapidly. However, lack of strength and taste masking are of great concern 8,11. The main advantages of direct compression are low manufacturing cost and high mechanical integrity of the tablets 9,12. Therefore, direct compression appears to be a better option for the manufacturing of tablets. The fast-disintegrating tablets prepared by the direct compression method are, in general, based on the action established by superdisintegrants such as croscarmellose sodium, crospovidone and sodium starch glycolate. The effect of functionality differences of the superdisintegrants on tablet disintegration has been studied 13. The objective of the present work was to develop fast dispersible aceclofenac tablets, to study the effect of functionality differences of superdisintegrants on the tablet properties, and to provide information on the storage conditions of these tablets.

Blending and tableting: Tablets containing 100 mg of aceclofenac were prepared by the direct compression method, and the various formulae used in the study are shown in Table 1.

TABLE 1: FORMULAE USED IN THE PREPARATION OF TABLETS

The drug, diluents, superdisintegrant and sweetener were passed through sieve #40. All the above ingredients were properly mixed together (in a poly-bag). Talc and magnesium stearate were passed through sieve #80, mixed, and blended with the initial mixture in a poly-bag. The powder blend was compressed into tablets on a ten-station rotary punch tableting machine (Rimek Mini Press-1) using a 7 mm concave punch set.

Evaluation of dispersible tablets: Tablets were evaluated for weight variation, hardness, friability, thickness, disintegration time 14, wetting time 15 and stability 16. In the weight variation test, twenty tablets were selected at random and the average weight was determined using an electronic balance (Shimadzu, AX200, Japan). Tablets were weighed individually and compared with the average weight. The Pfizer hardness tester and the Roche friabilator were used to test hardness and friability loss, respectively. Disintegration time was determined using a USP tablet disintegration test apparatus (ED2L, Electrolab, India) with 900 ml of distilled water, without disk, at room temperature (30°) 13. Thickness of the tablets was determined using a dial caliper (Mitutoyo, Model CD-6 CS, Japan). To measure the wetting time of a tablet, a piece of tissue paper was folded twice and placed in a small Petri dish containing sufficient water. A tablet was kept on the paper and the time for complete wetting of the tablet was measured.

Stability studies: The stability of selected formulations was tested according to International Conference on Harmonization guidelines for zones III and IV. The formulations were stored at accelerated (40±2°/75±5% RH) and long-term (30±2°/65±5% RH) test conditions in stability chambers (Lab-Care, India) for six months following the open dish method 17. At the end of three months, tablets were tested for disintegration time, hardness, friability, thickness, drug content and moisture uptake.

Dissolution study: In vitro release of aceclofenac from the tablets was monitored using 900 ml of SIF (USP phosphate buffer solution, pH 7.4) at 37±0.5° and 75 rpm in a programmable dissolution tester [paddle type, model TDT-08L, Electrolab (USP), India]. Aliquots were withdrawn at one-minute intervals and were replenished immediately with the same volume of fresh buffer medium. Aliquots, following suitable dilutions, were assayed spectrophotometrically (UV-1700, Shimadzu, Japan) at 274 nm.

Statistical analysis: Each tablet formulation was prepared in duplicate.

RESULTS AND DISCUSSION

Since the flow properties of the powder mixture are important for the uniformity of mass of the tablets, the flow of the powder mixture was analyzed before compression into tablets. Low Hausner's ratio (≤1.32), compressibility index (≤24.68) and angle of repose (≤18.13) values indicated fairly good flowability of the powder mixture. As the tablet powder mixture was free-flowing, the tablets produced were of uniform weight with acceptable weight variation (≤4.68%) due to uniform die fill. Hardness (3.63-4.31 kg/cm²) and friability loss (0.15-0.72%) indicated that the tablets had good mechanical resistance. Drug content was found to be high (≥96.2%) and uniform (coefficient of variation between 0.89 and 2.56%) in all the tablet formulations.

The most important parameter that needs to be optimized in the development of fast dispersible tablets is the disintegration time of the tablets. In the present study, all the tablets disintegrated in ≤57.5 s, fulfilling the official requirement (<3 min) for dispersible tablets 18. Disintegration times of crospovidone-containing tablets were comparatively lower than those containing croscarmellose sodium and sodium starch glycolate. The faster disintegration of crospovidone tablets may be attributed to its rapid capillary activity and pronounced hydration with little tendency to gel formation 20. Thus, these results suggest that disintegration times can be decreased by using a wicking type of disintegrant (crospovidone).

Since the dissolution process of a tablet depends upon wetting followed by disintegration of the tablet, the measurement of wetting time may be used as another confirmative test for the evaluation of dispersible tablets. Fig. 1 depicts the wetting times for tablets prepared with the three superdisintegrants. Wetting times of the tablets did not change (P>0.05) with an increase in croscarmellose sodium from 2 to 4%. However, wetting times decreased (P<0.05) with an increase in the level of croscarmellose sodium above 4%. A significant decrease (P<0.05) in the wetting times was seen with an increase in the level of crospovidone (4 to 12%). It is interesting to note that wetting times increased (P<0.05) with an increase in the level of sodium starch glycolate from 2% to 12% in the tablets. Thus, the wetting times of the tablets ranked crospovidone < croscarmellose sodium < sodium starch glycolate. These results are consistent with the disintegration test results.
Dissolution parameters t50% and t80% decreased with an increase in the level of croscarmellose sodium. However, t50% and t80% values increased (P<0.05) with an increase in the level of sodium starch glycolate, while t50% and t80% values did not change (P>0.05) with an increase in the level of crospovidone. These results indicated that the dissolution parameter values of croscarmellose sodium and sodium starch glycolate containing tablets are consistent with the disintegration time values observed. However, the disintegration time values observed with crospovidone tablets are not predictive of the dissolution of the drug. The rapid increase in dissolution of aceclofenac with the increase in croscarmellose sodium may be attributed to rapid swelling and disintegration 20 of the tablet into apparently primary particles 13 (fig. 5a). Tablets prepared with sodium starch glycolate disintegrate by rapid uptake of water, followed by rapid and enormous swelling 20 into primary particles, but more slowly 13 (fig. 5b), due to the formation of a viscous gel layer by sodium starch glycolate 19. Crospovidone exhibits high capillary activity and pronounced hydration with little tendency to gel formation 20 and disintegrates the tablets rapidly, but into larger masses of aggregated particles 13 (fig. 5c). Thus, the differences in the size distribution generated and the differences in the surface area exposed to the dissolution medium with different superdisintegrants, rather than the speed of disintegration of the tablets, may account for the differences in the t50% and t80% values with the same amount of superdisintegrant in the tablets. Thus, although the disintegration times were lower in crospovidone-containing tablets, comparatively higher t50% and t80% values were observed due to the larger masses of aggregates.
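The dissolution parameters t50% and t80% are the times at which 50% and 80% of the drug has been released. A minimal sketch of how they can be computed from a cumulative release profile sampled at the one-minute intervals described above follows; the function name and data values are illustrative assumptions, not taken from the study.

import numpy as np

def t_release(times_min, cum_release_pct, threshold_pct):
    # Time at which cumulative release first reaches threshold_pct,
    # by linear interpolation between sampled points.
    t = np.asarray(times_min, dtype=float)
    r = np.asarray(cum_release_pct, dtype=float)
    i = np.argmax(r >= threshold_pct)          # first index at/above threshold
    if r[i] < threshold_pct:
        raise ValueError("threshold not reached within the sampled profile")
    if i == 0:
        return t[0]
    frac = (threshold_pct - r[i-1]) / (r[i] - r[i-1])
    return t[i-1] + frac * (t[i] - t[i-1])

# Illustrative profile (percent released at one-minute intervals)
times = [0, 1, 2, 3, 4, 5]
release = [0, 22, 48, 66, 79, 88]
t50 = t_release(times, release, 50)   # ~2.1 min
t80 = t_release(times, release, 80)   # ~4.1 min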
When tablets were kept at real-time (30±2°/65±5% RH) and accelerated (40±2°/75±5% RH) storage conditions, both disintegration time and hardness values decreased significantly, indicating that the tablets had lost mechanical integrity, leading to greater friability loss (Table 2). An increase in the thickness of all tablets was noticed, particularly pronounced in crospovidone tablets. These results indicate that, at higher relative humidity, tablets containing a high concentration of superdisintegrants become softened and hence must be protected from atmospheric moisture. As crospovidone tablets absorbed a larger amount of moisture, the tablets became fragile and developed cracks. After the stability test period, some portion of the tablet edges was removed and hence drug content, hardness, friability and disintegration tests could not be conducted on these tablets.

It is concluded that, although functionality differences existed between the superdisintegrants, fast dispersible aceclofenac tablets could be prepared by using any of the superdisintegrants studied. The dissolution parameters were consistent with the disintegration times for croscarmellose sodium and sodium starch glycolate containing tablets. However, the disintegration time values of crospovidone tablets did not correlate with the dissolution profiles. Dispersible tablets prepared with superdisintegrants must be protected from atmospheric moisture.
2,452.8
2008-03-01T00:00:00.000
[ "Materials Science", "Medicine" ]
Information-Theoretical Analysis of EEG Microstate Sequences in Python We present an open-source Python package to compute information-theoretical quantities for electroencephalographic data. Electroencephalography (EEG) measures the electrical potential generated by the cerebral cortex and the set of spatial patterns projected by the brain's electrical potential on the scalp surface can be clustered into a set of representative maps called EEG microstates. Microstate time series are obtained by competitively fitting the microstate maps back into the EEG data set, i.e., by substituting the EEG data at a given time with the label of the microstate that has the highest similarity with the actual EEG topography. As microstate sequences consist of non-metric random variables, e.g., the letters A–D, we recently introduced information-theoretical measures to quantify these time series. In wakeful resting state EEG recordings, we found new characteristics of microstate sequences such as periodicities related to EEG frequency bands. The algorithms used are here provided as an open-source package and their use is explained in a tutorial style. The package is self-contained and the programming style is procedural, focusing on code intelligibility and easy portability. Using a sample EEG file, we demonstrate how to perform EEG microstate segmentation using the modified K-means approach, and how to compute and visualize the recently introduced information-theoretical tests and quantities. The time-lagged mutual information function is derived as a discrete symbolic alternative to the autocorrelation function for metric time series and confidence intervals are computed from Markov chain surrogate data. The software package provides an open-source extension to the existing implementations of the microstate transform and is specifically designed to analyze resting state EEG recordings. INTRODUCTION AND BACKGROUND Electroencephalography (EEG) is a routine technique in neuroscientific research and the clinical sciences, used to measure electrical potentials generated by the cerebral cortex. It is a relatively low-cost and widely distributed diagnostic tool. The measured EEG signal records a superposition of excitatory and inhibitory postsynaptic potentials via electrodes located on the skull surface (Niedermeyer and da Silva, 2005). Among the data reduction techniques that have been employed to compress EEG recordings, the microstate algorithm is of special importance as it has been evaluated in a variety of experimental conditions (Lehmann et al., 1987;Wackermann et al., 1993;Pascual-Marqui et al., 1995). The microstate algorithm can be summarized as follows. Consider an EEG data set that consists of n t time samples from n ch channels, or electrode locations. Then, each sample is an array of n ch real numbers, each number representing the electrical potential at a specific location, and the whole array provides a discrete sampling of the continuous electrical field. An EEG data set can therefore be visualized as a time series of changing spatial patterns, often called maps. The microstate algorithm searches for a small set of spatial patterns that explain the maximum amount of the data's variance. Often, only four representative maps are needed to explain ca. 70% of the data variance and to capture neurobiologically relevant data features (Koenig et al., 2002;Brodbeck et al., 2012;Khanna et al., 2015;Kuhn et al., 2015). 
It could be shown that the computed microstates convey information about functional brain states during cognition (Milz et al., 2015), different vigilance states (Kuhn et al., 2015), and disease (Koenig et al., 1999; Nishida et al., 2013). We here implement the commonly employed modified K-means algorithm, as introduced in Pascual-Marqui et al. (1995), which has been used in many published studies, for instance in vision research (Antonova et al., 2015), studies on olfaction (Iannilli et al., 2013) and taste (Iannilli et al., 2017), and in a multi-center schizophrenia study (Lehmann et al., 2005), to name but a few. The canonical K-means algorithm yields a cluster assignment that minimizes the sum of squared distances of all data points to their respective cluster centroids, i.e., to the arithmetic mean of all points currently assigned to that cluster. The algorithm proceeds stochastically, using a fixed number of clusters and initializing the cluster centroids with randomly selected data samples. In the case of EEG records, a data sample consists of an array of electrical potential values at a given time point, and the size of the array represents the number of EEG channels. In each iteration, the algorithm assigns each data sample to its closest cluster centroid, and then updates all clusters and their centroids taking into account the newly assigned samples. Modified K-means clustering for EEG microstates, as introduced in Pascual-Marqui et al. (1995), does not use the arithmetic mean of the samples to represent a cluster, but the first principal component of the samples. Thus, the polarity of the EEG topography is ignored, leaving the overall symmetry of the potential topography as the feature to be clustered (Wackermann et al., 1993; Pascual-Marqui et al., 1995). The convergence criterion for the modified K-means algorithm is the relative error in the explained variance, as detailed further below. In this context, two particular types of EEG experimental designs should be mentioned: resting state recordings on the one hand, and event-related potentials (ERP) on the other. We here focus on resting state recordings, in which the ongoing EEG in a task-free ("resting") condition is recorded in order to follow spontaneous changes in cortical activity. Resting state recordings have received considerable attention as they provide insight into functional brain networks that spontaneously activate and de-activate. In ERP experiments, which we will not study further in this article, certain stimuli (acoustic, visual, cognitive tasks) are presented repetitively and the synchronously recorded EEG signal is analyzed in blocks. The start of each EEG data block is aligned with the stimulus presentation times, and thus EEG features time-locked to stimulus onset are extracted. Due to the often low signal-to-noise ratio of single stimulus responses, the evoked EEG changes are usually averaged. We mention ERP experiments since the microstate approach has been applied to ERP data for a long time (Murray et al., 2008). The algorithms presented here can readily be applied to ERP data sets; however, we do not provide the functionalities for the necessary pre-processing, such as epoch splitting, averaging, and ERP component identification. The microstate algorithm transforms an EEG data set into a sequence of microstate labels, according to the maximum similarity between the candidate microstates and the actual EEG topography. Commonly, the microstate maps are labeled with the symbols A-D.
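The polarity-invariant update just described can be made concrete with a short sketch. This is a minimal illustration of the idea, not the package's exact implementation; the array names (eeg, labels) and the use of the leading eigenvector of the cluster's scatter matrix as the "first principal component" are our assumptions, and clusters are assumed non-empty.

import numpy as np

def modified_kmeans_step(eeg, labels, n_maps):
    # One iteration of polarity-invariant (modified) K-means.
    # eeg: (n_t, n_ch) array of EEG samples; labels: current cluster labels.
    maps = np.zeros((n_maps, eeg.shape[1]))
    # Update step: each map is the first principal component of its cluster,
    # i.e., the eigenvector of the largest eigenvalue of the scatter matrix,
    # so the map is defined only up to sign (polarity is ignored).
    for k in range(n_maps):
        cluster = eeg[labels == k]
        _, evecs = np.linalg.eigh(cluster.T @ cluster)
        maps[k] = evecs[:, -1]
    # Assignment step: maximize squared correlation with the unit-norm maps,
    # which is likewise invariant under sign flips of map or sample.
    c = eeg @ maps.T
    labels = np.argmax(c ** 2, axis=1)
    return maps, labels

Because both the update (eigenvector, defined up to sign) and the assignment (squared correlation) are sign-invariant, only the topography's shape is clustered, as described above.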
The fact that the resulting time series consist of categorical variables severely limits the list of applicable time series methods. Frequently used linear characteristics, such as the autocorrelation function or the power spectral density, cannot be computed, as sum and product terms are not defined on the discrete set of states. The most frequently used approach is the transition matrix method, which summarizes microstate dynamics by a square matrix of first-order transition statistics, i.e., the conditional probabilities of transitioning from one microstate to the next (Wackermann et al., 1993). The main limitation of this approach is the fact that only the t → t + 1 transition, i.e., a single time lag, is considered. On a conceptual level, the transition matrix can only fully represent a first-order Markov process, for which the complete information about the future state X_{t+1} is contained in the random variable X_t. We have shown that resting state EEG microstate sequences do not follow the Markov property when Markovianity of the time series is tested statistically (von Wegner et al., 2017). EEG microstate sequences rather show memory effects extending up to time scales of several hundred milliseconds (von Wegner et al., 2017). The statistical tests for Markovianity of order 0, 1, and 2 are contained in the software package introduced here. As an alternative, a random walk analysis of microstate sequences has been proposed (Van de Ville et al., 2010). To use the method, however, the microstate labels (e.g., A-D) have to be mapped to real numbers, e.g., ±1, in order to use Hurst exponent estimators. Furthermore, the method aggregates several microstates into one class which is mapped to a single real number (Van de Ville et al., 2010; von Wegner et al., 2016). In the case of four microstates, an arbitrary pair of microstates is mapped to the value −1, and the other two microstates are mapped to the value +1. This procedure has several disadvantages. First, there is no biologically inspired reason for which microstate maps should be grouped into one class. If all possible class assignments are tested independently, their number diverges exponentially for larger numbers of microstates. In the case of an odd number of microstates, the partition into two classes is even more difficult to justify. Second, the arithmetic operations performed on the assigned real numbers (sums, products, square roots) do not have a clearly defined meaning on the level of the EEG potential topographies they represent. Finally, the transition matrix approach and the random walk embedding contradict each other on a theoretical level, as the former uses a memory-less Markov model and the latter uses an infinite memory, scale-free approach. To overcome these limitations, in a recent publication we introduced information-theoretical methods in order (i) to work with an arbitrary number of microstate labels directly, and (ii) to assess the memory structure of microstate sequences for all time lags, i.e., beyond the t → t + 1 transitions captured by the transition matrix method (von Wegner et al., 2017). We also added further statistical tests for stationarity and symmetry of the transition matrix, and finally detected previously unrecognized periodicities in microstate sequences by means of the time-lagged mutual information function. Our previous publication provides an evaluation of these methods on a set of healthy-subject resting state EEG recordings.
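For reference, the first-order transition matrix discussed above can be estimated from a label sequence in a few lines. The sketch below is our own illustration, analogous in spirit to the package's T_empirical used later; the function name is ours.

import numpy as np

def empirical_transition_matrix(x, n_maps):
    # Row-stochastic matrix of first-order transition probabilities
    # P(X_{t+1} = j | X_t = i), estimated from the label sequence x.
    T = np.zeros((n_maps, n_maps))
    for a, b in zip(x[:-1], x[1:]):      # count t -> t+1 transitions
        T[a, b] += 1.0
    row_sums = T.sum(axis=1, keepdims=True)
    return T / np.where(row_sums == 0, 1.0, row_sums)  # avoid division by zero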
In the present work, we make the methods developed and analyzed in von Wegner et al. (2017) available to other researchers. The code provided along with this manuscript allows one to reproduce our previous results, and to perform new studies using the same methodology.

SOFTWARE DESIGN

The philosophy of this project is to provide a free and open-source stand-alone package. We chose to implement the algorithm in Python (Rossum, 1995) in order to provide a freely distributable, open-source, cross-platform implementation without restrictions with respect to licensed or commercial software, or the operating system used. Moreover, Python offers easily accessible source code and a reasonable trade-off between performance and code comprehensibility. The programming style is procedural, providing a set of functions to import, pre-process, visualize, and analyze EEG data sets with the microstate algorithm and the information-theoretical metrics described in von Wegner et al. (2017). Individual functions can easily be imported into other Python projects, and the procedural approach facilitates portability, compared to an object-oriented approach introducing specific class structures for EEG data that may interfere with data structures defined in other software packages. From our experience, the scientific process often starts with a visual exploration phase, followed by more extensive and often automated data analyses, where the code may run on headless servers or remote computation facilities using the command line. Therefore, we have chosen to implement the software as a command line tool, to be used in scripts aimed at high productivity, i.e., to process a hierarchy of directories containing the EEG files to be analyzed. Rather than providing a graphical user interface (GUI), we use an interactive IPython notebook tutorial for visual analysis during the data exploration phase. The notebook format can easily be extended and modified. By this means, preliminary and intermediate results can directly be used in a tutorial or presentation setting. Finally, as another argument for using Python and as a perspective on the processing of larger EEG databases, Python code allows straightforward extensions to database management and web applications while staying within the same programming framework. The code is provided as a single source code file in Python 2.7 syntax, using standard code documentation following the PEP 8 style guide.

Requirements and Dependencies

We provide stand-alone code with minimal dependencies. Dependencies consist of standard Python packages for scientific computing and visualization and have no link to licensed or commercial software. In order to run the tutorials provided in the source code and in the IPython notebook, the user needs to have installed:
• NumPy for array handling and numerical computing.
• matplotlib for data visualization.
• StatsModels for multiple comparison statistics.
• scikit-learn for principal component analysis based EEG data visualization.
When only using the core functions implementing the microstate algorithm, the statistical tests and the information-theoretic quantities, the packages StatsModels (for multiple comparisons) and scikit-learn (for principal component analysis based visualization) can be omitted.

Command Line Options

From the command line, the following options are available:
• "-i" or "--input" defines the full path to a single ".edf" file to be processed.
• "-f " or "--filelist" defines the full path to a text file containing the full paths to all ".edf " files to be processed, row-wise. • "-d" or "--directory" defines the path to a directory, from which all ".edf " files will be processed. • "-m" or "--markovsurrogates" sets the number of Markov surrogates to be used for the computation of the mutual information confidence interval. On the command line, all options can be viewed running the source file with the -help option. Other Implementations of the Microstate Algorithm A detailed description of the original microstate algorithm is given in Pascual-Marqui et al. (1995) and Murray et al. (2008). Computational implementations exist as the Windows executable Cartool which is described in Brunet et al. (2011). The program is freely available, but ships without source code. For the commercial Matlab software, a free and open-source implementation called microstates has been published, and the package depends on the Matlab EEGLAB toolbox. Another EEGLAB-based implementation is the Microstate-EEGlab-toolbox. The large Chicago Electrical Neuroimaging Analytics software package contains the microstate algorithm as a Matlab plugin for the Brainstorm software. Recently, the probabilistic microstate analysis approach was published (Dinov and Leech, 2017), where microstates were obtained by the classical K-means algorithm rather than the modified K-means algorithm as given in Murray et al. (2008). The Key institute Python implementation for ERP analysis contains the microstate algorithm as presented in Milz (2016). We would like to highlight that none of the implementations listed above includes the information-theoretical analyses contained in our package. However, as the other packages contain alternative implementations of the microstate algorithm and further analytic approaches, the user may choose to use and benefit from interfacing our code with some of these software packages. This is especially easy in the case of communication with external Python code. Our code allows for easy portability to other Python programs by simply importing the corresponding functions, e.g., the Markov tests, entropy calculations, surrogate data synthesis, and the implementation of the mutual information function. THE PROCESSING PIPELINE In the following, we illustrate a typical processing pipeline that can be implemented with the provided functions. During presentation, it should become clear that not all presented computations have to be performed, or necessarily in the order presented in the example. In particular, the number of clusters to be computed is provided by the user, and any integer (≥ 2) is allowed. All subsequent analysis steps are not affected by the choice and are computed for the given number of clusters. The Figure 1 summarizes the procedure. Selected EEG channel data is illustrated on the top left, showing the channel abbreviations on the y-axis (e.g., O1 is the left occipital electrode). Below, the global field power (GFP, blue) and its local maxima (MAX, red dots) are shown. The EEG topographies at the local GFP maxima provide the input for modified K-means clustering (arrow 1). The clustering procedure yields the four microstate maps A-D shown on the right. Step 2 refers to the competitive backfitting of the microstate maps into the EEG data set, based on a maximum squared correlation metric. The microstate time series is illustrated by the label sequence A, B, B, C... depicted below the EEG data. 
Step 3 corresponds to information-theoretical analysis, in particular to time-lagged mutual information. The Venn diagram visualizes mutual (or shared) information between the microstates at time point t and the k-step future time point t + k as the intersection between two sets representing the entropies H(X_t) and H(X_{t+k}). Equivalently, mutual information is defined as I(k) = H(X_{t+k}) − H(X_{t+k} | X_t), i.e., as the difference between the uncertainty about the state X_{t+k}, and the uncertainty about X_{t+k} given exact knowledge of X_t. To put it differently, I(k) measures the information about X_{t+k} that is contained in X_t. Step 3 points to the bottom panel of Figure 1, which shows the time-lagged mutual information function for time lags k up to 400 ms. To point out non-Markovian memory effects in experimental EEG data, we show the mutual information function for a microstate sequence from experimental EEG data, as well as a confidence interval computed from 10 Markov surrogate sequences (significance level α = 0.01). The time-lagged mutual information of the EEG data is shown in black (solid line with black squares) and the Markov confidence interval is shown as a gray-shaded area. The experimental mutual information function shows distinct oscillatory peaks not explained by the Markov model.

FIGURE 1 | Algorithm: the (top) panel shows a section of resting state EEG (1-40 Hz, black), the global field power (GFP, blue), and the local GFP maxima (red dots). EEG topographies at local GFP maxima are clustered by the modified K-means algorithm to obtain the n = 4 microstate maps A-D (step 1). Fitting the maps back into the EEG data set yields the microstate sequence (A, B, B, C, ...) depicted below the EEG data (step 2). Step 3 illustrates information-theoretical analysis of the microstate sequence. Time-lagged mutual information I(k) for time lag k is illustrated by a Venn diagram. The (bottom) panel shows the periodic mutual information function (black) and the Markov confidence interval (gray area, α = 0.01).

Further details are given in the subsequent sections.

EEG Data and Pre-processing

To run this tutorial, we provide a test EEG file (test.edf) which must be located in the same folder as the source code file (eeg_microstates.py). The record contains 192 s from an eyes-closed resting state experiment on a healthy male subject, recorded with a 30-channel EEG cap in the standard 10-10 electrode configuration. The experiment was approved by the local ethics committee of the Goethe University, Frankfurt, Germany. The EEG sampling rate is 250 Hz and the data is band-pass filtered to the 1-40 Hz range. Electrode locations are given as cartesian coordinates in the cap.xyz file, which is imported for the visualization of EEG topographies. All files are contained in our GitHub repository. To import EEG data, the package contains a basic edf file reader, using the publicly available specifications of the .edf file format. Data is loaded in a single line of code:

# (1) Load EEG data from '.edf' file
chs, fs, data_raw = read_edf("test.edf")

The first two return variables are a list of channel name strings and the sampling frequency in Hz. EEG data is contained in a NumPy array of shape (n_t, n_ch), with time samples along the rows (first index) and electrodes or channels along the columns (second index). In case the provided functions are called from another Python program, EEG data must be formatted into that shape to be processed by our functions. EEG data is usually pre-processed by a band-pass filter.
As an example, we give the code for a pass band of 1-35 Hz, where fs denotes the sampling frequency in Hz and the data array has the time axis running along the first dimension (axis = 0):

# (2) Band-pass filtering
data = bp_filter(data_raw, (1, 35), fs)

To get a general impression of the data in one dimension, an option to plot the time course of the first principal component of the multi-channel data set is included. Figure 2 shows that the time series contains strongly amplitude-modulated, irregular oscillations in the alpha frequency band. The inset in the upper right corner shows the first 8 s of the data to illustrate alpha oscillations on a shorter time scale, as often found in EEG visualization software.

Modified K-Means Clustering and Competitive Fitting

Microstates are computed by the modified K-means clustering algorithm introduced in Pascual-Marqui et al. (1995) and reviewed in Murray et al. (2008). The algorithm receives the EEG data array (n_t × n_ch) and the desired number of microstate maps n_maps as minimum inputs. Note that the number of microstates is an optional argument that can be set to any integer n_maps ≥ 2. If not provided, the default value n_maps = 4 is used. In the worked example, four microstates are computed. The remaining parameters (n_runs, maxerr, maxiter) can be provided as additional parameters to the kmeans function call, otherwise the default values (n_runs = 10, maxerr = 10^-6, maxiter = 500) are used. From all K-means runs, the optimum run is selected according to the cross-validation (CV) criterion detailed in Murray et al. (2008). The CV criterion to be minimized measures the residual variance while correcting for the number of electrodes. Note that Murray recommends (spatially) downsampling EEG data with more than 64 electrodes (Murray et al., 2008). The function call looks like:

# (4) Modified K-means clustering
maps, x, gfp_peaks, gev, cv = kmeans(data, n_maps=4, doplot=True)

The microstate maps are returned in a (n_maps, n_ch) array, and the variable x contains the sequence of microstate map labels for the given EEG data series, i.e., its length is equal to the number of EEG time samples. The remaining return values contain the indices of the GFP peaks used for clustering, the global explained variance (GEV) of the microstate maps, and the absolute value of the cross-validation criterion, i.e., the minimum value across all K-means runs. The microstate ordering returned by the K-means algorithm is random, whereas a standard ordering has been established in the literature (Koenig et al., 2002). Our K-means implementation contains an option to re-label the microstates interactively before proceeding. In the case of four microstates, the standard microstate labeling is based on the map geometry. For map A, the border between positive and negative potential values runs approximately along the diagonal from the frontal left to the occipital right corner. Map B is diagonal in the opposite direction, map C has a horizontal orientation, and map D is often circular. Sometimes, slightly different maps are generated. We mostly observed the occurrence of a map D with a vertical axis instead of a circular pattern. In this case, you can either accept the results and proceed, or re-cluster the data set, as the results of the K-means algorithm differ between runs.
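The text treats bp_filter (step (2) above) as a black box. A minimal sketch of one plausible implementation is given below; the zero-phase Butterworth design and the default filter order are our assumptions, not necessarily the package's choices.

import numpy as np
from scipy.signal import butter, filtfilt

def bp_filter_sketch(data, f_lo_hi, fs, order=4):
    # Zero-phase Butterworth band-pass; data has shape (n_t, n_ch).
    f_lo, f_hi = f_lo_hi
    nyq = fs / 2.0
    b, a = butter(order, [f_lo / nyq, f_hi / nyq], btype="band")
    return filtfilt(b, a, data, axis=0)  # filter along the time axis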
Note that the sequence returned by the K-means algorithm contains the microstate labels as numbers, {A, B, C, D} → {0, 1, 2, 3}, in order to accelerate the information-theoretical computations. Using numbers, a microstate label can directly be used as an index into arrays and matrices. If microstate sequences generated by external software are analyzed, the labels have to be converted to the numerical values 0, ..., n_maps − 1 before using the functions of this package.

Information-Theoretical Analysis - Motivation and Basics

To analyze the microstate sequence with information-theoretical methods, we first need to compute the distribution of microstate labels P(X_t = S_i), i.e., the probability of finding the label S_i ∈ {A, B, C, D} at time point t. The distribution of X_t can be characterized by its Shannon entropy H (Kullback et al., 1962):

$H = -\sum_i P(X_t = S_i) \log P(X_t = S_i)$.

If the sequence always repeated the same microstate label, uncertainty would be minimal and its Shannon entropy would attain its minimum value H = 0, corresponding to a delta distribution P(X_t). Maximum entropy is obtained for a uniform distribution of microstate labels, resulting in H = log(4) in the case of four microstates. Logarithms are taken with respect to the base e (Euler's number), leading to the unit "nats". The subsequent tests take into account dependencies between microstate labels at different times. For one-step transitions X_t → X_{t+1}, dependencies on the values X_t, X_{t−1}, and X_{t−2} are tested by the Markovianity tests, as detailed below. For further time lags, X_{t+k} with k > 1, temporal dependencies are assessed by the mutual information between X_t and X_{t+k}. Moreover, we test the time stationarity and the symmetry of the transition matrix. Each test leads to specific distributions: an empirical distribution derived from the actual data, and a reference distribution derived from the independence assumption under the null hypothesis. The distance between the empirical distribution $(p_i)$ and the null distribution $(q_i)$ is measured by the Kullback-Leibler divergence $D(p, q) = \sum_i p_i \log(p_i/q_i)$ (Kullback, 1959; Kullback et al., 1962). Statistical significance is tested with χ²-statistics using classical convergence theorems (Anderson and Goodman, 1957; Kullback, 1959; Billingsley, 1961; Kullback et al., 1962). The specific test statistics along with their mathematical expressions are given in Table 1. In our notation, we follow Kullback et al. (1962) and denote observed frequencies by f. The estimated probability of microstate label S_i, denoted $p_i$, is the ratio of $f_i$, the number of observations of label S_i, to the sample size n, or $p_i = f_i/n$. In Table 1, indices run over microstate labels and multiple sums are abbreviated by a single summation sign together with the indices over which the sum is calculated.

Symbol Distribution and the Transition Matrix

Basic statistics commonly used to characterize microstate sequences can be obtained by calling the corresponding package functions. The empirical Shannon entropy of the microstate sequence is H = 1.38, while the maximum possible Shannon entropy for any series of four symbols is log(4) = 1.39. We see that the EEG-derived sequence almost achieves maximum entropy, suggesting a process with high randomness. In the following sections, however, we show how distinct memory features such as periodicities linked to the cortical alpha rhythm can be extracted and used to characterize the sequence.
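As a concrete illustration of the label-distribution entropy just described, the computation fits in a few lines. This is our own minimal sketch; the function name is ours, not necessarily the package's.

import numpy as np

def shannon_entropy(x, n_maps):
    # Shannon entropy (in nats) of a symbolic sequence x with labels 0..n_maps-1.
    p = np.bincount(np.asarray(x), minlength=n_maps) / len(x)
    p = p[p > 0]                      # convention: 0 log 0 = 0
    return -np.sum(p * np.log(p))

# Example: a uniform four-label sequence approaches log(4) ~ 1.386 nats
x = np.random.randint(0, 4, size=100000)
print(shannon_entropy(x, 4))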
Markov Properties and Markovianity Tests

First, we test whether the sequence follows a simple Markov process of order 0, 1, or 2. To this end, the following tests are computed:

# (7) Markovianity tests
alpha = 0.01
p0 = testMarkov0(x, n_maps, alpha)
p1 = testMarkov1(x, n_maps, alpha)
p2 = testMarkov2(x, n_maps, alpha)
p_geo_vals = geoTest(x, n_maps, 1000./fs, alpha)

The console output for each test also gives the value of the test statistic and the degrees of freedom of the corresponding χ² distribution, calculated according to Table 1.

Zero-Order Markov Property

The null hypothesis is that the transition probability from the current state X_t to the next state X_{t+1} is independent of X_t. Therefore, P(X_{t+1} | X_t) = P(X_{t+1}) under the null hypothesis. We obtain the corresponding test statistic $G_0$ in Table 1 if the observed number of transitions X_t = S_i → X_{t+1} = S_j is denoted $f_{ij}$, the number of observations X_t = S_i is $f_i$, and the number of observations X_{t+1} = S_j is $f_j$. The length of the microstate sequence X_t is n. The degrees of freedom (d.o.f.) of the asymptotic χ² distribution are given in the right column of Table 1 (Kullback, 1959; Kullback et al., 1962). The console output shows that the zero-order Markovianity test yields a p-value of almost zero, within double floating point precision, under the assumption of total independence between subsequent symbols. The independence assumption is therefore clearly rejected.

First-Order Markov Property

The null hypothesis is that the transition probability P(X_{t+1} | X_t) only depends on X_t, and not on any states further in the past, implying P(X_{t+1} | X_t, X_{t−1}) = P(X_{t+1} | X_t). Calculating the test statistic $G_{1a}$ as given in Table 1, we find $p = 4.29 \times 10^{-145}$, indicating that a first-order Markov process is also rejected as a data model. Alternatively, first-order Markovianity can be assessed based on the equivalence of the first-order Markov property (the memoryless property) with a geometric distribution of state durations (Feller, 1971). Each microstate label has an associated lifetime distribution that contains the lengths of contiguous segments of the given label. For a first-order Markov process, the probability that label i appears in a contiguous segment of length k follows the geometric distribution $q_i(k) = (1 - T_{ii})\,T_{ii}^{k-1}$. The term $T_{ii}^{k-1}$ is the (k − 1)-th power of the i-th diagonal element of the transition matrix T. In the G-test statistic $G_{1b}$ in Table 1, m is the maximum lifetime and $p_i$ is the empirical lifetime distribution. Testing our EEG data set for geometric lifetime distributions gives results analogous to the test statistic $G_{1a}$, rejecting the first-order Markov hypothesis for all four microstate maps ($p_A = 1.33 \times 10^{-19}$, $p_B = 2.68 \times 10^{-17}$, $p_C = 1.57 \times 10^{-75}$, $p_D = 3.24 \times 10^{-14}$).

Second-Order Markov Property

Second-order Markovianity is tested based on the null hypothesis that the transition probability P(X_{t+1} | X_t, X_{t−1}) does not change if one step further into the past is taken into consideration. The resulting statistical expression for the null hypothesis is P(X_{t+1} | X_t, X_{t−1}, X_{t−2}) = P(X_{t+1} | X_t, X_{t−1}). Note that a true first-order Markov process, as later used in the surrogate data tests, should also fulfill the second-order Markov property, as neither of the states X_{t−1} or X_{t−2} contributes information about the transition probability P(X_{t+1} | X_t). For the test data, second-order Markovianity is clearly rejected ($p = 3.32 \times 10^{-86}$).
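To make the test construction concrete, a minimal sketch of the zero-order G-test follows. It implements the $G_0$ statistic described above; since Table 1 is not reproduced in this text, the (n_maps − 1)² degrees of freedom used here are the standard value for this independence test and should be read as our assumption.

import numpy as np
from scipy.stats import chi2

def test_markov0_sketch(x, n_maps):
    # G-test of the null hypothesis P(X_{t+1} | X_t) = P(X_{t+1}).
    x = np.asarray(x)
    n = len(x) - 1
    f_ij = np.zeros((n_maps, n_maps))
    for a, b in zip(x[:-1], x[1:]):           # count transitions X_t -> X_{t+1}
        f_ij[a, b] += 1
    f_i = f_ij.sum(axis=1)                     # occurrences of X_t = S_i
    f_j = f_ij.sum(axis=0)                     # occurrences of X_{t+1} = S_j
    mask = f_ij > 0
    g0 = 2.0 * np.sum(f_ij[mask] * np.log(n * f_ij[mask] /
                                          np.outer(f_i, f_j)[mask]))
    dof = (n_maps - 1) ** 2                    # assumed standard d.o.f.
    return chi2.sf(g0, dof)                    # p-value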
Stationarity of the Transition Matrix

Stationarity of the transition matrix over time depends on the length L of the time window. For a given L, the data set is partitioned into r non-overlapping blocks of length L, and the transition matrix is computed for each data block k = 0, ..., r − 1. In the case of stationarity, the number of transitions X_t = S_i → X_{t+1} = S_j within block k, denoted $f_{ijk}$, is independent of the block index k. The expression for the null hypothesis and the test statistic can be found in Table 1. For a block size of 5,000, we obtain 9 data blocks, and the p-value of $p = 2.21 \times 10^{-5}$ indicates that the transition matrix of the test data set is not stationary. Other block sizes can be defined in the function call, or interactively in the console and IPython tutorials provided.

Symmetry

If each state transition occurs with the same probability as the reverse transition, the transition matrix T will be symmetric. The expression for the null hypothesis and the test statistic are given in Table 1, and the symmetry test is computed as:

# (9) Symmetry test for the transition matrix
p4 = symmetryTest(x, n_maps, alpha)

The test result $p = 4.88 \times 10^{-89}$ leads to rejection of the null hypothesis, and to the conclusion that the EEG data set has an asymmetric transition matrix. Asymmetry of the transition matrix is important when considering non-equilibrium processes possibly underlying microstate dynamics (von Wegner et al., 2017).

Markov Surrogate Data

As a first-order Markov process is uniquely defined by an initial state distribution π and a first-order transition matrix T, an equivalent Markov process with π and T identical to those of the empirical microstate sequence can be synthesized (Häggström, 2002). The iterative construction is visualized in Figure 3, where the individual steps are labeled with (blue) numbers. The procedure starts with an initialization function and then iterates an updating function for the desired length of the surrogate sequence. The initial state, one of the microstate labels A, B, C, D, is selected in accordance with the equilibrium distribution π. A pseudo-random number $r_0 \sim U[0, 1]$, uniformly and independently distributed on the unit interval, defines the index j of the initial state by the condition $\sum_{l=0}^{j-1} \pi_l \le r_0 < \sum_{l=0}^{j} \pi_l$. Step 1 in Figure 3 illustrates this, showing that the equilibrium distribution π partitions the unit interval [0, 1]. Given the initial state, all subsequent states of the surrogate sequence are generated by the transition matrix T. The same principle as in the initialization step is used. The current state at time t determines the row of the transition matrix to be used for the next transition t → t + 1. In Figure 3, the random initial state is B, so the next state is calculated from the second row of T (step 2). As the conditional probabilities in each row fulfill $\sum_j T_{ij} = 1$, each row of T is a partition of the unit interval. Choosing another random variable $r_1 \sim U[0, 1]$ (step 3), the index j of the next state is determined by $\sum_{l=0}^{j-1} T_{il} \le r_t < \sum_{l=0}^{j} T_{il}$. In Figure 3, $r_1$ points to the element $p_{BC}$, and thus we record the state transition B → C for t → t + 1. The next state is generated in the same manner (step 4), this time using the third row of T, because the current state is now C. Using NumPy, the two computational steps used by the algorithm can be written as a single line of code each.
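A compact sketch of this cumulative-partition scheme is shown below; it is our own illustration, while the package's surrogate_mc, called in the next step, is the reference implementation.

import numpy as np

def surrogate_mc_sketch(pi, T, n_samples, seed=None):
    # Synthesize a first-order Markov chain with initial distribution pi
    # and transition matrix T, using the partition-of-unity scheme above.
    rng = np.random.default_rng(seed)
    pi = np.asarray(pi, dtype=float)
    cum_pi = np.cumsum(pi)
    cum_T = np.cumsum(np.asarray(T, dtype=float), axis=1)  # row-wise partitions
    n_maps = len(pi)
    x = np.empty(n_samples, dtype=int)
    # initial state from the equilibrium distribution (step 1)
    x[0] = min(np.searchsorted(cum_pi, rng.random(), side="right"), n_maps - 1)
    for t in range(1, n_samples):
        # next state from the row of T given by the current state (steps 2-4)
        r = rng.random()
        x[t] = min(np.searchsorted(cum_T[x[t-1]], r, side="right"), n_maps - 1)
    return x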
First-order Markov surrogates are computed as:

# (10) Synthesize a surrogate first-order Markov process
x_mc = surrogate_mc(p_hat, T_hat, n_maps, len(x))
p_surr = p_empirical(x_mc, n_maps)
T_surr = T_empirical(x_mc, n_maps)

By construction, the synthetic Markov chain has a symbol distribution almost identical, within stochastic boundaries, to that of the experimental data.

FIGURE 3 | The initial state is selected according to the equilibrium distribution π using a pseudo-random number $r_0 \sim U[0, 1]$ (step 1). In this example, state B is selected and therefore the next state is selected using the partition of unity defined by row 2 of the transition matrix T (step 2) and the pseudo-random number $r_1 \sim U[0, 1]$. Here, $r_1$ points to $p_{BC}$ (step 3), so the next state of the process is C. Starting from state C, the successor state will be selected from the third row of T, and a new random number (step 4). The algorithm can be iterated for any desired length of the surrogate sequence.

All tests calculated for the experimental data set can now be applied to the surrogate sequence, which by construction is a first-order Markov sequence:

# (11) Markov tests for surrogate data
p0_ = testMarkov0(x_mc, n_maps, alpha)
p1_ = testMarkov1(x_mc, n_maps, alpha)
p2_ = testMarkov2(x_mc, n_maps, alpha)
p_geo_vals_ = geoTest(x_mc, n_maps, 1000./fs, alpha)
p3_ = conditionalHomogeneityTest(x_mc, n_maps, L, alpha)
p4_ = symmetryTest(x_mc, n_maps, alpha)

The test results reveal that all desired properties are fulfilled by the surrogate Markov sequence. The sequence is not zero-order Markov (p = 0.00), but is first-order Markov (p = 0.553) and also second-order Markov (p = 0.886), as expected. The alternative test for first-order Markovianity based on geometric lifetime distributions confirms the above results, as the null hypothesis is accepted for all four microstate maps ($p_A = 0.545$, $p_B = 0.302$, $p_C = 0.207$, $p_D = 0.797$). As the surrogate sequence is synthesized from a constant transition matrix, we find the sequence to be stationary (p = 0.180, block size 5,000) while reproducing the asymmetry of the experimental transition matrix ($p = 3.84 \times 10^{-95}$).

Mutual Information

Time-lagged mutual information for a discrete time lag k can be defined in entropy terms as $I(k) = H(X_{t+k}) - H(X_{t+k} \mid X_t)$. In words, I(k) measures dependencies between time points t and t + k as the difference between two entropies. The term $H(X_{t+k})$ is the uncertainty about $X_{t+k}$ without further knowledge about the past, and $H(X_{t+k} \mid X_t)$ is the conditional uncertainty about $X_{t+k}$, knowing the state $X_t$. The time-lagged mutual information of a first-order Markov process can be written in terms of its equilibrium distribution π and its transition matrix T (von Wegner et al., 2017) as $I(k) = \sum_{i,j} \pi_i (T^k)_{ij} \log\!\left((T^k)_{ij}/\pi_j\right)$. This equation uses the matrix power $T^k$, computed using a diagonalization of T (von Wegner et al., 2017). The time-lagged mutual information functions for the EEG microstate sequence and the Markov surrogate are calculated as:

# (12) Time-lagged mutual information with confidence interval
l_max = 100
aif = mutinf(x, n_maps, l_max)
aif_mc = mutinf(x_mc, n_maps, l_max)

In the tutorials, a confidence interval for the mutual information function, or autoinformation function (AIF), is computed from 10 Markov surrogates, in order to limit the computation time. The demonstration of non-Markovianity, non-stationarity and periodic information in resting state EEG recordings were the main results presented in von Wegner et al. (2017).
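For readers who want to port only this measure, a self-contained sketch of the time-lagged mutual information follows, using the equivalent form I(k) = H(X_t) + H(X_{t+k}) − H(X_t, X_{t+k}). The function name is ours, not the package's.

import numpy as np

def mutinf_sketch(x, n_maps, l_max):
    # Time-lagged mutual information (in nats) of a symbolic sequence
    # with integer labels 0..n_maps-1.
    x = np.asarray(x)

    def entropy(p):
        p = p[p > 0]
        return -np.sum(p * np.log(p))

    aif = np.zeros(l_max + 1)
    for k in range(l_max + 1):
        a, b = (x, x) if k == 0 else (x[:-k], x[k:])
        # joint distribution of (X_t, X_{t+k}); integer labels fall into unit bins
        p_ab = np.histogram2d(a, b, bins=(n_maps, n_maps),
                              range=[[0, n_maps], [0, n_maps]])[0] / len(a)
        aif[k] = (entropy(p_ab.sum(axis=0)) + entropy(p_ab.sum(axis=1))
                  - entropy(p_ab.ravel()))
    return aif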
This tutorial should enable the reader to reproduce these results with their own data, and to design new studies to further elucidate the functional role of EEG microstates.

IPYTHON TUTORIAL

We provide an interactive IPython notebook to illustrate a typical analysis pipeline starting from raw EEG and leading to test statistics and graphical presentations of the results. The analysis proceeds in the same order as the console tutorial, using identical numbering of all steps for easier comparison. The notebook is part of the repository and can be visualized on the GitHub page directly, or using the nbviewer web application by pasting the eeg_microstates package URL. The only exception is the iframe element calling the PubMed site for reviewing recent publications on the topic, which is not rendered on these pages for security reasons. In a running notebook, however, the link is rendered interactively.

Acceleration

The published code can be further optimized with respect to computational speed. As several functions involve nested loops over simple numerical values, well-known Python accelerators can be applied. We tested Numba just-in-time compilation, compiled C functions using Cython, pure C functions invoked by a system call from Python, and external Julia code. All methods gave considerable speedups, which we do not quantify further here. We expect users to choose their favorite method. Though we did not include these code variants in the published package, in order to minimize dependencies and to maximize the portability of the code across platforms, the additional code can be obtained from the authors.

DISCUSSION AND OUTLOOK

In the present article, we introduce an open-source Python package to perform the microstate algorithm on EEG data sets, and to analyze the resulting symbolic time series using information-theoretic measures and statistical tests. We presented the application of the procedures included in the package in a recent publication (von Wegner et al., 2017). As the methods used in that paper are, to the best of our knowledge, not otherwise available as open-source code, the code is presented here alongside its theoretical basis and a tutorial. We focused on code portability in order to provide easy access to the algorithms presented here. Useful applications of the package include the comparison of information-theoretical quantities under varying experimental conditions. For instance, in the past we have used the transition matrix approach to quantify microstate sequences calculated from EEG recordings during wakefulness and non-REM sleep in healthy subjects and in synaesthesia patients (Kuhn et al., 2015). Using the new algorithms, we can extend these analyses and add spectral information, in particular the peaks of the time-lagged mutual information function, to search for subtle differences in the temporal structure of microstate sequences. Also, the (non-)stationarity of microstate sequences can be compared under different conditions, and prior to using other algorithms requiring stationarity. The same principle can be followed to study EEG recordings (resting state or ERPs) in neuropsychiatric diseases or during cognition.

AUTHOR CONTRIBUTIONS

FvW implemented the code, performed the software tests, and wrote the manuscript and tutorials. HL provided EEG data and co-designed the analysis pipeline and software structure.
9,419.2
2018-06-01T00:00:00.000
[ "Computer Science" ]
Direct observation of zitterbewegung in a Bose-Einstein condensate Zitterbewegung, a force-free trembling motion first predicted for relativistic fermions like electrons, was an unexpected consequence of the Dirac equation's unification of quantum mechanics and special relativity. Though the oscillatory motion's large frequency and small amplitude have precluded its measurement with electrons, zitterbewegung is observable via quantum simulation. We engineered an environment for 87Rb Bose-Einstein condensates where the constituent atoms behaved like relativistic particles subject to the one-dimensional Dirac equation. With direct imaging, we observed the sub-micrometer trembling motion of these clouds, demonstrating the utility of neutral ultracold quantum gases for simulating Dirac particles.

Introduction

Among the great discoveries of the enlightenment was the realization that physical laws are equivalent in all places, at all times, and for all scales; this remains a central tenet in contemporary science. Quantum simulation exploits this universality to study the behaviour of systems that are difficult to access or impossible to manipulate, by performing direct measurements on analogue systems composed of well-characterized and highly manipulable quantum building blocks. In this work, we used neutral rubidium atoms to simulate zitterbewegung, a trembling motion usually associated with relativistic electrons [1], and we illuminate its microscopic origins by drawing an analogy to the well-understood atomic physics of Rabi oscillations. The Dirac equation, describing the motion of free fermions, is an essential part of our current description of nature; by engineering new Dirac particles in novel settings, we expose the equation's properties by direct measurement. Simulations of the Dirac equation have been proposed for superconductors [2], semiconductors [3, 4], graphene [5], cold atoms [4, 6-14] and photonic systems [15]; and have been realized with cold atoms [16], trapped ions [17] and photons [18]. The ion and photon experiments demonstrated zitterbewegung for quantities analogous to position or time in the Dirac equation. Here, we directly observed a neutral-atom Bose-Einstein condensate (BEC) undergoing zitterbewegung in space and time. Zitterbewegung, as observed here, is an example of a broader class of phenomena where a group of states with differing velocities are quantum mechanically coupled together and undergo Rabi-like oscillations [19-22]. As in the present case, eigenstates of the full Hamiltonian are static, but superpositions can tremble. Neutrino oscillations [23] are an example of this generalization: neutrinos are produced by the weak nuclear force in superpositions of the propagating (i.e. mass) eigenstates, each with a different mass and, therefore, velocity. The precise control and direct measurement techniques available in systems of ultracold atoms, coupled with their accessible length and energy scales, make these systems ideal for quantum simulation. In this experiment, our quantum building blocks were Bose-condensed 87Rb atoms. Using two counter-propagating Raman lasers (figure 1(a)) with wavelength λ = 790.1 nm, we coupled the atoms' $|f = 1, m_F = \mp 1\rangle \equiv |\uparrow\rangle, |\downarrow\rangle$ atomic hyperfine states (comprising our effective two-level system) to their external motion [24] with a four-photon Raman transition (figure 1(b)).
In this environment, each atom's behaviour was governed by the one-dimensional Dirac Hamiltonian, making its motion analogous to that of a relativistic electron. The system's characteristic momentum $\hbar k_R = 2\pi\hbar/\lambda$ (that of a single photon) specifies the recoil energy $E_R = \hbar^2 k_R^2/2m$, where m is the atomic mass.

FIGURE 1 (caption fragment) | With suitable values of $m^*$ and $c^*$, this same dispersion relationship and its underlying Dirac Hamiltonian equally describe relativistic electrons and our atomic system. In the vicinity of the depicted avoided crossing, atoms in $|\uparrow\rangle$ move with velocities near $2v_R$, and those in $|\downarrow\rangle$ have velocities near $-2v_R$. Bottom panel: typical momentum distribution of the BEC (narrow peak) and thermal cloud (broad) in our system. The vertical axis is truncated to show detail: the central peak reaches a value of 18 on this scale.

These recoil units set the scale for all physical quantities in our analogue system, such as the recoil velocity $v_R = \hbar k_R/m$. The Raman lasers drove the four-photon $|\uparrow, \hbar k_x = p_x + 2\hbar k_R\rangle \leftrightarrow |\downarrow, \hbar k_x = p_x - 2\hbar k_R\rangle$ transition (resonant when $p_x = 0$), where $\hbar k_x$ is the atomic momentum along $\mathbf e_x$ and $p_x$ plays the role of momentum in the Dirac equation. The simulated speed of light $c^* = 2v_R = 11.6$ mm s$^{-1}$ was twice the atoms' recoil velocity, a factor of $\approx 10^{10}$ less than the true speed of light. The artificial rest energy $m^* c^{*2} = \hbar\Omega/2 \approx 1\,E_R$ was a factor of $\approx 10^{17}$ less than the electron's rest energy ($\hbar\Omega$ is the four-photon laser coupling strength). The effective Compton wavelength $\lambda_C^* = h/m^* c^* \approx 1$ µm, the approximate amplitude of the zitterbewegung, exceeded that of an electron by a factor of $\approx 10^6$. These new scales enabled our direct measurement of zitterbewegung. The dynamics of our ultracold 87Rb atoms were described by the one-dimensional Dirac equation $\hat H_D$, where $\hat p_x$ is the momentum operator; $\hat\sigma_{x,y,z}$ are the Pauli spin operators; and $|\psi\rangle$ is represented as a two-component spinor, whose components are defined by $|\uparrow\downarrow, p_x\rangle$, the $m^* = 0$ eigenstates of $\hat H_D$. For the massless case, $m^* = 0$, this equation simply describes particles (positive energy) or anti-particles (negative energy) travelling with velocity $\pm c^*$, as depicted by the dashed lines in figure 1(c). The mass term couples these $m^* = 0$ states together, producing an avoided crossing (solid curves in figure 1(c)) with energy given by the familiar relativistic dispersion $E(p_x) = \pm(p_x^2 c^{*2} + m^{*2} c^{*4})^{1/2}$, gapped at $p_x = 0$ by twice the rest energy. In our atomic analogue, the two massless states coupled by the effective rest energy physically corresponded to the atomic states $|\uparrow\rangle$ and $|\downarrow\rangle$ moving with velocities $\pm c^*$.

Zitterbewegung equations of motion

Zitterbewegung arises because the Pauli matrices associated with the two terms in the Dirac equation do not commute. In the Heisenberg representation of quantum mechanics the operators, not the wavefunctions, depend on time: for example, $\hat v_x = d\hat x/dt = [\hat x, \hat H_D]/i\hbar$. In this formalism, the velocity operator obeys a simple differential equation. For an initial state $|\uparrow, p_x = 0\rangle$, which gives the initial conditions $\langle v_x \rangle = c^*$ and $d\langle v_x \rangle/dt = 0$, the expectation values of the position and velocity observables oscillate at the zitterbewegung frequency. Initial states with $p_x \neq 0$, or localized wave packets, follow more complex trajectories [25]. Zitterbewegung, as usually understood, refers to trembling in position; an oscillatory velocity is the obvious dual. In these experiments, we observed the out-of-phase oscillation of these conjugate quantities.
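The displayed equations for this oscillation are lost from this text. Under the stated initial conditions, the standard Heisenberg-picture result for a 1D Dirac Hamiltonian is the following; this is our reconstruction, consistent with the frequency and amplitude scales quoted above, and not a verbatim copy of the paper's equations (3):
\[
\langle v_x(t) \rangle = c^* \cos(\Omega_Z t), \qquad
\langle x(t) \rangle = x_0 + \frac{\lambda_C^*}{4\pi}\,\sin(\Omega_Z t), \qquad
\Omega_Z = \frac{2 m^* c^{*2}}{\hbar} = \Omega,
\]
so the position amplitude is $c^*/\Omega_Z = \hbar/2m^*c^* = \lambda_C^*/4\pi$, the value quoted above for the zitterbewegung amplitude.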
The atomic Dirac Hamiltonian The one-dimensional Dirac Hamiltonian for a system of 87Rb atoms can be realized by coupling different spin-momentum states. The three m_F states comprising the 5S_{1/2}, f = 1 electronic ground state manifold are subject to a two-photon Raman process (figure 1(a)), and the atomic dynamics along e_x are described by a three-level Hamiltonian in which ħΩ₂ is the two-photon Raman coupling strength; ε is the quadratic Zeeman shift that energetically displaces the m_F = 0 state; σ̂_{3,x} and σ̂_{3,z} are the generalized Pauli operators for a spin-1 system; and 1̂₃ is the 3 × 3 identity. We concentrate on the avoided crossing at k_x = 0 between the states that adiabatically connect to |m_F = −1⟩ and |m_F = +1⟩. By adiabatically eliminating the lowest-energy eigenstate, we obtain an effective two-level Hamiltonian, related to (1) by a global rotation of the system σ̂_x → σ̂_y, σ̂_y → σ̂_z, σ̂_z → σ̂_x. For k_x/k_R ≪ 1, the effective coupling is ħΩ = ħ²Ω₂²/[2(4E_L − ħε)]. Ignoring the uniform energy offset, we identify the parameters from the Dirac Hamiltonian (1): the effective c* = 2ħk_R/m is twice the atomic recoil velocity, the rest energy m*c*² = ħΩ/2 is half the coupling strength, and the Compton wavelength λ*_C = h/m*c* = 8πħk_R/mΩ sets the scale for the zitterbewegung's amplitude. The equivalence of this effective Hamiltonian and the Dirac Hamiltonian (1) provides the opportunity for our quantum simulation of relativistic electron dynamics. Experimental techniques To study zitterbewegung with an ultracold atomic gas, we measured the positions and velocities of atomic systems subject to the Dirac Hamiltonian for varying times after starting in an initial state with speed c*. These experiments began with N ≈ 5 × 10⁴ atom optically trapped 87Rb BECs (f_c = 0.75(10) condensate fraction) in the |f = 1, m_F = −1⟩ ground state, subject to a uniform B₀ = 2.1 mT bias magnetic field. The atoms were confined in a harmonic trap [(ω_x, ω_y, ω_z)/2π = (38, 38, 130) Hz] with characteristic timescales greatly exceeding those of the zitterbewegung. We transferred these atoms (at rest) to |f = 1, m_F = 0⟩ using an adiabatic rapid passage technique; a fixed-frequency 15.0 MHz radiofrequency magnetic field coupled the different m_F states together as the bias magnetic field was swept through resonance. Using a pair of Raman beams counterpropagating along e_x with wavelength λ = 790.1 nm and frequency difference ħδω = g_F µ_B B₀ + 4E_R + ε (where ε = h × 32 kHz is the quadratic Zeeman shift), a 30 µs π-pulse transferred approximately 85% of the atoms from |m_F = 0, k_x = 0⟩ to |m_F = −1, k_x = 2k_R⟩ (moving with velocity v = 2ħk_R/m = c*). Before the trap appreciably altered their velocity (200 µs), we changed the Raman lasers' frequency difference to ħδω = g_F µ_B B₀, bringing |m_F = −1, k_x = 2k_R⟩ and |m_F = +1, k_x = −2k_R⟩ into four-photon resonance. We then suddenly introduced a four-photon Raman coupling between these states (figure 1(b)), and allowed the system to evolve under this new Hamiltonian for an evolution time t. Just before transferring the BEC into |m_F = −1, k_x = 2k_R⟩, two 6.8 GHz microwave pulses spaced in time by 50 ms each out-coupled ≈10% of the atoms to the f = 2 hyperfine manifold. These atoms were separately imaged (without repumping on the f = 1 → 2 transition), leaving the atoms in f = 1 undisturbed.
These f = 2 atoms served two purposes: (i) by setting the microwave frequency 2 kHz above (first pulse) and 2 kHz below (second pulse) resonance, we tracked shifts in the bias field that would change our four-photon Raman resonance condition. Upon analysing the data, we rejected points where the atom number difference between these two images was greater than two standard deviations from equality; (ii) we determined the BEC's position immediately before each zitterbewegung experiment began, allowing us to cancel shot-to-shot variations in the trap position. The beginnings of the three transfer pulses (two microwave outcoupling pulses, and the final four-photon Raman pulse) were each separated in time by 50 ms. As three periods of a 60 Hz cycle, this separation was chosen to reduce magnetic field background fluctuations at the power line frequency, and to facilitate rethermalization between pulses. Measurement and analysis We measured the system either by imaging the atoms immediately following this evolution (to determine the atoms' position) or by releasing the atoms from their trap and simultaneously turning off the Raman lasers, allowing for a short time-of-flight (TOF, with duration t_TOF) before imaging (to determine the atoms' velocity). Figure 2 shows the evolution of the signal for several times of flight, and figure 3 shows in situ and after-TOF (t_TOF = 550 µs) measurements at several coupling strengths; the velocity-dominated TOF images clearly show the expected cosinusoidal behaviour. For in situ measurements, the Raman and trapping beams remained on during the 40 µs absorption imaging pulses. For TOF measurements, both were removed; the atoms flew ballistically for t_TOF and were subsequently absorption imaged. We used high-intensity imaging, with intensity I ≈ 3I_sat (where I_sat is the saturation intensity), that reduced the effective optical depth [26] and gave better signal-to-noise in the determination of the clouds' positions. This simple description of zitterbewegung assumes that the range δp_x of occupied momentum states is small compared to m*c*, and only those states near the avoided-crossing structure are populated. To maintain a sufficiently narrow δp_x, the spatial size δx of the system must be at least λ*_C, which, as observed in [27], is larger than the λ*_C/4π amplitude of the zitterbewegung itself. We satisfied this requirement in our experiment by using clouds whose Thomas-Fermi radii R_x = 12(2) µm greatly exceeded the measured sub-micron zitterbewegung oscillations, and overcame the fundamental measurement challenge with good statistics. Just before initializing zitterbewegung, we measured the initial position of the BEC by outcoupling and imaging ≈5 × 10³ atoms. In principle, this allowed us to measure the centre of the distribution with an uncertainty estimated by R_x/√(5 × 10³) ≈ 0.17 µm. Our actual measurements, which include technical noise and are averages of four independent images, have a typical 0.3 µm rms uncertainty (much less than both the distribution's 12(2) µm width and our ≈1.75 µm imaging resolution). From fits to data as in figure 3, with parameters joint between each in situ and TOF pair, we extracted the frequency Ω, amplitude λ*_C/4π and velocity c* of the observed zitterbewegung (shown in figure 4). The observed values are attenuated by a factor of approximately 2.5 from those predicted by (3), as explained below.
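These joint fits can be illustrated with a toy version. The sketch below is a hedged reading of the attenuated model discussed in the next section, a drift of a non-participating fraction f_k plus a scaled oscillation; all numerical values are synthetic stand-ins rather than the experimental data.

```python
# Toy in situ position fit (hedged reading of model (6) from the next
# section); the "data" here are synthetic, not the experimental traces.
import numpy as np
from scipy.optimize import curve_fit

c_star = 11.6e-3    # m/s, simulated speed of light quoted in the text
f0 = 0.15           # rest fraction, fixed as in the paper

def x_model(t, x0, f_k, Omega, phi0):
    # Non-participants drift at c*; the rest oscillate with amplitude
    # (1 - f0)(1 - f_k) c*/Ω = (1 - f0)(1 - f_k) λ*_C/4π.
    amp = (1 - f0) * (1 - f_k) * c_star / Omega
    return x0 + f_k * c_star * t + amp * np.sin(Omega * t + phi0)

rng = np.random.default_rng(0)
t = np.linspace(0.0, 400e-6, 200)                       # s
x_true = x_model(t, 0.0, 0.30, 2 * np.pi * 4e3, 0.0)    # illustrative truth
x_meas = x_true + rng.normal(0.0, 0.1e-6, t.size)       # optimistic noise level

popt, pcov = curve_fit(x_model, t, x_meas, p0=[0.0, 0.2, 2 * np.pi * 3e3, 0.2])
x0, f_k, Omega, phi0 = popt
print(f"f_k = {f_k:.2f},  Omega/2pi = {Omega / (2 * np.pi) / 1e3:.2f} kHz")
# Subtracting the fitted f_k c* t slope and refitting with f0 = f_k = 0 then
# yields the participating atoms' amplitude c*/Ω, mirroring the analysis here.
```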
Amplitude attenuation For our finite-temperature system, higher momentum states are thermally occupied in the initial equilibrium system (schematically illustrated in figure 1(c) and observed in figure 5). The zitterbewegung frequency for these states is increased, and the oscillations correspondingly dephase, decreasing the observed amplitude. These finite temperature effects give rise to a non-participating fraction f_k of the atomic population drifting at c*. Indeed, figure 5 shows that the majority of the 'thermal' population surrounding the initial BEC is unaffected by the coupling. Additionally, owing to imperfect preparation of the initial |↑, p_x = 0⟩ state, a fraction f₀ = 0.15(10) remained at rest in m_F = 0 (and therefore did not participate at all in the −1 to +1 coupling). Fluctuations in the background magnetic field also contribute to both f₀ and f_k. The rest fraction f₀ was determined from long TOF images (such as those in figure 5). The drifting fraction f_k was found by fitting the model

x(t) = x₀ + f_k c* t + (1 − f₀)(1 − f_k)(c*/Ω) sin(Ωt + φ₀), (6)

where x₀ is the initial offset position and f₀ is fixed at 0.15. Using two sets of data, one in situ and one with t_TOF = 550 µs, we performed joint fits for each laser intensity (four-photon coupling strength). In the initial analysis, we fix c* = 2ħk_R/m and fit the data to extract the parameters f_k, Ω, φ₀ and x₀. The non-participating fractions for the data shown in figures 4(a) and (b) are shown in figure 4(c). Next, we found the average of f_k as a function of Ω to use in the model. Finally, we remove the background slope due to the f_k c* t term from the same five sets of data using a simple linear fit. We refit the remaining signal to a 'fully participating' model ((6) with f₀ = f_k = 0) with Ω fixed (from the original fit) to extract the effective speed of light parameter c*. We found the zitterbewegung amplitude (of the participating atoms) c*/Ω. The model used to predict the amplitude is given by (1 − f₀)(1 − f_k)2ħk_R/mΩ, and the uncertainty is dominated by our systematic uncertainty in f₀, which is due to magnetic field variations. The background slope due to the f_k c* t term was subtracted from the data presented in figures 2 and 3. The curves are calculated from the values from the original fit using (6) without the f_k c* t term. The average of the extracted f_k was used in the theory curve in figure 4(c) to show the expected amplitude of in situ oscillations. Summary By engineering a two-level quantum system from initial states with opposite velocity, we reinterpret the 'curious' physics of zitterbewegung in analogy to the Rabi oscillations ubiquitous in atomic physics. In this language, the particles trembled because the initial state was not an eigenstate of the coupled system; once subject to the Dirac Hamiltonian, the system Rabi-oscillated between bare states of equal and opposite velocity. As the atoms' coupling was provided by resonant laser light instead of the electrons' rest energy, it is natural to think of a Rabi oscillation picture where the mass (coupling) is suddenly turned on and off. (Somewhat amusingly, the mechanism by which our laser field (a coherent state of light) generates mass is analogous to the Higgs mechanism, where a Higgs condensate (a coherent matter wave) generates mass in the standard model [28].) The zitterbewegung of electrons arises because two states, the particle and antiparticle states, are coupled, and the resulting eigenstates are superpositions of the two.
Projections of bare electron states onto this basis result, as in the case of the atoms, in oscillations between states of opposite velocity. This straightforward analogy compels us to accept that the rest energy acts exactly as a coupling field and mixes the particle and antiparticle states into eigenstates that are superpositions of the two. While the Dirac equation generally applies only to fermionic systems in nature, quantum simulations such as ours directly realize Dirac-boson systems in the laboratory [12,29], permitting access to new classes of experimental systems. Though BECs near these Dirac points are short-lived [22,30-33], strong interactions, as are present near the superfluid-Mott transition in an optical lattice, can stably populate these states [34,35], for example leading to bosonic composite-fermion states [36,37].
4,310.6
2013-03-05T00:00:00.000
[ "Physics" ]
Applications of the Methylotrophic Yeast Komagataella phaffii in the Context of Modern Biotechnology Komagataella phaffii (formerly Pichia pastoris) is a methylotrophic yeast widely used in laboratories around the world to produce recombinant proteins. Given its advantageous features, it has also gained much interest in the context of modern biotechnology. In this review, we present the utilization of K. phaffii as a platform to produce several products of economic interest such as biopharmaceuticals, renewable chemicals, fuels, biomaterials, and food/feed products. Finally, we present synthetic biology approaches currently used for strain engineering, aiming at the production of new bioproducts. Introduction In the context of industrial biotechnology, many bio-based products can be obtained as a result of bioprocesses using microorganisms as cell factories. Among these, yeasts stand out as one of the most important microbial platforms. Despite the fact that baker's yeast Saccharomyces cerevisiae still occupies a relevant position as one of the main cell factories [1], the methylotrophic yeast Komagataella phaffii has gained much attention as a promising "biotech yeast" [2]. K. phaffii exhibits traits ideal for a microbial platform in biotechnological settings. This yeast shows minimal nutritional needs and grows on economical substrates, achieving cell densities exceeding 100 g L⁻¹ of dry cell weight [3]. Notably, it demonstrates resilience against high methanol concentrations, acidic and basic pH levels, and inhibitors derived from lignocellulosic sources, surpassing other methylotrophic yeasts [4]. Moreover, it outperforms S. cerevisiae in terms of thermo- and osmo-tolerance [2]. Its appeal as a host organism is further accentuated by efficient secretion mechanisms, yielding abundant protein secretion in bioreactor settings [5], and versatility in protein processing and posttranslational modifications [6], due to its ability to perform modifications such as O- and N-linked glycosylation and disulfide bond formation [7]. Also, K. phaffii can generate recombinant proteins either constitutively or through induction [2], facilitated by the development of numerous genetic and metabolic engineering tools alongside established fermentation processes [8]. Given its importance in modern biotechnology, in this review we present the main applications of K. phaffii in different industrial sectors. Finally, we show how modern synthetic biology approaches may be used to further optimize the utilization of K. phaffii in the context of modern biotechnology. Applications in the Pharmaceutical Industry The expression system based on K. phaffii is licensed to more than 100 companies involving the biotech, pharmaceutical, vaccine, and food industries [9]. Proteins produced in this yeast have also received GRAS (generally recognized as safe) status from the American Food and Drug Administration since 2006 [10], and K. phaffii was first approved for the production of biopharmaceuticals in the USA in 2009 [11]. Since then, it has become a promising platform to produce biopharmaceuticals, which include a wide range of products such as vaccines, blood and blood components, and recombinant therapeutic proteins [12]. A list of the main biopharmaceuticals produced in K. phaffii is shown in Table 1. Table 1. Biopharmaceuticals produced in K. phaffii that are approved or undergoing clinical trials.
[Table 1, surviving final rows only (the earlier rows were lost in extraction): an intestinal schistosomiasis vaccine, Phase I trials [30]; PfAMA1-DiCo, Institut National de la Santé et de la Recherche Médicale (Paris, France), malaria vaccine, Phase I trials [31].] Ecallantide (trade name, Kalbitor) was the first FDA-approved biopharmaceutical produced in K. phaffii. It is a kallikrein inhibitor indicated for treatment of hereditary angioedema. In its discovery, researchers used a phage display technique to identify a possible human inhibitor analog that could interfere with inflammatory and coagulation pathways and expressed the recombinant 60-amino-acid protein in K. phaffii [32]. Dyax Inc., its original developer and manufacturer, has since been acquired by Shire, which in turn was acquired by Takeda Pharmaceuticals, currently the supplier of this biopharmaceutical to the USA [13]. Kalbitor raised some concerns regarding hypersensitivity reactions, which hindered its approval by the European Medicines Agency [33]. Ocriplasmin (trade name, Jetrea), produced in K. phaffii, was approved by the FDA for the treatment of vitreomacular adhesion in 2012 [15]. This protein is a truncated form of human plasmin, a serine protease that acts on fibronectin and laminin. This proteolytic activity was shown to be able to resolve vitreomacular traction and reduce the requirement for surgical treatment [34]. Jetrea was initially approved for manufacturing by Thrombogenics but was discontinued in the USA due to commercial reasons; its rights are currently licensed to Inceptua Pharma in Europe [14]. In 2021, the FDA approved the first biosimilar insulin product in the American market, insulin glargine-yfgn, also produced in K. phaffii (Semglee, Mylan Pharmaceuticals Inc., now a part of Viatris Inc.) [16]. Insulin glargine-yfgn is an insulin analogue with a prolonged duration of action that is also approved by the European Medicines Agency [17]. This protein showed noninferiority versus the reference insulin glargine and, as a biosimilar, had a reduced price and significantly improved access to diabetes treatment [35]. Biosimilar insulin products produced in this yeast and approved for commercialization in Europe include Baxter's Inpremzia and Mylan's Kirsty, which have shown comparable results to Novo Nordisk's reference biopharmaceuticals, Actrapid and Novorapid [20,21], respectively. The production of biopharmaceuticals in K. phaffii has bloomed in India and Japan owing to less strict intellectual property regulations than in the USA or Europe [28], and this market currently presents a range of K. phaffii-based products that include collagen, interferon, vaccines, and hormones [36,37]. Up until 2022, the Japanese Mitsubishi Pharma Corporation commercialized recombinant human serum albumin produced through a K. phaffii-based expression platform [38]. Human serum proteins comprise a large share of biopharmaceuticals expressed in K. phaffii: the "a" subunit of coagulation factor XIII, involved in coagulation disorders, and human antithrombin III, used in the treatment of disseminated intravascular coagulation, have been produced and purified in this yeast [39,40]. Research on the production of some biopharmaceuticals in K.
phaffii, such as glycoproteins and antibodies, has faced issues regarding the fungal pattern of glycosylation of the resulting proteins, which could hinder their biological activity. Efforts towards engineering the glycosylation pattern of this yeast have resulted in Merck's Glycofi strain, which produces proteins with human-like N-glycosylation and terminal sialylation. With this, the yeast could potentially replace mammalian cell lines in the production of human glycoproteins [41,42]. Different works describe the use of this platform to produce antibodies and erythropoietin [43-45]. Also, an anti-HER2 cancer therapeutic antibody produced in a Glycofi strain showed promising results in a preclinical study [46]. Unfortunately, the Glycofi facility was closed when Merck decided to stand back from biopharmaceutical research and development [47] in 2016. Many other proteins produced in K. phaffii are currently under research, including pre-clinical and clinical trials. Fusions of human serum albumin and peptide hormones such as the parathyroid hormone (PTH) aim to increase the stability of these hormones and improve their pharmacokinetic properties via the increased biological half-life of albumin [48]. Considering other protein fusion technologies, antibody-directed enzyme prodrug therapy (ADEPT) represents an ingenious strategy for overcoming both drug resistance and lack of selectivity in anti-cancer treatments. An antibody directed against cancer cell antigens is fused with a drug-converting enzyme; the corresponding prodrug is then administered to the patient and converted into an active drug in the tumor, avoiding systemic toxicity. K. phaffii is used to produce a fusion currently undergoing clinical trials, MFECP1, which contains an anti-carcinoembryonic antigen antibody fused to a carboxypeptidase. Once the antibody tags cancer cells, a bis-iodo phenol mustard prodrug is administered and then converted by the peptidase, leading to cell death [25]. Since the FDA approval of the first therapeutic antibody in 1986, these proteins have set remarkable milestones in the treatment of various diseases, including cancer, immune disorders, and infectious diseases [49]. K. phaffii has been used to produce a few of them, some currently approved for commercialization and others still under research. Eptinezumab (trade name Vyepti), produced by Lundbeck Seattle BioPharmaceuticals, received approval from the American regulatory agency in 2020 and from the European agency in 2022 for the prevention of migraine. The rationale behind this is based on the fact that migraine is a neurovascular disorder involving the release of the vasodilator calcitonin gene-related peptide (CGRP). The K. phaffii-produced eptinezumab is a humanized monoclonal antibody that binds to CGRP, preventing the triggering of migraine episodes [50]. Clazakizumab is an anti-IL-6 antibody currently being studied in various clinical trials [27]. The ubiquitous role of IL-6 in immune disorders, inflammatory diseases, and even cancer has prompted the development of various studies with anti-IL-6 antibodies. Bristol-Myers Squibb acquired exclusive worldwide development rights for most applications of the biopharmaceutical initially developed in K.
phaffii by Alder Pharmaceuticals and is currently conducting a phase II trial [51,52]. New applications of therapeutic antibodies include nanobodies, which are heavy-chain domains of camelid antibodies that penetrate tissues and overcome the blood-brain barrier more efficiently than regular therapeutic antibodies owing to their small size. These proteins have been efficiently expressed in K. phaffii, and future studies on their glycosylation and binding properties should bring more insights to this subject [53,54]. Ablynx, now a part of Sanofi, holds the worldwide rights for the Nanobody trademark and carries out the research that could achieve future therapeutics using this protein platform [55]. Virus-like particles (VLPs) purified from S. cerevisiae are already FDA-approved and commercially available (for example, Gardasil and Gardasil9 against the human papilloma virus, HPV [56]). VLPs purified from K. phaffii are not yet available, but some examples undergoing preclinical studies include HPV and coxsackievirus [57]. DENV envelope protein-based VLPs generated using K. phaffii showed encouraging results against dengue [58-60] and gave the perspective of an inexpensive vaccine that could be used in developing countries where dengue is endemic. A preclinical evaluation of hepatitis B virus (HBV) core antigen VLPs purified from K. phaffii against hepatocellular carcinoma is also underway [61]. Specifically considering HBV, the recombinant hepatitis B surface antigen (rHBsAg) is expressed by K. phaffii in the production of Shanvac-B, a historically successful Indian vaccine indicated for immunization against chronic liver infection caused by all known subtypes of HBV. The antigen is produced by a culture of genetically engineered K. phaffii carrying the gene that codes for the major HBV surface antigen in a high-cell-density fed-batch fermentation process [62,63]. Other examples of recombinant vaccines with registered clinical trials (ClinicalTrials.gov, accessed 10 February 2024) include a hookworm vaccine tested in Brazilian and American participants [29], an intestinal schistosomiasis vaccine [30], and a recombinant malaria vaccine [31]. In view of the wide applicability of K. phaffii in the production of recombinant antigens and the increasing emergence of biological therapeutics against infectious, autoimmune, and non-communicable diseases, this production platform will certainly play a key role in biopharmaceutical production in the near future. Applications in the Production of Renewable Chemicals and Fuels Bio-based chemicals, materials, and fuels produced from renewable biomass such as lignocellulose or even carbon dioxide (CO₂) are becoming an interesting alternative to replace, at least in part, those derived from fossil feedstock through more sustainable processes [64]. K. phaffii can utilize different substrates including glucose, fructose, ethanol, methanol, glycerol, sorbitol, succinic acid, and acetic acid [65-67]. Recently, the ability of K.
phaffii to utilize xylose via the oxidoreductive pathway at a slow rate was demonstrated [68]. Furthermore, genes involved in xylose metabolism have also been introduced into the yeast to improve xylose utilization, leading to higher assimilation of this hemicellulosic sugar [69,70]. Exploring its natural methanol utilization (MUT) pathway, CO₂ may also be used as a substrate. For instance, two approaches have been proposed: methanol can be synthesized via CO₂ hydrogenation and be further metabolized by the MUT pathway [71] or, alternatively, in a more elaborate strategy, CO₂ can be directly assimilated by an autotroph-engineered yeast [72]. Among the chemical compounds produced by K. phaffii are organic acids [73], sugar alcohols [74], polyketides [75], terpenoids [76], biopolymers [77], and biofuels [78]. However, in general, they represent proof of concept, since parameters such as titers, yields, and productivities are not yet suitable for application in large-scale industrial processes [79]. A summary of the main bioproducts is shown in Table 2. One of the first metabolites produced in K. phaffii was S-adenosyl-L-methionine (SAM), a potential agent for human therapy that acts as a methyl donor and precursor of some amino acids and peptides such as cysteine and glutathione [93]. A recombinant yeast expressing the SAM2 synthase gene was constructed and further improved by the knockout of cystathionine β-synthase (CBS). As a result, 13.5 g L⁻¹ of SAM was obtained in a 5 L fermenter using L-methionine as the substrate and methanol induction. K. phaffii has been used to produce chemical building blocks like organic acids with a broad range of applications in industry (Table 2). For instance, to produce lactic acid, the lactate dehydrogenase gene (LDH) from Bos taurus was introduced into the yeast [73]. The additional expression of an endogenous lactate transporter and optimization of oxygenation conditions during cultivation on glycerol increased the yield significantly, rising from 10% up to 70% of the maximum theoretical yield [73]. Similarly, lactic acid production has also been reported for a recombinant K. phaffii strain carrying a multicopy integration of the LDH gene derived from Leuconostoc mesenteroides, albeit using methanol as a carbon source [80]. In another study, glucose and methanol were used as substrates to produce C4-dicarboxylic acids. The combined overexpression of pyruvate carboxylase (PYC1) and malate dehydrogenase (MDH1) genes led to the production of 0.76, 42.28, and 9.42 g L⁻¹ of fumaric acid, malic acid and succinic acid, respectively, by the engineered yeast [81]. By integration of the malonyl-CoA reductase MCR gene from Chloroflexus aurantiacus into the K. phaffii genome, the production of 3-hydroxypropionic acid (3-HP) has also recently been demonstrated [82]. Protein and metabolic engineering strategies were used to increase titer and productivity, resulting in strains capable of producing 24.75 to 37.05 g L⁻¹ of 3-HP on glycerol [82]. To explore xylose utilization by K. phaffii, the yeast was engineered to express different xylose dehydrogenase genes from bacteria and filamentous fungi. The best strain produced 37 and 11 g L⁻¹ of xylonic acid from xylose and sugarcane bagasse hydrolysate, respectively, in batch cultivation [70] (Table 2). Synthesis of sugar alcohols such as xylitol and inositol was reported by Louie et al. [74] (Table 2). The biotransformation of xylose into xylitol was evaluated in recombinant K.
phaffii harboring heterologous xylose reductase genes and the glucose dehydrogenase gene, gdh, from Bacillus subtilis. The highest conversion rates, with cells expressing the xylose reductase from Scheffersomyces stipitis, reached up to 80% and 70%, with productivity values of 2.44 and 0.46 g L⁻¹ h⁻¹, from xylose and a non-detoxified hemicellulose hydrolysate, respectively. The authors also demonstrated that the biocatalyst cells could be recycled in multiple rounds of biotransformation without significant loss of activity [74]. The ability of K. phaffii to produce bulk chemicals and biofuels has been proven in recent years (Table 2). A synthetic route for 2,3-butanediol (2,3-BD) production was implemented in the yeast through the overexpression of two heterologous enzymes, the α-acetolactate synthase AlsS and α-acetolactate decarboxylase AlsD from B. subtilis, with the final reaction catalyzed by an endogenous 2,3-BD dehydrogenase. A titer of 74.5 g L⁻¹ of 2,3-BD was achieved in fed-batch cultivation using an optimized medium and glucose as substrate [85]. Isobutanol and isobutyl acetate are other examples of chemicals produced in K. phaffii, exploring the native L-valine biosynthetic pathway [78]. Fatty acids and derivatives are important raw materials to produce advanced oil-based chemicals. In the case of K. phaffii, it was shown that deleting two native fatty acyl-CoA synthetase genes improved free fatty acid (FFA) accumulation in the cells [86]. Later, to achieve a higher production of FFA from methanol, a global rewiring of the central metabolism was proposed to drive the carbon flux to the final product. As a result, 23.4 g L⁻¹ of FFA was produced by the engineered strain during bioreactor cultivation [86]. Using this FFA-overproducing background, 35.2 and 90.8 mg L⁻¹ of fatty alcohols were also obtained from methanol or glucose, respectively, after the expression of three additional heterologous enzymes, a carboxylic acid reductase, a 4′-phosphopantetheinyl transferase, and an alcohol dehydrogenase (Table 2) [86]. Other bio-products produced by engineered K. phaffii include terpenoids, polyketides, and biopolymers (Table 2). Considering the efficient isoprenoid metabolism and functional expression of proteins such as cytochrome P450 enzymes in K. phaffii [87], several studies have described the production of different terpenoids in this yeast, for instance dammarenediol-II [87], lycopene, β-carotene [76,88] and (+)-nootkatone [89]. Terpenoids are natural products with broad-range applications in the pharmaceutical and industrial sectors due to their distinct biological activities and high bioavailability, being used, for example, as flavoring additives, antioxidants, antiaging agents, drugs, and antitumoral agents [94]. Polyketides are a class of secondary metabolites with bioactive properties that have relevant applications in the pharmaceutical industry [95]. Engineered overproducing K. phaffii strains for the production of 6-methylsalicylic acid [90], lovastatin, and monacolin J [75] are examples of polyketides already obtained using this host cell (Table 2). Furthermore, the ability of K. phaffii to produce different biopolymers has been investigated (Table 2). Polyhydroxyalkanoates (PHAs) were accumulated up to 1% DCW in a recombinant K.
phaffii expressing a heterologous PHA synthase targeted to the peroxisome [91]. The production of hyaluronic acid, a glycosaminoglycan used in pharmaceutical and medical formulations, was also achieved by overexpression of endogenous genes involved with hyaluronic acid synthesis, combined with heterologous expression of hyaluronan synthase and UDP-glucose dehydrogenase from Xenopus laevis [77]. Also recently, the production of chondroitin sulfate and heparin has been reported. The engineered and optimized strains produced around 2 g L⁻¹ of these compounds from methanol in fed-batch cultivation [65]. Applications in the Production of Biomaterials A biomaterial is "any substance or combination of substances, other than drugs, of synthetic or natural origin, that can be used for any period of time, that augments or replaces partially or totally any tissue, organ or function of the body, to maintain or improve the individual's quality of life" [96]. Indeed, investigations delve into harnessing this system for biomaterial production, encompassing extracellular polysaccharides and recombinant proteins for diagnostics, therapeutics, and potentially clinical tissue engineering [97]. The characterization and utilization of biomaterials necessitate substantial quantities, posing a challenge for their cost-effective industrial-scale production. Hence, K. phaffii emerges as a proficient host for manufacturing a diverse array of biomaterials [98]. In this context, silks produced by arthropods, such as spiders, silkworms, dragonflies, and bees, among others, attract attention due to their mechanical and biocompatibility characteristics. Silk produced by silkworms has been widely used in the textile industry and as suture material for many years [99], especially due to its combination of remarkable strength and flexibility. These biomaterials can be used to create threads, films, microcapsules, foams, sponges, hydrogels, and implantable materials for application in regenerative medicine, implant coating, and drug delivery [100,101]. Egg glue proteins (EGPs) produced by some insects form a sticky substance that helps eggs adhere to surfaces, reducing their exposure to external factors such as wind or rain. In this context, the structure of silkworm EGP was elucidated, providing relevant information about its structure-function relationship for uses as a biomaterial [102]. The adhesive properties of the natural and recombinant protein were tested by expressing the EGPs in K. phaffii and E. coli. This work showed the importance of producing the protein with glycosylations, as the natural EGPs and those expressed in K. phaffii presented better adhesive properties when compared to non-glycosylated EGPs produced in E. coli. Indeed, many bioadhesive proteins are known to be glycosylated [103,104], and this often contributes to many properties of protein function such as folding, solubility, thermostability, and protection against proteolysis [105]. Therefore, the production of bioadhesive proteins becomes advantageous in the K. phaffii expression system. The recombinant protein cp19k-MaSp1, combining cp19k from the adhesion complex of the barnacle species Megabalanus rosa and MaSp1 from Nephila clavipes dragline silk, was engineered and expressed in K.
phaffii, yielding a protein content of 53.38 mg L⁻¹. This recombinant protein exhibited remarkable adhesion capabilities, surpassing the individual proteins, and demonstrated enhanced biocompatibility, mechanical resilience, and self-healing properties conducive to cell adhesion, proliferation, and growth, particularly for human umbilical vein endothelial cells (HUVECs) [106]. Additionally, another protein composite with bioadhesive traits was synthesized in K. phaffii [107]. This genetic engineering endeavor involved mussel foot proteins 3 and 5 (Mfp3, Mfp5) from Mytilus californianus, gas vesicle protein A (GvpA) from Dolichospermum flos-aquae (a cyanobacterium), and the CsgA curli protein from E. coli. The synergistic properties of this chimera, coupled with posttranslational modifications during yeast expression, resulted in robust protein adhesion, positioning it as a promising biomaterial for forthcoming biomedical applications. From another perspective, collagen is a natural biopolymer of the extracellular matrix that makes up many structures in the body, namely the skin, muscles, bones, and cartilage. It is widely used in many fields such as biomedical applications and the pharmaceutical and cosmetic industries [97]. In the biomedical field, its uses include wound dressings, suture material, tissues, and drug delivery, among others [108]. Like collagen, gelatin, produced from denatured collagen, also finds biomedical applications. It is prepared by hot acid or alkaline extraction of animal tissues [100]. However, the production of these biopolymers has some bottlenecks, as they come from animal tissues and, consequently, can cause possible allergic reactions, in addition to the transmission of pathogens [109]. Therefore, the production of recombinant human collagen has been explored over the years, in order to obtain a product with a higher yield and better stability and biocompatibility [110,111]. In addition, there is a complexity in the production of helical collagen, since its thermal stability depends on the hydroxylation of proline, present in its central helical molecule, which is only hydroxylated by the enzyme prolyl-4-hydroxylase (P4H), present only in mammalian cell expression systems [112]. To overcome this problem, some groups also cloned and co-expressed the P4H enzyme in K. phaffii, but with low protein production [113]. Recombinant human-like collagen (RHLC) was expressed in K. phaffii [108], showing an extracellular expression titer of 2.33 g L⁻¹ and 98% purity within 48 h, more efficient than extraction from animal tissues. Furthermore, RHLC showed stability at high temperatures and good biocompatibility, with potential application for industrial production. On the other hand, the production of some protein polymers in a K. phaffii expression system is still challenging, due to the repetitive amino acid sequences that the polymers have in their structure, which can be target sites for proteases. It is known that proteolytic activity is minimized by growth on glycerol and glucose [98]. Therefore, some alternatives to the promoters widely used in expression of recombinant proteins in K. phaffii (such as PAOX1, induced by methanol, or even constitutive promoters) have been explored. A copper-inducible promoter from S. cerevisiae (CUP1) was developed for gelatin production in K.
phaffii [114]. This system offers the advantage of being expressed when cells are cultured in dextrose, an economical, non-toxic, and non-flammable carbon source. This strategy provides an exploitable tool for the large-scale production of gelatin and other biomaterials in K. phaffii. In view of this, the biomaterial production system in K. phaffii becomes promising for the most diverse biomedical applications, since several secreted protein polymers have been successfully produced using this microorganism. Applications in the Food and Feed Industry One of the first uses of K. phaffii was in the food industry. Because of its natural ability to assimilate inexpensive methanol at high cell densities, K. phaffii was initially considered as an attractive food supplement in the form of single-cell protein (SCP). In this context, the development of a mutant strain with a high methionine content prompted British Petroleum Co to file a patent in the early 1980s [115]. The interest in SCP has recently re-emerged since methanol can be sustainably synthesized from CO₂. Using adaptive laboratory evolution and metabolic engineering, Meng et al. [116] developed a K. phaffii strain with a protein content higher than other food sources such as soy, fish, meat, and whole milk. A milestone in the use of food/feed products derived from K. phaffii was when the U.S. FDA awarded GRAS status to recombinant phospholipase C for degumming vegetable oils for food use in 2006 [10]. In 2016, Impossible Foods Inc. (Redwood City, CA, USA) launched the Impossible Burger, a plant-based alternative to traditional meat-based burgers. The Impossible Burger contains a recombinant protein produced in K. phaffii called leghemoglobin, a soy-derived heme protein similar to myoglobin. Heme proteins are important factors to mimic animal-derived meat flavors [117]. The Impossible Burger, a soy-derived product, has been proven safe for human consumption [118]. Today, the main interest in K. phaffii in the food and feed industry relies on the production of recombinant products, such as enzymes. The enzyme market is projected to reach USD 16.9 billion by 2027, growing at a compound annual growth rate (CAGR) of 6.8% from 2022 to 2027 [119]. This is due to the increasing use of enzymes as chemical substitutes, particularly in food and beverage, cleaning, and pharmaceutical applications. In the European Union, approximately 260 different enzymes are available, and most are produced by filamentous fungi (58%), yeast (5%), and bacteria (28%). A third of these enzymes are derived from genetically modified organisms. A list of the main enzymes produced in K. phaffii for the food and feed industry has been published elsewhere [119]. To enhance the digestibility of plant-based feedstuffs, the addition of phytases has been considered. Phytases reduce the need for inorganic phosphate supplements for monogastric animals by removing phosphate from phytate, the main storage form of phosphorus in some plants. The production of recombinant phytase in K. phaffii is a good example of how this yeast-based platform can have a major impact in modern industrial biotechnology. According to a report by Validogen GmbH, the annual market for this enzyme is approximately USD 350 million [120]. In addition to phytase, xylanases are also a desirable supplement in feed since they reduce the viscosity of raw plant material by degrading xylan. In order to reduce production costs, Roongsawang et al.
[121] constructed an expression cassette formed by both a phytase and a xylanase coding gene separated by the 2A peptide sequence that promotes ribosome skipping. The results showed that the biochemical properties of the resulting enzymes were similar to those produced individually. Although many commercial expression vectors are based on the strong inducible PAOX1, the presence of residual methanol in the final product is a matter of concern in the food industry. To avoid this, constitutive promoters or engineered PAOX1 promoters may be used for enzyme production in a methanol-free medium [122]. Validogen GmbH has screened a promoter library of variants of the PAOX1 and isolated a particular mutant that was able to secrete 20 g L⁻¹ of phytase under non-methanol conditions, as opposed to 22 g L⁻¹ in a methanol-induced control [120]. Bioprocess developments combining synthetic biology with metabolic engineering should contribute to further improving enzyme production in K. phaffii. Advanced Tools for Synthetic Biology in K. phaffii Synthetic biology is a relatively recent area that has made outstanding contributions to the bioeconomy worldwide, bringing innovative solutions to problems in diverse areas, and is now entering the second decade of its life. Considering the several advantages of K. phaffii that place it as a desirable chassis organism for industrial applications [123], it is essential to discuss the main synthetic biology tools developed for this organism (Figure 1).
Synthetic Genetic Circuits Synthetic genetic circuits aim to develop programmable organisms capable of performing a wide range of tasks [128]. Jacob and Monod first described endogenous genetic circuits, drawing parallels between electrical circuits and the gene expression control of the lactose and tryptophan operons [129], with the first synthetic genetic circuit developed in 2000 [130,131]. These circuits are composed of modular genetic parts which need to be fully characterized, independent, reliable, orthogonal, tunable, composable, and scalable [132]. One of the first synthetic circuits reported in K. phaffii was a positive autoregulated circuit that aimed to obtain a methanol-free strain without gene deletion. The synthetic circuit consisted of the transcriptional activator Mxr1, constitutively expressed and acting in the derepression of PAOX1, and the Nrg1 repressor down-regulated by methanol. In another methylotrophic yeast, Hansenula polymorpha, the PAOX1-orthologous promoter PMOX is not glycerol sensitive and, as the difference between PAOX1 and PMOX relies on their upstream transcriptional regulators, the authors hypothesized that up-regulation of MXR1 might lead to the same phenotype in K. phaffii. Therefore, they placed an extra MXR1 copy under the control of PAOX2, a weaker promoter than PAOX1. As a result, using GFP expression assays, PAOX1 started responding to the absence of glycerol without the need for methanol induction. Notably, to evaluate the viability of this circuit for recombinant protein production, a secreted single-chain variable fragment (scFv) was expressed under this system and showed a 98% increase in the presence of methanol and a 269% increase in the absence of glycerol [133]. A significant advance was the development of a malonyl-CoA-based regulated genetic circuit oscillator in K.
phaffii [124]. This circuit consisted of two sensors based on the bacterial malonyl coenzyme A (malonyl-CoA) system, in which malonyl-CoA binds to the repressor protein FapR and releases it from its DNA operator fapO. Sensor 1 comprises FapR fused to Prm1, a transcriptional activator of PAOX1. In the presence of cerulenin, intracellular malonyl-CoA is upregulated, repressing the expression of the reporter gene. In the second sensor, the Prm1-FapR fusion acts as a repressor instead of an activator. After validation, the authors designed the malonyl-CoA oscillator, allowing the conversion of the accumulated malonyl-CoA to polyketide. Because malonyl-CoA is the building block of several biochemical compounds, this oscillator might be the foundation stone for industrial and pharmaceutical applications. Even with the promise of using synthetic genetic circuits as gene regulation tools, their design demands fulfilling several criteria, such as a synthetic genetic parts database for K. phaffii and landing pads to integrate them into biological systems, avoiding undesired endogenous interference [134]. Consequently, the availability of Genomic Safe Harbors (GSHs) [135], stable centromeric vectors [136], and synthetic chromosomes [137] becomes crucial. CRISPR-Cas Systems as Tools for Gene Editing and Gene Regulation Control In K. phaffii, the first CRISPR-Cas system evaluated was the CRISPR-Cas9 markerless system, which paved the way for CRISPR-Cas9-based metabolic engineering in this organism [138]. Despite this, donor cassette integration via Homologous Recombination (HR) was still challenging. However, two years later, the same group demonstrated CRISPR-Cas9 high-efficiency integration of marker-less donor cassettes via HR with the possibility of marker recycling in a ∆KU70 strain [139], the latter deletion having been shown to increase HR efficiency [140]. These results expanded the CRISPR-Cas9 toolset for K. phaffii, allowing not only indels but also point mutations, deletions of genome sequence stretches, protein fusions, and the introduction of scarless tags, augmenting CRISPR-Cas9-based metabolic engineering possibilities significantly. As most industrial applications for K. phaffii require the integration of complex biosynthetic pathways, a multiloci genome integration tool is essential to further advance this organism as a chassis biofactory host. Following this reasoning, Liu and collaborators showed CRISPR-Cas9-based duplex and triplex integration in the ∆ku70 K. phaffii strain [125]. Nevertheless, a drawback of this tool was the low transformation efficiency with an increasing number of expression cassettes. All the CRISPR-Cas9 systems mentioned above relied on ribozymes to express gRNAs, inserting a layer of complexity into their experimental design, and leading to the requirement to find K. phaffii RNA polymerase III promoters. These promoters were identified and successfully used in a multiplex genome editing strategy [141]. Expanding this system, CRISPR-ARE was developed for simultaneous gene activation, repression, and editing [142]. Recently, Go and collaborators developed another marker-less multiloci integration tool based on the CRISPR-Cas9 system [143]. They evaluated the integration efficiency of three genes of the β-carotene biosynthetic pathway in intergenic regions of a ∆KU70 K.
phaffii strain. The results showed a slight increase in HR; however, the disruption of KU70 impaired cell fitness and resulted in low transformation efficiency, indicating that the strategy was not the best option. Another approach to increase HR in K. phaffii was the fusion of different exonucleases, which act at the beginning of the HR process, to Cas9 [144]. As proof of concept, the authors integrated genes of the fatty alcohol biosynthesis pathway with an HR improvement from approximately 66% to 91%. To overcome the requirements of CRISPR-Cas9, other CRISPR-Cas systems were evaluated. Zhang and collaborators developed a system based on Cas12a (Cpf1), which recognizes T-rich PAMs with a single CRISPR RNA (crRNA), shortening the gRNA expression cassette [145]. Despite this, the editing efficiency varied according to the target gene and diminished for triplex gene editing. Furthermore, its off-target potential was not evaluated. Recently, Liu et al. [126] described a highly programmable expression platform based on CRISPR-Cas systems (SynPic-X), and Deng and collaborators achieved the highest reported titer in K. phaffii using CRISPR-Cas to regulate human lactalbumin (α-LA) production [146]. Conclusions The diversity of compounds already produced in K. phaffii, as exemplified above, highlights the potential of this yeast to be employed as a microbial platform for the production of value-added chemicals, fuels, and bioproducts. The increasing knowledge in cell biology and physiology, as well as the design of new synthetic biology tools for metabolic engineering, will certainly support and contribute to the development of the more robust strains and processes required for industrial application. Figure 1. A schematic representation of the principal synthetic biology tools developed for the yeast K. phaffii, including synthetic genetic circuits and CRISPR-Cas systems. The genetic circuit topic is represented by the malonyl-CoA-based oscillator [124]. The CRISPR-Cas section includes the multiloci genomic integration tool [125] and the programmable expression platform SynPic-X [126]. The 3D design of the molecules was generated by Illustrate [127]. * In this genetic circuit, Acc1 is a single-base mutant which was shown to avoid deactivation by the AMP-activated serine/threonine protein kinase (Snf1) upon glucose depletion in yeast. Table 2. Representative bio-based compounds produced by K. phaffii from different substrates.
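Every CRISPR-Cas9 tool surveyed in this section begins with guide-RNA target selection. As a generic, self-contained illustration, not taken from any of the cited K. phaffii toolkits and using a made-up sequence fragment, the sketch below scans a DNA string for 20-nt protospacers followed by the SpCas9 NGG PAM.

```python
# Generic guide-target scan for SpCas9 (illustrative only; not from any of
# the cited K. phaffii toolkits). Finds 20-nt protospacers followed by NGG.
def find_cas9_targets(seq, guide_len=20):
    """Return (position, protospacer, PAM) for every NGG site on the + strand."""
    seq = seq.upper()
    hits = []
    for i in range(guide_len, len(seq) - 2):
        pam = seq[i:i + 3]
        if pam[1:] == "GG":                  # NGG PAM for SpCas9
            hits.append((i - guide_len, seq[i - guide_len:i], pam))
    return hits

# Hypothetical fragment standing in for a K. phaffii locus:
example = "ATGCTTGACCAGTTGGAACCTGGTACCGGTAAAGCTCGGATCCTTAGG"
for pos, proto, pam in find_cas9_targets(example):
    print(f"{pos:>3}  {proto}  PAM={pam}")
```

A real design workflow would also scan the reverse strand and score candidate guides for off-targets against the full K. phaffii genome before building the gRNA expression cassette.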
8,219.4
2024-06-01T00:00:00.000
[ "Environmental Science", "Biology", "Engineering" ]
Formation Processes of Zinc Excimer Thin Films Due to Ion-recombination Processes In materials science, the number of d-electrons of transition metals is an essential factor controlling the characteristics of alloys and compounds. In this paper, we show an example of controlling the number of d-electrons (holes) by using inner-core electron excitation of zinc atoms. An important feature of our research is that we can create a long-lived excited electronic state of zinc (3d⁸); the lifetime of the excited zinc is more than 307 days. First, the experimental apparatus and the boundary conditions of the ion-recombination processes are explained. XPS results show that the excited zinc films exhibit satellite peaks caused by the 3d⁸ final state and the charge-transfer final state 3d¹⁰L². Excited states of zinc were formed at the surface of the substrate by an ion-recombination process between Zn⁺ and Zn⁻. The excited zinc diffused from the substrate surface to the surface of the excited zinc thin film. The density of excited zinc is proportional to the electron density on the substrate. Introduction Materials chemistry has traditionally relied on control of the outermost-shell electrons. From the standpoint of materials chemistry, studies of new materials created under the control of inner-shell electrons are scarce. Control of inner-shell electrons involves the excitation process and the relaxation process, both of which have been studied extensively. The term excimer is short for "excited dimer". An excimer is an electronically excited complex of atoms or molecules bound to other atoms or molecules. The lifetime of an excimer is normally very short, on the order of nanoseconds. The binding of a larger number of excited atoms forms Rydberg matter clusters, the lifetime of which can exceed many seconds [1]. As is well known, the transition metal group is characterized by partially filled 3d electronic configurations, and zinc (group IIb) is the only transition metal element with a completely filled 3d configuration. Our group has therefore focused on creating zinc excimers of effectively permanent lifetime, understanding the mechanism of the formation processes, and analyzing the resulting phenomena. The excitation can be exploited in modification of surface layers, selective modification of the bulk, energy transport and charge transport, energy storage, and so on. Chemical reactions usually depend on the number of outer electrons; however, if we can manipulate the inner-core electron system of the atom, we obtain a new periodic table, that is, a new development platform [2]. Experiment Apparatus The experimental apparatus, developed by our research group, is an integrated evaporation system with transmission electron spectroscopy evaluation (electron-assisted PVD) [3] [4]. The conceptual diagram of the vacuum experimental system and the experimental conditions are shown in Figure 1. The ultimate pressure of the vacuum system was 10⁻⁵ Pa. Thermal electrons emitted from a hairpin tungsten filament served as the incident electron source. Thermal electrons emitted from the electron gun were accelerated by the bias voltage (V_B = 0 ~ 240 V) applied to the substrate electrode and irradiated a wide area of the substrate surface.
The incident angle of the electrons was 45° from the substrate surface. Then 0.1 g of zinc atoms was deposited on the insulating area from the effusion cell at 600 °C; the insulating area measured about 6.5 mm in diameter at the center of the sapphire substrate. Figure 2 is a conceptual diagram of the reaction field [3]. As shown in Figure 1 and Figure 2, the peripheral portion of the aluminum oxide substrate is equipped with a gold film in order to provide the bias voltage for the incident electrons. When the bias voltage is applied from the electron gun (cathode) to the gold thin film (anode), the reaction field becomes an electric field whose vector points from the gold thin film to the electron gun. The Coulomb force F acting on a point charge is given by F = QE [5] [6]. At first, the Coulomb force acts on the incident electrons, directed from the electron gun toward the gold thin film, so the incident electrons can adhere to the sapphire substrate. Electrons then accumulate on top of the substrate (the adhered electrons create a potential/field at the substrate surface), and the number of adhered electrons keeps increasing until the potential of the substrate surface reaches V_B. At that point a stable, downward-pointing electric field has been created by the adhered electrons. The initial velocity of the incident zinc particles is downward; the zinc particles have momentum only in the vertical direction. The sapphire surface was electrified by the incident electrons up to the same magnitude of potential as the anode bias. This field creates a selective growth field for Zn⁺. The Reaction Field Electrons form a stable, substrate-oriented electric field at the center of the aluminum oxide substrate, where the radius is R = 3.5 mm. Corresponding to the thermal electrons emitted from the electron gun being accelerated by the bias voltage (V_B = 0 ~ 240 V), the potential of the insulating portion is negative and its absolute value is equal to the bias potential V_B. Corresponding to the bias potential, the electrons in the reaction field have a stable energy equal to the total of the potential energy and kinetic energy (E_e = E_p + E_k) [6], and the electrons on the substrate must have a potential energy of 0 ~ 240 eV. In other words, the incident electron energy should be 0 ~ 240 eV. By quantum theory, the possible energy values of atoms are discrete. Therefore, for a transition from a low-energy state to a high-energy (excited) state, exactly the discrete energy difference ∆E = E₂ − E₁ is required [7]. Ion-Recombination Process In vapor-phase growth processes, the kinetic energy of incident atoms in the gas phase is dissipated at the surface during the condensation process [8]. Zinc excimers were formed on sapphire substrates within a 6.5 mm diameter area enclosed by the gold electrode, which is the anode for the incident electrons exciting the zinc atoms. The estimated excitation process of the zinc thin films comprises four steps: (1) ionization, Zn + e⁻ → Zn⁺ + 2e⁻; (2) neutralization, Zn⁺ + e⁻ → Zn; (3) electron attachment, Zn + e⁻ → Zn⁻; (4) ion recombination, Zn⁺ + Zn⁻ → Zn₂*. Firstly, zinc incident from the effusion cell is ionized by the incident electrons in the negative field (1) in the gas phase; the lifetime of Zn⁺ can be extended by the electric field [9]. Secondly, the ionized Zn⁺ adheres to the substrate (the surface of the sapphire), and the adhered Zn⁺ combines with an electron to give Zn at the substrate surface (2). Thirdly, Zn combines with an electron to give Zn⁻ at the substrate (3). Lastly, the Zn⁻ adhered at the substrate combines with Zn⁺ arriving from the negative field, and the excited state is formed (4).
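The Surface Electron Distribution section below argues that this selective growth field is strongly edge-weighted. One classical model consistent with that picture, offered purely as an assumed illustration (the paper's own equation (5) is not reproduced here), treats the electrified insulating area as an isolated conducting disc held at the bias potential:

```python
# Assumed illustration: isolated charged-disc model of the electrified area
# (σ(r) = Q / (4πR sqrt(R^2 - r^2)), with Q = 8 ε0 R V_B). This stands in for
# the paper's equation (5) and reproduces the edge-heavy density described
# in the next section.
import numpy as np

eps0 = 8.8541878128e-12     # F/m, vacuum permittivity
e = 1.602176634e-19         # C, elementary charge
R = 3.25e-3                 # m, half of the 6.5 mm insulating area
V_B = 240.0                 # V, maximum bias

Q = 8 * eps0 * R * V_B      # total charge; isolated-disc capacitance C = 8 ε0 R
for r in [0.0, 0.5 * R, 0.9 * R, 0.99 * R]:
    sigma = Q / (4 * np.pi * R * np.sqrt(R**2 - r**2))  # surface charge, C/m^2
    n = sigma / e                                       # electrons per m^2
    d = 1 / np.sqrt(n)                                  # mean spacing, m
    print(f"r/R = {r/R:4.2f}:  n = {n:.2e} /m^2,  spacing = {d * 1e9:.0f} nm")
# The density rises sharply toward the disc edge, consistent with excimer
# growth starting from the edge of the insulating area.
```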
The lifetime of negative ions is much shorter than that of positive ions [10]. The electron-charged sapphire surface was used to elongate the lifetime of Zn−. Normally the lifetime of an excimer is transitory, of the order of nanoseconds, but here the excited zinc atoms are fixed from the surface phase into the solid phase at an early stage, so that the excited states are preserved for a long time. We boldly assume that the excited zinc was created through the special charging field by ion-recombination processes. The electron density distribution determines the density of excited zinc. Surface Electron Distribution According to the equations above, the excited zinc depends on the electrons adhered on the substrate. The substrate surface is charged by the incident electrons; the electron density and the distance between electrons on the substrate are given by Equation (5) [3] [6]. First, the sapphire surface is electrified by the incident electrons up to the same magnitude of potential as the anode bias, and the bias potential is fixed between 0 V and 240 V. From Equation (5), to keep the surface potential constant the surface electron density must depend on the radius: the electron density at the edge is much higher than at the center and, correspondingly, the distance between electrons at the edge is smaller than at the center. Since the excited zinc depends on the electron density, the excited zinc grows from the edge; furthermore, the spacing between excited zinc atoms at the edge is smaller than at the center, because the excited zinc depends on the electrons. From Equation (5), the density of excited zinc depends on the surface electron density. The zinc atoms were deposited on the insulating area, which measured 3.0 ~ 7.0 mm in diameter at the center of the substrate. With further refinement, the reaction field of the experiments could be reduced to the micrometer scale. The simulated electron distribution is shown in Figure 3. Results of XPS In Figure 4(a), the XPS spectra of Zn 3d, 3p, 3s, 2p, C 1s, and O 1s of the excited zinc thin film are compared between the 7-day and the 307-day measurements; "7 days" means the sample was measured 7 days after it was made, and "307 days" means the same sample was measured 307 days after it was made. The core-level photoelectron spectra of Zn 3d, 3p, 3s, and 2p in the excited zinc film are shown in Figure 4(a) and Figure 4(b). The reference point of these data is C 1s (285.0 eV). The sample shown in Figure 4 was deposited at 230 eV and measured after 7 days and after 307 days. The 307-day spectrum in Figure 4 shows characteristic features of the zinc spectrum: the binding energy of 2p1/2 is 23.1 eV higher than that of 2p3/2 (1044.9 − 1021.8 = 23.1), the binding energy of 3p1/2 is 2.8 eV higher than that of 3p3/2 (91.4 − 88.6 = 2.8), and the binding energy of 3d3/2 is 0.5 eV higher than that of 3d5/2 (10.1 − 9.6 = 0.5). The spectral shapes of Zn 3d, 3p, 3s, and 2p in the 7-day measurement show features similar to metallic zinc. In contrast, the spectral shapes in the 307-day measurement show new features: satellite peaks on the high-binding-energy side that had not been reported before. In the 307-day spectrum of Figure 4(b) there are peaks at about 15 eV and about 3.6 eV on the high-binding-energy side, whereas the 7-day spectrum of Figure 4(b) is smooth at 15 eV on the high-binding-energy side. Figure 5 shows the decomposition of the inner-shell photoelectron spectra of the 7-day measurement. The curves of 3d, 3p, 3s, and 2p3/2 were fitted with Gaussian functions; the fitted peak curves are drawn in green, the backgrounds are indicated by the green dotted lines, and the peak sums are shown as pink curves.
3d: the 3d region was fitted with two nearby peaks centered at 0 eV and 1.0 eV; it comprises the 3d5/2 and 3d3/2 final states, whose binding energies are close to each other. 3p: the 3p region was likewise fitted with two peaks, centered at 0 eV and 2.6 eV, comprising 3p1/2 and 3p3/2. 3s: 3s was fitted with a single Gaussian peak at 0 eV. 2p3/2: 2p3/2 was also fitted with a single Gaussian peak at 0 eV. As discussed in Sections 3.1 and 3.2, these spectra show normal zinc features: on the upper surface of the excited zinc film, no excited zinc was present 7 days after the sample was made. Figure 6 shows the decomposition of the inner-shell photoelectron spectra of the 307-day measurement. We first note that these spectra include main peaks at 0 eV, as in the 7-day case, plus shifted peaks on the high-binding-energy side. As before, the curves of 3d, 3p, 3s, and 2p3/2 were fitted with Gaussian functions, with the backgrounds indicated by the green dotted lines. 3d: the 3d region comprises the 3d5/2 and 3d3/2 final states, whose binding energies are too close to be resolved separately; it was fitted with two peaks centered at 0 eV and 3.6 eV, corresponding to the main zinc 3d peak and a +3.6 eV shifted peak. 3p: ideally 3p should be fitted with four peaks, two main 3p components and two shifted components on the high-binding-energy side; it was fitted with two curves, corresponding to the main zinc 3p peak and a +3.6 eV shifted peak. 3s: 3s was fitted with two Gaussian peaks, at 0 eV and +3.6 eV. 2p3/2: 2p3/2 was also fitted with two Gaussian peaks, at 0 eV and +3.6 eV. There are thus two kinds of zinc atoms in the film. Following the interpretation of reference [3], the data for the transition metals from Mn2+ to Zn2+ exhibit satellite effects that may be qualitatively interpreted in this way [12]: satellite structures appear for ions with almost filled shells (Cu2+ 3d9, Ni2+ 3d8, and Co2+ 3d7), and no such effect appears for the filled shell of Zn2+ 3d10. The resolved satellites are believed to be due to inner holes of 3dn (n ≤ 9) configurations; that is, if there is no inner hole in the 3d state, no satellite peaks appear. The 307-day spectrum therefore shows both normal zinc features and features absent in normal zinc. The peaks on the high-binding-energy side are Zn 3d10L2 final states, in which two electrons transfer from the ligand to Zn 3d at the same site [13] [14]. The charge-transfer energy is 3.6 eV. The only ligand elements considered are C or O atoms from contamination in air. Proofs of Long-Lifetime 3d Holes by XPS The spectral shapes of Zn 3d, 3p, 3s, and 2p in Figure 6 show new features. Three binding energies are observed, at 0 eV, 3.6 eV, and 15.8 eV, corresponding to the normal zinc atom 3d10 state, the charge-transferred 3d10L2 state, and the 3d8 state. The components that form the sharp rise on the low-binding-energy side originate from the d10 final state (3d10). The peak on the high-binding-energy side is identified as the d8 final state, in which no movement of the hole occurs (3d8). Between the 3d10 and 3d8 peaks lies a satellite peak: in response to the Coulomb force of the inner-shell hole, a hole in the valence band has moved from the Zn 3d orbital to a ligand orbital at the same site, giving the d10L2 final state (3d10L2). This process is charge transfer (CT), and the ligand should be O or C [15] [16]. On the special excited zinc thin film (Figure 6), the peaks observed at 0 eV, 3.6 eV, and 15.8 eV thus correspond to 3d10, 3d10L2, and 3d8.
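The Gaussian decomposition described above can be reproduced with standard least-squares tools; a minimal sketch, assuming a hypothetical background-subtracted spectrum with a main line at 0 eV and a satellite near +3.6 eV (the actual fitting software used is not named in the paper), is:

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, center, width):
    """Single Gaussian peak."""
    return amp * np.exp(-0.5 * ((x - center) / width) ** 2)

def two_peaks(x, a1, c1, w1, a2, c2, w2):
    """Sum of two Gaussians, e.g. a main line plus a shifted satellite."""
    return gaussian(x, a1, c1, w1) + gaussian(x, a2, c2, w2)

# Hypothetical background-subtracted spectrum (relative binding energy, eV)
x = np.linspace(-5, 10, 300)
rng = np.random.default_rng(0)
y = two_peaks(x, 1.0, 0.0, 0.8, 0.3, 3.6, 0.9) + rng.normal(0, 0.01, x.size)

# Initial guesses: main peak at 0 eV, satellite near +3.6 eV
p0 = [1.0, 0.0, 1.0, 0.2, 3.5, 1.0]
popt, _ = curve_fit(two_peaks, x, y, p0=p0)
print("fitted peak centers (eV):", popt[1], popt[4])
```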
The charge-transfer energy is Δ = 3.6 eV, and the Coulomb repulsion between 3d electrons of zinc is U_dd = 15.8/2 = 7.9 eV. The excited zinc forms at the bottom of the excited thin film and spontaneously spreads to the surface of the zinc film; the diffusion mechanism is not yet understood. The lifetime of the excited state is more than 307 days. Conclusion The formation processes of the Zn excimer depend on discrete energies that are combinations of the binding energies of the electrons of the Zn atom. The condensed states depend on the excitation processes of each ion. The density of the condensed states depends on the density of electrons, which in turn depends on location and bias voltage. We have presented formation processes of excited 3d states of Zn atoms with abnormally long lifetimes: the lifetime of the excited states can be counted in days (more than 307 days), and our samples can be re-excited. The excited zinc atoms form excimers (excited molecules) through the ion-recombination process occurring in the surface phase. In the special zinc thin film, the excited state of zinc, 3d8, was observed; the charge-transfer energy is Δ = 3.6 eV and the ligand is considered to be C/O. The excited zinc forms at the bottom of the excited thin film and spontaneously spreads to the film surface. Exploitation of the excimer in energy transport and storage thus becomes possible.
The Effect of Using Zoom towards Online English Teaching and Learning Process This research deals with the online teaching and learning process; the objective of this study is to explore the effect of Zoom use on the online English teaching and learning process in the post-pandemic era. The study was designed as a mixed-methods model combining quantitative and qualitative data. The participants of this study were students in the first semester of the English Study Program. For the quantitative aspect of the study, a quasi-experimental design with a pretest-posttest control group was used, and the data were analyzed by two-factor variance analysis for mixed measurements. The analysis indicated that the two learning environments have different effects on student success and that supporting the traditional environment with Zoom is more effective in improving the English teaching and learning process. For the qualitative aspect of the study, content analysis techniques were employed to analyze the data, which were collected with open-ended question forms. The analysis shows that students developed positive opinions toward the use of Zoom in their courses and demanded the same practice in their other courses as well. They reported that learning could also take place unconsciously and that they were more satisfied in their English learning with the messages delivered directly via Zoom. However, a few students expressed adverse opinions about unstable signal in their areas and excessive data quotas consumed while using this application. Finally, it is suggested that the use of Zoom in the education process be encouraged as a supportive technology. INTRODUCTION Andreas Kaplan and Michael Haenlein define social media as "a group of Internet-based applications that build on the ideological and technological foundations of Web 2.0 and that allow the creation and exchange of user-generated content". Social media have developed rapidly because now everyone can have their own media. Whereas owning a traditional medium such as television, radio, or a newspaper requires substantial capital and labor, social media are a different case: a user can access them over the Internet, even where access is slow, without great expense, without expensive tools, and without employees. Social media users can freely edit, add, and modify text, images, video, graphics, and various other content types. The number of social media users is growing very fast and has changed the shape of people's lives, especially students' lives in Indonesia. The various realities of modernity are in fact easily encountered on a daily basis, whether through the family, the community, or the world of information. This stands in contrast to the learning process in Indonesia, which mostly still uses traditional media that can be considered saturated; this can affect students' teaching and learning activities, since those media lag behind modern times. With their increasing time, scope, and frequency of use, internet technologies have started to shape the way people form and share content and the way they communicate. Social media, which are very popular among young people, are becoming prevalent because of their capacity to meet individuals' needs for socialization. Their focus on individuals has started to shape users' processes of interaction and has become one of the important elements of daily life.
The high number of people joining social media, which are defined as programs that ease the interaction between individuals and groups, provide various opportunities for social feedback, and support the formation of tangled social relations (Boyd, 2007), shows how immense people's need for these networks is. Within the framework of these needs, the development of mobile versions of these programs, which carry the social structure from real life to the virtual environment and eliminate time and space limitations, has become inevitable. This process started with commonly used web-based social networks (Zoom, Facebook, Twitter, etc.). There are many challenges in maintaining student engagement and making learning by distance as effective as on-campus study. Moreover, most distance students choose to study, and want to engage with academics, outside standard working hours, which challenges work-life balance. Online support tools such as Zoom allow students and academics to connect through virtual tutorials from any convenient location, which is an effective use of technology to improve student engagement and success rates while minimizing the inconvenience of after-hours commitments for academics. The presence of positive and negative sides of social networks does not change the fact that these tools are rapidly becoming popular, gaining an important place in our lives, and starting to take their place in education. In recent years, Zoom, which can be identified as a virtual-tutorial-based social network, has started to become popular (Church, 2013). Moreover, studies on the use of different instant messaging platforms in education have determined that these applications have the potential to increase learning (Smit, 2012), keep learners active in their studies (Cifuentes, 2011), foster interaction between students on personal, school, and course-related matters (Doering, 2008), eliminate social barriers (Doering, 2008), and increase students' motivation (Plana et al., 2013). Given these benefits, which are also supported by studies conducted on Zoom (Bouhnik, 2014), it is noted that the application can be a useful tool within the scope of learning anytime and anywhere, and of collaborative learning. In this case, as with Facebook, Twitter, and other social networks, it won't take long for Zoom to influence learning environments as well as social life. The potential of social networks, when designed in accordance with the needs of science and information, is alleged to cause revolutionary changes (Zaidieh, 2012), and their influence on education has been emphasized (Cetinkaya & Sütçü, 2016; Harrison & Gilmore, 2012; Lenhart, Purcell, Smith, & Zickuhr, 2010); nevertheless, studies on the effects of new-generation instant messaging applications upon interaction between people and upon learning-teaching processes are very limited. On the other hand, the evidence that these applications have a great effect on the social development of young people necessitates determining their impact on young people's academic development and expectations. Consequently, the purpose of this study is to explore the effects of Zoom use for education and to determine the opinions of students towards the process. In line with this purpose, answers to the following questions are sought: 1. Do students' success scores show a significant difference when Zoom is used as a support to the traditional environment? 2.
What are the students' opinions on the educational use of the Zoom application and the study process? METHOD This section explains the model of the research, the study group, data collection, and the analysis of the data. A mixed-methods model, which combines qualitative and quantitative approaches, was used in this study. Mixed-methods research allows the strong aspects of quantitative and qualitative methods to be utilized and their limitations to be minimized. In particular, the complexity of social facts is addressed by bringing different methods together and then observing and discussing them, which contributes to a better understanding of the facts. In this study, an explanatory mixed-methods design was employed, further described in Table 1. In the explanatory mixed-methods design, the researcher first collects the quantitative data and then the qualitative data; that is, quantitative data collection and analysis have priority, and the researcher then uses the qualitative data to refine and explain the results obtained from the quantitative data (Creswell, 2007). Within this framework, in order to determine the effects of the information sent via Zoom on success, a pretest-posttest control group quasi-experimental design, which forms the quantitative aspect, was used. In the quasi-experimental design, which is described as the best research design to explain cause-effect relationships, the treatment is performed after the pretest and, finally, the posttest is given to determine the effect on the dependent variable (Fraenkel, 2006). The selection of the experimental and control groups, and the quasi-experimental research design with its pre-evaluations and criteria, are provided in Table 2. In the quantitative aspect of the research, the effects of the information packs sent via Zoom to the students in the experimental group, as support to traditional teaching, were compared. In the qualitative part of the research, the case study, which belongs to the qualitative research tradition, was utilised. Qualitative research is an approach that uses an inductive attitude in social studies and emphasizes descriptive data collection techniques in natural environments together with the views of the participants (Bogdan, 2006). A case study, in turn, tries to reveal present examples of application by asking the question "how" (Yıldırım & Şimşek, 2008). In this part of the research, students' opinions on the application process were collected. Study Group The participants of the study were secondary education 10th grade students, aged 15-16. Criterion sampling, one of the purposive sampling methods, was used. In the selection of the study group, criteria that ensure the continuity of the experimental processes, ease of access to participants, proximity of prior knowledge levels, and provision of the necessary technological infrastructure were taken into consideration. Accordingly, two (X and Y) of the three 10th grade literature classes taught by the same teacher were chosen. Before the research, a pretest was given to the students of these two classes to determine their prior knowledge and to check the normality and homogeneity of its distribution. After the pretest, homogeneity (Levene's test, F = 0.002, p > .05) and normality (Kolmogorov-Smirnov test, p > .05) between the classes were confirmed; a minimal script for such checks is sketched below.
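Such homogeneity and normality checks are routine to script; a minimal sketch with scipy, assuming two hypothetical arrays of pretest scores (the paper itself used SPSS), is:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
class_x = rng.normal(3.8, 1.2, 30)  # hypothetical pretest scores, class X
class_y = rng.normal(3.6, 1.2, 30)  # hypothetical pretest scores, class Y

# Levene's test for homogeneity of variances (p > .05 -> variances comparable)
f_stat, p_levene = stats.levene(class_x, class_y)

# Normality check per class (the study reports Kolmogorov-Smirnov;
# Shapiro-Wilk is also common for samples of n = 30)
p_norm_x = stats.shapiro(class_x).pvalue
p_norm_y = stats.shapiro(class_y).pvalue

print(f"Levene F={f_stat:.3f}, p={p_levene:.3f}")
print(f"normality p-values: {p_norm_x:.3f}, {p_norm_y:.3f}")
```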
Before the experimental process, an independent samples t-test was performed to see if there was a significant difference between the scores of the students in the two groups; as a result of the analysis, no significant difference was found in the arithmetic mean scores of the students (t(58) = 0.361, p > .05). In line with these results, it was decided to treat the two classes as two different groups, experimental and control. In this way, no mismatch arose between students and treatments, and, because the groups were intact, separate classes, contaminating interaction between experimental and control students within a single class was avoided. In assigning the classes to experimental and control groups, students' possession of a smartphone, internet access, and prior usage of Zoom were taken into consideration. It was established that in class X, 2 out of 30 students did not have smartphones, and three students had smartphones but did not have Zoom installed on their phones. In class Y, all 30 students had smartphones, while two students did not have Zoom installed on their phones. In terms of technical requirements, class Y was therefore deemed more advantageous. As a result, class X was designated as the control group and class Y as the experimental group. Students in group Y were informed about the purpose of the research and were asked if they wanted to participate. Students who did not have Zoom on their phones but wanted to participate were given seven additional days to resolve this. At the end of this period, all the students, including the ones who had not used Zoom before, met the prerequisites. As detailed in Table 3, the study, which was conducted with 15 female and 15 male experimental group students and 16 female and 14 male control group students, a total of 60 participants, ended with the complete participation of all students in the quantitative research stage. In the qualitative research stage, 30 students from the experimental group participated after the posttest. Data Collection During this phase, the curriculum for face-to-face education was followed, and questions testing different knowledge levels were prepared. The questions were clear and comprehensible; the expected answers were unambiguous and required single responses, which were covered in the information messages sent via Zoom. Qualitative data were collected using an open-ended question form one week after the posttest was given to the 30 students in the experimental group. The students were informed about the open-ended question form and were asked to answer the research prompt: "Write your ideas about the process of sending information messages via the Zoom application and your suggestions, if any." The question was given in written form to the students in the classroom environment, under the supervision of the researcher, to enable them to give detailed answers. Analysis of the Data The achievement test with short answers was given simultaneously to both the experimental and control groups at the beginning and at the end of the study. In the achievement test, one point was given for each correct answer and zero points for each wrong or blank answer. The achievement tests (pretest and posttest) were administered at an eight-week interval and evaluated; a minimal sketch of how such mixed-measures data can be analysed follows.
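For the pretest-posttest, two-group design described here, the two-factor variance analysis for mixed measurements can be scripted; a minimal sketch, assuming hypothetical long-format data and the pingouin package (an assumption: the paper itself used SPSS), is:

```python
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(2)
n = 30  # students per group

# Hypothetical long-format scores loosely mimicking the reported means
rows = []
for sid in range(2 * n):
    group = "experimental" if sid < n else "control"
    pre = rng.normal(3.6 if group == "experimental" else 3.8, 1.2)
    post = rng.normal(19.6 if group == "experimental" else 11.6, 2.5)
    rows += [
        {"id": sid, "group": group, "time": "pretest", "score": pre},
        {"id": sid, "group": group, "time": "posttest", "score": post},
    ]
df = pd.DataFrame(rows)

# Two-factor (2x2) mixed ANOVA: 'time' within subjects, 'group' between
aov = pg.mixed_anova(data=df, dv="score", within="time",
                     subject="id", between="group")
print(aov[["Source", "F", "p-unc"]])  # the Interaction row tests group x time
```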
Since there was homogeneity between the classes, the distribution met the assumption of normality, and the study was conducted with two groups, the t-test was utilised for the analysis of the data. To test the effectiveness of the environments, a 2x2 split-plot design was used, and two-factor variance analysis for mixed measures was used to analyze this research question. The significance of the difference between mean scores was evaluated at the level of p = .05, and SPSS was used in the analysis of the data. For the analysis of the written data obtained by the open-ended question form, which constituted the qualitative aspect of the research, categorical analysis, one of the content analysis types, and frequency analysis were utilised. In the categorical analysis, the stages of (1) coding the data, (2) forming categories, (3) organising the categories, and (4) describing and interpreting the findings were followed (Corbin & Strauss, 2007). Frequency analysis revealed the quantitative frequency of the data, thus determining the density and importance of particular factors (Ryan & Bernard, 2000; Tavşancıl & Aslan, 2001). Hence, the qualitative data were quantified, which increased their reliability, decreased bias, and enabled comparison of the data (Yıldırım & Şimşek, 2008). RESULT AND DISCUSSION The findings are given below in the order of the research methodologies and sub-problems. Findings for Quantitative Data To test the effectiveness of the learning environments, a 2x2 split-plot design was used. In this design, the first factor denotes the two separate experimental environments (the traditional environment versus Zoom as a supportive technology to the traditional environment), and the second factor denotes the measurements (pretest-posttest) before and after the experiment (Büyüköztürk, 2007). Two-factor variance analysis for mixed measurements was done to analyse the research question. Students' mean pretest and posttest scores, by learning environment, and the standard deviation values are given in Table 4. While the mean success score of the students in the traditional environment was 3.83 before the experiment, this value increased to 11.57 after the experiment. The mean success scores of the students in the environment in which Zoom was used as a supplementary technology to the traditional environment were 3.60 and 19.63, respectively. Accordingly, it can be stated that there is an increase in the success of students learning both in the traditional environment and in the environment in which Zoom was used as a supplementary technology. The results of the two-factor ANOVA, performed to analyse whether the success scores of the students in the two separate environments showed a significant difference, are given in Table 5. According to the findings, the success of the students in the two different learning environments shows a significant difference before and after the experiment; in other words, the common effect of being in a different environment and of the repeated-measures factor on success is significant (F(1, 58) = 90.14, p < .001). This finding shows that the two learning environments have different effects on increasing the success of students: the environment in which Zoom was used as a supportive technology to the traditional environment was more effective in increasing success. Findings for Qualitative Data Students' views in the experimental group about the process of sending information messages via Zoom were collected with an open-ended question form, and the coded responses were tallied (a minimal counting sketch is given below).
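Frequency analysis of coded open-ended answers reduces to tallying category occurrences per respondent; a minimal sketch, assuming hypothetical category labels (the counts mirror Table 6; a student may mention several categories), is:

```python
from collections import Counter

# Hypothetical category codes assigned to the 30 experimental-group answers
codes = (["similar_use_in_other_courses"] * 25
         + ["unconscious_learning"] * 18
         + ["interest_in_direct_interaction"] * 14
         + ["unstable_signal"] * 7
         + ["excessive_data_quota"] * 6)

n_students = 30
for category, f in Counter(codes).most_common():
    print(f"{category}: f={f}, {100 * f / n_students:.1f}%")
```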
The written data were analysed using categorical analysis, one of the content analysis types, and frequency analysis; the results of the students' opinions are displayed in Table 6 (S = student code). Table 6. Students' opinions about Zoom in the online English teaching and learning process. Positive: desire for similar applications in other courses (f = 25, 83.3%); realization of learning, conscious or unconscious (f = 18, 60.0%); increased interest through direct interaction with each other via virtual Zoom (f = 14, 46.7%). Negative: unstable signal in their area (f = 7, 23.3%); excessive data quotas consumed while using the application (f = 6, 20.0%). It was determined that the majority of the students believe that using the application in all courses would be useful. As stated by S14, "The application would be fine in biology lesson," and S22, "if used in other lessons they could be more enjoyable… I could learn easily," students emphasize their wish to see the application in their other courses. One of the important factors behind this wish is the realization of learning. At this point it is remarkable that, besides learning through a conscious process, there are students who expressed that learning can also take place unconsciously: S13 explains that "I didn't know that the last exam would be held. But I could answer most of the questions without revising at all," S21 shares that "I realised that I learnt unconsciously during the exam," and S17 states that "even reading the messages I got was enough, I didn't do any revision at all but I could remember most of the messages." Such statements all underline that learning can take place unconsciously. This shows that, without any special effort, even just following the posts can contribute positively to learning. The other positive opinion concerns the images sent within the information messages. S11 shares that "images sent with the texts helped me remember them easily," and S16 reveals that "I wish they were all with images." As stated above, information messages sent with related images were received more favorably by the students. The most important factor characterised negatively by the students in the process of sending information messages by means of the Zoom application is the timing of the information messages. As can be seen in S10's statement, "we received some messages just in the middle of the lesson and this caused us to lose our interest in the lesson." Though these messages were limited in number, some students expressed negative opinions on the timing of the messages; as stated by S9, there is a need to "be careful with the timing of the messages." Another problem is the sending of messages within the group. Though the purpose of forming the group was known by the students, some off-purpose messages were sent, as stated by K29: "some friends sent unnecessary messages in the group, so we warned them"; these were resolved within the group and without the researcher's intervention. DISCUSSION This research, conducted with 10th grade secondary education students, aimed to determine the impact on success of using Zoom, one of the instant messaging applications, in the education process, and the opinions of the students towards the process. In line with this purpose, qualitative and quantitative data collection methods were employed, and the study was designed as a mixed research model that integrates the results.
After the analysis, the results are discussed in relation to the literature and recommendations are given under headings. Results for quantitative data. To determine the contribution of Zoom to education as a supportive technology, a pretest-posttest control group quasi-experimental design was used. The data on the effectiveness, with respect to success, of the information packages sent via Zoom to the students in the experimental group as a support to the traditional environment were analysed using two-factor variance analysis for mixed measurements. The results indicated that there is an increase in the success of the students both in the traditional environment and in the environment in which Zoom was used as a supportive technology. According to the results of the two-factor ANOVA, performed to test whether these changes showed a significant difference, the success of the students who studied in the two separate environments differed significantly. This finding indicates that the two learning environments have different impacts on the increase of students' success, and that supporting the traditional environment with Zoom was more effective on students' success. Research on social networks and the integration of instant messaging has shown that features such as encouraging collaborative learning, which contributes to the learning process, active participation, learning anytime and anywhere, and informal communication are common to all platforms (Arteaga Sánchez, 2014). Although no experimental studies were found on the use of Zoom, one of the instant messaging applications, regarding its impact on academic success in educational environments, there are findings showing that it supports collaboration and the sharing of content and provides an unstructured learning environment (Arteaga Sánchez, 2014). The results of the present study show that the application has the potential to increase success. Results for qualitative data. After the open-ended question form was administered to the students in the experimental group, content analysis was done, and students' opinions about the process were categorised and correlated accordingly. The majority of the students expressed that the application has a positive effect on motivation and that its use in other courses would be useful. In their study on language education conducted through Zoom, Plana et al. (2013) also found that the instant messaging application increased students' motivation and willingness to study in immersion programmes. Another important factor is the potential to increase learning: in his study, Smit (2012) stated that instant messaging applications have the potential to increase learning. In the present study, students' statements that learning can also take place unconsciously, besides through a conscious process, are remarkable. A study on social networks likewise implied that learning can take place by observing others' studies and communications. Another positively received factor in the implementation process is the images used to support the information texts. Students stated that these images, which were sent with some information texts and were related to them, contributed positively to their learning.
Indeed, the fact that learners learn better in environments where words and images are used together rather than words alone (Mayer, 2003) is supported by theoretical bases (dual coding, limited capacity, multimedia learning, etc.) and has been tested in many studies. The most important factor that students referred to as negative in relation to the sending of information messages via the Zoom application is the timing of the messages. Though limited in number, some statements of the students, particularly about untimely messages that may cause distraction, show that special care must be taken with the timing of the messages. Another negative aspect of the implementation process concerned the messages within the group. Although the students were informed about the purpose of the group, there were unnecessary and disturbing messages; however, this was resolved within the group without the interference of the researcher. This shows that there is self-control within the group and that students can overcome such situations among themselves. In their research on the use of Zoom, Bouhnik and Deshen (2014) report similar problems, but, unlike in this study, the students' solution was silencing the group. At this point, research on social networks and the use of mobile devices mentions that untimely and unnecessary messages may cause distraction among students and negatively affect their study process (Kusnekoff, 2015). CONCLUSION It is too early to know what effect the use of the Zoom application, which has an important place in the daily lives of young people and has the qualities of a social network, will have on education. As a result of this study, it is determined that the application has a positive impact on the English teaching and learning process: English learning requires direct practice, and the Zoom application offers large-scale video conferencing for this, so its use was welcomed substantially. It should not be disregarded that Zoom has the potential of a natural educational technology and the qualities to contribute to education as a supportive technology.
The ArDM Liquid Argon Time Projection Chamber at the Canfranc Underground Laboratory: a ton-scale detector for Dark Matter Searches The Argon Dark Matter (ArDM) experiment consists of a liquid argon (LAr) time projection chamber (TPC) sensitive to nuclear recoils resulting from the scattering of hypothetical Weakly Interacting Massive Particles (WIMPs) on argon targets. With an active target of 850 kg, ArDM represents an important milestone in the quest for Dark Matter with LAr. We present the experimental apparatus currently installed underground at the Laboratorio Subterráneo de Canfranc (LSC), Spain. We show first data recorded during a single-phase commissioning run in 2015 (ArDM Run I), which overall confirm the good and stable performance of the ton-scale LAr detector. Introduction The existence of non-luminous, non-baryonic cold Dark Matter is by now well established [1]. Several experimental and theoretical indications favour Dark Matter that is supposed to be present in our Galaxy as a halo of the thermal relic from the Big Bang. A popular hypothesis explaining these observations is that Dark Matter is made of Weakly Interacting Massive Particles (WIMPs). However, no such particles exist in the Standard Model, and none has been directly observed at particle accelerators or elsewhere [2]. Hence the particle-physics nature of WIMP Dark Matter remains unknown. A great variety of experiments for direct Dark Matter searches have been running in recent years utilising different technologies (see e.g. [3] for an overview). The elastic scattering of WIMPs with masses of (10-1000) GeV/c2 is expected to produce nuclear recoils in the range of (1-100) keV [4]. Despite the large experimental effort to search for these rare nuclear recoils in a terrestrial experiment, a conclusive result is still missing, presumably due to the very low interaction probability. Present searches extend exposures on detector targets to the order of several thousand kg·day, with just a few expected background events in the region of interest. This drives developments towards larger target sizes, favouring the deployment of liquid noble gas detectors using xenon [5][6][7][8][9][10] and/or argon [11][12][13][14][15] in a TPC for their scalability, cleanliness, and background discrimination power. The ArDM experiment is designed for highest sensitivity in the WIMP mass range above 100 GeV/c2. The detector consists of a vertical TPC using LAr as the WIMP target [11,[16][17][18] and 24 low-radioactivity 8" Hamamatsu PMTs, distributed in two equal arrays, for light readout. The ArDM detector detects signals produced by elastic scattering on LAr atoms in the active volume: WIMPs or neutrons produce "nuclear recoils", while background particles like γ or β produce "electron recoils". When the detector works in double-phase (liquid and gaseous) mode, both nuclear and electron recoils generate scintillation light (S1) and electron-ion pairs. The electrons can be separated from their ions in an electric field and drifted upwards to the argon surface. After being extracted from the LAr into the gaseous argon (GAr) on top, these electrons are accelerated, producing the secondary scintillation light (S2), which is proportional to the amount of extracted electrons. Both S1 and S2, vacuum ultraviolet (VUV) light with a wavelength around 127 nm, can be wavelength-shifted to the visible range and read out by the PMT arrays.
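The recoil discrimination discussed next relies on the different time profiles of the S1 light; a minimal sketch of a prompt-fraction discriminant (an f90-style quantity common in LAr work; the specific algorithm used by ArDM is not detailed here), assuming hypothetical fast/slow decay constants of about 6 ns and 1.6 µs for LAr scintillation, is:

```python
import numpy as np

def prompt_fraction(waveform, dt_ns, prompt_ns=90.0):
    """Fraction of the S1 light collected in a prompt window.

    Nuclear recoils populate the fast (~ns) scintillation component more
    strongly than electron recoils, so they yield a larger prompt fraction.
    """
    n_prompt = int(prompt_ns / dt_ns)
    total = waveform.sum()
    return waveform[:n_prompt].sum() / total if total > 0 else 0.0

# Hypothetical S1 pulses built from fast (6 ns) + slow (1600 ns) components
dt = 4.0  # ns per sample
t = np.arange(0, 8000, dt)
fast, slow = np.exp(-t / 6.0), np.exp(-t / 1600.0)
s1_nuclear = 0.7 * fast / fast.sum() + 0.3 * slow / slow.sum()
s1_electron = 0.3 * fast / fast.sum() + 0.7 * slow / slow.sum()

print("f90 nuclear-like:", prompt_fraction(s1_nuclear, dt))
print("f90 electron-like:", prompt_fraction(s1_electron, dt))
```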
The pulse shape of the S1 signal and the ratio S1/S2 differ for nuclear and electron recoils [19] and are used to reject γ or β background events. In addition, the S1 signal is used to reconstruct the energy of the event, while the S2 signal helps to reconstruct the 3D position of the event. In 2015 a series of commissioning runs with gaseous and liquid argon targets (ArDM Run I) was undertaken in single-phase mode to explore the functionality and performance of the detector. Since no electric field was applied, only S1 signals were collected. In this paper, we describe the design and setup of the ArDM detector and present results from gaseous data collected during ArDM Run I. Results from data taken in liquid are reported elsewhere [20,21]. The underground laboratory LSC In order to reach low background conditions, the ArDM experiment is installed at the underground Laboratorio Subterráneo de Canfranc (LSC) [22], located under Mount Tobazo in the central Spanish Pyrenees. The rock overburden is about 2500 m water-equivalent. The laboratory is situated between an old, decommissioned railway tunnel and the newly built Somport road tunnel. Access to the underground laboratory is given through one of the emergency safety links connecting both tunnels at regular distances. The recently refurbished LAB 2400 mainly consists of two experimental halls (A and B), with dimensions of 15 × 40 m2 and 10 × 15 m2, respectively. Table 1 summarises the main parameters of the laboratory [22,23]. Besides the main experimental halls, the underground site also contains a clean room area equipped with various high-purity p-type coaxial germanium counters (HPGe counters). The semiconductor detectors are embedded in lead and copper shieldings and serve as a material screening facility for the experimental components. The ArDM detector as well as its cryogenic service installation are situated in a lowered section of the concrete floor in Hall A at LSC (Sala A), which serves as a large containment pool in case of accidental loss of the LAr. A second, smaller containment volume is created by thermally insulated panels just below the main detector vessel of ArDM. This volume is connected to a gaseous extraction line into the railway tunnel, which can be used for removal of argon gas in case of an accident or for emptying the target. The laboratory is also equipped with an emergency electrical power supply (Diesel generator) able to sustain the entire installation over several hours without intervention. This includes the ArDM cryocoolers consuming about 30 kW. Overview of the ArDM detector The main component of the ArDM experiment (see Figure 1) is a cylindrical TPC installed in a LAr dewar of 1 m diameter. A layer of 10 cm of LAr is available around the target to shield against particles entering from the outside. The detector active volume is confined by an optical surface made of high-reflectivity polytetrafluoroethylene (PTFE) foils to collect as many photons as possible. The PTFE reflectors are coated with a thin layer of a wavelength shifter (WLS) to convert the argon scintillation VUV light to the range of maximal sensitivity of the PMTs (see Section 2.4 for details). The active target volume, defined by the drift cage, amounts to about 540 liters, corresponding to about 750 kg of LAr. For double-phase operation an approximately uniform vertical electric field is created in the active volume.
By applying negative HV to the cathode, electrons are drifted to the top, where they are extracted into the gaseous phase of the detector, producing the secondary signal S2. The drift cage has the shape of a vertical cylinder, 112 cm in height and 80 cm in diameter, with a flat section on the side to accommodate the large HV feedthrough. The drift cage is formed by 27 field shaper rings arranged vertically with a pitch of 40 mm. The rings are mounted onto seven 40 mm thick pillars made of high-density polyethylene (HDPE). The top and bottom of the active volume are electrically closed by an extraction grid and a cathode grid, respectively. The maximal design value of the cathode voltage is -100 kV, creating a drift field of up to about 1 kV/cm. A further grid, mounted 13 cm below the cathode grid and biased to a voltage similar to that of the PMTs, serves as HV protection. During ArDM Run I no voltages were applied to the drift cage (E = 0), and the detector was operated in single-phase mode with a slightly different geometry than for double-phase operation, creating an active LAr target of around 850 kg. Figure 1. Overview of the ArDM experimental setup at LSC. The inset to the left shows the schematic of the inner detector in single-phase configuration. Cryogenic system The high-purity cryogenic LAr target of ArDM is placed in a triple-wall dewar vessel with a LAr layer for cooling [24]. This bath design, with a separation of clean and dirty LAr volumes, is developed to shield direct heat input from the outside, with the aim of not creating any gas bubbles in the LAr target. Figure 2 shows the overall schematics of the cryogenic installation of ArDM, with the main LAr volume on the right and the cryogenic services on the left. Both parts are insulated with separate vacua. The two hermetically closed circuits containing the high-purity detector argon (red) and the LAr bath used for cooling (blue) are protected against overpressure by electrical and mechanical valves, as well as by rupture disks. The LAr bath can be cooled by three cryocoolers simultaneously for faster cool-down; however, only two are needed for normal operation, leaving the third as a spare for safety or during maintenance of a cryocooler. Their three cold heads are located at the top of the condenser vessels (upper left in Figure 2). Detector vessel The detector vessel is a three-wall dewar cryostat for LAr, made of stainless steel. The inner main volume, a vertical cylinder of 100 cm in diameter and ∼200 cm in height, can hold up to 1.4 m3 of LAr. The target space is surrounded by an intermediate LAr layer, the cooling bath, with a thickness of 2 cm. The outer volume serves as insulation vacuum, surrounding the cooling bath. While the bath and the insulation vacuum are closed by welding, the main volume is closed hermetically with a 4-cm-thick stainless-steel top flange with an indium seal. The top flange supports the detector structure hanging from it and has various service ports with feedthroughs, all based on vacuum CF flanges. It is covered with thermal insulation made of extruded polystyrene (initially Perlite, a non-flammable, expanded natural mineral, was chosen, but it was subsequently replaced; see the discussion in Section 2.5.1). The vessel has been constructed such that each of the three volumes can be evacuated to -1.0 barg independently. The main and the bath volumes were tested up to +1.2 barg before the installation underground at LSC.
After installing the TPC inside, the main volume is pumped for several months to reduce outgassing from detector components. For cryogenic operation, the pressure in the main volume is maintained at +0.1-0.2 barg to prevent air from leaking into the system. The main volume is ultimately protected by a rupture disk that opens at around +0.6 barg, while an electro valve is set to release the pressure at +0.35 barg. Regulation of the (vapour) pressure in the main volume is achieved by cooling the LAr through the vessel wall with the sub-cooled LAr in the surrounding bath. The LAr in the bath is cooled by means of the cooling system, which is described in Section 2.3.3. Nominal operation parameters during ArDM Run I are summarised in Table 2. Vacuum system The vacuum insulation of the experiment is subdivided into two separate volumes, as shown in Figure 2: one around the detector vessel (right, Vacuum Insulation 1) and the other around the cryogenic services (left, Vacuum Insulation 2). The instrumentation of both vacuum systems is identical: (1) a gate valve, (2) a turbo molecular pump (TMP), and (3) a backing pump. The fail-safe gate valve, steered by compressed air, closes in case of an electrical power outage and maintains the vacuum. In addition, an electrical safety valve and an oil filter are installed between the TMP and the backing pump. Each volume is equipped with a pressure sensor for the range 0.05-2 bara and a thermal conductivity (Pirani) vacuum gauge for values down to ∼10−4 mbara. Sequences for starting and stopping the pumping, including opening/closing of the gate valve and starting/stopping of the backing pump and TMP, are programmed as fully automatic processes of the ArDM process control system, based on a programmable logic controller (PLC) (see Section 2.6). Cooling system When the bath and the main vessel are filled with liquid argon, the heat input from the environment is found to be ∼500 W. A redundant cryo-cooling system consists of three Gifford-McMahon cryorefrigerators, CRYOMEC AL300, each having a cooling power of 266 W at 80 K with a 50 Hz AC electrical supply. To maintain stable thermodynamic conditions it is therefore sufficient to keep two cryocoolers operating, keeping one spare. The regulation of the thermodynamic conditions is achieved by regulating heaters placed on each cold head, controlled by a proportional-integral-derivative (PID) controller integrated in the PLC. The full cryogenic cooling power is used to maximise condensation during LAr filling, where GAr at room temperature from standard 200-bar bottles is cooled down and liquefied into the main vessel. A filling rate of ∼70 L/day of LAr was achieved during the filling. Argon purification system The argon purification system is designed to provide sufficient cleaning power to achieve a free electron lifetime in the LAr target exceeding 1 ms, necessary for drift lengths of the order of 1 m at an electric field of >0.2 kV/cm. This translates into an oxygen-equivalent impurity level of 0.1 ppb [25]. The experiment is equipped with two independent circuits to remove impurities trapped in the liquid and in the gaseous phase. In the sealed system of ArDM, the main source of impurities is outgassing from internal detector components. LAr purification system An internal cryogenic bellow pump recirculates the LAr of the main detector volume through a pure Cu-powder cartridge embedded in the LAr bath, at a speed of ∼150 L/h. This provides the main removal of electro-negative impurities, such as oxygen.
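The 1 ms electron-lifetime target quoted above can be put in perspective by comparing it with the drift time over the ~1 m drift length; a rough estimate, assuming a typical LAr drift velocity of order 1.5 mm/µs at such fields (a commonly quoted literature value, not taken from this paper), is:

```latex
t_{\mathrm{drift}} \simeq \frac{L}{v_d} \approx \frac{1\,\mathrm{m}}{1.5\,\mathrm{mm/\mu s}} \approx 0.7\,\mathrm{ms},
\qquad
\frac{Q}{Q_0} = e^{-t_{\mathrm{drift}}/\tau_e} \approx e^{-0.7} \approx 0.5
\quad \text{for } \tau_e = 1\,\mathrm{ms}.
```

Lifetimes well above the drift time are thus needed to keep the attenuation of the drifted charge moderate.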
The purification filter is a custom-made cartridge containing activated copper grains that bind oxygen in the chemical reaction 2Cu + O2 → 2CuO [24]. The double-bellow pump was also specially designed and constructed. The LAr flow rate can be adjusted via the pump frequency in the range 0.4-3.5 Hz. At the nominal frequency of 1 Hz and a total displacement of 48 cm3/cycle, the flow rate reaches about 170 L/h, i.e. one volume exchange of the 1.4 m3 of LAr takes about 8 hours. GAr purification system The GAr is recirculated by means of an external pump and a room-temperature getter cartridge at a speed of ∼4000 L/h. The pump is a double-diaphragm KNF pump of type N 0150.1.2 AN.12 E. The gas is taken from the top of the detector via a 4 m long heatable line and pumped through a commercial room-temperature getter of type MicroTorr MC4500-902FV from the company SAES. The return of the gas to the LAr system is done via a condenser immersed in the LAr bath. The temperature of the line is monitored by the PLC. Light readout system To detect VUV scintillation light at LAr temperatures, we adopted a design of WLS-coated reflectors combined with borosilicate-windowed cryogenic PMTs. The PMT windows are coated with WLS as well, to detect directly impinging VUV light. The active cylindrical LAr target volume is contained entirely inside an optically closed surface. The photocathode coverage amounts to approximately 70% of the top and bottom readout planes, or ∼14% of the total inner surface. The wavelength shifter and the coating method Among a range of wavelength-shifting chemicals, the organic WLS material tetraphenyl-butadiene (TPB) is considered for its fast response, caused by the rapid radiative recombination of electron-hole pairs at the benzene rings in its chemical structure. Such a feature is required for recording an undistorted waveform of the fast scintillation component, which decays with a time constant of the order of a couple of nanoseconds and is essential for the pulse-shape discrimination of electron-recoil backgrounds from nuclear-recoil signals. After R&D work on a range of organic WLS's such as p-Terphenyl, POPOP, PPO, and bis-MSB [26,27], we chose TPB, 1,1,4,4-tetraphenyl-1,3-butadiene, for the best light yield in the conversion of argon scintillation light and detection of the shifted visible light by a bialkali photocathode. TPB is particularly well suited for the detection of VUV light due to its large Stokes shift. The fluorescence decay time is about 1.68 ns and, since no phonon is involved, the recombination process does not slow down significantly at cryogenic temperatures. TPB coatings can be made durable, with good adherence to the substrate and some resistance to mechanical abrasion. The coatings are generally not soluble in water but can be removed when necessary using toluene, chloroform (CHCl3), or other organic solvents. TPB coatings have been exposed to high-vacuum conditions for very long periods of time and show no evidence of significant change in the detector sensitivities. Different techniques have been tested for the deposition of TPB on different substrates, and the best results are obtained with the vacuum evaporation deposition method. The substrate to be coated is mounted in a vacuum chamber (evaporator) at a fixed distance above one or more crucibles filled with a certain amount of TPB. After pumping below 10−5 mbar by means of a TMP, the crucibles are heated slowly, over several hours, to about 220 °C to evaporate the TPB.
TPB diffuses isotropically into a 2π solid angle and is deposited as a molecular layer on the substrate surface. The layer thickness can be controlled by the amount of TPB filled into the crucibles. This technique is used for most of the TPB-coated surfaces, i.e. the main reflector and the PMT windows. Some less critical components, e.g. the top/bottom reflectors bridging the space between the PMTs, are dip-coated for convenience. In this case the substrate is immersed in a TPB organic-solvent solution, gently taken out, and dried. TPB forms a layer on the surface but is also prone to crystallisation. The WLS coated reflectors The reflectors are all based on PTFE material and are deployed at three different sections of the detector. The main reflectors, which line the side of the drift cage, are made of 254 µm thick, 20×108 cm2 sheets of the PTFE fabric Tetratex (TTX), produced by the company Donaldson Membranes, USA. A total of 13 such foils are used to entirely line the side walls of the drift cage. The TTX fabric is favoured over standard PTFE sheets due to the better adhesion of TPB. To mechanically support the soft fabric, a multi-layer plastic reflector film, Vikuiti ESR foil (Enhanced Specular Reflector) from the company 3M, USA, is sewn to the back side of the TTX sheets. The coating of the TTX sheets with TPB is done by vacuum evaporation deposition using a custom-made evaporator (Figure 3). A layer thickness of 1 mg/cm2 is chosen [26,27]; the total amount of TPB deposited on the 20 reflector foils, including spares, is thus 43 g, while a total of 104 g of TPB was evaporated in the crucibles. The top and bottom reflectors, covering the area between the PMTs, are made of 1 mm thick PTFE sheets. A large PTFE sheet is cut with a water jet into the shape of a disc (900 mm in diameter) with 12 holes, 20 cm in diameter, into which the spherical PMT windows are inserted. After machining and cleaning, the reflector sheet is dip-coated with TPB. The third reflector type, bridging the conical transition (∼15 cm) from the cylindrical section of the main reflector to the larger PMT mounting plate, consists of the same TTX foil as used for the main reflector, but is left uncoated in order not to convert VUV light produced outside the drift cage. With this third reflector connecting the two types described above, the inner surface of the active LAr target volume is lined entirely with PTFE reflectors without gaps, and overall more than 80% of the reflector surfaces are coated with TPB. The PMT arrays A total of 24 8" cryogenic Hamamatsu R5912-02MOD-LRI PMTs, made from particularly radiopure borosilicate glass, are used to assemble two identical PMT arrays installed mirror-symmetrically. Figure 4 shows the layout of the PMTs in an array. This arrangement in a triangular symmetry is chosen for the largest coverage and the best symmetry in light collection around the axis of the cylindrical target volume. On a common stainless steel base support plate, each individual PMT is held by an independent supporting structure consisting of 2 stainless steel rings and 12 small polyethylene (PE) pads. The spherical part of the PMT glass is clamped by the PE pads. The supporting system is designed so that the thermal contractions of the stainless steel ring and the PE pads at cryogenic temperatures compensate each other and the displacement of the contact position on the PMT is virtually zero.
The PMTs have bialkali photocathodes with a platinum (Pt) underlay to preserve the electrical conductivity of the photocathode at cryogenic temperatures. An approximately flat quantum efficiency (QE) of ∼17% is obtained in the range of 360-430 nm. Due to the Pt underlay, a reduction in sensitivity of roughly 25% has to be taken into account in comparison to typical bialkali photocathodes operated at room temperature. Each PMT features 14 dynode stages, and the gain reaches 10^9 at an operating high voltage (HV) of 1.7 kV, as quoted by the manufacturer. The high gain helps in operating the PMTs at relatively low bias voltages, which is convenient in particular for the PMTs located in the pure argon gas, where electric discharges can often be an issue. To match the front-end readout electronics, a nominal gain of 5×10^7 is chosen, resulting in bias voltages in the range 946-1345 V for the 24 PMTs in cold operation. The voltage divider is made on FR-4 printed circuit boards (PCBs) using cylindrical metal-film SMD resistors. A total resistance of 13.4 MΩ is chosen to reduce the divider current and consequently the power dissipated in the LAr, to minimize bubble creation. A typical operating HV of 1.1 kV leads to a divider current of 82 µA and a dissipation of 90 mW/PMT. In order to ensure good pulse linearity, large capacitors are connected in parallel to the resistor chain at the last five dynode stages, i.e. 22 nF for dynodes 10 and 11, and 47 nF for dynodes 12-14. Polypropylene capacitors are used for their reliability at LAr temperature. The divider circuit is designed paying particular attention to the electric fields created on the PCB, in order to avoid electrical discharges. While a relatively thick layer of TPB can be used for the PTFE reflectors, the coating on the PMT window requires a more precise optimisation: the coating should not compromise the detection efficiency for visible light, as a large part of the photons falling onto the PMT is already converted on the main reflector. The thickness of the TPB coating layer was optimised for sensitivity to both the VUV and the shifted blue light by laboratory tests, scanning TPB layer thicknesses in the range of 0.05-0.2 mg/cm2 and measuring the scintillation light of an α source in a gaseous argon test setup at room temperature. An increase of the efficiency is observed between 0.05 and 0.1 mg/cm2, while no significant difference is seen between 0.1 and 0.2 mg/cm2. Since no reduction of the efficiency for visible light was observed, we finally chose a layer thickness of 0.2 mg/cm2 for the coating of the PMT windows. The TPB coating of the PMTs was performed at the Thin Film & Glass Group of CERN, using its evaporator capable of coating one 8" PMT at a time. In this setup the coating thickness is controlled by measuring the mechanical oscillations of a crystal positioned beside the PMT, as the oscillation frequency changes as a function of the thickness of the deposited TPB layer. While the evaporation process takes less than one hour, the whole coating process requires a day per PMT, mainly due to the evacuation of the evaporator, which takes several hours. The quality of the coating process is controlled in the laboratory at room temperature by means of a spectro-photometrical setup, consisting of a xenon lamp with diaphragm, a monochromator, a calibrated photodiode, and a pico-ammeter, as illustrated in Figure 5. The incident wavelength can be selected between 200 and 1000 nm at an adjustable step size.
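The QE extraction from this scan, detailed in the next paragraph, amounts to scaling the photodiode's known QE by the ratio of the measured photocurrents; a minimal sketch with hypothetical current arrays is:

```python
import numpy as np

# Hypothetical monochromator scan (nm) and measured photocurrents (A)
wavelengths = np.arange(200, 1001, 10)
i_pmt = np.full(wavelengths.size, 2.0e-9)   # PMT photocurrent per wavelength
i_pd = np.full(wavelengths.size, 1.0e-9)    # reference photodiode current
qe_pd = np.full(wavelengths.size, 0.10)     # known photodiode QE (10%)

# QE_PMT(lambda) = QE_PD(lambda) * I_PMT(lambda) / I_PD(lambda)
qe_pmt = qe_pd * i_pmt / i_pd
print(qe_pmt[:3])  # -> [0.2 0.2 0.2]
```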
The photocurrent of the PMT under test is measured using the picoammeter, applying a positive bias voltage of 250 V to the first dynode to collect photoelectrons from the photocathode. Turning a mirror by 90° the same measurement can be done with a reference photodiode of known QE. The QE of the PMT at wavelength λ can be calculated using

QE_PMT(λ) = QE_PD(λ) × I_PMT / I_PD,

where QE_PD is the known QE of the reference photodiode, and I_PMT and I_PD are the measured currents of the PMT and the photodiode, respectively. In the wavelength region where no shifting occurs, the increase of about 15% in QE with respect to the uncoated PMT is attributed to scattering in the TPB layer and back reflection of scattered light onto the photocathode. For this reason the QE values reported here are somewhat biased by the optical parameters of the measurement setup, but they serve as a good handle to compare the performance of different coatings. However, from a simple solid angle estimate of the emitted shifted light we conclude a conversion efficiency of the TPB layer close to unity (∼96.2%).

Background and shielding system

2.5.1 Material screening

Samples of detector components were screened for radioactive traces in the HPGe radiopurity screening facility of LSC (see Section 2.1) at the underground site. The recorded γ-spectra are analysed with emphasis on emission from the decay chains of 238U, 232Th and 235U, as well as from the individual isotopes 40K and 60Co. A list of the screened components and their description is given in Table 3. The screening campaign includes the PCB of the PMT voltage dividers, as well as various samples of HDPE. The specific activities (Bq/kg) of the materials are derived from the background-subtracted γ spectra taken with various high-purity germanium counters of the screening facility. Among the items of main interest are the 8" Hamamatsu R5912 PMTs as well as the innermost mechanical components of the detector. A GEANT4 Monte Carlo simulation is used to unfold the effects of detection efficiencies and sample geometries of the screening setups. The method applied is described in [28,29] and references therein. The decay chains of the isotopes 238U, 232Th and 235U studied in this work are found, in fair agreement, to be in equilibrium over their entire lengths. Their activities are determined from the weighted mean of the identified γ lines. In the case that no lines are found, the most stringent upper limit is used. For better comparison of the results, the activities are translated into contaminations (in kg/kg) and are listed in Table 4, where 1 kru represents 10³ decays/day/kg. These results confirm our extrapolations of the material purity derived from literature or data sheets, which were used for the construction of the detector. Table 4 also lists the screening result for perlite, originally planned to be used for the thermal insulation of the top part of the experiment. Due to its large contamination, however, it has been replaced with extruded polystyrene.

Neutron shield

A passive neutron shield made of HDPE, fully surrounding the detector vessel, is built to reduce the neutron flux inside the detector volume coming from environmental sources such as the surrounding rock of the underground lab. The shield structure has the shape of a regular octagonal cylinder with a total height of 505 cm (see Figure 7). The lateral part of the shield consists of a stack of octagonal "rings" of PE, each composed of eight trapezoidal tiles.
The tangential outer and inner radii of the assembled ring are 121.2 cm and 71.2 cm, respectively, resulting in a minimum wall thickness of 50 cm. The top cap is a pre-assembled unit, also 50 cm thick, that can easily be opened and closed by crane. The bottom part is constructed underneath the detector vessel and is integrated into the platform structure supporting the entire installation; it has a maximum thickness of 72 cm in order to preserve at least 50 cm where 12 cm high grooves let the horizontal beams of the platform pass through. To enclose the detector entirely with PE panels, two cylindrical, 50-cm-long HDPE blocks are inserted into the two horizontal pipes of Insulation Vacuum 2 connecting the detector vessel and the cryogenic service part. The total mass of the PE neutron shield amounts to ∼17 tonnes. The entire supporting structure, holding the 17-tonne PE neutron shield, is designed to be earthquake tolerant, free of resonances below 10 Hz and able to sustain horizontal accelerations on the order of 5 m/s². To prevent the large mass of flammable PE from catching fire, the lateral surfaces of the shield are covered with fire protection panels, consisting of an aluminium sheet and a mineral wool thermal insulation layer. The top and bottom parts are painted with fire retardant paint. Monte Carlo simulations based on the flux and spectrum of neutrons from the surrounding rock in Hall A suggest a reduction of the rate of neutrons reaching the inner detector volume by a factor of about 10⁶. Of the neutrons passing through the shield, only 12% have energies above 100 keV and hence are able to produce nuclear recoils above the threshold used for WIMP searches.

Control and monitoring system

The control and monitoring system serves to monitor as well as to manipulate the apparatus in a fail-safe and secure operational mode. The system is based on a PLC unit using the CERN-designed process visualisation and control system (PVSS) and data acquisition (DAQ) framework for combining the data of the distributed monitoring systems and for their visualisation and storage [30]. Besides the sensors and hardware controllers, the installation also comprises redundant power sources, battery-driven UPS systems, as well as the main DAQ and data storage system. More than 100 sensors are permanently read out and more than 50 actuators are controlled via analog and digital I/O modules; the I/O lines can be adapted for different sensors and actuators. A high reliability of operation is achieved by the use of dedicated hardware and software. About 50 temperature sensors, 20 vacuum and pressure sensors, as well as 3 oxygen deficiency sensors deliver the main information on the status of the experiment. To control the LAr levels in the bath and in the dewar, capacitive level meters are used. The processor module provides real-time applications such as the PID regulation of the cryocoolers. The monitoring and control of the HV power supply for the PMTs are also integrated in the system. All controls, e.g. pump or shutter valve operation, the cryogenic cooling power or the PMT HVs, are embedded in an interlock concept controlled by the PVSS framework. The software also allows for real-time changes of stored parameters or settings, to remotely adjust to the desired operational point. The system is able to send out alerts in case of changes of logic conditions or of values passing defined limits. The ArDM control system is also connected to LSC-wide alarm signals, such as the start of the emergency power generator in case of power cuts.
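The PID regulation of the cryocoolers mentioned above is a standard control loop. The following is a generic, minimal sketch of such a loop; the gains, setpoint and the sensor/actuator interface are invented placeholders, not the ArDM implementation:

```python
# Minimal PID loop sketch, of the kind used for cryocooler temperature
# regulation. All numbers and the commented-out I/O calls are hypothetical.
class PID:
    def __init__(self, kp, ki, kd, setpoint):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint
        self.integral = 0.0
        self.prev_error = None

    def update(self, measurement, dt):
        error = self.setpoint - measurement
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

pid = PID(kp=2.0, ki=0.1, kd=0.5, setpoint=87.0)   # regulate around the LAr temperature (K)
# power = pid.update(read_temperature_sensor(), dt=1.0)  # apply to the cooling actuator
```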
The status of ArDM is continuously recorded by the PVSS software. The recorded data are backed up in a MySQL database on a separate Linux PC via Ethernet, and the MySQL data can be monitored over the internet by means of a dedicated website. A crucial feature of the control system is the ability for remote control and monitoring: it provides the entry point for the remote operation of the experiment via a PC.

Quantitative risk assessment

Prior to cryogenic operation, a quantitative risk assessment (QRA) of the experimental setup was performed by the Scientific Research Center DEMOKRITOS [31], which specialises in reviewing industrial installations such as nuclear power plants. The QRA was independently reviewed by an external company, NIER [32], Italy. Both reviews were done in close collaboration with LSC. The assessment included the simulation of accidents by solving three-dimensional transient dispersion problems involving two-phase cryogenic leaks. Accidents were classified into three levels, of which only the third was found to represent a considerable risk to personnel at the underground site; the frequency for such an event was estimated to be 4×10⁻⁵ per year [33]. As a consequence, a containment pool below the ArDM vessel was added, as well as an automatic gas extraction system. Completion of the safety instrumentation and procedures was required as a final step before the start of full cryogenic operation of ArDM.

DAQ system

The DAQ system is designed and built to handle multi-kHz trigger rates and data rates up to ∼300 MB/s, to comply with the event rate expected from 39Ar decays in an atmospheric argon target. Figure 8 illustrates a schematic diagram of the DAQ. It consists of four 8-channel, 12-bit, 250 MHz FADC digitiser modules of the type CAEN V1720, providing one channel for each of the PMTs, and was tested for trigger rates up to 3 kHz. During the single-phase configuration of ArDM Run I, sampling time windows of 8 and 4 µs are used for gaseous and liquid data taking, respectively, resulting in 2048 and 1024 samples per PMT per event and total event sizes of 96 and 48 kB, respectively. The trigger rate is in the range 1.3-2.1 kHz, depending on the stage of completion of the external PE shield. This corresponds to data rates of 85-138 MB/s, which could be handled dead-time free by the DAQ system. Data communication of the DAQ system uses optical links: each of the four digitiser boards, hosting six PMTs each, is connected to the DAQ PC via a dedicated optical link with a maximum bandwidth of 80 MB/s. The system is thus theoretically capable of handling a total of 320 MB/s over the four parallel optical links. The DAQ PC has two CPUs of eight cores each, and is equipped with two CAEN PCI-Express cards, each hosting four optical link ports. The DAQ software consists of two custom programs: a producer, reading data from the digitiser and writing them into a shared memory, and a collector, writing the data to hard disks. An independent set of producer and collector runs per optical link; these eight programs in total are controlled by another program, the manager, and are distributed over eight cores of the CPUs. Data files are written individually for each optical link in units of sub-events, each containing six PMTs, truncated into chunks of 1 GB. In order to reconstruct a full event offline from the four independently recorded sub-events, the internal clocks (TTT, Trigger Time Tag) of the four digitisers are synchronised and are reset at the start of the DAQ programs.
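A minimal sketch of the per-link producer/collector pattern described above may help to fix ideas. A multiprocessing queue stands in for the shared memory, and the readout stub and file naming are invented placeholders, not the ArDM implementation:

```python
# Sketch of the per-link producer/collector DAQ pattern described above.
import multiprocessing as mp

def read_digitiser(link_id):
    """Placeholder for the optical-link readout of one 6-PMT sub-event."""
    return bytes(24 * 1024)          # one quarter of a 96 kB gaseous-mode event

def producer(link_id, queue):
    while True:
        queue.put(read_digitiser(link_id))   # hand sub-events to the collector

def collector(link_id, queue):
    with open(f"subevents_link{link_id}.dat", "ab") as f:
        while True:
            f.write(queue.get())             # stream sub-events to disk

if __name__ == "__main__":                   # the 'manager' role: one pair per link
    for link in range(4):
        q = mp.Queue()
        mp.Process(target=producer, args=(link, q), daemon=True).start()
        mp.Process(target=collector, args=(link, q), daemon=True).start()
```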
An absolute time stamp of the event is recorded in each sub-event header with a resolution of one TTT unit, i.e. 8 ns. The data storage system at the experimental site consists of 48 hard disks of 4 TB each. Managed by a RAID6 controller, nearly 180 TB are available for data storage. The storage system is connected via optical links (1 GB/s bandwidth) to a dedicated DAQ PC, allowing for a writing speed of 500 MB/s. Employing a lossless compression method (pigz, parallel gzip), the storage size is reduced by a factor of 4, allowing for continuous data recording over three months at a rate of 2 kHz. Eventually, the data are transferred via the internet to the tape-based storage system CERN CASTOR [34]. System monitoring is done by a dedicated software package for real-time data processing and online monitoring. Raw data are processed in parallel during data taking on a second DAQ PC connected to the system via an additional 1 GB link. Its processing power can also be used to reduce the storage size by close-to-real-time data sparsification or event filtering. For monitoring purposes the raw data are processed by the same software framework that is used for offline reconstruction, described in more detail in Section 3.3. From the data summary files the most relevant parameters, e.g. hit maps, noise rates, hit time distributions, synchronisation controls, energy spectra, spatial and temporal distributions and much more, are calculated and automatically broadcast as graphs or histograms via a dedicated web server integrated into the control system. Status messages from the DAQ, as well as messages from the experimental services, are also displayed at this site. The control system includes an automatic watchdog unit with defined alarm levels, sending out email and SMS messages in case of problems.

Trigger system

The trigger signal is generated from both the top and the bottom PMT arrays. Figure 9 illustrates a block diagram of the trigger logic. Each of the 24 analogue PMT signals is split into two equal parts using a passive, resistive 50 Ω divider. One output is fed to an input channel of a digitiser module, while the other is used for triggering. The entire trigger logic is made of CAEN electronics modules hosted in one VME crate. For data taking in ArDM Run I, the physics trigger is created by either a logic AND or a logic OR of the signals from the top and the bottom array. A fixed discriminator threshold of 35 mV is applied to the analogue sum of the 12 PMTs in each array, corresponding approximately to 2 photoelectrons (pe) in the prompt light. After a trigger, new triggers are vetoed for a duration of 9 µs, resulting in a dead time of less than 1%. For general monitoring purposes (pedestals, noise, dark counts, ...), a periodic calibration trigger generated by a pulse generator at a constant frequency is added during data taking. With the generator frequency set to 20 Hz and a physics trigger rate of 1-2 kHz, the fraction of generator trigger events is around 1-2% of all triggers. The generator trigger can also be used for LED pulse calibrations of the PMTs. For Run I data taken cold (cold gaseous or liquid target), the OR mode of the trigger was used, thanks to the low dark count rate of the PMTs at low temperature (see Section 3.4), while the AND mode served for first test runs with the gaseous target at room temperature.

Figure 9. Block diagram of the ArDM trigger system.
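As an illustration of the Run I trigger decision just described, the following sketch emulates the discriminator and coincidence logic on digitised analogue sums. The threshold and modes follow the text; the waveform inputs are placeholders:

```python
# Sketch of the Run I trigger decision: discriminate the analogue sums of the
# top and bottom arrays at 35 mV (~2 pe) and combine them in AND or OR mode.
import numpy as np

THRESHOLD_MV = 35.0   # fixed discriminator threshold
VETO_NS = 9000        # 9 us veto after each trigger (not simulated here)

def array_fired(summed_waveform_mv: np.ndarray) -> bool:
    """True if the analogue sum of one 12-PMT array crosses the threshold."""
    return bool(np.any(summed_waveform_mv > THRESHOLD_MV))

def physics_trigger(top_sum, bottom_sum, mode="OR"):
    top, bottom = array_fired(top_sum), array_fired(bottom_sum)
    return (top and bottom) if mode == "AND" else (top or bottom)

# Example with fake waveforms: only the bottom array fires, so OR triggers.
top = np.random.normal(0.0, 2.0, 1024)
bottom = top.copy(); bottom[500] = 50.0
print(physics_trigger(top, bottom, mode="OR"))   # True
```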
Event reconstruction

The reconstruction of the scintillation light signal is done individually for the 24 PMT waveforms by a dedicated software framework comprising several levels of processing. The main tasks include pedestal calculation, cluster or hit finding, and signal calibration. The pedestals are evaluated from the first 200 samples (800 ns) of each PMT trace, in the time region before the trigger, using an iterative scanning algorithm to remove noise from dark counts or pile-up. The mean single photoelectron (SPE) charges are used for calibration; their determination from signals in the event tails is described in Section 3.4. Event reconstruction is driven by accurate photon counting. This is achieved by calculating the sizes (S_i) and times (T_i) of signal clusters, or hits, originating from the detection of individual or several (merged) photons. Cluster sizes are determined by summing adjacent samples above a threshold of 0.2 SPE in pulse height, corresponding to about 10 σ of the pedestal fluctuations. Samples at the edges of signal clusters below this threshold are included if their values are above 5% of the maximal peak height of the cluster. In the case that two or more photoelectrons are close in time, the individual signals are merged into one cluster or hit. All quantities for further data treatment are calculated from the clusters or hits found in the 24 channels, as well as from the sum of all of them, the so-called virtual PMT. Figure 10 shows an example waveform of an argon scintillation event on one PMT before and after cluster (hit) finding. The reconstructed quantities are stored in ROOT tree-structured files, numbered consecutively in the same way as the raw data files and containing the same number of events (∼100k). All analyses described here are based on summing the reconstructed signal clusters over the corresponding PMT channels and time ranges. For example, the top and bottom light yields, L_top and L_btm, are calculated from the sum of all signal clusters found in the top and bottom PMTs, respectively. The total detected light is calculated from the sum of all hits, L_tot = L_top + L_btm. A vertical localisation parameter TTR is defined as the ratio of the top to the total yield, TTR = L_top/L_tot; we use the TTR value to estimate the vertical location of an event.

Calibration

Data are calibrated in two steps to reconstruct the deposited energy of an event, which is related to the charge measured by the PMTs. Firstly, the calibration of the single photon response is derived individually for all 24 PMTs from spectra of the integrated cluster charge of signal pulses from event tails or from dedicated light pulse runs. Secondly, the light yield calibration is obtained by means of radioactive isotopes, i.e. 39Ar, or internal or external calibration sources such as 83mKr or 57Co.

Single photon calibration

The individual PMTs have their HVs adjusted to obtain a gain of around 5×10⁷, so that each PMT contributes approximately equally to the trigger defined by the fixed discriminator threshold of 35 mV (approximately 2 pe). The charge recorded by the FADC for a single photoelectron at this gain is 4 pC after the one-to-one passive splitter. Terminated with 50 Ω, this corresponds to 0.2 nV·s, or about 100 units of integrated ADC counts. Taking typically three calibration runs in which the HVs of all the PMTs are changed simultaneously, the optimal HV values are determined from a fit to the gain curve obtained for each PMT.
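The cluster (hit) finding described above reduces to a compact algorithm. The sketch below applies the quoted thresholds (0.2 SPE core threshold, edge extension down to 5% of the cluster peak) to a pedestal-subtracted waveform expressed in SPE units; it is a minimal illustration, not the ArDM reconstruction code:

```python
# Sketch of the hit finding: contiguous samples above 0.2 SPE form a cluster,
# and edge samples are kept while they stay above 5% of the cluster peak.
import numpy as np

def find_hits(wf, core_thr=0.2, edge_frac=0.05):
    """wf: pedestal-subtracted waveform in SPE units. Returns (start, stop, size)."""
    hits, i, n = [], 0, len(wf)
    while i < n:
        if wf[i] > core_thr:
            start = stop = i
            while stop + 1 < n and wf[stop + 1] > core_thr:   # merge nearby pe
                stop += 1
            peak = wf[start:stop + 1].max()
            while start > 0 and wf[start - 1] > edge_frac * peak:   # extend edges
                start -= 1
            while stop + 1 < n and wf[stop + 1] > edge_frac * peak:
                stop += 1
            hits.append((start, stop, float(wf[start:stop + 1].sum())))
            i = stop + 1
        else:
            i += 1
    return hits
```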
The single photoelectron responses of the 24 PMTs are adjusted to within ±4% of each other. For the purpose of monitoring and later analyses, a calibration constant for each individual PMT is determined from the single photoelectron peak position in histograms of cluster sizes, on a run-by-run basis, as part of the primary data reconstruction. Histograms of the integrated cluster charge are produced for 20 time bins of the acquisition window for each PMT. The single photoelectron pulses are found in the time bins corresponding to the tail of the slow scintillation light. After rejecting the time bins in which the mean of the distribution is larger than two photoelectrons, the arithmetic mean of the modes of the remaining histograms is computed and taken as the calibration constant. Such a method, without the use of fits, provides a robust, automatic evaluation of the calibration constants during long-term data taking.

PMT gain

The temperature dependence of the PMT gains, as well as the PMT dark count rates, was measured during the gradual cooldown of the detector. The PMT gains, which depend on small changes in the work function of the dynode material, are shown in Figure 11 for one PMT each from the top (PMT 1) and bottom (PMT 13) arrays. Starting from room temperature, the gain, initially set to a value of 10⁷ at 1400 V bias, is found to increase until reaching a maximum of about 160% at 200 K, before it starts to decrease with further falling temperature. At the temperature of LAr (about 87 K), the gain drops to about 60% of its room-temperature value if the bias voltage is left unchanged. Due to its vicinity to warmer parts at the top of the experiment, PMT 1 reached only about 170 K in this first cooldown; after the (later) filling of the target with LAr, the temperature of the top array PMTs also approaches the LAr value more closely. The gains of the PMTs are calibrated periodically to the standard value of 10⁷ by changing the individual HV settings.

Dark count rates

The rates of thermally emitted electrons from the photocathodes of the PMTs are determined from data taken with the generator trigger superimposed on the physics trigger of LAr scintillation events. Here we describe a set of data (332k events) taken once the detector was under cryogenic conditions. The data are analysed for the number of signal clusters (mostly of single photoelectron size) in each of the 24 PMT traces over the acquisition window of 4 µs. Random coincidences with LAr scintillation signals are removed by rejecting events with more than one signal cluster within a 48 ns window in the combination of top and bottom PMTs. Figure 12 shows, on the left, the cluster count distributions for the individual PMTs and for all 24 PMTs, both in logarithmic scale. The final dark count rates are evaluated by Poissonian fits to these distributions and are plotted on the right of Figure 12. The ∼20% higher rate for the PMTs in the top array is due to a slightly higher temperature in the gas phase. It can also be observed that PMT 13 shows a higher rate, which is likely due to self-emission of small quantities of light, also cross-talking to its nearest neighbour. Averaging over all 24 PMTs, we obtain a value of 2.3 kHz per PMT at LAr temperature, which is found to be stable over time. The measured rate corresponds to only about 1% probability per 8" PMT of showing one dark count during the acquisition time of 4 µs.
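The quoted ~1% probability follows from Poisson statistics; a quick check using the numbers from the text:

```python
# Sketch: probability of at least one dark count in the 4 us acquisition
# window, given the measured average rate of 2.3 kHz per PMT.
import math

rate_hz = 2.3e3
window_s = 4e-6
mu = rate_hz * window_s                 # expected dark counts per window
p_at_least_one = 1.0 - math.exp(-mu)
print(f"{p_at_least_one:.2%}")          # ~0.92%, i.e. about 1%
```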
A contribution from dark counts to the integrated pulse height of reconstructed scintillation signals can hence be neglected, which is an important property for the detection of LAr scintillation light with its long triplet decay time.

Before filling, the detector volume was evacuated to remove residual gases, above all H2O molecules, which tend to stick to surfaces. Due to temperature-sensitive detector components, the system could not be baked, despite a generally fully metal-sealed design. A vacuum of 10⁻⁵ mbar could be reached after several weeks of pumping. The experiment was then commissioned with argon gas of the type ALPHAGAZ 2 (99.9999% purity) from 200 bar cylinders provided by Air Liquide, Spain.

Detector response calibration sources

Internal low energy calibration source

A 100 kBq 83Rb source was installed in a bypass circuit of the gaseous recirculation line, producing metastable 83mKr atoms. The de-excitation of 83mKr proceeds via two consecutive electromagnetic transitions, suppressed in rate by large changes in nuclear spin: the first (32.1 keV) has a half-life of 1.8 hours and is followed by a ∼150 ns delayed 9.4 keV transition. Most of the de-excitation energy, in total 41.5 keV, is used to eject electrons from the Kr atom via internal conversion and the Auger effect. Due to the low energy and short lifetime, 83mKr sources are well matched to the needs of highly sensitive detectors in low background environments. In the ArDM experiment, radioactive 83mKr atoms can be swept on demand into the gaseous phase of the main detector volume via the gaseous recirculation circuit.

External calibration sources

Other radioactive sources were applied by placing them at different positions close to the outside of the main detector vessel. For this purpose some of the PE bricks of the neutron shield can be removed temporarily. In addition, two plastic tubes were installed inside the neutron shield, serving as guides for vertical and horizontal scans. Mainly a 180 kBq 57Co (122 keV γ) source was used for external calibration. Furthermore, the following sources were prepared for use but are not described in this work: a 37 kBq 22Na (0.511 and 1.27 MeV γ), a 37 kBq 60Co (1.17 and 1.33 MeV γ), as well as a 20 kBq 252Cf neutron source.

Data with room temperature GAr target

The ArDM detector was first commissioned at room temperature with a GAr target over a period of about 3 months. The basic functionality of the detector components was verified and found to work as expected. In particular, the cleaning efficiency of the recirculation system was confirmed by the increase of the slow scintillation time constant from the initial value of 2.5 µs after filling to 3.2 µs, a value close to the undisturbed lifetime of the triplet excimer state. However, a contamination of the argon gas with α emitters was observed after activating the recirculation system, causing an increase of the trigger rate from about 20 to 30 Hz. When the recirculation was stopped, a decrease of the α rate with a time constant of about 3.5 d was observed, confirming the presence of 222Rn in the detector. While the presence of 222Rn should be avoided in a dark matter detector due to the production of long-lived radioactive isotopes, here it could be used for a light yield study with spatially uniformly distributed events in the high energy region. Figure 13 shows a fit (red) of the total detected light spectrum of the GAr data with 5 (Gaussian-smeared) α lines on a small background.
We interpret these events as daughter products of Rn isotopes generated from the 238U and 232Th decay chains in the SAES recirculation cartridge (see Section 2.3.4). From these and other test source measurements (57Co, 83mKr), we confirmed the good linearity of the light detection system (inset in the figure) and a yield of roughly 0.8 pe/keV for room temperature GAr.

Figure 13. Spectrum of total detected light in the large signal region for room temperature GAr data. The inset shows the light yields (pe) of the 5 fitted α lines.

Data with cold GAr target

Following the measurements in warm gas, the experiment was cooled down for operation with a cold GAr target. Of main interest were the functionality of the cryogenic system and the performance of the light detection system at low temperatures. The cooldown of the setup was accomplished over a period of roughly one week by gradually filling the volume of the outer bath (see Section 2.3) with LAr. The argon gas in the main detector volume was kept at a pressure below that of the outer bath to prevent the formation of LAr in the main target; in this way, temperature gradients and material stress on the detector components were also minimised. This method allowed for continuous monitoring of the detector functionality, since the light readout could be kept running at all times. The temperature-dependent gain curves of the PMTs (see Figure 11) were obtained from data taken during this period. Once the detector was in thermal equilibrium, the PMT gains were adjusted. The trigger rate was found to have increased to about 40 Hz, explained by the 3.5 times higher density of the argon gas at 87 K compared to room temperature. A calibration campaign using the 83mKr source was performed. The change in trigger rate after the injection of 83mKr is shown in Fig. 14: the half-life of 1.82 hours obtained from an exponential fit is in good agreement with the literature value of 1.83 hours [35]. Figure 15 shows a histogram of the vertical localisation variable TTR versus the total detected light L_tot for 270k events obtained during the 83mKr injection. About 80% of the data is related to 83mKr decays. The distribution of the events suggests a uniform light yield along the vertical axis and a spatially homogeneous distribution of the 83mKr isotopes. This conclusion is also supported by the Gaussian width of the spectrum (peaking at 41.5 keV), which is close to the photoelectron statistics, underpinning the functionality of the diffusive light collection. We note that the majority of triggers for these events are generated solely by the fast component (∼30%) of the scintillation light of the 32 keV transition of the 83mKr de-excitation.

Data with LAr target in single phase

Data taking with a full liquid argon target was performed for a duration of 6 months. The filling process itself required several weeks to cool the detector and condense in total almost 2 tons of argon gas (ALPHAGAZ 2). The detector was read out at all times to continuously monitor its state. The filling level could be followed with the capacitive level sensors, as well as through the trigger rate increasing with the growing target size. When the detector was full, a rate of 1.3 kHz was obtained, as expected from the natural abundance of 39Ar in the target (≈1 Bq/kg). The LAr purification system was operated continuously during the entire run, and the recirculation circuit was found to additionally improve the thermodynamic stability of the system.
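The half-life extraction from the decaying trigger rate quoted above is a simple exponential fit; a minimal sketch with invented rate points follows (the amplitudes and baseline are placeholders, only the half-life value is taken from the text):

```python
# Sketch: extracting the 83mKr half-life from the decaying trigger rate.
import numpy as np
from scipy.optimize import curve_fit

def model(t, r0, tau, baseline):
    return r0 * np.exp(-t / tau) + baseline       # injection excess + steady rate

t_h = np.linspace(0, 12, 25)                      # hours after injection
rate = model(t_h, 300.0, 1.82 / np.log(2), 40.0)  # fake data, tau = t_half / ln 2
popt, _ = curve_fit(model, t_h, rate, p0=(200.0, 2.0, 30.0))
print(f"half-life = {popt[1] * np.log(2):.2f} h") # recovers ~1.82 h
```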
The slow scintillation decay time (τ_slow) was monitored continuously during the run and was found to be stable at a value around 1.25 µs. A direct measurement of the amount of oxygen in the LAr target by means of a trace analyser of the type AMI 2001RS was performed towards the end of the run. The concentration of oxygen impurities was measured to be less than 0.1 ppm, the highest sensitivity of the instrument, supporting the functionality of the purification system. More details can be found in [20].

Several 83mKr calibration campaigns were undertaken during the run period by injecting the metastable atoms into the gaseous phase on top of the liquid argon. The subsequent distribution of the 83mKr atoms in the LAr volume can be derived from the TTR values of events below the 83mKr peak. It was found that it took about 2 hours to obtain a spatially uniform distribution of the 83mKr atoms in the liquid. The long dilution time for the Kr atoms in LAr indicates a good performance of the cryogenic design, suggesting an undisturbed, convection-free condition of the LAr target. The main panel in Fig. 16 shows 83mKr data taken with the full LAr target about 4 hours after injection. The background, dominated by 39Ar β decays and external γ photons, has been subtracted from the histogram. The inset shows the same data, but taken already 30-60 min after injection, indicating the accumulation of 83mKr decays close to the liquid surface of the LAr target. The total light spectrum from one of the 83mKr campaigns, obtained 2 hours after injection (black dots), is shown in Figure 17, together with a fit function in red. The latter consists of a Gaussian on top of a background parametrisation given by the sum of an 8th degree polynomial and an exponential. The background histogram before the injection of the 83mKr atoms is shown in grey. Only events with 0.3 < TTR < 0.6 are used for this graph. The cleanly detected signal from the low-energy 83mKr events can be regarded as a proof of principle for the recording and reconstruction of events at energies relevant for Dark Matter searches, indicating the sensitivity of the ArDM detector to energies around ∼30 keVee. The important experimental parameter of the light yield (pe/keV) can be estimated using the data from the 83mKr and 57Co campaigns. The Gaussian fit to the 83mKr peak yields a value of LY_Kr = 1.1 pe/keV; similarly, a fit to the 57Co data gives LY_Co = 1.2 pe/keV. The values obtained from the two data sets agree well and we estimate a mean light yield of 1.1 pe/keV. The errors were found to be dominated by systematics, originating in fluctuations of temperatures, purities and calibrations, as well as in the choice of fit ranges, background parameterisations and others. In total we estimate the systematic error on the light yields to be around 5%. The value for the light yield found in this work is lower than initially expected from Monte Carlo simulations. In an extensive analysis of the measured light yield spectra and a comparison to a description of the ArDM setup with a full light ray tracing model, the attenuation length of the LAr target to its own scintillation light was determined to be of the order of 0.5 m. This was explained by the presence of optically active impurities not filtered by the purification system (see Ref. [20] for more details).

Figure 17. Total light spectrum before (grey area) and 2 h after (black dots) the injection of 83mKr into the LAr. The red curve is a fit to the data.
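The peak fit just described, a Gaussian on an 8th-degree polynomial plus an exponential background, can be sketched as follows; the spectrum values are synthetic placeholders, not the measured data:

```python
# Sketch of the 83mKr peak fit: Gaussian signal on a background modelled by
# an 8th-degree polynomial plus an exponential, as described in the text.
import numpy as np
from scipy.optimize import curve_fit

def fit_model(x, amp, mu, sigma, a, b, *poly):
    gauss = amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2)
    return gauss + a * np.exp(-b * x) + np.polyval(poly, x)

x = np.linspace(10, 120, 111)                       # total detected light (pe)
y = fit_model(x, 500, 46, 7, 800, 0.05, *([0.0] * 8 + [50.0]))  # fake spectrum
p0 = [400, 45, 6, 700, 0.04] + [0.0] * 8 + [40.0]   # 9 polynomial coefficients
popt, _ = curve_fit(fit_model, x, y, p0=p0, maxfev=20000)
mu = popt[1]
print(f"peak at {mu:.1f} pe -> light yield {mu / 41.5:.2f} pe/keV")  # 41.5 keV deposit
```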
In the absence of VUV attenuation within the argon bulk, the light yield is predicted to be 2 pe/keV.

Detector stability

One of the questions addressed by the long-term operation of the detector was the durability of the thin evaporated layer of the TPB coatings on the inner surfaces, amounting in total to an area of about 3 m². We analysed the variation of the position of the maximum of the total detected light, and of the light yield obtained from the 83mKr calibration campaigns, as a function of time. The position of the background maximum was calculated from the polynomial function that parametrises the background. Figure 18 shows the time evolution of the background maximum position over 6 months, obtained from data randomly sampled from the recorded runs. In addition, we compare the light yields (pe/keV) from the different 83mKr calibration campaigns; Table 5 shows a summary of the five 83mKr calibration campaigns during data taking with LAr. These measurements confirm the high level of stability of the ArDM setup. In order to study the performance of the TPB wavelength shifter material during ArDM Run I, we also compare the light yields (pe/keV) from 222Rn and its progenies in room temperature GAr runs before and after the data taking with LAr. From Table 6 we confirm the good linearity of the light detection system before and after the LAr filling; within systematic errors, the measured light yields from the two data sets are consistent with each other. Both the background maximum position and the light yields from the different 83mKr calibration campaigns show that the detector response was stable during data taking with the LAr target. The good linearity and consistency of the light yield from 222Rn and its progenies before and after the LAr runs show that the TPB wavelength shifter coated on the light readout system did not deteriorate during data taking with the LAr target.

Table 6. The light yield from 222Rn and its progenies in room temperature GAr runs before and after the detector LAr runs.

Conclusion

The ArDM detector is the first dual-phase liquid argon TPC optimised for Dark Matter searches to be operated in a deep underground environment. In this paper we have described the current experimental setup, configured for single-phase mode: the cryogenic system, the light readout, the neutron shield, the DAQ and trigger, as well as the data reconstruction are detailed. The detector was successfully commissioned underground at the Laboratorio Subterráneo de Canfranc, first with warm gas and then with cold gas; it then operated filled with liquid argon in stable conditions over six months. These first data confirm the performance and functionality of the detector. The light yield shows good linearity over a wide energy range, from several tens of keV to several MeV; its absolute value is determined to be around 0.8 pe/keV in GAr and around 1 pe/keV in LAr. The ability to detect, trigger on and analyse signals below 30 keV, relevant to Dark Matter searches, is demonstrated. Following the successful operation reported in this paper, the detector has recently been upgraded for dual-phase operation with an HV system, a new field cage and new reflectors. Furthermore, additional improvements, e.g. in the purification system, are foreseen to improve the light yield. The ArDM achievements represent an important milestone towards sensitive WIMP searches with liquid argon targets and open the path towards detectors at the 10-ton scale or beyond with nuclear recoil sensitivity.
Integrated analysis of mRNA and viral miRNAs in the kidney of Carassius auratus gibelio response to cyprinid herpesvirus 2 MicroRNAs (miRNAs) are small, non-coding single-stranded RNAs that play crucial roles in numerous biological processes. Vertebrate herpesviruses encode multiple viral miRNAs that modulate host and viral genes. However, the roles of viral miRNAs in lower vertebrates have not been fully determined. Here, we used high-throughput sequencing to analyse the miRNA and mRNA expression profiles of Carassius auratus gibelio in response to infection by cyprinid herpesvirus 2 (CyHV-2). RNA sequencing obtained 26,664 assembled transcripts, including 2,912 differentially expressed genes. Based on small RNA sequencing and secondary structure predictions, we identified 17 CyHV-2 encoded miRNAs, among which 14 were validated by stem-loop quantitative real-time reverse transcription polymerase chain reaction (qRT-PCR) and eight were validated by northern blotting. Furthermore, Gene Ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway analysis of miRNA-mRNA pairs revealed diverse affected immune signalling pathways, including the RIG-I-like receptor and JAK-STAT pathways. Finally, we present four genes involved in RIG-I-like pathways, the host genes IRF3, RBMX and PIN1 and the viral gene ORF4, which are negatively regulated by the CyHV-2 encoded miRNA miR-C4. The present study is the first to provide a comprehensive overview of viral miRNA-mRNA co-regulation, which might have a key role in controlling post-transcriptional regulation during CyHV-2 infection.

During miRNA biogenesis, one strand of the miRNA duplex is loaded into the RNA-induced silencing complex (RISC), while the passenger strand is degraded. Once incorporated into the RISC, the miRNA acts as a guide RNA to pair with specific target messenger RNAs bearing partially or fully complementary target sites, resulting in gene-specific downregulation, either through enhanced translational inhibition or transcript degradation 21. The miRNA target sites are usually located in the mRNA's 3′ untranslated region (UTR), and while an miRNA's association with the target mRNA's 3′ UTR does not need to be extensive, full complementarity to nucleotides 2-7 or 2-8 of the miRNA, called the miRNA seed sequence, is generally required for effective downregulation 22. A number of herpesviruses, including the common carp pathogen cyprinid herpesvirus 3 (CyHV-3), express virus-encoded miRNAs in infected cells or in vivo 23,24. Viral miRNAs play a key role in suppressing the expression of host cellular mRNAs, which often encode antiviral factors, as well as in regulating the expression of viral genes, including crucial factors involved in the latent-to-lytic transition of viral infection 25,26. In addition, viral miRNAs can target viral mRNAs to trigger their downregulation. For example, MR5057-miR-3p, encoded by CyHV-3, targets the 3′ UTR of ORF123, resulting in a reduced level of CyHV-3 dUTPase 23. Additionally, miR-S1-5p and miR-S1-3p, encoded by the same miRNA precursor of polyomavirus SV40, direct the cleavage of early transcripts during infection and negatively regulate the viral T-antigen transcripts 27. The reduction of the T antigen effected by miR-S1-5p and miR-S1-3p is crucial for the replication of the virus, reducing the susceptibility of infected cells to SV40-specific cytotoxic T lymphocytes 27. The human cytomegalovirus encoded miRNA miR-UL112-1 downregulates the expression of the cytomegalovirus gene IE1, which plays a crucial role in establishing latent infection 28.
In summary, the literature indicates that viruses have evolved to make use of virus-encoded miRNAs to regulate the expression of their own genes for successful infection. In addition to this "autoregulation" of viral target genes, several virus-encoded miRNAs target host cellular mRNAs 29; however, their functions are poorly understood. The host cellular gene thrombospondin 1 (THBS1) is targeted by Kaposi's sarcoma associated herpesvirus (KSHV) miRNAs. THBS1 functions in downregulating angiogenesis and cell growth by promoting transforming growth factor beta (TGF-β) 30; the downregulation of THBS1 expression by KSHV miRNAs thus promotes the survival and proliferation of KSHV-infected cells 30. MiR-UL112-1, encoded by human cytomegalovirus, targets viral 31,32 and host cellular genes 33. By binding to the 3′ UTR of the major histocompatibility complex class I-related chain B gene (MICB), miR-UL112-1 inhibits the expression of MICB and thereby decreases the susceptibility of virus-infected cells to killing by natural killer cells 33. Epstein-Barr virus (EBV) encodes miR-BHRF1-3, which downregulates the expression of CXC-chemokine ligand 11 (CXCL11), an interferon-inducible T-cell chemoattractant that plays a crucial role in host defences against EBV; infected cells take advantage of the suppression of CXCL11 to avoid T-cell recognition 34. In recent years, virus-encoded miRNAs have attracted much research attention. However, many viral miRNAs have only been characterized in cell lines, and the roles of viral miRNAs in hosts in vivo may be very different from those in cell lines. In lower vertebrates, the kidney, with the highest concentration of developing B lymphoid cells, is an important organ involved in adaptive immunity 35. Additionally, CyHV-2 propagates most efficiently in the kidney, with the highest degree of tissue damage 36,37. In this study, the viral miRNAs encoded by CyHV-2 were characterized in the kidney of Carassius auratus gibelio. We used high-throughput sequencing technology to analyse the viral miRNA and mRNA expression profiles of Carassius auratus gibelio in response to CyHV-2. Based on small RNA (sRNA) sequencing and secondary structure predictions, we identified 17 CyHV-2 encoded miRNAs. Gene Ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway analysis of the miRNA-mRNA pairs revealed the affected immune signalling pathways. Finally, we present a functional analysis of ORF4 and ORF6, and of IRF3, RBMX and PIN1, which are targeted by miR-C4. Collectively, the present study is the first to provide a comprehensive overview of viral miRNA-mRNA co-regulation, which might have a key role in controlling post-transcriptional regulation during CyHV-2 infection.

Results

Overview of the deep sequencing of mRNA libraries. Expression profiling of the kidneys of control fish (T1K, T2K, and T3K) and moribund fish (T4K, T5K, and T6K) was carried out using digital gene expression tag profiling (DGE). The major characteristics of these libraries are summarized in Table S1. A total of 573,340,100 raw reads were generated from the uninfected and infected groups. After filtering out the low quality reads, 564,633,312 clean reads remained. All clean reads were then assembled using the de novo assembly program Trinity 38. The clean reads were assembled into 26,664 transcripts with an average length of 744 bp; the length distribution of these transcripts ranged from 201 to 14,287 bp.
All the raw RNA-Seq data were submitted to the NCBI database (http://www.ncbi.nlm.nih.gov/geo/info/linking.html) under accession number GSE90626. To compare the mRNA expression profiles of the silver crucian carp kidney before and after CyHV-2 infection, the sequence data were analysed using the DEGseq software, with the criteria of fold change >2 and false discovery rate (FDR) <0.001, to identify significantly differentially expressed genes. The expression of 2912 genes changed significantly, comprising 1422 upregulated and 1490 downregulated genes (Table S4). Subsequently, GO and KEGG enrichment analyses were performed to analyse the functions of the genes that responded to CyHV-2 infection. The significantly enriched GO terms included extracellular region, plasma membrane, integral to membrane, heme binding, electron carrier activity, and proteasome complex (Fig. S1). Pathway enrichment analysis of the differentially expressed genes showed that the proteasome, neuroactive ligand-receptor interaction, calcium signalling, and PPAR signalling pathways were enriched (Fig. S2).

CyHV-2 encodes multiple miRNAs that are clustered in distinct regions of the viral genome. We also identified CyHV-2-encoded miRNAs. RNAs isolated from the kidneys of moribund and healthy fish were analysed using next generation sequencing (NGS). A total of 10,714,657 reads were generated from the uninfected and infected groups, with over 90% of the sequences being valid reads (Table S2). Among them, about 4.02% of the sequences mapped to Rfam, and most of the sequences were 20 to 24 nucleotides in length. The remainder of the unmapped sequences likely represented non-coding RNAs, unrecognized miRNAs, or RNA degradation products. Seventeen potential CyHV-2 miRNAs were identified by NGS, based on sequence length, copy number, mapping to the CyHV-2 genome, and the formation of stable hairpins. Secondary structure predictions 39 demonstrated that the potential CyHV-2 miRNAs could fold into the hairpin structures typical of pre-miRNAs (Fig. 1). Table 1 shows the viral miRNAs, which ranged in size from 19 to 25 nucleotides and were detected at copy numbers from 3 to 417,576. However, within the predominant consensus sequence for each miRNA, variability at the 3′ end of the miRNAs was common, as shown in Table S5, which lists all potential CyHV-2-derived miRNA and miRNA passenger strand reads. These miRNAs exhibited fewer sequence variations at the 5′ ends, while variations at the 3′ ends were fairly common; similar 3′ and 5′ variability has been noted for many other herpesviruses [40-43]. Thus, in the subsequent discussion, we consider the most abundant of all the isomiRs as the reference mature miRNA. Furthermore, miRNAs are generally not conserved between different viral species; instances of conservation or high sequence similarity have only been observed between closely related viruses. We used BLASTN to align the putative CyHV-3 miRNA sequences to the 17 CyHV-2 encoded miRNAs; however, no homologous miRNAs were identified. The genomic locations of the 17 potential CyHV-2 miRNAs are depicted in Fig. 2. Similar to many other herpesviruses, the CyHV-2 miRNAs are generally distributed across the viral genome, with major clusters in two different regions: one cluster of seven miRNAs is located near ORF42, including miR-C3, miR-C4, and miR-C5, and a second cluster exists near ORF114, including miR-C1, miR-C15, and miR-C16.
Notably, two of the pre-miRNAs each encode two different mature miRNAs, one from the 5′ arm and the other from the 3′ arm: miR-C8 with miR-C9, and miR-C11 with miR-C12. Detailed sequences, loci, read numbers, and orientations of these miRNAs are shown in Fig. 2. The most abundant miRNA was miR-C5, and the second most abundant was miR-C4. Viral miRNAs displaying higher expression levels were selected for northern blotting analysis. The pre-miRNAs of miR-C4, miR-C5, miR-C6, miR-C10, miR-C14, miR-C15, and miR-C17 were detected, and the mature miRNAs of miR-C4 and miR-C5 were detected by northern blotting. The blot pattern showed two bands for pre-miR-C6, indicating that pre-miR-C6 might be generated by different mechanisms (Fig. 3B).

Increasing evidence shows that viral miRNAs play crucial roles in host-virus interactions, and many studies have demonstrated that viral miRNAs can target viral or host genes 45. Systematic analyses of the interactions between mRNAs and miRNAs could reveal information concerning the roles of miRNAs during virus infection 46. In the present study, targets of miRNAs were identified based on sequence complementarity and the free energy of the predicted RNA duplex, using miRanda and TargetScan. A total of 1108 miRNA-mRNA interactions were identified. The predictions showed that most miRNAs could regulate several target genes; for example, miR-C4 correlated with two viral genes and 70 host genes, and miR-C3 correlated with 174 host genes and five viral genes. Moreover, most mRNAs were associated with more than one miRNA, such as the host gene caspase 8, which was targeted by miR-C8, miR-C12, and miR-C14, and carboxypeptidase D, which was targeted by miR-C8, miR-C13, and miR-C14 (Table S6). To characterize the functions of the miRNA-mRNA interaction pairs, the mRNAs involved in the interaction pairs were subjected to GO and KEGG pathway analyses. GO analysis provides insight into the functions of genes in various biological processes 47. Based on the GO functional analysis, the targets of the miRNAs were enriched in cell redox homeostasis, Rho GDP-dissociation inhibitor activity, nucleoside phosphate kinase activity, and thyroid gland development (Table S7, Fig. S3). In organisms, genes often interact with each other to exert their different roles in certain biological functions 48, and KEGG pathway analysis can aid our understanding of the biological functions of genes 49,50. In the KEGG enrichment classification, Type II diabetes mellitus, the JAK-STAT signalling pathway, bacterial invasion of epithelial cells, and the RIG-I-like receptor signalling pathway were among the significantly enriched miRNA-associated pathways (Table S8, Fig. S4). We also noted several significantly enriched immune-related pathways, including the RIG-I-like receptor signalling pathway (Table S8), suggesting an important role of CyHV-2 miRNAs in restricting innate antiviral immunity. Thus, the GO and KEGG analyses provide a better understanding of the cellular components, molecular functions, and biological processes of the target genes, and provide a reference for future research. To narrow the focus of our study to viral miRNA target genes that are relevant to antiviral immunity, we analysed the RIG-I-like pathway, which plays a crucial role in host innate immune responses against viral pathogen infections.
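Before turning to the individual genes, note that the seed-based site search underlying such target predictions can be sketched compactly. Both sequences below are invented placeholders, not the real miR-C4 or target UTR sequences:

```python
# Minimal sketch of seed-based target site scanning: look for sites in a
# 3' UTR that are reverse-complementary to the miRNA seed (nucleotides 2-8).
def revcomp(seq: str) -> str:
    return seq.translate(str.maketrans("ACGU", "UGCA"))[::-1]

def seed_sites(mirna: str, utr: str):
    seed = mirna[1:8]                        # positions 2-8 of the mature miRNA
    site = revcomp(seed).replace("U", "T")   # match against a DNA-sense UTR
    return [i for i in range(len(utr) - 6) if utr[i:i + 7] == site]

mir_c4 = "UAGCUAGGCAUCGUAGCUAGGA"            # placeholder miRNA sequence
utr = "ATGCCTAGCTACCTAGCTATTTCCTAGCTA"       # placeholder 3' UTR
print(seed_sites(mir_c4, utr))               # start positions of seed matches
```

Tools such as miRanda and TargetScan add further criteria on top of this, notably duplex free energy and site conservation.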
In our study, 15 RIG-I-like receptor pathway-related genes were found to be differentially expressed, and three of these genes are regulated by miR-C4 (Table S9).

Detection of the level of CyHV-2 and the DEGs in the kidney of silver crucian carp. The quantitative RT-PCR for CyHV-2 titration was performed as described previously 5. The qRT-PCR assays showed that the virus titres increased over time, reaching 10^7.6 at 72 h post infection (Fig. 4A). In addition, to validate the NGS data, selected differentially expressed genes were confirmed by qRT-PCR, including IRF3, RBMX, PIN1, MAPK7, MHC-I, and NF-κB. As shown in Fig. 4B, the PIN1 and NF-κB mRNA levels were significantly downregulated after infection, MHC-I, IRF3, and MAPK7 expression was significantly upregulated following infection, whereas RBMX expression showed almost no variation.

Validation of miRNA targets by luciferase activity. To reveal the pathways mediated by viral miRNAs, the target genes of viral miRNAs involved in the RIG-I-like pathway were analysed. Previous research has revealed that viral miRNAs are regulators of the networks involved in the RIG-I-like pathway 51. Based on target prediction using TargetScan and miRanda, the CyHV-2-encoded ORF4 and ORF6, and the host genes IRF3, RBMX, and PIN1, were identified as targets of miR-C4 (Table S9); IRF3, RBMX, and PIN1 are involved in the RIG-I-like pathway. The binding sites of miR-C4 in ORF4, ORF6, IRF3, RBMX, and PIN1 of Carassius auratus gibelio are shown in Fig. 5A. To evaluate the effect of miR-C4 on the target 3′ UTRs, dual-luciferase reporter constructs carrying wild-type or mutant ORF4, ORF6, IRF3, RBMX, and PIN1 3′ UTRs were cotransfected with miR-C4 mimics or a negative control miRNA (a random miRNA sequence). The results showed that the luciferase activities of the ORF4, IRF3, RBMX, and PIN1 reporters were significantly reduced by the miR-C4 mimics, whereas the mutant reporters were not affected (Fig. 5B,C). This result indicated that miR-C4 targets sequences in the 3′ UTRs of ORF4, IRF3, RBMX, and PIN1.

Discussion

In recent years, many herpesviruses have been found to encode miRNAs, including the pathogenic virus of common carp, CyHV-3. The aim of this study was to analyse the mRNA targetomes regulated by miRNAs encoded by CyHV-2, a virus closely related to CyHV-3. In the present study, we used NGS to characterize the mRNAs and miRNAs in CyHV-2 infected Carassius auratus gibelio kidneys. Seventeen CyHV-2 miRNAs were identified, and fourteen were confirmed by stem-loop qRT-PCR. We performed an integrative analysis of these data and obtained the complete set of CyHV-2 encoded miRNAs and responsive host genes. The copy numbers of viral miRNA reads are quite low in many viruses 23,52,53. The factors that contribute to miRNA abundance include the amount of transcript accumulation, the efficiency of miRNA processing, and the contribution of miRNA decay. Considering that almost all of the 156 CyHV-2 ORFs are transcribed during infection, it is unsurprising that the majority of the CyHV-2 transcripts sequenced represented mRNA degradation products. By comparison, fewer CyHV-2 miRNA reads mapped to non-coding regions; however, these were much more abundant in terms of read count. Additionally, miRNA stability can be influenced by cellular modifications, argonaute protein levels, exposure of the miRNAs to nucleases, and target abundance 54. EBV miRNAs have been documented to be differentially expressed in distinct cultured cell types, similar to observations in many studies of cellular miRNAs 55.
Some investigations revealed that four of the 24 miRNAs encoded by rhesus cytomegalovirus (RhCMV) were detected exclusively in infected fibroblasts, while two were specific to infected salivary glands 42. Thus, the distribution of CyHV-2 miRNAs may be regulated in a tissue-specific manner in vivo. In our study, the CyHV-2 miRNAs were detected by both NGS and qRT-PCR. In the NGS and qRT-PCR assays, miR-C4 and miR-C5 were the most abundant miRNAs during infection, and the remaining viral miRNAs were much less abundant (Fig. 3A, Table 1). The different abundances of the CyHV-2 miRNAs enriched in the kidney of infected silver crucian carp suggest that these miRNAs might be regulated in a tissue-specific manner in vivo. Additionally, the high abundance of miR-C4 and miR-C5 suggests significant roles for these miRNAs during infection. It has been reported that viral miRNAs can target and downregulate host cellular mRNAs and/or viral mRNAs during virus infection [56-59]. Exploring viral miRNA targets among the host and viral mRNAs enabled us to screen for important candidate targets of CyHV-2 miRNAs, which indicated their roles in evading host innate immune responses, such as antiviral signalling, inflammation, and apoptosis. The innate immune system of fish is regarded as the first line of defence against pathogens and is much more important in fish than in mammals 60. The JAK/STAT signalling pathway has been demonstrated to play an important role in the antiviral response of vertebrates 61; however, the regulation of JAK/STAT transcription factor expression by viral miRNAs has not been investigated 62. The measles virus (MV) phosphoprotein can bind the linker domain of signal transducer and activator of transcription 1 (STAT1), resulting in inhibition of JAK/STAT activation 63. The hepatitis C virus (HCV) core protein is required for the production of infectious viruses through its interaction with the JAK protein 64. In this study, target prediction indicated that the viral miRNAs could target genes involved in the JAK-STAT pathway, including JAK1, STAT1, STAT6, and MYC1 (Table S8). These findings indicate a novel aspect of viral miRNA-mediated regulation of the JAK/STAT signalling pathway during virus infection. During infection, virus-encoded miRNAs regulate the RIG-I antiviral pathway and the host immune response. The results presented by Silva and Jones 65 suggested that the expression and production of the BHV-1 infected cell protein 0 (bICP0) is interfered with by Bovine Herpesvirus 1 (BHV-1) miRNAs, which are expressed during latent infection and stimulate the RIG-I signalling pathway, correlating with activated type I interferon signalling; however, they presented no direct evidence of the mechanism by which the latency-related gene-encoded miRNAs are recognized by RIG-I. In this study, we identified several genes involved in the RIG-I-like pathway, including IRF3, RBMX, and PIN1, which are negatively regulated by the CyHV-2 miRNA miR-C4 (Table S6). In addition, validation of the miRNA-mRNA interaction pairs showed that PIN1 had a down-down regulatory pattern and IRF3 presented a down-up regulatory pattern, while RBMX expression hardly changed (Fig. 5B). This result largely corresponded with the sequencing data. Extensive research has shown that hundreds of miRNAs interact with thousands of target mRNAs to maintain proper gene expression patterns under viral infection, and our results support this notion.
The complexity of the miRNA-mRNA interaction network presents a great challenge for researchers seeking to reveal the roles of specific miRNAs or miRNA-mRNA interactions in biological processes. For instance, PIN1, IRF3, and RBMX constitute a complex interaction network with miR-C4, miR-C12, and many other miRNAs (Table S6). Furthermore, miRNA-mRNA interaction is only one of multiple mechanisms influencing the regulation of gene expression, and our results would not be sensitive in circumstances where multiple factors, in addition to miRNA-mRNA interactions, are involved. Although increasing evidence shows that viral miRNAs affect host innate immune responses to regulate virus infection 66, the exact mechanisms of the roles of miRNAs in the host immune response to viral infection remain to be determined. Overall, our results demonstrate a series of complex, sequential viral miRNA molecular signatures associated with CyHV-2 infection and provide a basis for future investigations. We identified 17 CyHV-2 encoded miRNAs. GO and KEGG pathway analysis of the reported viral miRNA targets revealed the diversity of the affected immune signalling pathways, including the RIG-I-like receptor pathway and the JAK-STAT pathway. The post-transcriptional regulation of IRF3, RBMX, PIN1, and ORF4 by miR-C4 could affect the expression of those genes.

Materials and Methods

Fish and CyHV-2 challenge. Healthy silver crucian carp (approximately 10 cm in body length) were obtained from the Wujiang National Farm of Chinese Four Family Carps, Jiangsu Province, China. Initially, fish were reared temporarily at 23 °C for adaptation. After seven days of acclimation, the fish were divided into two groups (30 fish per group) for intraperitoneal injection. The conditions were identical among the tanks and the fish were randomly distributed into the different tanks. The two groups were maintained in two aquariums; one group was intraperitoneally injected with CyHV-2 suspended in PBS at a dose of 1 × 10⁶ TCID50/g, which had been applied and verified in previous challenge experiments 5, while as controls, fish were injected with PBS at the same dosage. After injection, all the fish were reared under the same conditions, fed according to a standard feeding scheme, and observed continuously to identify and collect moribund animals. Moribund fish at 72 h post-challenge were collected, and the kidneys of control fish (T1K, T2K, and T3K) and moribund fish (T4K, T5K, and T6K) were sampled, each group comprising three biological replicates from three different individual kidney tissues, and immediately frozen in liquid nitrogen. RNA samples were prepared for transcriptome and gene expression analyses. All experiments were performed according to the guidance of the Care and Use of Laboratory Animals in China. This study was approved by the Committee on the Ethics of Animal Experiments of Shanghai Ocean University, China.

RNA isolation, library construction, and sequencing. For the six transcriptome library constructions, the RNA preparation, library construction, and high-throughput sequencing were performed by LC-BIO (Hangzhou, China). The experimental procedure was as follows: total RNAs were extracted using the Trizol reagent (Invitrogen, CA, USA), following the manufacturer's instructions. The quantity and purity of the total RNA were analysed using a Bioanalyzer 2100 and an RNA 6000 Nano LabChip Kit (Agilent, CA, USA); all samples had an RNA integrity number (RIN) greater than 7.0.
For the sRNA-seq experiment, approximately 1 μg of total RNA was used to construct an sRNA library, according to the protocol of the TruSeq™ Small RNA Sample Prep Kits (Illumina, San Diego, CA, USA). Single-end sequencing (50 bp) was performed on the Illumina HiSeq 2500 platform following the vendor's recommended protocol. For the RNA-seq experiment, approximately 10 μg of total RNA was subjected to enrichment for poly(A)-tailed mRNAs using poly-T oligo attached magnetic beads (Invitrogen, MA, USA). Following purification, the mRNAs were fragmented into small pieces using divalent cations at an elevated temperature. The cleaved RNA fragments were then reverse-transcribed to produce the final cDNA library, according to the protocol of the mRNA-seq sample preparation kit (Illumina, San Diego, CA, USA). The library was constructed by pooling nine homogenized total RNAs from the kidney samples. Then, paired-end sequencing of the libraries was performed on an Illumina HiSeq 2500 (LC Sciences, USA), following the vendor's recommended protocol. The length of the reads was 100 bp, and the average insert size for the paired-end libraries was 179 bp (the length of the adapter was 121 bp). Pre-treatment of the sRNA-seq data. Small RNA libraries were constructed and sequenced as previously described 67 . Total RNA used to make the small RNA library was prepared according to the manufacturer's instructions for the TruSeq Small RNA Sample Prep Kits (Illumina, San Diego, CA, USA). The sRNA libraries were then sequenced on an Illumina HiSeq 2500 (50 bp single-end) at LC-BIO (Hangzhou, China). The raw reads were subjected to the Illumina pipeline filter (Solexa 0.3), and then the dataset was processed with ACGT101-miR (LC Sciences, Houston, TX, USA) to remove repeats, junk, adapter dimers, low-complexity sequences, and common RNA families (rRNA, tRNA, snRNA, and snoRNA). Subsequently, unique sequences of 19-25 nt were mapped to species-specific miRNA precursors in miRBase 21.0 using a Bowtie search to identify known and novel miRNAs. Subsequently, all the remaining sRNA sequences were searched against the CyHV-2 genome (GenBank accession no. AF332093.1). The small RNA sequences of putative CyHV-2 miRNAs were analysed by a BLASTN search against the CyHV-2 genome, allowing one or two mismatches between each pair of sequences. To analyse the potential precursor structures of the 17 CyHV-2 miRNA candidates, each sequence, including a fragment of 60 to 70 bases flanking the sequence, was subjected to miRNA secondary structure prediction using the mFold online software (http://frontend.bioinfo.rpi.edu/applications/mfold/) with default parameters. De novo assembly and expression level calculation of the transcripts. The raw reads were cleaned by removing adapter sequences, empty reads, and low-quality sequences (reads with over 10% unknown base pairs 'N'). The reads obtained were randomly decomposed into overlapping k-mers (default k = 25) for assembly using the Trinity software 38 . After assembling the transcripts, a locally installed BLASTall program 68 was used to search the assembled transcripts against the sequences in the NCBI NR protein database (http://www.ncbi.nlm.nih.gov/protein/) 69 and the Swissprot database (http://www.uniprot.org/) 70 using an E-value cut-off of 1e-10. Genes were tentatively annotated according to their best hits against known sequences.
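As an illustration of this best-hit annotation step, the sketch below (ours; the file name is hypothetical, and the column layout assumes NCBI BLAST tabular output, -outfmt 6, which this study may not have used) keeps, for each transcript, the hit with the smallest E-value below the 1e-10 cut-off:

```python
# Minimal sketch: pick the best BLAST hit per query under an E-value cut-off.
# Assumes tabular output (qseqid sseqid pident length mismatch gapopen
# qstart qend sstart send evalue bitscore), i.e. BLAST -outfmt 6.
import csv

def best_hits(path: str, evalue_cutoff: float = 1e-10) -> dict:
    best = {}
    with open(path) as handle:
        for row in csv.reader(handle, delimiter="\t"):
            query, subject, evalue = row[0], row[1], float(row[10])
            if evalue > evalue_cutoff:
                continue  # hit is not significant enough
            if query not in best or evalue < best[query][1]:
                best[query] = (subject, evalue)  # keep the smallest E-value
    return best

# annotations = best_hits("transcripts_vs_nr.tsv")  # hypothetical file name
```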
The clusters of orthologous groups (COG 71 ) and KEGG 72 annotation systems were used to analyse the biological pathways that involved the assembled transcripts. Furthermore, Bowtie (version 0.12.7) 73-77 was used to map the RNA-seq reads to all the assembled transcripts using the "single-end" method and the parameters "-v 3 -a --phred64-quals" (allowing one read to be mapped to multiple transcripts). The perfectly mapped read counts were retained for expression level calculation using the following formula: expression level of a transcript (RPKM: reads per kilobase of exon model per million mapped reads) = number of reads mapped to the transcript / [total number of reads mapped to all the transcripts (in millions) × the length of the transcript (in kilobases)]. Prediction of miRNA target genes. The miRNA target prediction algorithms TargetScan 50 (http://www.targetscan.org/) and miRanda 3.3a (http://www.microrna.org) were used to identify miRNA binding sites. Finally, the data predicted by both algorithms were combined and the overlaps were calculated. GO terms and KEGG pathways of these miRNAs and miRNA targets were also annotated. Northern blotting. Total RNA was extracted from tissues using an miRNeasy Kit (Qiagen) according to the manufacturer's instructions. Samples of 20 μg of total RNA were resolved on a 15% polyacrylamide gel containing 8 M urea and transferred to a Hybond-N+ nylon membrane (GE). After cross-linking with UV light, the membrane was pre-hybridized in DIG Easy Hyb granule buffer (Roche, Switzerland) for 30 min. Subsequently, the membrane was hybridized with a digoxigenin (DIG)-labelled DNA probe complementary to a specific miRNA sequence for 12 h at 40 °C. Signal detection was performed as described in the manual for the DIG High Prime DNA labelling and detection starter kit II (Roche, Switzerland). Validation of miRNA and mRNA expression by quantitative real-time reverse transcription polymerase chain reaction (qRT-PCR). For mRNA quantification, PrimeScript™ RT Master Mix (Takara, Japan) was used to synthesize first-strand cDNA. qRT-PCR was performed with the SYBR Premix Ex Taq™ (Takara), using gene-specific primers for IRF3, RBMX, and PIN1; β-actin was used as an internal standard (Table S3). The 2^−ΔΔCT method was adopted to analyse the expression of the different genes. All the expression data were subjected to a one-way ANOVA, and statistical significance was assumed at P < 0.05. For miRNA quantification, the Hairpin-it™ MicroRNAs Quantitation PCR Kit (GenePharma, China) was used to quantify mature miRNAs according to the manufacturer's instructions. Total RNA was isolated from kidney organs of CyHV-2 infected fish, and 1 μg of total RNA was used for cDNA synthesis. PCR amplification was performed in a 20 μL reaction containing 2 μL of cDNA, 10 μL of Real-time PCR Master Mix (FAM), 10 μM of miRNA-specific primer, 10 μM of miRNA-specific probe, and 1 U of DNA polymerase. Synthetic miRNA (GenePharma, China) was used as the standard. Data were normalized to total RNA and used to determine the relative miRNA copy number per 1 μg of total RNA. Each reaction was performed in triplicate on the CFX96 Real-time PCR Detection System (Bio-Rad, Hercules, CA, USA), and the data were expressed as the mean ± SD as described above.
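For readers unfamiliar with the relative-quantification step, here is a minimal sketch (ours, with made-up Ct values) of the 2^−ΔΔCT calculation used above, with β-actin as the internal standard:

```python
# Minimal 2^(-delta-delta-Ct) relative quantification sketch.
# delta_Ct = Ct(target) - Ct(reference); ddCt = delta_Ct(treated) - delta_Ct(control).
def fold_change(ct_target_treated: float, ct_ref_treated: float,
                ct_target_control: float, ct_ref_control: float) -> float:
    delta_ct_treated = ct_target_treated - ct_ref_treated
    delta_ct_control = ct_target_control - ct_ref_control
    return 2.0 ** (-(delta_ct_treated - delta_ct_control))

# Hypothetical Ct values: target gene vs beta-actin, infected vs control.
print(fold_change(24.1, 18.0, 26.3, 18.1))  # > 1 indicates up-regulation
```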
Validation of miRNA targets by luciferase activity. The 3′ UTRs of ORF4, ORF6, IRF3, RBMX, and PIN1, containing miR-C4 binding sites, were amplified from silver crucian carp kidney cDNA using the primers shown in Table S3. All the PCR products were cloned into the pGL3-Basic Dual-Luciferase miRNA Target Expression Vector (Promega, USA) via the SmaI and XhoI restriction sites. The enzymes used for the cloning were purchased from Takara, China. Sangon (China) verified the DNA sequences of the constructs. Site-directed mutagenesis was performed on the 3′UTR reporter plasmids using the Fast Site-Directed Mutagenesis Kit (Tiangen, China), and the primers are shown in Table S3. HeLa cells were cultured in MEM medium (Gibco, USA) supplemented with 10% foetal bovine serum (Gibco, USA) at 37 °C with 5% CO2. miRNA mimics (miR-C4/negative control, GenePharma, China) were transfected separately into the cell line, together with a luciferase reporter vector containing the target genes, using Lipofectamine 3000 reagent (Invitrogen, USA) in 96-well plates. HeLa cells were incubated for 24 h after transfection, and then firefly and Renilla luciferase activities were measured with the Dual-Glo luciferase assay system (Promega, USA) on a GloMax-Multi Detection System. The firefly luciferase activity was first normalized to the Renilla luciferase activity, and the ratios were then normalized to the levels in the empty vector controls.
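The double normalization described above can be summarized in a few lines; the following sketch (ours, with hypothetical luminescence readings) computes the firefly/Renilla ratio and then scales it by the empty-vector control:

```python
# Minimal dual-luciferase normalization sketch.
def normalized_ratio(firefly: float, renilla: float,
                     firefly_empty: float, renilla_empty: float) -> float:
    ratio = firefly / renilla                    # transfection-efficiency control
    ratio_empty = firefly_empty / renilla_empty  # empty-vector baseline
    return ratio / ratio_empty                   # relative luciferase activity

# Hypothetical readings: a value well below 1 would indicate repression of
# the reporter carrying the miR-C4 binding site.
print(normalized_ratio(5200.0, 9800.0, 11000.0, 10100.0))
```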
6,896.4
2017-10-23T00:00:00.000
[ "Biology" ]
The mechanisms of feature inheritance as predicted by a systems-level model of visual attention and decision making Feature inheritance provides evidence that properties of an invisible target stimulus can be attached to a following mask. We apply a systems-level model of attention and decision making to explore the influence of memory and feedback connections in feature inheritance. We find that the presence of feedback loops alone is sufficient to account for feature inheritance. Although our simulations do not cover all experimental variations and focus only on the general principle, our result appears of specific interest since the model was designed for a completely different purpose than to explain feature inheritance. We suggest that feedback is an important property in visual perception and provide a description of its mechanism and its role in perception. INTRODUCTION The perception of a briefly flashed target stimulus followed by a mask can be strongly impaired or, depending on the mask and the stimulus-onset asynchrony, the stimulus can be easily detectable. Theories of visual masking typically explain the impaired perception by an erosion of the target information, be it by temporal fusion, interruption or suppression through competition. In feature inheritance, however, the mask inherits a property of the target stimulus (e.g. Herzog & Koch, 2001). For example, a vernier, a tilted line, or a bar in apparent motion is presented for a short time and followed immediately by a grating comprising a small number of straight elements. The grating is perceived as offset, tilted, or moving. The perceived distortion (e.g. tilt) is much smaller than the actual property of the target. The target stimulus itself remains largely invisible. This effect cannot be easily explained by simple temporal fusion, since the property of the mask is only slightly distorted and the effect lasts for mask presentation times of about 300 ms. Moreover, when target and mask are very different in orientation, both appear visible (shine through). Thus, feature inheritance demonstrates that stimulus properties can act upon the properties of a following stimulus. The mechanism responsible for feature inheritance is still unclear, but some recent work has addressed its neural correlate. Zhaoping (2003) explains feature inheritance by lateral figure-ground binding in V1 and shows that a vernier followed by a grating consisting of a few elements results in only one or two saliency peaks at the border of the grating, whereas a grating with several elements also results in a saliency peak at the center, suggesting no feature inheritance but shine through. However, the actual decoding of this saliency information into a percept or a decision has not been modeled, and it remains open to what extent V1 saliency is responsible for the perception of an offset or tilt. We have recently developed a computational model to explain most of the temporal phenomenology of feature inheritance (Ma, Hamker, & Koch, 2006). We varied the duration of target and mask presentation and tuned the parameters of the model to be consistent with observations. According to the model, a subsystem creates an inert hypothesis about the stimulus which is then tested against the later input. Cells further downstream, related to object perception, only fire when the hypothesis is confirmed. We will call this a strong hypothesis testing model.
Although the model can account for several observations, the hypothesis-testing subsystem was specifically designed to explain feature inheritance. While this approach is typical for most computational models, fundamental insights can only be achieved if a model generalizes to other phenomena. Thus, we here apply a model of visual attention to the paradigm of feature inheritance to gain further insight into general mechanisms of visual perception. This model contains a mechanism of weak hypothesis testing by means of feedback, which implements feature-based attention and goal-directed search and resolves ambiguities (Hamker, 2005a; Hamker, 2005b; Hamker, 2006). Weak hypothesis testing refers to the rule according to which feedback is not necessary for brain areas to process the stimulus-driven feedforward signal; feedback only modulates processing. Object substitution theory proposes that masking is a consequence of ongoing recurrent interactions between different levels of the cortical hierarchy (Di Lollo, Enns, & Rensink, 2000; Enns, 2002). The first stimulus is initially processed in a feedforward sweep. This sweep activates neurons at high levels which project back to earlier levels. With respect to feature inheritance, the features of a target can be incorporated into the activation pattern of a following mask if both are similar (Enns, 2002). At this level of abstraction, our model is very similar, if not identical, to object substitution theory. However, one key idea of object substitution theory is that perception requires a confirmation of the perceptual hypothesis by comparing the hypothesis at the higher level with the ongoing activity at the lower level (Enns, 2002; Di Lollo et al., 2000). The exact mechanism of this comparison is critical and requires a clear definition. Although feedback has been emphasized in several models of visual perception, its exact mechanism differs significantly across these models. In the computational model of object substitution (CMOS), the input into the higher area is defined as the sum of feedback and feedforward (Di Lollo et al., 2000). A summation predicts the activation of cells at an early level by feedback from higher levels, and thus both the actual signal and the top-down hypothesis are simultaneously activated at an early level. Several approaches treat vision as a generative process (Mumford, 1992; Olshausen & Field, 1997; Rao, 1999). According to this paradigm, feedback represents the predicted image and the feedforward signal the residual image, which is obtained by subtracting the predicted image from the input image. A good match between the internal hypothesis and the actual input results in a weak feedforward signal and a mismatch in a strong signal. Thus, feedback primarily serves to "explain away" the evidence by suppressing the activity. This approach has been primarily used for the learning of receptive fields and object recognition. Its relevance for masking or feature inheritance has not been explored so far. Our approach, which shows some similarity to adaptive resonance (Grossberg, 1980), interactive activation models (McClelland & Rumelhart, 1981), Bayesian belief propagation and particle filtering (Lee & Mumford, 2003), predicts an enhancement if both signals are consistent with each other, by increasing the gain of the feedforward signal. If both signals are not consistent, no enhancement occurs, i.e., no gain change takes place.
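To make the contrast between the two feedback schemes explicit, here is a minimal numerical sketch (ours; the update rules are deliberately simplified caricatures, not the model's actual equations). In the generative scheme feedback subtracts the prediction from the input; in the gain scheme feedback multiplicatively amplifies a matching feedforward signal and leaves a mismatching one unchanged:

```python
# Caricature of two feedback schemes acting on a feedforward signal `ff`
# given a top-down prediction `fb` (both scalars in [0, 1]).

def predictive_coding(ff: float, fb: float) -> float:
    # Feedback "explains away" the input: a good match yields a weak
    # residual feedforward signal, a mismatch a strong one.
    return max(ff - fb, 0.0)

def gain_modulation(ff: float, fb: float, g: float = 2.0) -> float:
    # Feedback increases the gain of a consistent feedforward signal;
    # an inconsistent signal passes through unchanged.
    return ff * (1.0 + g * fb * ff)

for ff, fb, label in [(0.8, 0.8, "match"), (0.8, 0.0, "mismatch")]:
    print(label, predictive_coding(ff, fb), gain_modulation(ff, fb))
# match:    predictive coding -> 0.0 (suppressed), gain -> ~1.82 (enhanced)
# mismatch: predictive coding -> 0.8,              gain -> 0.8 (no change)
```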
Perception in our model can be actively guided by an internal hypothesis, but a match between the visual observation and the internal hypothesis is not required for the activation of visual areas (weak hypothesis testing approach). Thus, a purely sensory-driven activation (with and without feedback) is sufficient to activate all model areas. Due to competitive interactions, irrelevant information is inhibited (Hamker, 2004), similar to the Biased Competition framework (Desimone & Duncan, 1995). We have termed this interaction of the top-down or feedback signal with the feedforward signal population-based inference (Hamker, 2005a; Hamker, 2005b), since it implements an inference operation but differs in several aspects from a true Bayesian approach. In the following we will briefly introduce the model of attention and its mechanism of feedback. We then apply different versions of the model to simulate a typical feature inheritance experiment and derive conclusions about the role of feedback and memory in visual perception. The fact that human subjects can under some conditions report a masked, briefly flashed stimulus has led to two alternative interpretations (Smith, Ratcliff, & Wolfgang, 2004). In the first one, stimulus properties get encoded in visual short-term memory (VSTM), and its content represents the input for the decision process. In the second one, the decaying iconic trace provides the input for decision making. We will also discuss a third alternative. Here, memory provides a top-down signal which modifies the properties of visual areas. The decision, however, is still based on the content of the iconic trace. We call this approach active hypothesis testing. We are specifically interested in the question of whether memory-based, active hypothesis testing is required for feature inheritance to occur, or whether passive hypothesis testing by feedback is sufficient. Thus, we have tested five different models, two where perception is only sensory-driven, and three where perception is hypothesis-driven. We obtain an internal hypothesis by memorizing a representation of the stimulus at different times. Of the two models of sensory-driven perception, one can be categorized as passive hypothesis testing, since it contains feedback but no external top-down signal. In the other one, we removed feedback. Systems-level model of attention Our model of attention is an extension of an earlier model (Hamker, 2003; Hamker, 2004; Hamker, 2005a), which has been strongly constrained by several electrophysiological observations and by anatomy. The present version operates with real input images. It has been applied to tasks such as object detection in natural scenes, change detection, visual search, and feature-based attention (Hamker, 2005b; Hamker, 2005c; Hamker, 2006). Since it has been extensively described in Hamker (2005b), we here give only a brief overview with emphasis on the aspects relevant for feature inheritance. Population-based inference We have developed a population-based inference approach, which can be summarized by the model for visual attention as follows. First, information about the content and its low-level stimulus-driven salience is extracted. (Stimulus-driven saliency, however, will not be crucial for the results obtained here.) This information is sent further downstream to V4 and to IT cells, which are broadly tuned to location. A target template is encoded in PF memory (PFmem) cells. Feedback from PFmem to IT increases the strength of all features in IT matching the template.
Feedback from IT to V4 sends the information about the target downwards to cells with a higher spatial tuning. FEF visuomovement (FEFv) cells combine the feature information across all dimensions and indicate salient or relevant locations in the scene. The FEF movement (FEFm) cells compete for the target location of the next eye movement. The activity of the FEF movement cells is also sent to V4 and IT. As long as the maximal activity within the population is lower than a threshold (e.g. A = 1), the feedback signal r_i effectively increases the gain. On the population level, however, the local gain mechanism can result in a distortion of the population response and thus in a misperception. We have recently shown that our population-based inference approach is general enough to also explain spatial effects such as the shift and shrinkage of receptive fields in area V4 prior to a saccade (Hamker & Zirnsak, 2006). Simulation of the feature-inheritance experiment We used a similar experimental procedure as Herzog and Koch (2001). Decision making Our model allows us to simulate the temporal course of activity in different brain areas. In order to close the gap between a continuous time-varying signal and a finite decision of a human subject, we use a simple neural decision model, which reads out the population response in the orientation channel and determines whether the mask is perceived as tilted or not. Models of decision making that accumulate evidence over time have a long tradition in mathematical psychology, leading to several models. For an overview see, e.g., Usher and McClelland (2001); for a comparison of models refer to Ratcliff and Smith (2004). Despite many differences, the general idea is very similar. All models accumulate the evidence from a time-varying input signal and stop when a criterion is reached, such as the crossing of a threshold. In most decision-making simulations the input of the model is predefined. Subjects probably learn what information is relevant in a particular experimental situation. In our model, we select the relevant information by weighting the activity, distributed across the feature space, with a Gaussian (Fig. 4). Following the common approach that the evidence for one choice reduces the evidence for the other choice (Mazurek, Roitman, Ditterich, & Shadlen, 2003), the accumulated evidence is computed within a laterally connected set of two neurons r1 and r2:

τ dr1(t)/dt = I1 + k w+ r1(t) − a w− r2(t),
τ dr2(t)/dt = I2 + k w+ r2(t) − a w− r1(t)

(a minimal simulation of this scheme is sketched after this paragraph). RESULTS We simulated five different models: (1) sensory-driven perception without feedback, (2) sensory-driven perception with feedback (passive hypothesis testing), and (3-5) hypothesis-driven perception with memorization of the IT activity at different times (e.g. at 180-200 ms). Memorizing the neural response at different times leads to less target information in memory with increasing time (Fig. 6A). Moreover, for all three models of hypothesis-driven perception, large orientation offsets lead to little or no influence of the target information on the population encoded in memory, since only the strongest population enters memory. According to the first approach to the perception of masked visual stimuli, the memory content represents the input of the decision. Thus, this model predicts the perception of relatively strong tilts (Fig. 6A). In many cases, the perceived tilt is about half of the veridical tilt, which is not consistent with the typical observation (Herzog & Koch, 2001). If we now consider the third approach to the perception of masked visual stimuli, where memory modifies visual areas, we observe for all three models that the IT activity is permanently distorted towards the target orientation (Fig. 6B).
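The following minimal sketch (ours; the parameter values are arbitrary illustrations, not the fitted model, and the weights w+ and w− are folded into the constants k and a) simulates the two mutually coupled accumulators above until one crosses a decision threshold:

```python
# Minimal accumulate-to-threshold sketch for two coupled neurons r1, r2.
# Euler integration of: tau * dr/dt = I + k*r_self - a*r_other.
def race(i1: float, i2: float, tau: float = 50.0, k: float = 0.2,
         a: float = 0.4, dt: float = 1.0, threshold: float = 1.0,
         t_max: int = 2000) -> tuple:
    r1 = r2 = 0.0
    for step in range(t_max):
        dr1 = (i1 + k * r1 - a * r2) * dt / tau
        dr2 = (i2 + k * r2 - a * r1) * dt / tau
        r1, r2 = max(r1 + dr1, 0.0), max(r2 + dr2, 0.0)  # rates stay non-negative
        if r1 >= threshold or r2 >= threshold:
            return ("tilted" if r1 > r2 else "not tilted", step * dt)
    return ("no decision", t_max * dt)

# Stronger evidence for "tilted" wins and determines the decision time.
print(race(i1=0.06, i2=0.04))
```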
Figure 6. Encoded orientation information in the population activity at 300 ms after target onset with respect to the veridical orientation. The decoding of the encoded orientation in the population response has been done with a simple population vector method (Dayan & Abbott, 2001). (A) Decoded orientation relative to the mask in the PFmem cells. The memorization of the IT activity at different times reflects the sustained influence of the briefly presented target on the population response. The sustained influence is orientation dependent. If the orientations of target and mask differ strongly, the information from the target is not memorized. Only when the memorization of the IT activity occurs at 100-120 ms does a target stimulus with an orientation offset of 40° or larger strongly distort the population. For orientation differences up to 30°, some information about the target is still encoded by the population. (B) The population response in IT receives a small but sustained distortion if a template has been memorized and used for top-down guidance. In the models with no memory or without feedback, the information from the target stimulus has faded away at 300 ms after target onset. Note that the y-axis in panels A and B scales differently. Whether subjects perceive the inherited feature (Herzog & Koch, 2001) might depend on their decision criterion. Subjects who are trained in fast decision making, for example by playing ball games, might use a low threshold, and thus they perceive an influence of the target. In subjects using a conservative criterion (high threshold), the mask dominates the decision and the subject does not perceive the tilt, or the target presentation times have to be longer. This view of perceptual decision making is similar to masked response priming, which can also be modeled by a neural accumulation process (Vorberg et al., 2003). Somewhat surprising is our observation that feedback loops alone are sufficient to lead to feature inheritance. Although the information of the target disappears at about 150-200 ms after target onset, feedback holds the target information sufficiently long to influence the decision with respect to the perceived orientation. We do not claim that feature inheritance necessarily occurs at the level of IT and V4. The proposed feedback mechanism is general and also acts from V2 to V1 and from V4 to V2. Consistent with observations, the model predicts that feature inheritance only occurs within a limited range of orientation differences between target and mask. Since we only used 20 cells to represent the orientation space and did not tune the width of the population response, the exact range might be slightly different; e.g., subjects reported feature inheritance if elements are tilted by 7° (Herzog & Koch, 2001). At the level of the decision, the model of sensory-driven perception does not fundamentally differ from the model of hypothesis-driven perception. However, the model of sensory-driven perception without feedback does not provide sufficient evidence for a feature inheritance effect. From our analysis we cannot exclude that mechanisms other than feedback can also account for feature inheritance. The strength of our approach rather lies in its generality. Our model was designed for a completely different purpose, but nevertheless, without modification, it shows a feature inheritance effect.
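The population vector readout mentioned in the caption can be stated compactly. The sketch below (ours; the cell count and tuning values are illustrative) decodes an orientation from the activity of 20 orientation-tuned cells by summing unit vectors on the doubled-angle circle, the standard trick for axial quantities such as orientation:

```python
# Minimal population vector decoder for orientation (axial, period 180 deg).
import math

def decode_orientation(rates, preferred_deg):
    # Sum activity-weighted unit vectors at twice the preferred angle,
    # then halve the resulting angle to undo the doubling.
    x = sum(r * math.cos(math.radians(2 * p)) for r, p in zip(rates, preferred_deg))
    y = sum(r * math.sin(math.radians(2 * p)) for r, p in zip(rates, preferred_deg))
    return (math.degrees(math.atan2(y, x)) / 2.0) % 180.0

# 20 cells with preferred orientations spanning 0-180 deg and a Gaussian
# activity bump centred near 95 deg:
prefs = [i * 9.0 for i in range(20)]
rates = [math.exp(-((p - 95.0) ** 2) / (2 * 20.0 ** 2)) for p in prefs]
print(decode_orientation(rates, prefs))  # close to 95.0
```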
We acknowledge that a comprehensive demonstration of the role of feedback in feature inheritance requires more simulations and perhaps also changes in the model, but at present it appears important to us to identify general, universal mechanisms of perception, as compared to specialized models tuned to a single experimental paradigm, such as our earlier model (Ma et al., 2006). Our model also appears consistent with the observation of a trace carried over a sequence of invisible elements (Otto, Öğmen, & Herzog, 2006). Other experiments have revealed that the locus of spatial attention influences feature inheritance (Sharikadze, Fahle, & Herzog, 2005). Offsets at the attended edge of the grating influence performance, whereas offsets of non-attended elements do not show a strong influence. This is probably not easy to test with orientations, since local orientation differences typically pop out. However, these results provide additional constraints for models of feature inheritance. The present discussion about models of visual perception is dominated by extremes, such as purely feedforward models and models that require reentrant processing (Rockland, Saleem, & Tanaka, 1994). Furthermore, feedback can act as fast as 10 ms (Hupé, James, Girard, Lomber, Payne, & Bullier, 2001). Given that a final decision typically requires integrating information over time, there is little room for a decision purely based on feedforward evidence. We rather suggest a scenario in which decisions build on ongoing feedforward-feedback interactions. Other phenomena, such as the change of temporal perception, might also depend on feedback. Our model predicts a decrease in the time for a perceptual decision if target and mask are similar. Two aspects of our model seem to be primarily involved in this speed-up: first, the reentrant connections in the visual areas, and second, the integration of the relevant features for the perceptual decision. Present evidence suggests that not the pure similarity of the features, but their task relevance is the cause of the enhanced processing speed (Scharlau & Ansorge, 2003; Enns & Oriet, 2007; Scharlau, 2007). Thus, it appears that the integration of the relevant features, i.e. the evidence, is the crucial process involved in the increase of processing speed. In the present version of our model the definition of which features are relevant is predetermined. It would be very interesting to explore how learning could lead to an automatic selection of relevant features for a given task. Feedback might also be crucial for the relatively long duration of iconic memory, a high-capacity form of storage lasting for at least a few hundred milliseconds (Coltheart, 1983). Iconic memory seems to be essential for visual awareness (Koch, 2004), probably by providing the substrate for the collection of evidence. This transfer from iconic memory to visual awareness is not understood so far. It is not clear whether integration alone (sensory-driven perception) is sufficient or whether a form of active hypothesis testing is required, as suggested by inattentional blindness experiments (Mack & Rock, 1998). The fact that passive hypothesis testing seems to be sufficient to explain feature inheritance in our model does not exclude the possibility that at a higher level, such as the transition to awareness, active hypothesis testing is required. However, it appears unlikely that a strong form of hypothesis testing occurs early in the visual pathway.
Since our model is very simple with respect to the shape of objects, the present version does not allow strong predictions in other masking paradigms. However, since classical models of backward masking (Breitmeyer, 1984; Breitmeyer & Öğmen, 2000; Öğmen, Breitmeyer, & Melvin, 2003) are based on local, lateral connections, it might be interesting to further explore the role of feedback in masking. Object substitution theory provides a first important step in this direction. However, object substitution is at present a rather general framework, and it requires a clear definition of many underlying computational mechanisms. Our model could lead to a partial refinement of object substitution, since we have given evidence that the mechanism of feedback can be well described as a gain increase on the feedforward signal. In any case, more detailed neural models with feedback appear to be a promising tool for further study of the role of feedback in masking.
4,702.6
2008-07-15T00:00:00.000
[ "Biology", "Psychology" ]
On the Numerical Analysis of Unsteady MHD Boundary Layer Flow of Williamson Fluid Over a Stretching Sheet and Heat and Mass Transfers A thorough and detailed investigation of an unsteady free convection boundary layer flow of an incompressible electrically conducting Williamson fluid over a stretching sheet saturated with a porous medium has been numerically carried out. The governing partial differential equations are transformed into a system of non-linear dimensionless ordinary differential equations by employing suitable similarity transformations. The resultant equations are then numerically solved using the spectral quasi-linearization method. Numerical solutions are obtained in terms of the velocity, temperature and concentration profiles, as well as the skin friction, heat and mass transfers. These numerical results are presented graphically and in tabular form. From the results, it is found that the Weissenberg number, local electric parameter, unsteadiness parameter, and the magnetic, porosity and buoyancy parameters have significant effects on the flow properties. Introduction The study of non-Newtonian fluids has attracted many researchers owing to their enormous applications in industrial and engineering sectors, especially in the manufacturing and processing industries. Unlike Newtonian fluids (Kahshan et al. [1]), non-Newtonian fluids are more complicated because there is no single constitutive relation that can be used to describe them all (Hussanan et al. [2]). The relationship between the shear stress and the rate of strain is non-linear at a given temperature and pressure in a non-Newtonian fluid. Consequently, these fluids cannot be modelled using the classical Navier-Stokes equations. Non-Newtonian fluids have the ability to: (i) shear-thin or shear-thicken, (ii) exhibit thixotropy, (iii) allow stress relaxation, (iv) creep in a nonlinear manner, (v) develop normal stress differences, and (vi) exhibit a threshold for the shear stress before flow starts. Many non-Newtonian fluids possess one or more of these characteristics. Non-Newtonian fluids are generally classified into three main categories, namely: (i) the differential type, (ii) the rate type and (iii) the integral type. Detailed descriptions of each category can be found in Cioranescu et al. [3]. Among non-Newtonian fluids, pseudoplastic fluids are the most frequently encountered. But, as expected, the Navier-Stokes equations alone are insufficient to describe the rheological properties of these fluids. Therefore, to overcome this challenge, many rheological models, such as the Maxwell model, Jeffrey model, Ellis model, power-law model and Carreau model, among others, have been developed. The Williamson fluid model is another powerful model used to explain the rheological properties of pseudoplastic fluids. A pseudoplastic fluid is a shear-thinning fluid that offers less resistance at high strain rates; examples include polymer solutions, paint, blood and plasma. In rheology, shear thinning refers to pseudoplastic behaviour, in which the viscosity decreases under shear strain. Shear-thinning behaviour is typically exhibited by the inks used in inkjet printing, as discussed by Miccichè et al. [4] and Dybowska-Sarapuk et al. [5]. Many authors have investigated the flow of non-Newtonian fluids due to their vast applications, especially with suspensions of nano-sized particles. Some work on nanoparticles was done by Mozaffari et al. [6], Mozaffari et al. [7], Darjani et al. [8] and Xing et al. [9].
Considering flow over a spreading surface through a non-Darcian porous medium, Elgazery [10] analyzed the effects of internal heat generation/absorption on a non-Newtonian Casson fluid with a suspension of gold and alumina nanoparticles. Hsiao [11] presented a study of a thermal energy conversion problem in an extrusion system with electric hydromagnetic heat and mass mixed convection of a viscoelastic non-Newtonian Carreau nanofluid, including radiation and viscous dissipation effects. Khan et al. [12] studied the influence of nanoparticles and a uniform magnetic field on slip flows in arterial vessels, with blood conveyed through hollow arterial tubes described as a third-grade non-Newtonian fluid. An investigation of the multislip effects on the magnetohydrodynamic mixed convection unsteady flow of micropolar nanofluids over a stretching/shrinking sheet with radiation in the presence of a heat source was done by Abdal et al. [13]. Adesanya et al. [14] investigated the steady flow of a non-Newtonian fluid through an inclined channel heated isothermally at the boundaries. Williamson [15] pioneered the discussion of the flow of pseudoplastic materials, proposed a model equation to describe the flow of pseudoplastic fluids and verified the results experimentally. Nadeem et al. [16] presented a paper modelling a two-dimensional Williamson fluid flow over a stretching sheet. Nadeem and Hussain [17] explored the effects of heat transfer on the Williamson fluid over a porous exponentially stretching sheet surface. The study also considered two cases of heat transfer, namely, the prescribed exponential order surface temperature case and the prescribed exponential order heat flux case. Khan et al. [18] described the effect of thermal radiation on the thin film nanofluid flow of a Williamson fluid over an unsteady stretching surface with variable fluid properties. Ijaz Khan et al. [19] developed a model for a boundary layer stagnation point flow of an electrically conducting Williamson fluid in the presence of a constant magnetic field. Monica et al. [20] presented an analysis that dealt with the stagnation point flow of a Williamson fluid over a nonlinearly stretching sheet with thermal radiation. Malik et al. [21] studied the Williamson fluid past a stretching cylinder with the combined effects of variable thermal conductivity and heat generation/absorption. Mabood et al. [22] performed an analysis of MHD Williamson nanofluid flow over a continuously moving heated surface with thermal radiation and a heat source. Dawar et al. [23] analysed the flow of a Williamson fluid over a linear porous stretching sheet under the influence of thermal radiation. Kumar et al. [24] carried out a mathematical analysis of the two-phase boundary layer flow and heat transfer of a Williamson fluid with particle suspension over a stretching sheet. Shateyi et al. [25] investigated a Casson fluid flow in the presence of free convection with combined heat and mass transfer towards an unsteady permeable stretching sheet with thermal radiation, viscous dissipation and chemical reaction. Hayat et al. [26] studied the unsteady two-dimensional boundary layer flow of an incompressible Williamson fluid over an unsteady permeable stretching surface with thermal radiation. Khan et al. [27] examined the influence of chemically reactive species and mixed convection on the magnetohydrodynamic Williamson nanofluid induced by a nonisothermal cone and plate in a porous medium. Recently, Panezai et al.
[28] examined the influence of thermal radiation on the two-dimensional incompressible MHD mixed convective heat transfer flow of a Williamson fluid over a porous wedge. Hsu et al. [29] numerically analysed the heat and mass transfer characteristics under the influence of uniform blowing/suction and MHD (magnetohydrodynamics) on the free convection of non-Newtonian fluids over a vertical plate in porous media with internal heat generation and Soret/Dufour effects. Kebede [30] presented an analytic approximation to the heat and mass transfer characteristics of Williamson nanofluid flow. Reddy et al. [31] studied the MHD flow and heat transfer characteristics of a Williamson nanofluid over a stretching sheet with variable thickness and variable thermal conductivity. Lastly, Megahed [32] studied Williamson boundary layer fluid flow and heat transfer due to a nonlinearly stretching sheet. Many researchers have been attracted to the study of viscous flows due to a stretching sheet. This has been motivated by their many applications in the polymer processing industries, environmental pollution, biological processes and the aerodynamic extrusion of plastic sheets, among other applications. Sakiadis [33] pioneered the discussion of fluid flow due to a stretching surface. Crane [34] extended the work of Sakiadis to study the problem of fluid flow of Blasius type due to a stretching sheet. The literature on the stretching sheet topic is quite extensive and hence cannot be listed here in detail. The most recent works of notable researchers regarding flow over a stretching sheet can be found in Hsiao [35,36], Shateyi [37], Sharma et al. [38], Shamashuddin et al. [39], and Nagalakshmi and Vijaya [40]. Motivated by the above mentioned studies, as well as the vast applications of the different types of non-Newtonian fluids, the current study seeks to investigate the unsteady free convection boundary layer flow of an electrically conducting Williamson fluid over a stretching sheet. The sheet is saturated with a porous medium under the combined effects of viscous dissipation, chemical reaction, thermal radiation and a uniform magnetic field. It is well known that fluid dynamics problems are analysed through material and virtual experimentation (Vedovoto et al. [41]). Material modeling entails the construction of benches and models of the physical problems, as well as instrumentation for gathering information. Virtual experimentation, by contrast, requires mathematical modeling, which leads to models with differential equations, integral equations and/or integro-differential equations. It is remarked that valuable information about the problem is provided by virtual experimentation. Numeric algorithms and quantitative information about the problem are generated through computational modelling. The data are then used to perform the visualization of the simulation results and the statistical analysis of the numerical experiment. To that end, the current study also adopts the virtual experimentation approach. We also employ a recently developed numerical technique known as the spectral quasi-linearization method. Therefore, the novelty of the current paper lies in the application of this numerical technique to solve the two-dimensional, unsteady free convection boundary layer flow of an incompressible electrically conducting Williamson fluid over a stretching sheet saturated with a porous medium.
The innovation of this study lies in incorporating more factors into the basic Williamson fluid model to give a thorough analysis of the model. Mathematical Formulation We consider a two-dimensional, unsteady free convection boundary layer flow of an incompressible electrically conducting Williamson fluid over a stretching sheet saturated with a porous medium. The x-axis is taken parallel to the stretching sheet and the y-axis is in the vertical direction. We assume that the sheet is moving with a velocity U_w = ax/(1 − ct), where a > 0 is the stretching rate along the x-axis. If a < 0, it becomes a shrinking velocity constraint. The constant c is such that ct < 1. A uniform transverse magnetic field B = (0, B₀, 0) and a uniform electric field E = (0, 0, −E₀) are applied to the flow region as shown in Figure 1. The magnetic Reynolds number is assumed to be much less than unity and hence the induced magnetic field is negligible in comparison to the applied magnetic field. This is made possible by the fact that the fluid is assumed to be only slightly conducting. In this study we also neglect the Hall effect/currents. We note that the magnetic field is weaker than the electric field and that the current obeys Ohm's law, J = σ(E + V × B), where J is the Joule current, σ is the electrical conductivity and V is the velocity. We take into account frictional heating in the form of viscous dissipation. Thermal radiation and chemical reaction are also considered in this study. The relevant governing flow equations, obtained through the conservation laws of mass, linear momentum, energy and mass by using the boundary layer approximations, are the following (Hayat et al. [26]):

∂u/∂x + ∂v/∂y = 0,
∂u/∂t + u ∂u/∂x + v ∂u/∂y = ν ∂²u/∂y² + √2 νΓ (∂u/∂y)(∂²u/∂y²) + gβ(T − T∞) + gβ_c(C − C∞) + (σ/ρ)(E₀B₀ − B₀²u) − (ν/κ)u,
∂T/∂t + u ∂T/∂x + v ∂T/∂y = (K/(ρc_p)) ∂²T/∂y² + (μ/(ρc_p))(∂u/∂y)² − (1/(ρc_p)) ∂q_r/∂y,
∂C/∂t + u ∂C/∂x + v ∂C/∂y = D ∂²C/∂y² − K_r(C − C∞),

subject to the boundary conditions

u = U_w, v = V_w, T = T_w, C = C_w at y = 0;  u → 0, T → T∞, C → C∞ as y → ∞.

Here u and v are the velocity components in the x and y directions, respectively, ν is the kinematic viscosity, Γ is the relaxation time, g is the acceleration due to gravity, T and C are the fluid temperature and concentration, respectively, T∞ and C∞ are the respective ambient temperature and concentration, β and β_c are the respective coefficients of thermal and concentration expansion, σ is the electrical conductivity, ρ is the fluid density, μ is the dynamic viscosity, κ is the permeability parameter, c_p is the specific heat at constant pressure, q_r is the radiative heat flux, K_r is the chemical reaction rate, K is the thermal conductivity and, lastly, D is the mass diffusivity. V_w expresses the mass transfer at the surface, with V_w < 0 for injection and V_w > 0 for suction. The surface temperature T_w(x, t) and the surface concentration C_w(x, t) vary with x and t, where c ≥ 0, and T₀ (0 ≤ T₀ ≤ T_w) and C₀ (0 ≤ C₀ ≤ C_w) are the reference temperature and concentration, respectively. By employing the Rosseland approximation, the radiative heat flux is

q_r = −(4σ*/(3k₁)) ∂T⁴/∂y,

where σ* is the Stefan-Boltzmann constant and k₁ is the mean absorption coefficient. Applying a Taylor series expansion of T⁴ about the ambient temperature T∞ and neglecting higher-order terms, we have T⁴ ≅ 4T∞³T − 3T∞⁴, and the energy equation now becomes:

∂T/∂t + u ∂T/∂x + v ∂T/∂y = (1/(ρc_p))(K + 16σ*T∞³/(3k₁)) ∂²T/∂y² + (μ/(ρc_p))(∂u/∂y)².

Similarity Transformation Following Hayat et al. [26], among others, we introduce the dimensionless variables

η = y √(a/(ν(1 − ct))),  ψ = x √(aν/(1 − ct)) f(η),  θ(η) = (T − T∞)/(T_w − T∞),  φ(η) = (C − C∞)/(C_w − C∞),

with the velocity components given by u = ∂ψ/∂y and v = −∂ψ/∂x.
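As a quick sanity check on the stream-function formulation, the following short SymPy sketch (ours, not from the paper) verifies symbolically that u = ∂ψ/∂y and v = −∂ψ/∂x satisfy the continuity equation for any smooth ψ(x, y):

```python
# Symbolic check that the stream-function definition satisfies continuity.
import sympy as sp

x, y = sp.symbols("x y")
psi = sp.Function("psi")(x, y)   # arbitrary smooth stream function

u = sp.diff(psi, y)              # u = d(psi)/dy
v = -sp.diff(psi, x)             # v = -d(psi)/dx
continuity = sp.diff(u, x) + sp.diff(v, y)

print(sp.simplify(continuity))   # prints 0: continuity holds identically
```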
The continuity equation is satisfied identically, and the resulting governing equations become:

(1 + We f'') f''' + f f'' − (f')² − S(f' + (η/2) f'') + M(E₁ − f') − K₁ f' + λ₁θ + λ₂φ = 0,
(1/Pr)(1 + R) θ'' + f θ' − f'θ − S(θ + (η/2) θ') + Ec (f'')² = 0,
(1/Sc) φ'' + f φ' − f'φ − S(φ + (η/2) φ') − γφ = 0.

The corresponding boundary conditions become:

f(0) = f_w,  f'(0) = 1,  θ(0) = 1,  φ(0) = 1;  f'(η) → 0,  θ(η) → 0,  φ(η) → 0 as η → ∞,

where S = c/a is the unsteadiness parameter, f_w = V₀/√(aν) is the suction/injection parameter, M is the magnetic parameter, K₁ is the porosity parameter, We is the Weissenberg number, E₁ is the local electric parameter, λ₁ and λ₂ are the buoyancy parameters, Pr is the Prandtl number, R is the thermal radiation parameter, Sc is the Schmidt number, γ is the chemical reaction parameter and Ec is the Eckert number. The flow characteristics which are of engineering significance are the skin friction coefficient, the local Nusselt number and the Sherwood number. These are, respectively, defined as (Hayat et al. [26]):

C_f = τ_w/(ρU_w²),  Nu_x = x q_w/(K(T_w − T∞)),  Sh_x = x j_w/(D(C_w − C∞)).

Upon applying the necessary expressions for τ_w, q_w and j_w, we get the following:

C_f Re_x^(1/2) = f''(0) + (We/2)(f''(0))²,  Nu_x Re_x^(−1/2) = −θ'(0),  Sh_x Re_x^(−1/2) = −φ'(0),

where Re_x = U_w x/ν is the local Reynolds number. Method of Solution In this section we present a spectral method based on a quasi-linearization method, called the spectral quasi-linearization method (SQLM), Motsa et al. [42]. The SQLM combines two methods, the quasi-linearization method (QLM) and a Chebyshev spectral collocation method (CSCM). The QLM, a Newton-Raphson based quasi-linearization method which was introduced by Bellman and Kalaba [43], is used to linearize the governing non-linear equations. The CSCM is used to integrate the resulting iterative sequence of linear differential equations. Quasi-Linearization With reference to Equations (10)-(12), we derive the quasi-linearization formula by considering a system of differential equations whose solutions at the previous and the current iteration levels are denoted by the subscripts s and s + 1, respectively. Assuming the difference between the solutions at the previous and the current iteration levels and their derivatives is small enough, a linear Taylor series expansion of Equation (17) about the previous solution, upon simplification, yields the quasi-linearization formula (18). Applying the formula (18) to the system of Equations (10)-(12) results in an iterative sequence of linear equations

a_{0,s} f'''_{s+1} + a_{1,s} f''_{s+1} + a_{2,s} f'_{s+1} + a_{3,s} f_{s+1} + a_{4,s} θ_{s+1} + a_{5,s} φ_{s+1} = R_{1,s},

where the variable coefficients a_{i,s} and the right-hand-side terms R_{i,s} are evaluated from the solution at the previous iteration. Chebyshev Differentiation We approximate the solutions of Equations (10)-(12) by functions of the form

f(η) ≈ Σ_{k=0}^{N} f(η_k) L_k(x),

where L_k is the Lagrange interpolating polynomial. For convenience in the application of the collocation method, we transform the semi-infinite physical domain [0, ∞) in the η-direction to [−1, 1] in the x-direction using the linear transformation η = (l_∞/2)(1 + x), where l_∞ is a number chosen large enough that the boundary conditions at infinity hold. The approximating functions (22) and their derivatives are evaluated at the Chebyshev-Gauss-Lobatto points x_j = cos(πj/N), j = 0, 1, ..., N, so that derivatives take the form D F, where D = (2/l_∞) D̂, D̂ is the (N + 1) × (N + 1) Chebyshev differentiation matrix, Trefethen [44], F = [f(η₀), f(η₁), ..., f(η_{N−1}), f(η_N)]^T and T denotes the matrix transpose. θ and φ have similar expressions for their derivatives. Evaluating Equations (19)-(21) at the collocation points and approximating derivatives with Chebyshev derivatives gives, in matrix-vector form, the system

[A₁₁ A₁₂ A₁₃; A₂₁ A₂₂ A₂₃; A₃₁ A₃₂ A₃₃] [F_{s+1}; Θ_{s+1}; Φ_{s+1}] = [R_{1,s}; R_{2,s}; R_{3,s}],

where A₁₁ = a_{0,s} D³ + a_{1,s} D² + a_{2,s} D + a_{3,s}, A₁₂ = a_{4,s}, A₁₃ = a_{5,s}. Results and Discussions The spectral quasi-linearization method was used to generate the results discussed in this section. All the numerical results were obtained using MATLAB 2016. For all the computations, the default parameters considered, unless otherwise stated, are N = 60, Pr = 1.0, We = 0.1, E₁ = 0.3, Ec = 0.5, R = 0.3, λ₁ = 0.1, λ₂ = 0.1, Sc = 0.1, γ = 1.0 and f_w = 0.3. The error infinity norms, defined by

E_f = ||F_{s+1} − F_s||_∞,  E_θ = ||Θ_{s+1} − Θ_s||_∞,  E_φ = ||Φ_{s+1} − Φ_s||_∞,

are used to confirm the convergence of the SQLM solution.
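For concreteness, here is a short NumPy sketch (ours, following the well-known cheb construction in Trefethen [44]; it is not the authors' code) that builds the Chebyshev-Gauss-Lobatto points and the differentiation matrix and checks it against a known derivative:

```python
# Chebyshev differentiation matrix on Gauss-Lobatto points (Trefethen-style).
import numpy as np

def cheb(n: int):
    if n == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(n + 1) / n)          # Gauss-Lobatto points
    c = np.hstack([2.0, np.ones(n - 1), 2.0]) * (-1.0) ** np.arange(n + 1)
    X = np.tile(x, (n + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(n + 1))   # off-diagonal entries
    D -= np.diag(D.sum(axis=1))                       # diagonal: negative row sums
    return D, x

D, x = cheb(16)
print(np.max(np.abs(D @ np.sin(x) - np.cos(x))))      # ~1e-13: spectral accuracy
```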
The accuracy of the SQLM is tested using residual error infinity norms, obtained by substituting the current iterates into Equations (10)-(12) and taking the infinity norm of the result. Figure 2 shows the convergence and accuracy of the SQLM. We note that an increase in the number of iterations results in a decrease of the error infinity norms, and the method converges after six iterations. Also, the decrease in the residual error infinity norms with the number of iterations confirms the accuracy of the numerical method. The SQLM achieves an accuracy of order 10⁻¹⁵ after five iterations, showing that the method is highly accurate. Figure 3 shows the sensitivity of the temperature and concentration profiles to the unsteadiness parameter S and the chemical reaction parameter γ, respectively. It is shown that for values of S ≥ 0, the temperature profiles show the expected trend, satisfying the boundary conditions (13) and (14). For values of S < 0, the profiles behave unexpectedly. Similarly, the concentration profiles show the desired pattern for values of γ > 0, satisfying the underlying boundary conditions (13) and (14). We remark that this chaotic, highly sensitive behaviour occurs when the method has not yet reached the convergence stage. Once convergence has been reached, the behaviour becomes normal, as expected. This can clearly be observed in the subsequent figures. Figure 5 shows the graphs of the dimensionless concentration profiles for different values of the Schmidt number Sc and the chemical reaction parameter γ. We can observe that increasing Sc results in a decrease in the fluid concentration. Physically, since Sc depends on the fluid mass diffusivity, high values of Sc correspond to a decrease in mass diffusion, hence the concentration profile is reduced. The concentration profiles are also decreased by increasing the chemical reaction parameter. This is due to the fact that a higher chemical reaction rate results in a decrease in chemical molecular diffusivity, leading to a reduction in mass diffusion. The decrease in the concentration of the diffusing species results in a thinning of the concentration boundary layer. The influence of the magnetic field parameter on the fluid velocity and temperature is presented in Figure 6. We notice that increasing the values of the magnetic field parameter has the effect of slowing down the fluid motion. Physically, the presence of a magnetic field creates a drag-like force called the Lorentz force that opposes the motion of the fluid and resists the velocity of the fluid. The variation of the magnetic parameter M on θ(η) shows an opposite trend. We observe that the temperature profiles increase as the magnetic field parameter increases. This is because, as mentioned earlier, an increase in M reduces the magnitude of the velocity profiles in the boundary layer and hence causes a rise in the temperature in the boundary layer. Figure 7 shows that an increase in the magnetic parameter causes an increase in the concentration field. We noted earlier that increasing the magnetic parameter reduces the velocity profiles in the boundary layer, which in turn induces the diffusion of the particles in the boundary layer. The increment of the thermal buoyancy parameter λ₁ causes a decline in the fluid concentration. The effects of the thermal radiation parameter R and the Prandtl number Pr on the temperature profiles are depicted in Figure 8. We can see that the fluid temperature increases with increasing values of the thermal radiation parameter R. Pr is inversely proportional to the thermal diffusivity.
Increasing the values of Pr causes a decrease in the temperature field. Physically, high values of Pr are associated with low thermal conductivity, which reduces conduction and hence the thermal boundary layer thickness, resulting in a decrease in the fluid temperature. As shown in Figure 9, the temperature profiles decrease with increasing values of the buoyancy parameters λ₁ and λ₂. It is clearly noted that an increase in the thermal buoyancy parameter causes a decrease in the thermal boundary layer thickness, and consequently the fluid temperature decreases due to the buoyancy effect. From Figure 10 it can be seen that the velocity and temperature profiles decrease as the unsteadiness parameter S increases. The effect of E₁ on the fluid velocity is the opposite of that of S. Figure 11 depicts that increasing the values of the local electric parameter E₁ results in an increase in both the velocity and concentration fields. Figure 12 shows the effects of increasing the values of the Eckert number Ec on the temperature profiles and of λ₁ on the velocity profiles. The Eckert number characterizes the influence of self-heating of a fluid as a consequence of dissipation effects. Viscous dissipation due to the internal friction of the fluid causes an increase in the fluid temperature. Increasing the value of the thermal buoyancy parameter λ₁ leads to an increase in the velocity profiles. The influence of the emerging parameters on the physical quantities of engineering importance, namely the skin friction coefficient, the Nusselt number and the Sherwood number, is examined in Tables 1-3, which list these quantities for different values of the parameters We, S, M, E₁, λ₁, λ₂, Pr, R, Ec, γ and Sc when η = 0.5. Table 1 shows that the skin friction increases as the values of S, M, Pr, γ, Sc and Ec are increased. The opposite trend is observed for the parameters E₁, λ₁, R and We. In Table 2, we can see that the heat transfer rate is increased by increasing values of S, E₁, λ₁, λ₂, Pr and R. The heat transfer rate is depressed as the values of M, γ, Sc and Ec increase. Table 3 shows that the Sherwood number is an increasing function of E₁, λ₁, λ₂, γ, Sc, R and Ec and a decreasing function of S, M, Pr and We. Conclusions A Williamson fluid is a non-Newtonian fluid model with a shear-thinning property. The model constitutes a coupled system of nonlinear partial differential equations. The transformed coupled system of dimensionless non-linear ordinary differential equations was successfully solved using the spectral quasi-linearization method. The use of error infinity norms and residual error infinity norms confirmed that the numerical technique is convergent and accurate, respectively. In this study, we numerically analyzed the effects of viscous dissipation, thermal radiation, chemical reaction and a uniform magnetic field on the unsteady boundary layer flow of an electrically conducting Williamson fluid over a stretching sheet. We summarize the most important findings of the study as follows:
1. The SQLM is a very efficient and accurate method.
2. The fluid velocity and the momentum boundary layer decrease with respective increases in the Williamson parameter, unsteadiness parameter, magnetic parameter and Eckert number, as well as the Prandtl and Schmidt numbers.
3. The fluid velocity and the momentum boundary layer increase with increasing values of the electric parameter, buoyancy parameters, thermal radiation and the chemical reaction parameter.
4. The fluid temperature increases as the values of the magnetic parameter, thermal radiation parameter, electric parameter and Eckert number increase.
5. The fluid temperature is a decreasing function of the buoyancy parameter, Prandtl number and unsteadiness parameter, as well as the Williamson number.
6. The stretching parameter, chemical reaction parameter, suction, Schmidt number, buoyancy parameters and the Williamson number were found to reduce the concentration profiles.
7. The concentration was observed to increase as the values of the magnetic parameter, injection and Eckert number increase.
8. The skin friction increases with increases of the unsteadiness parameter, magnetic parameter, Prandtl number, Schmidt number, chemical reaction parameter and thermal radiation parameter.
9. However, the skin friction decreases with increasing values of the Eckert number, buoyancy parameters, thermal radiation and the Williamson number.
10. The wall temperature gradient decreases with increasing values of the Williamson number, suction, magnetic parameter, chemical reaction parameter, Schmidt number and Eckert number.
11. The study observed that the Nusselt number increases with increases of the unsteadiness parameter, electric parameter, buoyancy parameters, Prandtl number, thermal radiation parameter and the Williamson number.
12. The unsteadiness parameter, magnetic parameter, the Prandtl number and the Williamson number cause the wall concentration gradient to decrease.
13. Lastly, the Sherwood number increases as the values of the electric parameter, buoyancy parameters, chemical reaction, Schmidt number, thermal radiation and Eckert number increase.
The strength of the current paper lies in the accuracy and fast convergence of the numerical technique used. Therefore, the method may also be valid for other complex nonlinear boundary value problems, even those with chaotic behaviour. We recommend that these results be used as a benchmark example for other applications in engineering and applied sciences.
5,889.8
2020-06-02T00:00:00.000
[ "Physics", "Engineering" ]
Predictions of quantum gravity in inflationary cosmology: effects of the Weyl-squared term We derive the predictions of quantum gravity with fakeons on the amplitudes and spectral indices of the scalar and tensor fluctuations in inflationary cosmology. The action is R + R² plus the Weyl-squared term. The ghost is eliminated by turning it into a fakeon, that is to say a purely virtual particle. We work to the next-to-leading order of the expansion around the de Sitter background. The consistency of the approach puts a lower bound (m_χ > m_φ/4) on the mass m_χ of the fakeon with respect to the mass m_φ of the inflaton. The tensor-to-scalar ratio r is predicted within less than an order of magnitude (4/3 < N²r < 12 to the leading order in the number of e-foldings N). Moreover, the relation r ≃ −8n_T is not affected by the Weyl-squared term. No vector and no other scalar/tensor degree of freedom is present. Introduction Inflation is a theory of accelerated expansion of the early universe [1][2][3][4][5][6][7][8], which accounts for the origin of the present large-scale structure. It explains the approximate isotropy of the cosmic microwave background radiation and allows us to study the quantum fluctuations as sources of the cosmological perturbations that seed the formation of the structures of the cosmos [9][10][11][12][13][14][15]. It also provides a rich environment where we can develop knowledge that might allow us to establish a nontrivial connection between high-energy physics and the physics of large scales. Inflationary cosmology is often studied with the help of a matter field that drives the expansion by rolling down a potential V(φ) (for reviews, see [16][17][18]). Alternatively, gravity itself can drive the expansion, as in the Starobinsky R + R² model [2] and the f(R) theories [19,20]. The predictions end up depending strongly on the model, specifically the choices of V(φ) and f(R). In single-field slow-roll inflation, potentials with a plateau lead to a scalar power spectrum that is compatible with current observations [21][22][23]. In particular, the Starobinsky R + R² model works well at the phenomenological level. However, once R² is introduced, it is hard to justify why the square C_μνρσ C^μνρσ of the Weyl tensor C_μνρσ is not included as well, since it has the same dimension in units of mass. We can spare the other quadratic combinations, such as R_μν R^μν and R_μνρσ R^μνρσ, since they are related to R² and C_μνρσ C^μνρσ by algebraic identities and the Gauss-Bonnet theorem. Thus, we are led to consider the action (1.1), obtained by adding both the R² term and the Weyl-squared term β C_μνρσ C^μνρσ (β being a constant) to the Einstein-Hilbert term, which we briefly refer to as the "R + R² + C² theory". The trouble with (1.1) is that the C² term is normally responsible for the presence of ghosts. Immediate ways out are to expand the physical quantities in powers of β [24], which is equivalent to assuming that the ghosts are very heavy, and/or to restrict to situations where the ghosts are short-lived. This approach amounts to "living with ghosts" [25], but does not eliminate the problem. If we want to work with the R + R² + C² theory, we must explain how to treat C² in order to remove the ghosts, at least perturbatively and at the level of the cosmological perturbations. Here we use the procedure of eliminating them in favor of purely virtual particles [26,27]. This procedure originates in high-energy physics, where the requirements of locality, renormalizability and unitarity result in consistency constraints on perturbative quantum field theory.
The simplest way to think of the idea is as follows. A normal particle can be real or virtual, depending on whether it is observed or not. As far as we know, a particle that is always real does not exist. What about a particle that is always virtual and can never become real? We can think of it as a purely virtual quantum [28] or a fake particle, i.e., a particle that mediates interactions among other particles, but is invisible to our detectors. And by that we mean invisible in principle, not just in practice. Perturbative quantum gravity can be formulated as a unitary theory of scattering if the action (1.1) is quantized in a new way [26], by eliminating the would-be ghost in favor of a purely virtual particle, called fakeon [27]. In the expansion around flat space, the fakeon is introduced by replacing the Feynman iε prescription (for a pole of the free propagator) with an alternative prescription that allows us to project the corresponding degree of freedom away consistently with the optical theorem. This means that the loop corrections are unable to resuscitate the degree of freedom. Moreover, the prescription is compatible with renormalizability [26, 27]. A fakeon mediates interactions, but does not belong to the spectrum of asymptotic states. In this sense it is a "fake degree of freedom". Note that it removes a ghost at the fundamental level, without advocating its irrelevance for practical purposes. Incidentally, the calculations of Feynman diagrams with the fakeon prescription in quantum gravity [29, 30] are not harder than analogous calculations for the standard model. Nevertheless, quantum field theory is formulated perturbatively, commonly around flat space. To study inflation and cosmology it is necessary to work on nontrivial backgrounds. This raises the issue of understanding purely virtual quanta in curved space. A simplification comes from the fact that in cosmology we do not need to go as far as computing loop corrections, as argued in ref. [31], although we have to study the quantum fluctuations. In this paper, we show that we can work with the classical limit of the fakeon prescription/projection, which amounts to taking the average of the retarded and advanced potentials as the Green function Gf for the fake particles [32], combined with a certain wealth of knowledge on how to use this formula and interpret its consequences. Note that the quantum fakeon prescription cannot be inferred from (1.2), because (1.2) is not a good propagator in Feynman diagrams [28]. As said, the predictions of the popular models of inflation are model dependent. On the other hand, in high-energy physics the constraints of locality, unitarity and renormalizability leave room for a limited class of interactions, scalar potentials, and so on, to the extent that the theory of quantum gravity emerging from the idea of fake particle is essentially unique (when matter is switched off) and contains just two independent parameters more than Einstein gravity. They can be identified as the masses mφ and mχ of a scalar field φ (the inflaton) and a spin-2 fakeon χµν. The triplet graviton-scalar-fakeon exhausts the set of degrees of freedom of the theory. From the cosmological point of view the physical modes are the usual curvature perturbation R and the tensor fluctuations. The extra degrees of freedom are turned into fake ones and projected away. In particular, no vector fluctuations, or additional scalar and tensor fluctuations, survive.
We show that the consistency of the picture in curved space leads to a lower bound mχ > mφ/4 on the mass mχ of the fakeon with respect to the mass mφ of the inflaton. To the next-to-leading order, the amplitude AR and the spectral index nR − 1 of the scalar fluctuations depend only on mφ (and the number N of e-foldings). Instead, the amplitude AT and the spectral index nT of the tensor fluctuations do depend on mχ. The bound mχ > mφ/4 narrows the window of allowed values of nT and the tensor-to-scalar ratio r = AT/AR to less than one order of magnitude and makes the predictions quite precise, even before knowing the actual values of mφ and mχ. Inflationary cosmology in higher-derivative gravity with ghosts has been studied in refs. [33][34][35][36][37][38][39]. Typically, the ghost sector is quantized by means of negative norms. Extra spectra are predicted, which may or may not be suppressed on superhorizon scales. Inflation has been considered in nonlocal theories of gravity as well [40], where the classical action contains infinitely many free parameters. The cosmological perturbations in those scenarios have been studied in [41, 42]. The gain achieved by means of fakeons is that no ghosts are present and the number of independent parameters is kept to a minimum. Whenever there is an overlap, we find agreement with the results derived in the other approaches. This occurs, for example, when H/mχ or mφ/mχ are sufficiently small to suppress the effects of the fakeons in our theory and the effects of the ghosts in the theories of refs. [38, 39], where H is the value of the Hubble parameter during inflation. Even when H or mχ are not large, we can still relate some results, due to the universality of the low-energy expansion. For example, we can do so for any quantity that has a convergent, resummable expansion for small H/mχ or mφ/mχ. In the limit mχ/mφ → ∞, the results we find agree with those of the theory R + R² [19, 43]. We make the calculations in two frameworks and show that the final results match. In the first approach, which we call the inflaton framework, the scalar field φ is introduced explicitly to eliminate the R² term, while the C² term is unmodified. The scalar potential coincides with the Starobinsky one. In the second approach, which we call the geometric framework, both R² and C² are present. The C² term does not affect the FLRW metric, so in both approaches the background metric coincides with the one of the Starobinsky theory. The differences arise in the action of the fluctuations over the background. The map relating the two frameworks is a field-dependent conformal transformation, combined with a time reparametrization. A third formulation, where the scalar φ and a spin-2 fakeon χµν are introduced explicitly in order to eliminate both higher-derivative terms R² and C², is also available [30], but will not be studied here. The paper is organized as follows. In section 2, we briefly review the formulation of quantum gravity with fakeons and present the two frameworks just mentioned. In section 3, we study the tensor and scalar fluctuations in the inflaton framework. The fakeon projection, which allows us to make sense of the term C², is briefly introduced in section 2 and discussed in detail in section 4. In section 5, we make the calculations in the geometric framework. In section 6, we study the vector fluctuations and show that they are projected away altogether at the quadratic level.
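To make the mχ dependence described above concrete, here is a minimal Python sketch (a reader's aid, not from the paper) that evaluates the leading-order tensor-to-scalar ratio as the Starobinsky value 12/N² times the suppression factor 2mχ²/(mφ² + 2mχ²) quoted in section 7; the function name and the sampled mass ratios are illustrative choices.

```python
import numpy as np

def r_leading(N, mass_ratio):
    """Leading-order r in the R + R^2 + C^2 theory.

    mass_ratio = m_chi / m_phi; the suppression factor
    2 m_chi^2 / (m_phi^2 + 2 m_chi^2) is the one quoted in section 7.
    """
    factor = 2 * mass_ratio**2 / (1 + 2 * mass_ratio**2)
    return (12.0 / N**2) * factor

N = 60
for ratio in (0.25, 0.5, 1.0, 10.0, 1e6):
    print(f"m_chi/m_phi = {ratio:<8g} ->  r = {r_leading(N, ratio):.2e}")
print(f"pure R + R^2 limit (m_chi -> infinity): r = {12.0 / N**2:.2e}")
```

The m_chi/m_phi → ∞ row reproduces the pure R + R² value, matching the statement above that the two theories agree in that limit.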
Section 7 contains the summary of our predictions and section 8 contains the conclusions. In appendix A, we derive the map relating the inflaton framework to the geometric framework and show that the results agree. In appendix B we show that the curvature perturbation R can be considered constant on superhorizon scales for adiabatic fluctuations of the energy-momentum tensor.
Quantum gravity with fakeons
In this section we introduce the theory and the two frameworks we are going to work with. We begin by recalling a few basic features of the fakeons. Being purely virtual quanta, they are particles that mediate interactions, but do not belong to the physical spectrum of asymptotic states. Expanding around flat space, they are introduced by means of a new quantization prescription for the poles of the free propagators [26], alternative to the Feynman iε prescription. The physical subspace V is obtained by projecting the fake degrees of freedom away. The theory is unitary in V, where the optical theorem holds. What makes the projection consistent to all orders [27] is that the fakeon prescription does not allow the loop corrections to resuscitate the states that have been projected away. The prescription makes sense irrespective of the sign of the residue at the pole of the propagator. Yet, it requires that the real part of the squared mass be positive. Indeed, fakeons cannot cure tachyons, but only ghosts. The no-tachyon condition is the main requirement we have to fulfill, and its analogue on nontrivial backgrounds is going to play an important role. The projection must also be performed at the classical level. An action like (1.1) is physically unacceptable as the classical limit of quantum gravity, because it has undesirable solutions. Yet, (1.1) is the starting point to formulate quantum gravity as a quantum field theory. It is local and provides the Feynman rules that allow us (together with the Feynman prescription for physical particles and the fakeon prescription for fake particles) to calculate the loop diagrams and the S matrix. An action of this type is called "interim" classical action [32]. The true classical action S_class is obtained from the interim classical action S_inter by projecting the fake degrees of freedom away. At the classical level, the projection is achieved by means of the classical limit of the fakeon prescription. Precisely, S_class is obtained by: (i) solving the field equations of the fakeons (derived from S_inter) by means of the fakeon Green function; and (ii) inserting the solutions back into S_inter. In the perturbative expansion around flat space, the fakeon Green function is the arithmetic average of the retarded and advanced potentials [32]. We will see that this piece of information is enough to derive the fakeon Green function on nontrivial backgrounds. The plan of the paper is to calculate the effects of inflationary cosmology on the fluctuations of the cosmic microwave background radiation at the quadratic level. Since we do not need to work out loop corrections, we can quantize the projected action S_class, rather than projecting the quantum version of S_inter. This simplification saves us a lot of effort. The good feature of S_class is that it no longer contains the fake degrees of freedom, by construction, so in principle it can be quantized with the usual methods. The nontrivial counterpart is that S_class is not fully local, due to the nonlocal remnants left by the fakeon projection.
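The classical fakeon prescription just described (arithmetic average of the retarded and advanced potentials) can be checked numerically in the simplest flat-space setting. The sketch below is a toy example rather than anything from the paper: it builds G_f(t, t′) = sin(m|t − t′|)/(2m) for the operator d²/dt² + m² and verifies that convolving it with a smooth source solves the corresponding field equation; the grid sizes and the Gaussian source are arbitrary choices.

```python
import numpy as np

m = 1.3
t = np.linspace(-20.0, 20.0, 4001)
dt = t[1] - t[0]

def G_f(ti, tp):
    # Average of the retarded and advanced Green functions of d^2/dt^2 + m^2,
    # theta(t - t') sin(m (t - t'))/m and theta(t' - t) sin(m (t' - t))/m.
    return np.sin(m * np.abs(ti - tp)) / (2.0 * m)

j = np.exp(-t**2)                                  # smooth localized source
V = np.array([dt * np.sum(G_f(ti, t) * j) for ti in t])

# Check that (d^2/dt^2 + m^2) V = j away from the grid boundaries.
residual = np.gradient(np.gradient(V, dt), dt) + m**2 * V - j
print("max residual:", np.abs(residual[200:-200]).max())
```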
Because of this, the quantization of S_class is not as simple as usual, also taking into account that we must perform it on a nontrivial background. However, in a variety of lucky cases, which include those studied in this paper, it is possible to treat the nonlocal sector of S_class in a relatively simple way and extract physical predictions with the procedure described above, either because the nonlocal sector of S_class does not affect the quantities we are interested in, or because it affects them only at higher orders. Summarizing, the simplest way to proceed, which we adopt in the paper, is as follows. First, we work out the classical action S_class of quantum gravity, by projecting the interim action S_inter. Second, we quantize S_class with the usual methods, paying special attention to the nonlocal sector, anticipating that in the end it does not create too serious difficulties. Now we give the interim classical actions S_inter of quantum gravity in the two approaches we study in the paper. The projection S_inter → S_class and the quantization of S_class will be performed in the next sections, after expanding around the de Sitter background. The higher-derivative form of the interim classical action is (2.1), where Cµνρσ denotes the Weyl tensor, MPl = 1/√G is the Planck mass, Φ are the matter fields and S_m is the action of the matter sector. The no-tachyon condition (i.e., the requirement that the free propagator around flat space does not have tachyonic poles) determines the signs in front of CµνρσCµνρσ and R². The degrees of freedom of the gravitational sector are the graviton, a scalar field φ of mass mφ and a spin-2 fakeon χµν of mass mχ. The reason why χµν must be quantized as a fakeon is that the residue of the free propagator has the wrong sign at the χµν pole, so the Feynman prescription would turn it into a ghost, causing the violation of unitarity. On the other hand, φ can be quantized either as a fakeon or a physical particle, because the residue at the φ pole has the correct sign. In this paper, we assume that φ is a physical particle (the inflaton). For simplicity, we have omitted the cosmological term in (2.1). We will do the same throughout the paper. Once it is included, the theory is manifestly renormalizable, like Stelle's theory [44], because the fakeon prescription does not modify the ultraviolet divergences [26, 27]. With the help of an auxiliary field ϕ, we can write S_QG in the equivalent form (2.2). Making the Weyl transformation, whose parameter involves the factor 16π/3, we can diagonalize the quadratic part and obtain the new action (2.4), in which the potential (2.6) is the Starobinsky potential. The action (2.4) is not manifestly renormalizable. In fact, it is as renormalizable as (2.1), once the cosmological term is reinstated, because it is related to (2.1) by a (perturbative and nonderivative) field redefinition. The geometric framework is defined by the interim actions (2.1) or (2.2), while the inflaton framework is defined by (2.4). In the rest of the paper, we switch the matter sector S_m off. If needed, its effects can be studied along the guidelines outlined in the next sections. We do not review the details on the parametrizations of the fluctuations and their transformations under diffeomorphisms, which are easy to find in the literature (see for example [17][18][19]).
3 Inflaton framework (R + scalar + C²)
In this section, we study the tensor and scalar fluctuations in the inflaton framework. The action is (2.4), with the potential (2.6).
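Since the text states that the auxiliary-field/Weyl-transformation step turns the R² sector into a scalar with the Starobinsky potential, a short symbolic check is easy to run. The sketch below is the standard textbook manipulation, not the paper's own derivation; it is written in reduced Planck units (M̄Pl = 1) and with the plateau at φ → +∞, rather than the φ → −∞ convention used later in the text. It recovers the plateau potential and the de Sitter value H → mφ/2 quoted in section 3.

```python
import sympy as sp

R, m, phi = sp.symbols('R m phi', positive=True)
f = R + R**2 / (6 * m**2)        # R + R^2 sector (reduced Planck units)

# Einstein-frame potential: with F = f'(R) = exp(sqrt(2/3) phi),
# V(phi) = (F R - f) / (2 F^2).
F = sp.diff(f, R)
R_of_phi = sp.solve(sp.Eq(F, sp.exp(sp.sqrt(sp.Rational(2, 3)) * phi)), R)[0]
V = sp.simplify(((F * R - f) / (2 * F**2)).subs(R, R_of_phi))
print(V)                          # (3 m^2 / 4) * (1 - exp(-sqrt(2/3) phi))^2

# Plateau value: V -> 3 m^2 / 4, so H^2 = V / 3 -> m^2 / 4, i.e. H -> m / 2.
print(sp.limit(V, phi, sp.oo))
```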
The Friedmann equations are (3.1), where H = ȧ/a is the Hubble parameter; structurally, the first of (3.1) relates Ḣ to −φ̇², the second expresses H² in terms of φ̇²/2 + V(φ), and the third is the scalar equation φ̈ + 3Hφ̇ + V′(φ) = 0. The de Sitter limit is the one where H is approximately constant. It is easy to show that the constant value it tends to is mφ/2. Indeed, Ḣ ≃ 0 in the first equation (3.1) gives φ̇ ≃ 0. On the other hand, if we insert φ̇ ≃ 0 (and so φ̈ ≃ 0) in the third equation (3.1) we obtain V′(φ) ≃ 0, which has two solutions: φ ≃ 0 and φ → −∞. The first possibility gives the trivial case, since φ ≃ 0, φ̇ ≃ 0 in the second equation (3.1) give H ≃ 0. The second possibility is the right one, since φ → −∞, φ̇ ≃ 0 in the second equation (3.1) give H ≃ mφ/2. The expansion around the de Sitter background is an expansion in powers of √ε. This can be proved by studying the solution of the equations (3.1) around the de Sitter metric. Leaving the details to appendix A, here we just mention the properties that we need to proceed. It is possible to show that η = O(√ε) and that the expansion can be organized as in (3.3). In other words, each time derivative raises the order by √ε, so the expansion in powers of √ε is also an expansion of slow time dependence. Moreover, we have the expansions (3.4) (see formulas (A.7), suppressing bars). The last line is the expansion of −aHτ, where τ is the conformal time, defined by (3.5), with the initial condition chosen to have τ = −1/(aH) in the de Sitter limit ε → 0.
Tensor fluctuations
To study the tensor fluctuations, it is convenient to parametrize the metric as (3.6), where u = u(t, z) and v = v(t, z) are the graviton modes. Let uₖ(t) denote the Fourier transform of u(t, z) with respect to the coordinate z, where k is the space momentum. The quadratic Lagrangian Lt obtained from (2.4) comes with an identical contribution for v; here k = |k|. To simplify the notation, we understand that u² stands for u₋ₖuₖ, u̇² for u̇₋ₖu̇ₖ, etc. We extend this convention to mixed products such as uu̇, which can be interpreted either as u₋ₖu̇ₖ or u̇₋ₖuₖ. It is possible to eliminate the higher derivatives by considering the extended Lagrangian L′t = Lt + ∆Lt, with ∆Lt as in (3.9). Here f(t), h(t) are functions to be determined, and S, which may stand for S₋ₖ(t) or Sₖ(t), denotes an auxiliary field. The equivalence of L′t and Lt is due to the fact that L′t = Lt when S is replaced by the solution of its own field equation. The higher derivatives disappear in the sum Lt + ∆Lt, because the term proportional to ü² cancels out. Next, we perform the field redefinitions (3.10), where α(t) and β(t) are other functions to be determined. We use the freedom to choose f, h, α and β to write L′t in a convenient form, such that it contains a unique, nonderivative term mixing U and V. Specifically, we reduce the Lagrangian L′t to the form (3.11), where γ, ω², Ω² and σ are other functions of time, while M is constant and has the dimension of a mass. Since γ is going to be positive, V is the fakeon and U is the physical excitation, up to the mixing due to L(UV)t. The fakeon projection amounts to solving the V field equations by means of the fakeon prescription and inserting the solution back into L′t. In all the cases considered here, this is achieved by determining the solution Gf(t, t′) of ΣGf(t, t′) = δ(t − t′) as the arithmetic average of the retarded and advanced potentials, where Σ is a differential operator in time whose coefficients Fi(t) are functions of time. A certain detour allows us to get to the results we need here without even knowing the explicit expression of Gf(t, t′), which is derived in section 4, where the projection is discussed in detail.
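The claim that H settles to mφ/2 on the plateau can be illustrated by integrating the standard single-field background equations with the Starobinsky potential. This is a hedged numerical sketch in reduced Planck units with the plateau at φ → +∞; the initial data and integration window are arbitrary, and the equations are the textbook slow-roll system rather than the paper's exact formulas (3.1).

```python
import numpy as np
from scipy.integrate import solve_ivp

m_phi = 1.0
a = np.sqrt(2.0 / 3.0)
V  = lambda p: 0.75 * m_phi**2 * (1.0 - np.exp(-a * p))**2
dV = lambda p: 0.5 * np.sqrt(6.0) * m_phi**2 * np.exp(-a * p) * (1.0 - np.exp(-a * p))

def rhs(t, y):
    phi, dphi = y
    H = np.sqrt((0.5 * dphi**2 + V(phi)) / 3.0)    # Friedmann constraint
    return [dphi, -3.0 * H * dphi - dV(phi)]       # scalar field equation

sol = solve_ivp(rhs, (0.0, 50.0), [6.0, 0.0], rtol=1e-10, atol=1e-12)
phi, dphi = sol.y
H = np.sqrt((0.5 * dphi**2 + V(phi)) / 3.0)
print(f"H on the plateau = {H[-1]:.4f}   vs   m_phi/2 = {m_phi / 2:.4f}")
```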
If we take f, h, α and β as functions involving a constant H0, we obtain the decomposition (3.11) with the coefficients (3.14) and (3.15). The constant H0 is in principle arbitrary, but a remarkable choice is H0 = mφ/2, for which the mixing terms vanish in the de Sitter limit. There, U and V decouple, and it is relatively straightforward to derive the power spectrum of the fluctuations in this limit. The V equation of motion is (3.17). As said, we need to solve it by means of the fakeon prescription and insert the solution back into the action. Since (3.17) is homogeneous and U-independent, the solution is just V = 0. Using V = 0, formula (3.10) gives u = U, so we obtain a Mukhanov action that coincides with the one of Einstein gravity with a scalar field, apart from the overall factor. The u two-point function in the de Sitter limit is (3.18). Details on the derivation of (3.18) are given below. Formula (3.18) makes us already appreciate that the result depends on the mass mχ of the fakeon in a nontrivial way.
Quasi de Sitter expansion. Formulas (3.14) and (3.15) are exact, i.e., they do not assume ε small. From now on, we work to the first order in ε, where we can use approximate formulas. Observe that (3.14) and (3.15) depend on mχ, H, ε and mφ (through H0 = mφ/2). However, the last three quantities are related by (3.4), so we can eliminate one of them. The price of this is that we introduce terms proportional to √ε, which are unnecessary at this level. It is possible to avoid it by switching to a slightly different parametrization. Specifically, if we choose γ as in (3.19) and σ as in (3.20), the Lagrangian takes the form (3.21) and the V equation of motion is now (3.22). Anticipating that the solution for V is of order ε, we have dropped higher-order terms proportional to εV, εV̇ from (3.22). Let Σ⁻¹f denote the fakeon Green function Gf(t, t′), i.e., the solution of ΣGf(t, t′) = δ(t − t′) defined by the fakeon prescription (see section 4). Then the solution of (3.22) can be written as (3.24). Inserting this expression into the Lagrangian (3.21), we can see that the nonlocal contribution due to L(UV)t is (3.25). At this point, it is straightforward to work out the Mukhanov action. Defining w as in (3.26) and switching to the conformal time (3.5), the w action to order ε derived from (3.25) reads (3.27), where the prime denotes the derivative with respect to τ.
Power spectrum and spectral index. Formula (3.27) tells us that the conjugate momentum of w is p = w′, so after turning w, p into the operators ŵ, p̂, we impose the equal-time canonical quantization condition, where we have reinstated the subscripts k. As usual, we write the Fourier decomposition in terms of creation and annihilation operators â†ₖ and âₖ. The limit k/(aH) → ∞ of (3.27) allows us to define the Bunch-Davies vacuum state |0⟩. From formula (3.27), we see that the only difference with respect to the result obtained in the de Sitter limit is a rescaling of k. Thus, we require the condition (3.30). Using the condition (3.30), we can work out the modes vₖ and obtain (3.31), having used the third formula of (3.4), where H(1,2)νt are the Hankel functions. For the purpose of computing the power spectrum, we need to work out the leading behavior in the superhorizon limit |kτ| → 0. There we have (3.32). The redefinitions (3.10) tell us that to compute the u two-point function we also need the fakeon Vₖ, which is given by formula (3.24). While the general discussion of the fakeon Green function is left to section 4, here we can quickly get to the result we need as follows. In the superhorizon limit |kτ| → 0 we can ignore the term proportional to k²/a² in the expression (3.20) of σ.
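The superhorizon scaling extracted from the Hankel-function modes can be checked numerically. The following sketch is illustrative (the normalization and the sample value of ε are arbitrary): it builds Bunch-Davies-normalized modes proportional to √|τ| H(1)ν(k|τ|) and confirms that the superhorizon power k³|wₖ|² scales as k^(3−2ν), which is the origin of the tilts quoted below.

```python
import numpy as np
from scipy.special import hankel1

eps = 0.01
nu = 1.5 + 2 * eps            # e.g. nu_s = 3/2 + 2 eps (cf. section 5)
tau = 1e-4                    # |tau| deep in the superhorizon regime, k|tau| << 1

k = np.array([1.0, 2.0])
w = np.sqrt(np.pi * tau / 4.0) * hankel1(nu, k * tau)
P = k**3 * np.abs(w)**2

tilt = np.log(P[1] / P[0]) / np.log(k[1] / k[0])
print(f"measured tilt = {tilt:.4f},  expected 3 - 2*nu = {3 - 2 * nu:.4f}")
```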
Once we do this, we can commute σ and Σ⁻¹f in (3.24), because the commutator gives corrections of higher orders in ε. Moreover, recalling that Σ₀Uₖ = O(ε), because Uₖ solves the Mukhanov equation of the projected Lagrangian Lprj_t of formula (3.25), Σ⁻¹f just multiplies Uₖ by 1/(γm²χ). Collecting these facts, we have, in the superhorizon limit and discarding higher orders, (3.33). The power spectrum Pu of each graviton polarization is defined by (3.34). The two-point function can be evaluated in the superhorizon limit from (3.10), (3.31) and (3.33). We find an expression in which ψ0, the digamma function, appears. The power spectrum of the tensor fluctuations, matched with the usual conventions, is PT = 16Pu. Replacing |τ| by 1/k*, where k* is a reference scale, it is common to write (3.35), where AT and nT are called amplitude and spectral index (or tilt), respectively. We find the expressions (3.36), where γE is the Euler-Mascheroni constant, having used the first formula of (3.4) to eliminate H.
Scalar fluctuations
Now we study the scalar fluctuations in the inflaton framework. We work in the comoving gauge, where the φ fluctuation δφ is set to zero and the metric is parametrized by the scalar modes Ψ, Φ and B. After Fourier transforming the space coordinates, (2.4) gives the quadratic Lagrangian (3.38). As before, Ψ² stands for Ψ₋ₖΨₖ, Ψ̇² for Ψ̇₋ₖΨ̇ₖ, and so on. Since Φ appears algebraically, we eliminate it by means of its own field equation. We remain with a Lagrangian that depends only on B and Ψ. After the field redefinitions (3.39), the mixing term L(UV)s is the sum of a term proportional to UV plus one proportional to U̇V. In addition, L(UV)s vanishes in the de Sitter limit. We do not give the full expression of Ls here, but stress that after the redefinition (3.39) it admits a series expansion in powers of k and √ε. In particular, (3.41) holds, where γ is defined in (3.19). As before, V is the fakeon and U is the physical excitation.
Quasi de Sitter expansion. In the de Sitter limit ε → 0, we find the limiting form of the Lagrangian. Note that υ and the coefficient of V² in (3.41) are positive definite. Again, we see that the fakeon V decouples. Its own equation of motion sets it to zero, so the Lagrangian of U coincides with the usual Mukhanov expression, normalization included. This means that the power spectrum of the scalar fluctuations coincides with the one of Einstein gravity in this limit. To order η ∼ √ε, we find the corrected expressions. Recalling that in the comoving gauge the curvature perturbation R coincides with Ψ, we can derive the power spectrum PR, defined in the usual way. Inserting the solution for U into the left formula of (3.39), we find ln PR(k) = ln AR + (nR − 1) ln(k/k*), (3.45), where the amplitude AR and the spectral index nR − 1 can be read off, respectively. We see that the mass mχ of the fakeon does not affect the result to the order we are considering. Finally, from (3.36) we derive the tensor-to-scalar ratio r = AT/AR.
4 The fakeon Green function
In this section we determine the fakeon Green function Σ⁻¹f, defined by the fakeon prescription, where Σ is given in formula (3.23). For the purposes of this paper, it is sufficient to invert Σ in the de Sitter limit a(t) = e^{Ht}, where H is treated as a constant. We keep H generic to make the discussion easily adaptable to the geometric framework. We will use the information that H is mφ/2 in the de Sitter limit (in the inflaton framework) only later.
It is convenient to switch to a symmetric operator by noting the identity (4.1). We want to prove that the fakeon solution Ĝf(t, t′) of (4.2) is given by (4.3), where sgn(t) is the sign function and Jn denotes the Bessel function of the first kind. In principle, we could add solutions of the homogeneous equation, which are the functions J±inχ(ǩ), multiplied by constants. The job of the projection is to determine those constants uniquely. Because it comes from quantum field theory, the fakeon projection is known perturbatively around flat space, in four-momentum space. However, a notion of four-momentum is not immediately available in curved space. Fortunately, there are three limits where Ĝf is known, which are k/(aH) → ∞, k/(aH) = 0 and a = constant. The limit k/(aH) → ∞ gives the flat-space case once we switch to conformal time. The limit k/(aH) → 0 gives the flat-space case if we keep the cosmological time. The case a = constant is precisely flat space, but is not relevant here, since we are interested in the de Sitter background. Hence, necessary conditions are that the solution (4.3) reduces to the known expressions [32, 45] in both cases k/(aH) → ∞ and k/(aH) = 0. Any of these two conditions is also sufficient. The other condition can be seen as a consistency check. Switching to conformal time τ = −1/(aH), equation (4.2) can be rewritten; for k|τ| large we obtain (4.5). Solving (4.5) by means of the arithmetic average of the retarded and advanced potentials, we find the expression (4.6) [32, 45]. It is easy to check that (4.3) does satisfy (4.6) when k|τ|, k|τ′| ≫ 1. As said, the most general solution of (4.2) is equal to (4.3) plus solutions of the homogeneous equation, multiplied by constant coefficients c1 and c2. Now we know that those coefficients must vanish, to match (4.6) for k|τ|, k|τ′| large. This proves that (4.3) is the correct fakeon Green function. A consistency check is given by the limit k → 0. There, (4.2) turns into an equation similar to (4.5), provided we keep the cosmological time t instead of switching to τ. Consequently, the solution (4.3) must tend to [32, 45] (1/(2Hnχ)) sin(Hnχ|t − t′|). It is easy to check that this is indeed the k → 0 limit of (4.3). From (4.1) we derive the fakeon Green function (4.8).
Consistency condition
We have determined the fakeon Green function in curved space by referring to two situations where the problem becomes a flat-space one, which are k/(aH) → ∞ and k/(aH) → 0. As mentioned in the introduction, purely virtual particles are subject to a consistency (no-tachyon) condition in flat space, i.e., their squared mass should be positive. Formula (4.6) shows that this requirement is always satisfied for k/(aH) → ∞, while formula (4.7) shows that it is satisfied for k/(aH) → 0 if nχ is real. Recalling that H is mφ/2 in the inflaton framework, the condition reads mχ > mφ/4, formula (4.9), which is a lower bound on the mass of the fakeon with respect to the mass of the inflaton. When (4.9) holds, the oscillating behavior of (4.7) suppresses the contributions at large separations |t − t′|. One may wonder if it is meaningful to impose a condition stronger than (4.9), for example require that the time-dependent squared mass be positive for all values of k/(aH). To discuss this issue, let us consider the Lagrangian that gives the fakeon Green function of formula (4.2). A time reparametrization, with dh/dt′ = f², leaves the kinetic term invariant, but changes the squared mass.
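The reality condition on nχ can be made explicit with a short numerical check. The sketch below assumes nχ = √(mχ²/H² − 1/4), an expression consistent with the Bessel indices ±inχ and the oscillation frequency Hnχ quoted above but not written out in the text, so treat it as a labeled assumption; with H = mφ/2 it reproduces the bound (4.9).

```python
import numpy as np

def n_chi(m_chi, H):
    # Assumed form: n_chi = sqrt(m_chi^2 / H^2 - 1/4); real iff m_chi > H / 2.
    return np.emath.sqrt((m_chi / H)**2 - 0.25)

H = 0.5                              # de Sitter value H = m_phi / 2, with m_phi = 1
for m_chi in (0.20, 0.25, 0.30):     # below, at and above m_phi / 4
    print(f"m_chi = {m_chi:.2f}  ->  n_chi = {n_chi(m_chi, H)}")
```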
Specifically, the transformed Lagrangian carries a new squared mass M(t′)². The transformation law shows that the signs of m(t)² and M(t′)² do not have a reparametrization-independent meaning, in general, so a squared mass that becomes negative in some time interval is not necessarily a sign of a lack of consistency. In passing, it is easy to verify that if the masses are independent of time, then the condition of positive squared mass is independent of the parametrization. Indeed, if m(t)² is t-independent and positive, the most general reparametrization preserving this property involves two arbitrary real constants of integration ρ and θ. Since f² must be real and identically positive, M² must also be positive. Summarizing, a necessary condition for the fakeon projection in the inflationary scenario is that the fakeon squared mass be positive in the superhorizon limit, which is condition (4.12). This condition also leads to (4.9) in the case of the scalar fluctuations. Indeed, consider the Lagrangian L(V)s given in formula (3.41). Making the change of variables (4.13), the W equation of motion takes the form (4.14), for some involved rational function m(t)² of H²/m²χ and k²/(aH)², equal to (4m²χ − H²)/4 in the superhorizon limit. Thus, (4.12) gives again the bound (4.9). As we show in section 6, the vector fluctuations give the same bound. The same bound is also found in the geometric framework. It is conceivable that, if (4.9) were violated, the theory would predict a rather different large-scale structure of the universe, or a different scenario would have to be envisaged to produce the present situation. The stronger requirement that m(t)² be positive for every k makes sense if we believe that the cosmological time plays a special role. Then we still find the bound (4.9) for the tensor fluctuations, while a stronger bound is obtained in the case of the scalar fluctuations. Studying the coefficient m(t)² of W in (4.14) numerically, we find that it is positive for all values of k when mχ ≳ 0.312 mφ, which is condition (4.15). As soon as mχ ≲ 0.312 mφ, there exists a finite k domain where m(t)² has negative values. When mχ satisfies (4.9) but not (4.15), there is a time interval ∆t ∼ ln(k/mφ)/mφ, comparable with the duration of inflation, where the fakeon Green function is "tachyonic" and its nonlocal contribution is no longer negligible. In the rest of the paper, we take (4.9) as the consistency condition for the fakeon projection in inflationary cosmology, because it is universal and reparametrization independent. Yet, the issues just mentioned suggest that there is a chance that it might be conservative. The formulas of the power spectra do not depend on it, but (4.15) narrows the window of allowed values of the tensor-to-scalar ratio r a little bit more than (4.9) (see section 7).
5 Geometric framework (R + R² + C²)
In this section we study the geometric framework, which is sometimes known in the literature as the Jordan frame. The higher-derivative equations of the background metric, derived from (2.1) with the FLRW ansatz, can be written in the simple form (5.1), where ε is again −Ḣ/H². It is worth stressing that ε, H, a and the cosmological time t are different from those of the inflaton framework, although we denote them by means of the same symbols. The match between the two frameworks is worked out in detail in appendix A. The quasi de Sitter approximation of (5.1) requires ε ∼ m²φ/(6H²) ≪ 1, so H is no longer related to mφ in the de Sitter limit, where actually mφ ≪ H.
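To see how much the stronger condition (4.15) would narrow the leading-order window for N²r compared with (4.9), one can evaluate the suppression factor of section 7 at the two thresholds. A small sketch with illustrative numbers only:

```python
def factor(x):
    # Suppression factor 2 m_chi^2 / (m_phi^2 + 2 m_chi^2), with x = m_chi / m_phi.
    return 2 * x**2 / (1 + 2 * x**2)

for label, x in (("(4.9)  m_chi = m_phi/4    ", 0.25),
                 ("(4.15) m_chi = 0.312 m_phi", 0.312)):
    print(f"{label} ->  N^2 r >= {12 * factor(x):.2f}")
print("upper endpoint (m_chi -> infinity): N^2 r = 12")
```

The lower endpoint moves from 4/3 to roughly 2, a modest narrowing, consistent with the remark that (4.15) tightens the window a little bit more than (4.9).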
As far as the mass mχ is concerned, it can be either of order H or of order mφ. This means that we have two types of quasi de Sitter expansions, depending on whether mχ ∼ H or mχ ∼ mφ. We study the scalar and tensor fluctuations in both. The two possibilities can also be understood as follows. The de Sitter metric is not an exact solution of the field equations of the theory R + R² + C². It is an exact solution in two cases: (i) when we ignore both R and C²; and (ii) when we ignore just R. In other words, the term R² is leading with respect to the term R, while the term C² can either be of order R or of order R² (as far as the fluctuations are concerned). The first case is studied by expanding in powers of ε with ξ = H²/m²χ fixed. The second case is studied by expanding in powers of ε with ζ ≡ m²χ/m²φ fixed. The relation between mφ, H and ε is (5.2). It can be found by writing down the most general expansion for m²φ/H² in powers of ε, differentiating it and applying (5.1) to determine the coefficients. If needed, (5.2) can be extended to arbitrarily high orders (an asymptotic series being obtained).
mχ ∼ H: tensor fluctuations
We start from the tensor fluctuations. Parametrizing the metric as in (3.6), the quadratic Lagrangian obtained from (2.1) is (5.3), plus an identical contribution for v, where the function Υ of (5.4) appears. Expanding around the de Sitter background with ξ = H²/m²χ fixed, the first nonvanishing contribution to the spectral index nT turns out to be O(ε²). For this reason, we work out the predictions to the second order in ε included. Expanding the Lagrangian (5.3), we find the result below, where U is u rescaled by a factor involving 2/(3ε). The important point of (5.5) is that the unique higher-derivative term Ü² is multiplied by ε, so the fakeon projection can be handled iteratively. A change of variables allows us to cast the Lagrangian in the form (5.6). Since Ė ≃ εE for |kτ| small [as in (3.32)], the corrections εO(k²|τ|², εĖ, Ë, · · ·) of (5.6) are either O(ε^{5/2}) or give subleading contributions in the superhorizon limit k|τ| ≪ 1. This means that we do not need to specify them for our purposes. At this point, it is sufficient to upgrade the steps from formula (3.26) to formula (3.32) to the appropriate order, with the substitutions U → E, ǩ → k, γ → 1. We find the mode functions accordingly. The power spectrum of the tensor fluctuations is PT = 16Pu, with Pu defined by (3.34). Using the definition (3.35), the amplitude and the spectral index follow.
mχ ∼ H: scalar fluctuations
Now we discuss the scalar fluctuations in the geometric framework by expanding in powers of ε to the next-to-leading order with ξ = H²/m²χ fixed. We switch directly from (2.1) to the action (2.2) (with Sm → 0), to remove the higher derivatives without changing the metric that couples to matter. We isolate the background value of ϕ from its fluctuation Ω by writing (5.10), where Υ is defined in (5.4). The gauge-invariant curvature perturbation R is (5.11). We work in the spatially-flat gauge, where Ω is an independent field and Ψ is set to zero. This means that the metric is (5.12). After Fourier transforming the space coordinates, the quadratic Lagrangian reads (5.13). The field Φ appears in (5.13) as a Lagrange multiplier, so we integrate it out by solving its own field equation and inserting the solution back into the action. So doing, we obtain a two-derivative quadratic Lagrangian for B and Ω, which we then expand around the de Sitter background by means of (5.2). Making the field redefinitions (5.14), we obtain an action that is regular for ε, k → 0.
Its ε = 0 limit is (5.15). We note that at this level V appears algebraically and can be integrated out. This means that the fakeon projection can be handled iteratively in ε. After integrating V out, every mχ dependence disappears to the first order in ε. In particular, with a suitable definition of E, the action takes the Mukhanov form. The redefinition (3.26) with U → E, γ → 1 and νt → νs, where νs = 3/2 + 2ε (5.17), gives the action (3.27) with ǩ → k. Inserting the solution for E into (5.15), (5.14) and then (5.11), and using the definition (3.45), we find, in the superhorizon limit, the amplitude and spectral index (5.18). Together with (5.8), formula (5.18) gives the tensor-to-scalar ratio (5.20) to the next-to-leading order in ε. More explicitly, we get (5.21) after inverting (5.2). So far, we have assumed ε small and ξ arbitrary. However, we see from (5.8) and (5.20) that higher orders of ε carry higher powers of ξ. To write (5.21) we have used 3εξ ≃ m²φ/(2m²χ). Conservatively, formula (5.21) is reliable as long as this combination is reasonably smaller than one. However, we may argue that the overall factor in front of (5.21) is exact. In the next two subsections we show that it is indeed so.
mχ ∼ mφ: tensor fluctuations
Now we study the tensor fluctuations in the geometric framework with ζ = m²χ/m²φ fixed. The metric is still parametrized as (3.6) and the quadratic Lagrangian obtained from (2.1) is (5.3), plus an identical contribution for v. After replacing m²χ with m²φζ, we use (5.2) to eliminate m²φ and then expand in ε. We work out the leading and next-to-leading orders in ε. As in subsection 3.1, we eliminate the higher derivatives of (5.3) by considering the extended Lagrangian L′t = Lt + ∆Lt, where ∆Lt is defined in (3.9). Performing suitable redefinitions, we obtain the decomposed Lagrangian; as usual, we have just written the ε → 0 limit of L(UV)t, and the Mukhanov action is (3.27). The fakeon Green function Gf(t, t′) can be discussed as in section 4, with the replacements (5.24), and the solution is still (4.8). The consistency condition (4.12) gives again (4.9). Aside from the changes (5.24), everything works as before and we find, from the V field equation of L′t, the solution (5.25). The fakeon average can be worked out with the procedure of subsection 3.1. Recalling that the terms in the square bracket of (5.25) are subleading or of higher orders in ε, we obtain that V does not contribute in the superhorizon limit |kτ| ≪ 1. Inverting (5.2) to restore the m²φ dependence of the overall factor, the power spectrum PT = 16Pu of the tensor fluctuations gives the amplitude (5.26), while the spectral index nT is O(ε²).
mχ ∼ mφ: scalar fluctuations
Now we study the scalar fluctuations in the geometric framework with ζ = m²χ/m²φ fixed. We replace m²χ with m²φζ, use (5.2) to eliminate m²φ and then expand in powers of ε. We work to the next-to-leading order in ε. We eliminate the higher derivatives of (2.1) by means of (2.2). The metric is still parametrized as (5.12) in the spatially-flat gauge Ψ = 0. The curvature perturbation is (5.11) and the ϕ fluctuation Ω is defined by (5.10). The quadratic Lagrangian obtained from (2.2) is (5.13). Defining U and V as in (5.27) and expanding to the next-to-leading order in ε, we obtain the decomposition (3.40) with coefficients gi, i = 1, 2, 3, which are regular functions of k/(aH) and ζ, tending to finite values in both limits k/(aH) → 0, ∞. Moreover, g1 tends to zero for k/(aH) → 0 and g3 tends to zero for k/(aH) → ∞. The expression of L(V)s in (5.27) is written to the leading order, which is sufficient for our present purposes.
The discussion about the fakeon Green function proceeds as before. It is easy to check that the consistency condition (4.12) coincides with (4.9). Clearly, the fakeon projection implies V = O(ε). This means that the projected Lagrangian is just L(U)s to the order of approximation we are considering, i.e., we can drop both L(V)s and L(UV)s. The Mukhanov action is (3.27) with γ → 1, ǩ → k and νt → νs = (3/2) + 2ε. The power spectrum PR gives the amplitude (5.28) and the spectral index nR − 1 = −4ε. Again, the dependence on mχ drops out. Combining this result with (5.26), the tensor-to-scalar ratio is (5.29), which agrees with (5.21) for ζ large.
6 Vector fluctuations
In this section we study the vector fluctuations and show that they are set to zero by the fakeon projection at the quadratic level. For definiteness, we work in the geometric framework, but equivalent results are obtained in the inflaton framework. We parametrize the metric as (6.1), where ∂iBi = 0 and ∂iEi = 0. A gauge-invariant combination of Bi and Ei exists. We choose a gauge where Ei = 0 and rewrite the metric as (6.2), where C = C(t, z) and D = D(t, z) are the independent vector modes. After Fourier transforming the space coordinates, the quadratic Lagrangian Lv obtained from (2.1) is a quadratic form in C, plus an identical contribution for D, where ζ = m²χ/m²φ. As before, C² stands for C₋ₖCₖ, Ċ² for Ċ₋ₖĊₖ, and so on. After the redefinition of C into V (involving k, mχ and a^{1/2}), the Lagrangian turns into (6.4). The kinetic term has the wrong sign, so V needs to be quantized as a fakeon. Since V does not couple to any other field at this level, the fakeon projection sets it to zero. Therefore, the vector modes do not contribute to the two-point functions. Note that these conclusions hold without expanding around the de Sitter background. The consistency condition (4.12) is studied by requiring that the coefficient of V² in (6.4) be positive in the superhorizon/de Sitter limit, which gives again (4.9).
7 Summary of predictions and connection with observations
In this section we summarize the predictions and make contact with observations. We express the results in terms of the number of e-foldings, which is defined by (7.1), where ti is the time when ε(ti) = ε and tf is when inflation ends, ε(tf) = 1. It is convenient to work in the geometric framework, where we can use (5.1) and (5.2). Then we translate the formulas to the inflaton framework by means of the map of appendix A. Expressing every quantity as a function of ε, (7.1) gives (7.2). The O(ε⁰) corrections are not very meaningful, because they depend on the upper bound of integration and ε(tf) = 1 is just a conventional choice. To the leading order, we can take N ≃ 1/(2ε) in the geometric framework. Note that in the inflaton framework we have instead N ≃ √3/(2√ε), as can be shown using (A.6). Once expressed in terms of N, the predictions obtained in the two frameworks agree (see appendix A); they are collected in (7.3). The formula of nT comes from (3.36), since in this particular case the inflaton framework is more powerful than the geometric framework. We see that the predictions for AR and nR − 1 coincide with the ones of the R + R² model. Instead, the predictions for AT, r and nT are smaller by a factor 2m²χ/(m²φ + 2m²χ). Note that (7.3) implies the relation r ≃ −8nT, (7.4), which is known to hold in single-field slow-roll models independently of the scalar potential V(φ) (see for example ref. [46]). It is a nontrivial fact that it does not depend on mχ, besides N and mφ.
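The two ε-N dictionaries quoted in the surrounding text can be cross-checked in a few lines. The sketch below assumes only the relations stated here and in section 7: nR − 1 = −4ε with N ≃ 1/(2ε) in the geometric framework, and N ≃ √3/(2√ε) in the inflaton framework; both reproduce the Starobinsky tilt −2/N.

```python
import numpy as np

N = 60.0
eps_geometric = 1.0 / (2.0 * N)            # from N ~ 1/(2 eps)
eps_inflaton  = 3.0 / (4.0 * N**2)         # from N ~ sqrt(3)/(2 sqrt(eps))

print("n_R - 1 (geometric, -4 eps):", -4.0 * eps_geometric)
print("n_R - 1 (Starobinsky, -2/N):", -2.0 / N)
print("N recovered from eps_inflaton:", np.sqrt(3.0) / (2.0 * np.sqrt(eps_inflaton)))
```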
The bound (4.9) on mχ is also a prediction of the theory, required by the consistency of the fakeon projection with inflationary cosmology. Because of it, the tensor-to-scalar ratio r and the spectral index nT are predicted within less than one order of magnitude. Precisely, 1/9 ≲ N²r/12 ≃ −(2N²/3) nT ≲ 1, formula (7.5). The allowed values of r are shown in figure 1, where the vertical lines denote the minimum and maximum values of N in the range nR = 0.9649 ± 0.0042 at 68% CL [23]. The windows (7.5) are compatible with the data available at present, which give r < 0.1 [23]. The results of this paper also provide corrections to the amplitudes, which can be used to estimate the theoretical errors. From (7.2) we find the next-to-leading corrections, so we obtain the corrected expressions for the amplitudes. With N = 60, the first correction to AT is between 0.3% (mχ = mφ/4) and 2.5% (mχ → ∞). Although AR and nR − 1 do not depend on mχ in our approximation, they will at higher orders.
Conclusions
We have worked out the predictions of quantum gravity with fakeons on inflationary cosmology. By expanding around the de Sitter background, the amplitudes and spectral indices of the scalar and tensor fluctuations have been calculated to the next-to-leading orders, comparing different frameworks, which lead to matching results. The physical content of the theory is exhausted by the two power spectra. The vector degrees of freedom, as well as the other scalar and tensor ones, are handled by means of the fakeon prescription and projected away. The methodologies we have developed to deal with this operation appear to be generalizable to higher orders. The local, renormalizable, unitary, perturbative quantum field theory of gravity considered in this paper depends only on four parameters: the cosmological constant, Newton's constant, mφ and mχ. The values of the cosmological constant and Newton's constant are known. It will be possible to derive the values of mφ and mχ from nR and r once new cosmological data become available [47]. At that point, the theory will be uniquely determined and all other predictions (tensor tilt, running of the spectral indices, and so on) will be stringent tests of its validity. The consistency of the approach puts a lower bound on the mass mχ of the fakeon with respect to the mass mφ of the scalar field. The tensor-to-scalar ratio r is determined within less than an order of magnitude. Moreover, the relation r = −8nT is not affected by mχ within our approximation. A separate analysis is required to study the case where the consistency bound on mχ is violated and to work out the consequences of the violation on the physics of the primordial universe. Finally, the investigation of this paper and the results we have obtained shed light on the problem of understanding purely virtual particles in curved space.
A Map between the frameworks
The relations can be extended to arbitrary orders, if needed. We find η̄ = O(ε̄^{1/2}) and dⁿε̄/dtⁿ = H̄ⁿO(ε̄^{(n+2)/2}), which justifies the organization (3.3) of the expansion around the de Sitter background in the inflaton framework. In particular, inverting ε̄(ε) we get the expansions (A.6) and (A.7), respectively. Further corrections can be omitted for our purposes, since they do not affect the quadratic action and the two-point functions. We recall that the action is expanded around a solution of the equations of motion (which is then expanded around the de Sitter metric, which is not an exact solution), so the linear terms in the fluctuations are absent.
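Finally, the window (7.5) can be turned into numbers with a few lines of Python, using only quantities quoted above: N estimated from the measured tilt via nR − 1 ≃ −2/N, the bounds 1/9 ≤ N²r/12 ≤ 1, and r ≃ −8nT. This is a reader's sketch, not a calculation from the paper.

```python
for n_R in (0.9649 - 0.0042, 0.9649 + 0.0042):   # 68% CL range quoted above
    N = 2.0 / (1.0 - n_R)
    r_lo, r_hi = (4.0 / 3.0) / N**2, 12.0 / N**2
    print(f"n_R = {n_R:.4f}: N = {N:.0f}, "
          f"{r_lo:.1e} < r < {r_hi:.1e}, "
          f"{-r_hi / 8:.1e} < n_T < {-r_lo / 8:.1e}")
```

Both endpoints sit comfortably below the quoted observational limit r < 0.1.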
Switching from one framework to the other, the corrections just mentioned affect the cubic terms, but not the quadratic ones. From (A.9) we derive the transformation of the curvature perturbation R. Observe that, given a scalar Y = Y0 + δY, where δY denotes the fluctuation around its background value Y0, the combination (A.11) is invariant, where φ0 is the background value of φ = φ0 + δφ, such that W0 = e^{−κφ0}. We recall that in subsection 3.2 the comoving gauge δφ = 0 was used, so we just had R̄ = Ψ̄ there. Equations (A.11) and (A.12) prove that R = R̄, so R is also invariant when we switch frameworks. This fact, together with (A.8), ensures that the power spectra calculated in the paper coincide in the two frameworks. We can use the formulas (A.6) to check it explicitly to the orders we have been working with. Comparing (3.36) with (5.8), (5.26) and (5.9), we find ĀT(H̄, ε̄) = AT(H, ε) and n̄T(ε̄) = nT(ε).
B Superhorizon evolution
In this appendix we show that the curvature perturbation R can be considered constant on superhorizon scales for adiabatic fluctuations of the energy-momentum tensor, in particular after the metric fluctuations exit the horizon and before they re-enter it. We start by showing this result in the inflaton framework. Consider the energy-momentum tensor Tµν with components expressed in terms of W = Ψ + Φ, the contributions of the scalar field φ being moved into Tµν. It is possible to show that formulas (B.2), together with the Friedmann equations
12,783.4
2020-07-01T00:00:00.000
[ "Physics" ]
Comparison of early surgical outcomes of robotic and laparoscopic colorectal cancer resection reported by a busy district general hospital in England Robotic platforms provide a stable tool with high-definition views and improved ergonomics compared to laparoscopic approaches. The aim of this retrospective study was to compare the intra- and short-term postoperative results of oncological resections performed robotically (RCR) and laparoscopically (LCR) at a single centre. Between February 2020 and October 2022, retrospective data on RCR were compared to LCR undertaken during the same period. Parameters compared include total operative time, length of stay (LOS), re-admission rates and 30-day morbidity. 100 RCR and 112 LCR satisfied the inclusion criteria. There was no difference between the two groups' demographic and tumour characteristics. Overall, median operative time was shorter in the LCR group [200 vs. 247.5 min, p < 0.005], but this advantage was not observed with pelvic and multi-quadrant resections. There was no difference in the rate of conversion [5 (5%) vs. 5 (4.5%), p > 0.95]. With respect to perioperative outcomes, there was no difference in the overall morbidity or mortality between RCR and LCR, in particular requirement for blood transfusion [3 (3%) vs. 5 (4.5%), p 0.72], prolonged ileus [9 (9%) vs. 15 (13.2%), p 0.38], surgical site infections [5 (4%) vs. 5 (4.4%), p > 0.95], anastomotic leak [7 (7%) vs. 5 (4.4%), p 0.55], and re-operation rate [9 (9%) vs. 7 (6.3%), p 0.6]. RCR had shorter LOS by one night, but this did not reach statistical significance. No difference was observed in completeness of resection, but there was a statistically significant increase in lymph node harvest in the robotic series. The robotic approach to oncological colorectal resections is safe, with comparable intra- and peri-operative morbidity and mortality to laparoscopic surgery. Despite the falling incidence rates of colorectal cancer due to national screening programmes, colorectal cancer remains the third most common cancer in both sexes, accounting for 10% of all malignant disease and conspicuously holding second rank in global cancer-related deaths at 9.4%1. While neo-adjuvant chemoradiotherapy has afforded 20-35% of patients with rectal cancer a complete pathological response2, complemented by a judicious 'watch and wait' policy3,4, resectional surgery remains the curative treatment modality of choice for colon and rectal cancer. The last few decades have seen a shift from conventional open access surgery to advanced laparoscopic techniques, owing to the benefits of reduced tissue trauma, attenuated stress response, reduced post-operative pain, early ambulation, shorter in-patient stay and a more desirable cosmesis, without compromising long-term oncological outcomes. As such, minimally invasive surgery is now considered the gold-standard approach. However, laparoscopic techniques are reliant on the experience of the assistant and are plagued by unstable images and difficulty in achieving good, sustained exposure, especially in the confined space of the pelvis.
Inclusion and exclusion criteria Selection criteria for RCR centred on factors conducive to minimal access surgery, including but not limited to body mass index (BMI) ≤ 35, a non-hostile abdomen and adequate physiological reserve for a sustained pneumoperitoneum. Only confirmed cancers of the colorectum, and lesions with suspicious histopathological features without conclusive invasion but with radiological features incongruent with benign disease, were considered for analysis. All malignancies were primary and only resections with curative intent have been included. Emergent, palliative, beyond-TME (except for isolated pre-sacral fascial dissection), exenterative and those requiring simultaneous non-pelvic visceral/organ resections were excluded, as were open and laparo-endoscopic procedures. Variables Patient demographics: age, sex, BMI, American Society for Anaesthesiology (ASA) grading and World Health Organisation (WHO) performance status were collected from patients' notes and electronic medical records. Tumour characteristics and intraoperative parameters identified for comparison included surgical procedure, stoma formation, access time (from first skin incision to completion of robotic docking), console time (from completion of docking to commencement of the extraction site), total operative time (TOT: from first skin incision until suturing of the last incision) and conversion (open extension of the initially planned incision and/or switching to a laparoscopic approach after robotic commencement). Outcome measures include 30-day post-operative complications conforming to the Clavien-Dindo classification. Anastomotic leaks were considered in cases of clinical or radiological features of anastomotic dehiscence. Prolonged ileus was defined as an ileus exceeding four days' duration. Requirement for transfusion was used as a surrogate marker of significant blood loss, as this was inconsistently documented, or marked as negligible. Length of in-patient stay (LOS) measured in nights, re-admission rates and 90-day mortality were also recorded. Specimen quality, measured by completeness of resection and lymph node yield, is presented. Pre-operative All patients underwent a standard preoperative workup and discussion in the local colorectal cancer multidisciplinary meeting. Mechanical bowel preparation and oral antibiotics were given for all left-sided procedures, while oral antibiotics alone were prescribed for all right-sided and semi-obstructive left-sided lesions. Surgical technique For all left-sided resections, a modified Lloyd-Davies position was adopted, with 23 degrees head-down for RCR and more extreme cephalad dependence for LCR. For right-sided resections, patients were positioned supine or in Lloyd-Davies as per the surgeon's preference and the extent of intended lymphadenectomy. For RCR, pneumoperitoneum was achieved via Veress needle insufflation or open access via the intended stoma or extraction site. The open Hassan technique was used to establish pneumoperitoneum in all laparoscopic cases. Standard medial-to-lateral dissection respecting the avascular embryological planes and high pedicle division between Hem-o-lok clips was performed for all resections. All anastomoses were stapled. Left-sided anastomoses were fashioned end-to-end with a 29 mm intra-luminal circular stapler (Touchstone International Medical Science Co.
Ltd, Suzhou, China). Most right-sided anastomoses were performed extra-corporeally as a standard Barcelona anastomosis with a PROXIMATE® TLC75 linear stapler (Ethicon). Seven right-sided RCR were performed intracorporeally as iso-peristaltic anastomoses, using a SureForm 60 linear stapler (Intuitive Surgical, Inc., Sunnyvale, CA, USA) and robotic oversew of the enterotomy. Specimen extraction was via a Pfannenstiel incision, extension of a midline port or through the intended stoma site. All robotic resections were performed using the single-console DaVinci Xi (Intuitive Surgical, Inc., Sunnyvale, CA, USA). A four-port technique (Fig. 1) with 7-8 cm of separation and an additional 12 mm assistant port (AirSeal®, Applied Medical, USA) was used for all resections, as was the two right and one left-handed instrument configuration. Post-operative management Post-operative care for both LCR and RCR, unless otherwise contraindicated, followed the ERAS protocol12. Nasogastric tubes were removed prior to anaesthetic reversal, while peritoneal drains (rarely present) and Foley catheters were withdrawn on postoperative day one. Anticoagulation for venous thromboembolism prophylaxis was started 6 h after surgery. Early ambulation and a non-restrictive diet were encouraged from post-operative day one. Discharge was dependent on adequate pain control with oral analgesia, bowels opening, mobility and stoma independence. Statistical analysis All statistical analysis was performed using GraphPad Prism (Version 9.4.1, GraphPad Prism Software, U.S.A.). Continuous data are presented as mean and standard deviation, and differences were tested using Student's t-test, unless the data followed a non-Gaussian distribution. Differences in proportions were assessed using Fisher's exact test unless otherwise stated, with the p value set at 0.05 significance. Ethical approval Barking, Havering and Redbridge NHS Trust research ethics committee exempted this study from ethics approval and informed consent because this study was an audit and no patient-identifiable data were used. This study follows institutional guidelines on information and research governance. This study is registered with the trust audit and research department with the unique identifying number L-227-22. This study has been reported in line with the 'Strengthening the reporting of cohort studies in surgery' (STROCSS) criteria13. Informed consent As above, the requirement for informed consent was waived by the Barking, Havering and Redbridge NHS Trust research ethics committee. Results Between February 2020 and October 2022, a total of 308 resections for malignant disease of the colon and rectum were performed electively or semi-electively by the designated surgeons (Fig. 2). 96 cases did not satisfy the inclusion criteria and thus data are presented on the remaining 112 LCR and 100 RCR consecutive cases. The study period incorporated both UK Covid-19 isolation phases, reflecting the reduced operative numbers indicated.
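For readers who want to reproduce the stated analysis plan, the following toy Python snippet (with made-up numbers, not the study data) shows the two declared tests using scipy instead of GraphPad Prism: Fisher's exact test for a 2×2 table of proportions and Student's t-test for Gaussian continuous data.

```python
from scipy import stats

# Fisher's exact test on a hypothetical 2x2 table [events, non-events] per arm.
table = [[9, 91], [7, 105]]
odds_ratio, p = stats.fisher_exact(table)
print(f"Fisher's exact: OR = {odds_ratio:.2f}, p = {p:.2f}")

# Student's t-test on hypothetical continuous data (e.g. ages) in two arms.
group_a = [62, 71, 58, 66, 74, 69]
group_b = [65, 60, 72, 77, 70, 68]
t, p = stats.ttest_ind(group_a, group_b)
print(f"Student's t-test: t = {t:.2f}, p = {p:.2f}")
```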
Demographic data The demographic data for each group are presented in Table 1. There was no difference in the age (p 0.12), sex (p 0.67), ethnicity (p > 0.95), or peri-operative risk profile of patients in the LCR and RCR groups, as measured by previous abdominal surgery (p 0.17), BMI (p 0.55), ASA (p 0.55) and WHO (p 0.48) performance status. Presentation Symptomatic patients in both cohorts were predominantly referred by their primary healthcare provider, via the expedited cancer pathway. A statistically higher proportion of patients attended as emergencies in the RCR group, with symptomatic anaemia or overt lower-gastrointestinal bleeding. Acute appendicitis accounted for one emergency presentation in the RCR arm and both emergency presentations in the LCR group. All patients with impending obstruction were defunctioned prior to definitive planned treatment. Those requiring emergency resection have been excluded from analysis. Collectively, eight patients had a history of malignant disease of the colorectum; six were diagnosed with a non-anastomotic metachronous tumour during their 5-year surveillance period, one had a complete pathological response to neoadjuvant chemoradiotherapy three years previously and had entered surveillance without surgery, and one patient developed a new tumour 10 years after the initial primary. Two patients in the RCR cohort had malignant disease detected during surveillance for inflammatory bowel disease (IBD). Tumour site and characteristics 104 cancers were present in 100 individuals in the RCR cohort, accounting for four cases with synchronous tumours. Two of these patients required dual resection, while the proximity of the cancers in the other two permitted a single resection. There were no synchronous tumours in the LCR group. Tumour characteristics are outlined in Table 2. There were no statistically significant differences in tumour distribution (p 0.44), grade (p 0.72), clinical stage (p 0.5), histology (p 0.58), or size (p 0.69) between the RCR and LCR groups. Twenty-three patients did not have confirmed invasive disease prior to resection: 17 in the RCR group and 6 in the LCR group (p < 0.05). Surgery was offered following multi-disciplinary consensus, based on highly suspicious clinical, histological and radiological features, inability to excise endoscopically and individual choice. All six in the LCR group and 12 in the RCR group were subsequently diagnosed with invasive disease on assessment of the resected specimen. Operative data For the purpose of analysis, resections were categorised as (1) right-sided; (2) left-sided, encompassing left hemicolectomies (LH), high anterior resections (HAR) and Hartmann's procedures (HP); (3) pelvic, comprising low anterior resections (LAR) and abdominoperineal excision of the rectum (APER); and (4) multi-quadrant, comprising subtotal colectomies (STC), pan-proctocolectomies (PPC) and dual resections. The latter was necessary in two of the four patients with synchronous tumours. Both underwent a right hemicolectomy, with either a HAR or an APER. Overall and resection-specific comparisons are presented in Tables 3 and 4 respectively. More pelvic resections were performed in the laparoscopic group because, in our robotic learning curve, high anterior resections and right hemicolectomies were required prior to a TA300 course and learning TME with splenic flexure mobilisation.
Seven of the 45 robotic right resections had intracorporeal anastomoses. Five patients in each group were converted to a laparotomy due to under-staging of the tumour or technical difficulties hindering progress (p > 0.95); only one of these patients had had previous abdominal surgery. Stoma formation (p > 0.95), neoadjuvant pelvic irradiation (p = 0.33) and sphincter preservation (p = 0.50) rates, whether sphincters were lost through excision or permanent discontinuity, were also comparable between the robotic and laparoscopic approaches.

There was considerable variability in both the access time [median 28 min (IQR 21-39, range 89)] and the console time [median 140 min (IQR 86.3-217, range 570)], irrespective of the laterality of the resection. There was a significant difference in the overall TOT between the minimally invasive approaches (median 247.5 vs. 200 min, p < 0.05) in favour of LCR. This advantage did not hold true when comparing pelvic (p = 0.21) and multi-quadrant resections (p = 0.66).

Oncological parameters

There was no difference in the completeness of resection between RCR and LCR (p = 0.04). However, lymph node retrieval was significantly increased (p = 0.02) with the robotic approach as compared to the laparoscopic approach, in particular for multi-quadrant resections. A total of four R1 resections were identified among the 100 robotic surgery patients, occurring in three patients. One patient had synchronous right-sided colon cancer and low rectal cancer; the right-sided specimen was classified as an R1 resection due to a positive lymph node and a positive circumferential margin (0.5 mm posteriorly, at the site of a previous tumour perforation). This synchronous resection was performed early in the learning curve. The other two R1 robotic resections were identified in the cohorts of two other surgeons: one was a right hemicolectomy that was understaged radiologically, and the last was a low rectal resection, also performed early in the learning curve. Six (3%) patients did not have malignancy in the resected specimen: one had an appendiceal low-grade appendiceal mucinous neoplasm (LAMN) with no residual disease, and five had only suspicious pre-operative disease (high-grade dysplasia, HGD).

Complications and convalescence

The overall morbidity (Clavien-Dindo I-IV) and complications requiring intervention or re-operation (Clavien-Dindo III and IV) for RCR and LCR were likewise comparable [10.8% vs. 9.9%, p = 0.82] (Table 3). Specifically, there were 7 (6.9%) and 5 (4.4%) anastomotic leaks in the RCR and LCR groups respectively (p = 0.55). All leaks in the RCR cohort required surgery, with 100% anastomotic preservation; by contrast, three of the five LCR anastomotic dehiscences required surgery beyond antibiotics and nutritional support. Length of stay (LOS) for RCR was on average one day shorter than for LCR, but this did not reach statistical significance (p = 0.09). Readmission rates remained similar between both groups (p > 0.9).

Discussion

Robotic surgery has gained significant traction in recent years, particularly in the field of rectal and pelvic surgery, owing to its enhanced visualization, improved articulation, greater precision and ergonomic advantages over laparoscopic approaches. Systematic reviews suggest that robotic TME for rectal cancer has oncological and recovery parameters comparable to laparoscopic, open or TaTME approaches 14 , with reduced rates of conversion compared to LCR 15 , akin to the data presented in this series, where no difference in CRM positivity between the two minimally invasive approaches for right, left and rectal lesions was seen.
In addition, we report a small but significant increase in lymph node harvest in robotic resections. While the immaturity of our data prevents extrapolation of a survival advantage, an increased lymph node yield is not unique in the literature 9,10 , and longitudinal studies report comparable medium- and long-term oncological outcomes with RCR 16,17 .

This paper presents our unit's initial experience with utilizing a robotic platform for the surgical treatment of colorectal cancer. We report on the first 100 consecutive robotic colorectal cancer resections performed by a robotic team comprising three surgeons. This experience is compared to a cohort of patients representing all laparoscopic resections performed by the same team within the same period. While acknowledging the early phase of adopting this technology, the findings suggest that robotic colorectal surgery is safe, feasible and comparable to the established laparoscopic approach. These findings align with other studies reporting on robotic colorectal surgery [18][19][20] . Despite the prolonged surgical times for non-pelvic RCR, RCR was not associated with increased VTE, conversion, peri-operative morbidity, unscheduled re-operation or mortality rates, reflecting the findings of a recent systematic review of randomised controlled trials comparing laparoscopic and robotic resections 21 . Pelvic and multi-quadrant resections that included the pelvis did not require additional operative time with the robotic approach, signifying the widely accepted advantage of multi-articulating instruments in pelvic dissection. Based on our experience, in the setting of established laparoscopic expertise in daily practice, we consider that a level of comfort can be reached in 30-50 cases. We also noticed from our experience that high-volume experience with laparoscopic surgery is a very important factor in shortening the robotic learning curve. Further learning-curve studies are needed to analyse the performance of surgeons who adopt robotic surgery from an open or a laparoscopic surgical background.

In addition, we report a reduction in the average length of in-patient stay by one day. While this did not reach statistical significance, it may potentially offset the cost of the increased intra-operative time and disposables associated with robotic surgery. Earlier discharge and reduced morbidity have been reported by other centres 22,23 . The explanation for the reduction in hospital stay is beyond the scope of this study, but it is likely to be multifactorial, including patient expectation and reduced post-operative pain leading to early ambulation. It is feasible that unconscious bias in peri-operative management may have influenced earlier discharge, but the analogous re-admission rates between LCR and RCR suggest the clinical appropriateness of the discharges. Earlier series report longer in-patient stays and higher rates of admission to high-dependency units with robotic resections. While the latter may reflect a true clinical need, over-judicious intensivist support when utilising new surgical techniques was also observed with the introduction of laparoscopic surgery, which may account for this observation. Increased utilisation of level 2 and 3 post-operative support was not the practice or the experience of this unit.
The inherent bias and limitations of retrospective data analysis are acknowledged by the authors. In addition, we have included all procedures from the introduction of the DaVinci Xi and have thus incorporated the learning curves of all three surgeons, with the implication of underestimating the potential benefit of the robotic approach. The novelty of robotic surgery is reflected in the high variability in peritoneal access, docking and console time reported. As the largest case series reported by a district general hospital in England, our results are in line with those obtained in tertiary centres and show that the technique is feasible and reproducible in smaller centres with appropriate training.

As the similarity of our patient groups suggests, it is reasonable to anticipate that, with enhanced experience, the average intra-operative time will ultimately align with that of LCR. Indeed, it has been demonstrated that RCR has a flatter learning curve for both inexperienced and experienced minimally invasive surgeons, and that the operative-duration advantage of LCR diminishes with increased RCR case load, such that robotic TME becomes the briefer procedure 24 . Thus, high volumes and a standardised system of operation may help to further reduce the cost of robotic surgery over time 25 . This is a comparative series and, as such, no power calculations were performed to determine the numbers required to avoid type II errors; as a non-randomised study, unintentional bias may also have influenced the potential advantages demonstrated.

Conclusions

This single-centre study, representing the largest case series over two years at a district general hospital in England, investigated the non-inferiority of a robotic approach compared to laparoscopy for colorectal cancer treatment. Although the robotic data included cases from the learning curve, it demonstrated favourable surgical outcomes, including increased lymph node yield and shorter hospital stays, compared to laparoscopy. These findings suggest that robotics is a viable and effective alternative to laparoscopy for colorectal surgery, and that the learning curve for experienced laparoscopic surgeons might be shallower with robotics. However, further research on the learning curve is warranted.

Table 1. Demographic data and presentation.
Feature Selection with the Boruta Package

This article describes the R package Boruta, implementing a novel feature selection algorithm for finding all relevant variables. The algorithm is designed as a wrapper around a random forest classification algorithm. It iteratively removes the features which are proved by a statistical test to be less relevant than random probes. The Boruta package provides a convenient interface to the algorithm. A short description of the algorithm and examples of its application are presented.

Introduction

Feature selection is often an important step in applications of machine learning methods, and there are good reasons for this. Modern data sets are often described with far too many variables for practical model building. Usually most of these variables are irrelevant to the classification, and obviously their relevance is not known in advance. There are several disadvantages of dealing with overlarge feature sets. One is purely technical: dealing with large feature sets slows down algorithms, takes too many resources and is simply inconvenient. Another is even more important: many machine learning algorithms exhibit a decrease of accuracy when the number of variables is significantly higher than optimal (Kohavi and John 1997). Therefore selection of the small (possibly minimal) feature set giving the best possible classification results is desirable for practical reasons. This problem, known as the minimal-optimal problem (Nilsson, Peña, Björkegren, and Tegnér 2007), has been intensively studied and there are plenty of algorithms which were developed to reduce the feature set to a manageable size.

Nevertheless, this very practical goal shadows another very interesting problem: the identification of all attributes which are in some circumstances relevant for classification, the so-called all-relevant problem. Finding all relevant attributes, instead of only the non-redundant ones, may be very useful in itself. In particular, this is necessary when one is interested in understanding mechanisms related to the subject of interest, instead of merely building a black box predictive model. For example, when dealing with results of gene expression measurements in the context of cancer, identification of all genes which are related to cancer is necessary for complete understanding of the process, whereas a minimal-optimal set of genes might be more useful as genetic markers. A good discussion outlining why finding all relevant attributes is important is given by Nilsson et al. (2007).

The all-relevant problem of feature selection is more difficult than the usual minimal-optimal one. One reason is that we cannot rely on the classification accuracy as the criterion for selecting a feature as important (or rejecting it as unimportant). The degradation of the classification accuracy, upon removal of the feature from the feature set, is sufficient to declare the feature important, but lack of this effect is not sufficient to declare it unimportant. One therefore needs another criterion for declaring variables important or unimportant. Moreover, one cannot use filtering methods, because the lack of direct correlation between a given feature and the decision is not a proof that this feature is not important in conjunction with the other features (Guyon and Elisseeff 2003). One is therefore restricted to wrapper algorithms, which are computationally more demanding than filters.
In a wrapper method the classifier is used as a black box returning a feature ranking; therefore one can use any classifier which can provide the ranking of features. For practical reasons, a classifier used in this problem should be both computationally efficient and simple, possibly without user-defined parameters.

The current paper presents an implementation of an algorithm for finding all relevant features in the information system in an R (R Development Core Team 2010) package Boruta (available from the Comprehensive R Archive Network at http://CRAN.R-project.org/package=Boruta). The algorithm uses a wrapper approach built around a random forest (Breiman 2001) classifier (Boruta is a god of the forest in Slavic mythology). The algorithm is an extension of the idea introduced by Stoppiglia, Dreyfus, Dubois, and Oussar (2003) to determine relevance by comparing the relevance of the real features to that of random probes. Originally this idea was proposed in the context of filtering, whereas here it is used in a wrapper algorithm. In the remaining sections of this article, a short description of the algorithm is given first, followed by examples of its application on a real-world and an artificial data set.

Boruta algorithm

The Boruta algorithm is a wrapper built around the random forest classification algorithm implemented in the R package randomForest (Liaw and Wiener 2002). The random forest classification algorithm is relatively quick, can usually be run without tuning of parameters, and gives a numerical estimate of the feature importance. It is an ensemble method in which classification is performed by voting of multiple unbiased weak classifiers: decision trees. These trees are independently developed on different bagging samples of the training set. The importance measure of an attribute is obtained as the loss of accuracy of classification caused by the random permutation of attribute values between objects. It is computed separately for all trees in the forest which use a given attribute for classification. Then the average and standard deviation of the accuracy loss are computed. Alternatively, the Z score, computed by dividing the average loss by its standard deviation, can be used as the importance measure. Unfortunately the Z score is not directly related to the statistical significance of the feature importance returned by the random forest algorithm, since its distribution is not N(0, 1) (Rudnicki, Kierczak, Koronacki, and Komorowski 2006). Nevertheless, in Boruta we use the Z score as the importance measure, since it takes into account the fluctuations of the mean accuracy loss among trees in the forest.

Since we cannot use the Z score directly to measure importance, we need some external reference to decide whether the importance of any given attribute is significant, that is, whether it is discernible from importance which may arise from random fluctuations. To this end we have extended the information system with attributes that are random by design. For each attribute we create a corresponding 'shadow' attribute, whose values are obtained by shuffling the values of the original attribute across objects. We then perform a classification using all attributes of this extended system and compute the importance of all attributes.

The importance of a shadow attribute can be nonzero only due to random fluctuations. Thus the set of importances of shadow attributes is used as a reference for deciding which attributes are truly important.
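The shadow-attribute construction is easy to sketch directly in R; the following is an illustrative outline (not the package code), assuming a predictor data frame x and a response factor y, and using the scaled permutation importance of randomForest as the Z score.

R> library(randomForest)
R> shadows <- as.data.frame(lapply(x, sample))      # permuted copy of every column
R> names(shadows) <- paste0("shadow_", names(x))
R> rf <- randomForest(cbind(x, shadows), y, importance = TRUE)
R> z <- importance(rf, type = 1)                    # scaled mean decrease in accuracy
R> mzsa <- max(z[grep("^shadow_", rownames(z)), ])  # best shadow attribute
R> z[!grepl("^shadow_", rownames(z)), 1] > mzsa     # which real attributes score a hit

A single such comparison corresponds to one iteration; Boruta repeats it, as described below, to obtain statistically valid decisions.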
The importance measure itself varies due to the stochasticity of the random forest classifier. Additionally it is sensitive to the presence of non-important attributes in the information system (also the shadow ones). Moreover it is dependent on the particular realization of the shadow attributes. Therefore we need to repeat the re-shuffling procedure to obtain statistically valid results.

In short, Boruta is based on the same idea which forms the foundation of the random forest classifier, namely, that by adding randomness to the system and collecting results from the ensemble of randomized samples one can reduce the misleading impact of random fluctuations and correlations. Here, this extra randomness shall provide us with a clearer view of which attributes are really important.

The Boruta algorithm consists of the following steps:

1. Extend the information system by adding copies of all variables (the information system is always extended by at least 5 shadow attributes, even if the number of attributes in the original set is lower than 5).
2. Shuffle the added attributes to remove their correlations with the response.
3. Run a random forest classifier on the extended information system and gather the Z scores computed.
4. Find the maximum Z score among shadow attributes (MZSA), and then assign a hit to every attribute that scored better than MZSA.
5. For each attribute with undetermined importance, perform a two-sided test of equality with the MZSA.
6. Deem the attributes which have importance significantly lower than MZSA as 'unimportant' and permanently remove them from the information system.
7. Deem the attributes which have importance significantly higher than MZSA as 'important'.
8. Remove all shadow attributes.
9. Repeat the procedure until importance is assigned for all the attributes, or the algorithm has reached the previously set limit of random forest runs.

In practice this algorithm is preceded by three start-up rounds with less restrictive importance criteria. The start-up rounds are introduced to cope with high fluctuations of Z scores when the number of attributes is large at the beginning of the procedure. During these initial rounds, attributes are compared respectively to the fifth, third and second best shadow attribute; the test for rejection is performed only at the end of each initial round, while the test for confirmation is not performed at all.

The time complexity of the procedure described above is in realistic cases approximately O(P · N), where P and N are respectively the numbers of attributes and objects. That may be time consuming for large data sets; still, this effort is essential to produce a statistically significant selection of relevant features.

To illustrate the scaling properties of the Boruta algorithm, we performed the following experiment using the Madelon data set. It is an artificial data set which was one of the NIPS 2003 problems (Guyon, Gunn, Ben-Hur, and Dror 2005). The data set contains 2000 objects described with 500 attributes. We generated subsamples of the Madelon set containing 250, 500, 750, ..., 2000 objects. Then for each subsample we created seven extended sets containing respectively 500, 1000,
..., 3500 superficial attributes obtained as uniform random noise. Then we performed standard feature selection with Boruta on each of the 64 test sets and measured the execution time. The results of the experiment are displayed in Figure 1. One may see almost perfectly linear scaling for an increasing number of attributes. On the other hand, execution times grow faster than the number of objects, but the difference is not very big and it seems to converge to linear scaling for a large number of objects.

The timings are reported in CPU hours. Using the values from the largest data set, one can estimate the time required to complete a Boruta run on a single core of a modern CPU to be one hour per one million (attribute × object) combinations.

One should notice that in many cases, in particular for biomedical problems, the computation time is a small fraction of the time required to collect the data. One should also note that the prime reason for running the 'all-relevant' feature selection algorithm is not the reduction of computation time (although this can be achieved if a data set pruned of non-informative attributes is subsequently analysed numerous times). The main reason is to find all attributes for which the correlation with the decision is higher than that of the random attributes. Moreover, while Boruta is generally a sequential algorithm, the underlying random forest classifier is a trivially parallel task and thus Boruta can be distributed even over hundreds of cores, provided that a parallel version of the random forest algorithm is used.

Using the Boruta package

The Boruta algorithm is implemented in the Boruta package.

R> library("Boruta")
R> library("mlbench")
R> data("Ozone")
R> Ozone <- na.omit(Ozone)

The algorithm is performed by the Boruta function. For its arguments, one should specify the model, either using a formula or a predictor data frame with a response vector; the confidence level (which is recommended to be left at the default); and the maximal number of random forest runs, for example:

R> Boruta.Ozone <- Boruta(V4 ~ ., data = Ozone, doTrace = 2)

One can also provide values of the mtry and ntree parameters, which will be passed to the randomForest function. Normally the default randomForest parameters are used; they will be sufficient in most cases, since random forest performance has a rather weak dependence on its parameters. If it is not the case, one should try to find mtry and ntree values for which the random forest classifier achieves convergence at a minimal value of the OOB error.

Setting the doTrace argument to 1 or 2 makes Boruta report the progress of the process; version 2 is a little more verbose, namely it shows attribute decisions as soon as they are cleared.

Figure 2 shows the variability of Z scores among attributes during the Boruta run. It can easily be generated using the plot method of the Boruta object:

R> plot(Boruta.Ozone)

One can see that the Z score of the most important shadow attribute clearly separates important and non-important attributes. Moreover, attributes which consistently receive high importance scores in the individual random forest runs are selected as important. On the other hand, one can observe quite sizeable variability of the individual scores. The highest score of a random attribute in a single run is higher than the highest importance score of two important attributes, and than the lowest importance score of five important attributes. This clearly shows that the results of Boruta are generally more stable than those produced by feature selection methods based on a single random forest run, and this is why several iterations are required.
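The decisions and the importance statistics gathered during the run can also be inspected numerically with the attStats helper provided by the package; a brief example, reusing the Boruta.Ozone object created above:

R> stats <- attStats(Boruta.Ozone)
R> head(stats)                               # mean/median Z, hit ratio, decision
R> stats[stats$decision == "Confirmed", ]    # restrict to confirmed attributes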
Because the number of random forest runs during Boruta is limited by the maxRuns argument, the calculation can be forced to stop prematurely while there are still attributes which are judged neither to be confirmed nor rejected, and which are thus finally marked tentative. For instance:

R> set.seed(1)
R> Boruta.Short <- Boruta(V4 ~ ., data = Ozone, maxRuns = 12)

(The number of steps and the seed were intentionally selected to show this effect on the familiar data set. Due to slight differences between the Windows and Linux versions of the randomForest package, which probably arise from compilation, the actual results of the procedure described above might differ slightly from the results shown here; these were obtained in R version 2.10.0 with randomForest version 4.5-33 on an x86-64 Linux workstation.)

One should consider increasing the maxRuns parameter if tentative attributes are left. Nevertheless, there may be attributes with importance so close to MZSA that Boruta won't be able to make a decision with the desired confidence in a realistic number of random forest runs. Therefore the Boruta package contains a TentativeRoughFix function, which can be used to fill the missing decisions by a simple comparison of the median attribute Z score with the median Z score of the most important shadow attribute:

R> TentativeRoughFix(Boruta.Short)

Running Boruta on the Madelon data (execution may take a few hours):

R> set.seed(7777)
R> Boruta.Madelon <- Boruta(decision ~ ., data = Madelon)
R> Boruta.Madelon
Boruta performed 51 randomForest runs in 1.861855 hours.
20 attributes confirmed important:
(the rest of the output was omitted)

One can see that we have obtained 20 confirmed attributes. The plotZHistory function visualizes the evolution of attributes' Z scores during a Boruta run:

R> plotZHistory(Boruta.Madelon)

The result can be seen in Figure 3. One may notice that the consecutive removal of random noise increases the Z score of important attributes and improves their separation from the unimportant ones; one of them is even 'pulled' out of the group of unimportant attributes just after the first initial round. Also, on certain occasions, unimportant attributes may achieve a higher Z score than the most important shadow attribute, and this is the reason why we need multiple random forest runs to arrive at a statistically significant decision.

The reduction of the attribute number is considerable (96%). One can expect that an increase in the accuracy of a random forest classifier can be obtained on the reduced data set, due to the elimination of noise.

It is known that a feature selection procedure can introduce significant bias in the resulting models. For example, Ambroise and McLachlan (2002) have shown that, with the help of a feature selection procedure, one can obtain a classifier which uses only non-informative attributes and is 100% accurate on the training set. Obviously such a classifier is useless and returns random answers on the test set.

Therefore it is necessary to check whether Boruta is resistant to this type of error. This is achieved with the help of a cross-validation procedure. Part of the data is set aside as a test set. Then the complete feature selection procedure is performed on the remaining data, the training set. Finally the classifier obtained on the training set is used to classify objects from the test set to obtain the classification error. The procedure is repeated several times to obtain an estimate of the variability of the results.

Boruta performs several random forest runs to obtain a statistically significant division between important and irrelevant attributes. One should expect that the ranking obtained from a single random forest run should be quite similar to that obtained from Boruta. We can check if this is the case, taking advantage of the cross-validation procedure described above.
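A condensed sketch of a single fold of this check, which is described in detail below, might look as follows (assuming the Madelon predictors x and the decision y are already in memory; object names are hypothetical):

R> set.seed(1)
R> test <- sample(nrow(x), round(0.1 * nrow(x)))     # hold out 10% as a test set
R> bor <- Boruta(x[-test, ], y[-test])
R> sel <- names(bor$finalDecision)[bor$finalDecision == "Confirmed"]
R> rf.all <- randomForest(x[-test, ], y[-test], importance = TRUE)
R> imp <- sort(importance(rf.all, type = 1)[, 1], decreasing = TRUE)
R> top <- names(imp)[seq_along(sel)]                 # same number of top-ranked attributes
R> rf.bor <- randomForest(x[-test, sel], y[-test])
R> rf.top <- randomForest(x[-test, top], y[-test])
R> mean(predict(rf.bor, x[test, sel]) != y[test])    # test error, Boruta selection
R> mean(predict(rf.top, x[test, top]) != y[test])    # test error, plain RF ranking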
The Madelon data was split ten times into train and test sets containing respectively 90% and 10% of the objects. Then, Boruta was run on each train set. Also, three random forest classifiers were generated on each train set: the first using all attributes, the second using only those attributes that were selected by Boruta, and the third using the same number of attributes as found by Boruta, but selected as the top important ones by the first random forest trained on all attributes. Finally, the OOB error estimate on the train set and the error on the test set were collected for all classifiers.

The results are shown in Table 1 (the mean of the differences is -0.182). As one may expect, the feature ranking provided by a plain random forest agrees fairly well with the Boruta results. This explains why the simple heuristic feature selection procedure in random forest, namely selecting a dozen or so top-scoring attributes, works well for obtaining good classification results. Nevertheless, this will not necessarily be the case when dealing with larger and more complex sets, where stochastic effects increase the variability of the random forest importance measure and thus destabilize the feature ranking.

One should note that Boruta is a heuristic procedure designed to find all relevant attributes, including weakly relevant attributes. Following Nilsson et al. (2007), we say that an attribute is weakly important when one can find a subset of attributes among which this attribute is not redundant. The heuristic used in Boruta implies that attributes which are significantly correlated with the decision variable are relevant, where significance means that the correlation is higher than that of the randomly generated attributes. Obviously the set of all relevant attributes may contain highly correlated but still redundant variables. Also, the correlation of an attribute with the decision does not imply a causative relation; it may arise when both the decision attribute and the descriptive attribute are independently correlated with some other variable. An illustrative example of such a situation was given by Strobl, Hothorn, and Zeileis (2009). Users interested in finding a set of highly relevant and uncorrelated attributes within the result returned by Boruta may use, for example, the packages party (Strobl et al. 2009), caret (Kuhn 2008; Kuhn, Wing, Weston, Williams, Keefer, and Engelhardt 2010), varSelRF (Diaz-Uriarte 2007, 2010) or FSelector (Romanski 2009) for further refinement.

Summary

We have developed Boruta, a novel random forest based feature selection method, which provides unbiased and stable selection of important and non-important attributes from an information system. Due to its iterative construction, our method can deal both with the fluctuating nature of the random forest importance measure and with the interactions between attributes. We have also demonstrated its usefulness on an artificial data set. The method is available as an R package.

Figure 1: The scaling properties of Boruta with respect to the number of attributes (left) and the number of objects (right). Each line on the left panel corresponds to a set with an identical number of objects, and on the right panel to a set with an identical number of attributes. One may notice that scaling is linear with respect to the number of attributes and not far from linear with respect to the number of objects.
Figure 3: Z score evolution during a Boruta run. Green lines correspond to confirmed attributes, red to rejected ones and blue to, respectively, the minimal, average and maximal shadow attribute importance. Gray lines separate rounds.

Figure 2: Boruta result plot for the ozone data. Blue boxplots correspond to the minimal, average and maximum Z score of a shadow attribute. Red and green boxplots represent the Z scores of, respectively, rejected and confirmed attributes.

Table 1: Cross-validation of the error reduction due to limiting the information system to the attributes claimed confirmed by Boruta. One can see that both the OOB error and the error on the test set are consistently smaller for random forest runs performed on the reduced set of attributes; this observation is verified by a t test.
The elasticity of demand for public health expenditure in South Africa: a cointegration approach

To effectively evaluate the elasticity of demand for public health expenditure in South Africa, this study utilised a demand function approach to specify the functional relationship between public health expenditure, real GDP and other non-income explanatory variables. The study uses a double-log linear regression model to capture the elasticity of public health expenditure in respect of the model's explanatory variables. Empirical results suggest that public health expenditure, GDP, life expectancy and medical inflation were cointegrated over the period of the analysis. The findings also confirmed that the coefficients of these variables were statistically significant and of the expected signs. Specifically, the results reaffirm the importance of GDP and life expectancy as key determinants of health expenditure, both with an elasticity value above unity. The importance of medical inflation was also confirmed, although its effect appears small.

Introduction

There is a widespread belief that health care costs will increase significantly over the next decade as people live longer and expect to be given unconditional access to public health care. This implies that any formal structure or system for delivering health care will eventually be confronted with the inevitable situation of a limited amount of resources to serve a growing patient population (Zere et al., 2001). In South Africa, public health care expenditure accounts for approximately 4 per cent of Gross Domestic Product (GDP) and supports a growing population of medically uninsured persons. South Africa suffers from a quadruple burden of disease, where diseases of development such as communicable diseases co-exist with an expanding problem of chronic diseases and trauma (National Treasury, 2009). According to a report by the Financial and Fiscal Commission (2012), communicable and non-communicable diseases, injury and trauma continue to prevent faster development, with HIV/AIDS, tuberculosis and malaria posing the greatest challenges.

Despite the rising allocations and progress made with the delivery of public health services, the health system continues to be challenged by the large burden of disease, which is not being adequately prevented (National Treasury, 2009). This has prompted policy makers and researchers in health care to mobilise additional resources to address inequalities in access to health care.

Given the fact that a large proportion of public health spending is on district health and hospital services, it is important that a proper assessment of the funding requirements of rendering these services is undertaken. Failure to assess the full cost of providing these services will compromise the objectives of addressing inequality and access to health care. Stuckler et al. (2011) used multivariate regression analysis to examine the determinants of health care funding allocations among the South African provinces between 1996 and 2007. Their study found that the burden of disease was increasingly negatively correlated with funding allocations and that it explained less than a quarter of the variation in allocations among provinces. Moreover, in the case of HIV the study found that since 2002 the scale of the inverse association increased substantially as HIV prevalence rose, while allocations showed no response to the growing burden.
Econometric modelling of health expenditure has been among the most commonly used methods of regression-based analysis in public health research. This approach has been used in studies by Hitiris and Posnett (1992), Hansen and King (1996), Milne and Molana (1991) and Newhouse (1977) to measure the response of health budgets to changes in non-income explanatory variables.

Since Newhouse (1977) drew attention to the correlation between per capita health care spending and per capita GDP, a number of economists have been attracted to the study of the relationship between public health expenditure and its effect on health care provisioning. A study by Hitiris and Posnett (1992) assessed the determinants and effects of health expenditure in developed countries between 1960 and 1987. The study found a negative correlation between expenditure on health care and the burden of disease, as measured by crude mortality rates.

A study by Gerdtham et al. (1992) revealed that health care expenditure differs substantially across countries, regardless of how it is measured. The results of regression analyses used to explain the observed differences in health care expenditure across countries indicate that health care expenditure increases proportionally more than aggregate per capita income.

Studies by Kleinman (1974) and Newhouse (1977) established a strong and positive correlation between national income and expenditure on health care. This is consistent with the findings of Milne and Molana (1991) and Hitiris and Posnett (1992), whose research examined the determinants of aggregate health care expenditure. These studies adopted what is essentially a demand function approach in specifying their models. That is, per capita health care expenditure is hypothesised to be a function of per capita income (GDP) and other non-income variables such as HIV prevalence.

The National Department of Health (NDOH) has driven a number of initiatives to develop and strengthen the health care system in South Africa. Some of these initiatives include, among others, the White Paper for the transformation of the health system in South Africa (1997), the promulgation of the National Health Act (2003) which formalised the legal status of the District Health System (1994), the National Strategic and Ten Point Plan (1999-2004), the Negotiated Service Delivery Agreement (2010-2014), the National Core Standards for Health Establishments in South Africa (2011), the re-engineering of primary health care (PHC) services and the clinic building programme (2011), and the implementation of National Health Insurance (NHI) (2012). These and many other initiatives have increased access and care for the majority of vulnerable South Africans.

However, some of the primary gains have been compromised by a multiplicity of factors, including the quadruple burden of disease, low morale among health personnel, inadequate management systems and gaps between policy intentions and actual implementation (Schneider et al., 2007). As a result, some health outcomes, such as infant mortality, immunisation rates, early childhood malnutrition, and maternal and crude mortality rates, are poor and not proportional to the per capita rates of health expenditure. It is against this background that this study seeks to investigate the elasticity of public health expenditure in South Africa, with a view to identifying the key determinants to which public health expenditure is most responsive.
Through regression analysis the study seeks to measure and demonstrate the variation between public health expenditure and health outcomes, as well as the continued investment in health in the country as its GDP grows. Ultimately, this will show whether or not public health expenditure is responsive to health outcomes, as well as the payoff from this investment in terms of increased longevity.

This paper is divided into four parts. Following this section is Section 2, which discusses the model, data sources and the variables used for this analysis. Section 3 presents and analyses the empirical results. Section 4 concludes the paper.

Model specification

Following the demand function approach, as applied by Hansen and King (1996), Gerdtham et al. (1992), Stuckler et al. (2011), Mullahy (2009) and others, a hypothesised model of health expenditure will be estimated. Specifically, public health expenditure is hypothesised to be a function of real income and a selection of non-income variables, as follows:

PHEt = f(GDPt, LeXt, Medic_inflt)    (1)

In order to capture the elasticity of public health expenditure with respect to the model explanatory variables, the double-log linear regression equation below will be estimated:

ln PHEt = β0 + β1 ln GDPt + β2 ln LeXt + β3 ln Medic_inflt + εt    (2)

where t = 1, ..., 18 indicates years. Our dependent variable, PHEt, represents total public health expenditure in South Africa, in thousands of rands, in year t. The explanatory variables are: GDPt = GDP in constant 2005 prices. GDP was chosen following the findings of Gerdtham et al. (1992) and Hansen and King (1996); the main insight in this case is that an increase in economic output will generate an increase in public health expenditure. LeXt = a measure of average life expectancy at birth, calculated by the Medical Research Council (MRC). The choice of this variable is based on findings by Hitiris and Posnett (1992) and Stuckler et al. (2011); it was chosen in order to identify whether there is variation in the impact of average life expectancy on public health expenditure. Medic_inflt = medical inflation, metropolitan and other urban areas (index 2000 = 100). This variable was chosen as a control variable, to identify whether uncertainty regarding prices has an impact on public health expenditure.

Interpretation of elasticities

The logarithmic specification of Equation 2 ensures that the βi can be interpreted as elasticities (Koop, 2005). The parameter of primary interest in this study will be β2, the elasticity of public health expenditure in respect of average life expectancy, which provides a measure of the responsiveness of public health funding to changes in average life expectancy. Hence, the elasticity coefficient will show how public health expenditure has varied with average life expectancy. Therefore, a positive elasticity value of 0.5, for instance, implies that a one per cent increase in average life expectancy is associated with a half per cent increase in public health expenditure. An elasticity of 1.33 implies that every one per cent increase in average life expectancy is associated with a 1.33 per cent increase in public health expenditure, and so forth. Similar to other regression-based analyses that have explained the variation in public health expenditure across countries [see Milne and Molana (1991), Hitiris and Posnett (1992), Hansen and King (1996), Gerdtham et al. (1992), Newhouse (1977)], this study will estimate and evaluate the functional relationship between public health expenditure, GDP and other non-income explanatory variables.
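Equation 2 is a standard linear regression in logarithms and can be estimated in any statistics package; as an illustrative sketch in R, assuming a data frame d holding the 18 annual observations under the hypothetical column names PHE, GDP, LEX and MEDIC_INFL:

R> m <- lm(log(PHE) ~ log(GDP) + log(LEX) + log(MEDIC_INFL), data = d)
R> summary(m)    # slope coefficients are read directly as elasticities

Whether medical inflation enters in logarithms or in levels changes the interpretation of its coefficient (elasticity versus semi-elasticity); the sketch follows the double-log specification stated above.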
Data sources

The study utilised secondary yearly data covering the period from 1995 to 2012. The data on public health expenditure was sourced from the Financial and Fiscal Commission (FFC) database. Data on average life expectancy was sourced from the MRC database; it is based on the average life expectancy at birth indicator, which accounts for a wide spectrum of mortality rates and the prevalence distribution of health states in the population. Data on medical inflation (based upon a 2000 base of 100) and GDP (at constant 2005 prices) were obtained from Statistics South Africa (STATSSA).

Econometric results and discussion

As a preliminary step to the empirical analysis, the study commences by investigating the integration properties of the series. This is done in order to establish the presence of unit roots in the data and to apply appropriate modelling procedures. That is, in establishing whether a variable is stationary or non-stationary, it is important to test for the presence of unit roots in order to avoid a spurious regression (Harris, 1995); by differencing the data to remove a non-stationary (stochastic) trend, the spurious regression problem can be avoided. While there are several ways of testing for the presence of unit roots in the data, this study utilises the Augmented Dickey-Fuller (ADF) approach to test the null hypothesis that a series contains a unit root against the alternative of stationarity. Following Godfrey and Tremayne (1998), Handa and Ma (1989) and Muscatelli and Hurn (1992), the ADF test was employed to this end and the results are summarised in Table 1. The results suggest that all model variables have unit roots in levels, except for MEDIC_INFL. This means that the non-stationary variables had to be differenced. Further tests indicate that the non-stationary variables are stationary after first differencing, suggesting difference-stationary series of order one, I(1). That is, the logarithms of PHE, LEX and GDP are I(1) and MEDIC_INFL is I(0).

In order to establish a long-run relationship between PHE and the other selected variables, a cointegration regression analysis is applied, whereby the residuals obtained from the ordinary least squares estimation were subjected to unit root analysis. Based on the Engle-Granger (1987) cointegration test, the results suggest that the residuals from the regression were stationary, hence cointegrated. The results of the cointegration analysis are presented in Table 2. They indicate a cointegrating regression, suggesting the existence of a long-run relationship between PHE, GDP, LeX and Medic_Infl in South Africa. Hence we can conclude that, for South Africa, these variables share the same long-run properties. This is indicative of the ability of GDP, inflation and increased human longevity to influence public health funding.
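The testing sequence just described can be outlined in R with the tseries package; this is a sketch under the same hypothetical names as before, noting that the tabulated ADF critical values are only approximate when the test is applied to cointegrating residuals (Engle-Granger step two), and that the final lines anticipate the error-correction model reported later.

R> library(tseries)
R> adf.test(log(d$PHE))                  # unit root in levels
R> adf.test(diff(log(d$PHE)))            # stationarity after first differencing
R> eg <- lm(log(PHE) ~ log(GDP) + log(LEX) + log(MEDIC_INFL), data = d)
R> adf.test(residuals(eg))               # residual stationarity implies cointegration
R> ect <- head(residuals(eg), -1)        # lagged equilibrium error
R> ecm <- lm(diff(log(d$PHE)) ~ diff(log(d$GDP)) + diff(log(d$LEX)) + diff(log(d$MEDIC_INFL)) + ect)
R> summary(ecm)                          # coefficient on ect = speed of adjustment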
Since the model has been found to reflect a cointegrating regression, it can be estimated using ordinary least squares (OLS) without any further adjustment to yield consistent estimates. Table 3 presents the coefficient estimates of the model based on the OLS estimation of the relationship between public health expenditure and the selected variables. The regression yields an impressive adjusted R-squared, implying that the regressors account for approximately 98 per cent of the variation in public health expenditure. The estimated coefficients are fairly robust, significant and of the expected signs. In the first instance, the elasticity of GDP is 3.71 (significantly above one at the 1% level), which means that a one per cent increase in GDP gives a 3.71 per cent rise in aggregate public health expenditure. Furthermore, since the income elasticity of health care spending obtained exceeds unity, health care is deemed a luxury good in South Africa. This indicates an increased marginal preference to spend on health care and is suggestive of the willingness of government to prioritise the health care sector. It reaffirms the conclusion by Hansen and King (1996) that most estimates of the income elasticity of health care spending exceed unity.

With regard to medical inflation, health care is usually considered to be relatively inelastic with respect to consumer price (Gerdtham et al., 1992). Hence the inelastic coefficient value of 0.20 in respect of medical inflation is in line with empirical findings. This implies that an increase in medical inflation results in a small percentage increase in the health budget. This is true for South Africa, where MTEF (Medium Term Expenditure Framework) budgets in the public health sector have been adjusted by the conventionally low CPI inflation, which has consistently remained below medical inflation.
As mentioned, our parameter of primary interest in this study is the elasticity of public health expenditure in respect of average life expectancy. The value and the sign of this coefficient are important, since they address the critical question of how public health expenditure reacts to an increase in life expectancy (i.e. population ageing). The estimated coefficient was found to be significant and of the expected sign, with an elasticity value of 1.87. Since this value is above 1 in absolute terms (i.e. elastic), it implies that a significant share of the growth in health expenditure is explained by the cumulative effects of an ageing population over time. This means that an increase in average life expectancy has resulted in a higher percentage increase in the budget for public health care. Between 2009 and 2014, life expectancy in South Africa increased from 53 years to 57 years, and this has a significant impact on health budgets given the fact that, as people live longer, they expect to be given unconditional access to public health care. These results strongly confirm that health care expenditure for the elderly increases with age, given the fact that health care expenditures increase with closeness to death (Yang et al., 2003). This is also reaffirmed by Gray (2005), who was able to show that an increasing elderly population was the main reason for the upsurge in inpatient care expenditures and, ultimately, long-term health care expenditure. As a result, in the UK, for instance, the government has incorporated greater longevity and proximity to death into health care expenditure projections as part of the long-term health spending requirements (Wanless, 2002). Projections by the OECD of the impact of population ageing on public expenditures suggest that population ageing will create an increase in age-related social expenditures from an estimated 19 per cent of GDP in 2000 to almost 26 per cent of GDP by 2050, with health care expenditure accounting for most of the increase (Dang et al., 2001).

This is suggestive of the potential that the public health care budget has to deliver and address the burden of disease. Consequently, it is also indicative of the growing concern that, in South Africa, public health is consuming more resources without a concomitant increase in the output of service provision, raising suspicion of technical inefficiencies that might be embedded in this sector. According to the study on financing health services in developing countries by Zere (2000), inefficiency was identified as one of the major problems in the African health care system, apart from access and equity. Furthermore, evidence emerging from other studies suggests that there is a wide prevalence of technical inefficiency in hospitals and other health facilities in South Africa (Zere et al., 2001). In South Africa, much of the attention of policy makers, donors and health care researchers has been on health sector reform and the mobilisation of additional resources to address inequity and access to health care. However, it is equally important that the efficiency with which these resources are used is investigated and addressed.
The current real increases in health budgets are an attempt to respond to the needs of the health sector and to deal with the main causes of the burden of disease. Despite these interventions, the life expectancy rate has shown an increasing trend over time. This means that public health expenditure will have to increase significantly over the next decade as people live longer and expect to be given unconditional access to public health care. This implies that any formal structure or system for delivering health care will eventually be confronted with the inevitable situation of a limited amount of resources to serve a growing patient population. This has the potential to change the demographic landscape of South Africa, challenging the way in which public health care is to be funded. With regard to the results of the error-correction model, the error-correction term is significant at the 1% level of significance and has a speed-of-adjustment coefficient value of -1.21 (Table 4). This coefficient indicates that public health expenditure adjusts relatively quickly to changes in the underlying equilibrium relationship, since the parameter estimate on εt-1 shows that 121 per cent of the disequilibrium is removed in each period (a magnitude above one implying that the adjustment overshoots and oscillates around equilibrium). In addition, the diagnostic tests reveal that the estimated model is correctly specified and conforms to the statistical assumptions of the classical linear model. Also, the results of the normality test show that the residuals are normally distributed with zero mean and constant variance. These results suggest that the estimated regression model is well specified and generally conforms to economic theory and the assumptions underlying our modelling procedures.

Conclusion and recommendations

Despite the rising allocations and progress made with the delivery of public health services, the South African health care system continues to be challenged by the large burden of disease. Life expectancy has increased over the past years, and this has a direct impact on public health expenditure. Furthermore, despite the real increases in health budgets that have been noted recently, the upward trend in life expectancy implies that public health expenditure will have to increase quite significantly over the next decade as people live longer and expect to be given unconditional access to public health care. If sustained, this may potentially change the demographic landscape of South Africa, challenging the way in which public health care is to be funded.

One of the main conclusions that emerges from this study is the strong positive relationship between public health expenditure and real GDP, as reported in previous studies. The high income elasticity coefficient, above unity, suggests that health care is a luxury good in South Africa, and hence government's willingness to prioritise the health care sector.

Regarding the other non-income variables, the coefficient for medical inflation is relatively small and inelastic. This is also consistent with previous empirical findings, as suggested by Gerdtham et al.
(1992), that increases in medical inflation result in smaller changes in health expenditure. The findings further confirm that public health expenditure and life expectancy were positively correlated in South Africa during the period 1995 to 2012. The elasticity coefficient of life expectancy was found to be elastic, with a value of 1.87, implying that a significant share of the growth in health budgets is explained by the accumulated effects of increased longevity over time. This demonstrates the impact of the cumulative effects of an ageing population on health expenditure.

While the results of this analysis demonstrate the importance of GDP, life expectancy and medical inflation as determinants of public health expenditure, differences in health outcomes and funding levels across districts necessitate going beyond the current national-level analysis. Additional research is therefore needed to assess the levels of allocative and operational efficiency of health facilities across districts for optimal policy conclusions. In the absence of sufficient study data, the use of Stochastic Frontier Analysis (SFA) and Data Envelopment Analysis (DEA) to address some of these specific questions is recommended.

Table 3. OLS estimates of the relationship between PHE and GDP, LeX, Medic_Infl.

Table 4. Results of error-correction model diagnostic testing.
Ecological niche differentiation in Chiroxiphia and Antilophia manakins (Aves: Pipridae)

Species distribution models are useful for identifying the ecological characteristics that may limit a species' geographic range and for inferring patterns of speciation. Here, we test a hypothesis of niche conservatism across evolutionary time in a group of manakins (Aves: Pipridae), with a focus on Chiroxiphia boliviana, and examine the degree of ecological differentiation with other Chiroxiphia and Antilophia manakins. We tested whether allopatric sister species were more or less similar in environmental space than expected given their phylogenetic distances, which would suggest, respectively, ecological niche conservatism over time or ecologically mediated selection (i.e. niche divergence). We modeled the distribution of nine manakin taxa (C. boliviana, C. caudata, C. lanceolata, C. linearis, C. p. pareola, C. p. regina, C. p. napensis, Antilophia galeata and A. bokermanni) using Maxent. We first performed models for each taxon and compared them. To test our hypothesis we followed three approaches: (1) we tested whether C. boliviana could predict the distribution of the other manakin taxa and vice versa; (2) we compared the ecological niches by using metrics of niche overlap, niche equivalency and niche similarity; and (3) lastly, we tested whether niche differentiation corresponded to phylogenetic distances calculated from two recent phylogenies. All models had high training and test AUC values. Mean AUC ratios were high (>0.8) for most taxa, indicating performance better than random. Results suggested niche conservatism, and high niche overlap and equivalency, between C. boliviana and C. caudata, but we found very low values between C. boliviana and the rest of the taxa. We found a negative, but not significant, relationship between niche overlap and phylogenetic distance, suggesting an increase in ecological differentiation and niche divergence over evolutionary time. Overall, we give some insights into the evolution of C. boliviana, proposing that ecological selection may have influenced its speciation.

Introduction

The distributional area of a species is an expression of its evolutionary history and its ecology [1,2]. Therefore, predictive models of species' geographic distributions are not only useful for identifying ecological characteristics that may limit a species' range [3] but are also useful for setting the stage to infer patterns of speciation [4,5]. Species distribution models can be combined with phylogenies to study ecological divergence and the evolution of niches, and therefore allow inference about the processes responsible for the formation of new species (e.g., [6][7][8][9]). Speciation history should leave a detectable signature in present-day phylogenetic patterns and also in current species' geographic distributions [4,10].

The niche is often discussed as either a fundamental or a realized niche. A fundamental niche is defined by the set of abiotic conditions where a species is potentially able to persist, whereas the realized niche describes the conditions in which a species actually persists given the presence of competitors or predators [11,12]. Species may retain aspects of their fundamental niche over long periods of time, a process often called niche conservatism [12]. We can use the present-day ecological niche of a species in a comparative way to help understand the evolutionary history of a species and, potentially, modes of speciation.
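One widely used statistic for making such niche comparisons concrete is Schoener's D, an overlap metric commonly reported within the framework we follow [26]; a minimal base-R sketch, assuming p1 and p2 are hypothetical vectors of occurrence densities or suitabilities for two taxa over the same environmental grid:

R> schoener_D <- function(p1, p2) {
+      p1 <- p1 / sum(p1); p2 <- p2 / sum(p2)   # normalise each surface to sum to one
+      1 - 0.5 * sum(abs(p1 - p2))              # D = 1: identical niches; D = 0: disjoint
+  }
R> schoener_D(runif(1000), runif(1000))         # toy example on random surfaces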
In general, it has been hypothesized that if the ranges of sister taxa do not overlap, the mode of speciation is allopatric, whereas if sister taxa co-occur, sympatric speciation is inferred [5]. In allopatric speciation, new lineages arise after geographic separation of an ancestral species into isolated sets of populations [13]. Especially for recently diverged species, if speciation is allopatric, sister species will display little or no overlap in geographic range [4]. Further, we might expect species that differentiated via allopatric modes to retain aspects of their fundamental niche (niche conservatism) [12]. In contrast, species that have differentiated in sympatry may be expected to have diverged in their ecological requirements, and such ecological differences may have driven speciation. Although such species may still share aspects of an ancestral climate niche, they might be kept apart by selection against hybridization, or they might have evolved different climate niches [14]. In summary, speciation is a process in which species' ranges may expand or contract in response to several factors (e.g., climate, degree of specialization, dispersal capabilities) [15]. Here, we use species distribution models and niche comparisons to test the hypothesis that ecological niches are conserved across evolutionary time. Our goal was to understand the coarse-scale ecological and geographic properties of species' distributions [16,17] and, as a consequence, we most closely follow the Grinnellian niche concept (which focuses on the set of coarse environmental conditions required for a population to persist, [18]). To test this hypothesis of niche conservatism, we used manakins (Aves, Pipridae) as a model clade. Manakins are an ideal model because they are remarkably diverse, they are broadly distributed across different habitats in the Neotropics, they are well represented in museum collections [19], and they are relatively well known with respect to biogeography and speciation [20,21]. We focus on the sister genera Chiroxiphia and Antilophia [22], which form a distinct clade apart from most other genera of manakins [23][24][25]. The genus Chiroxiphia comprises five species: C. linearis, C. lanceolata, C. pareola, C. caudata and C. boliviana; Antilophia comprises two species: A. galeata and A. bokermanni (endemic to a tiny area in the Brazilian northeast) [24]. Chiroxiphia has been regarded as a superspecies, with multiple closely related taxa separated geographically [24]. All these characteristics make Chiroxiphia and Antilophia ideal models to test hypotheses of ecological niche differentiation and speciation. Two recent studies (Fig 1) inferred molecular phylogenies including all species in these genera, and the results of both suggest that Chiroxiphia is paraphyletic ([25]; Leite et al. in revision); while the first indicates that C. boliviana is the sister taxon to Antilophia, the latter suggests its sister taxon might instead be C. caudata. A study on ecological niches of the whole family Pipridae [19] reported niche conservatism between sister species. They did, however, recognize some exceptions, including Chiroxiphia boliviana and Chiroxiphia pareola; they showed that most manakins have a lowland distribution and suggested that C. boliviana might have invaded higher elevations and cooler climates from humid lowlands, the latter of which is a more characteristic habitat of the genus. Given that C.
boliviana reaches the highest elevations in the family, here we focus on this species. The present study extends the approaches followed by Anciães and Peterson [19] in four main ways: we focus on C. boliviana in comparison with other Chiroxiphia and Antilophia manakins, making for a more detailed approach; we significantly increase the sample sizes of all taxa; we add robust methods to describe and compare the ecological niches in environmental space; and finally, we correlate niche differences with phylogenetic distance. Specifically, the overall goals of this study were to test hypotheses of niche conservatism and speciation in C. boliviana relative to other Chiroxiphia and Antilophia manakins. We first use species distribution models to identify the environmental variables that best describe C. boliviana's ecological niche and compare its niche with those of other Chiroxiphia and Antilophia manakins; we then test the hypothesis that ecological niches in these genera are conserved across evolutionary time. We followed the framework developed by Graham et al. [5], which examines geographic ranges of species and their environmental envelopes in a phylogenetic context as a way to explore factors that may have influenced speciation. According to this framework, if allopatric sister species segregate in environmental space more than expected given phylogenetic distance, their niches are not conserved over time, and ecologically mediated selection may have had a role in speciation. Alternatively, if allopatric sister species are very similar in environmental space, their ecological niches may be more conserved over time than expected, suggesting that ecological divergence (in relation to the parameters examined) has not been a major factor in speciation [5]. To test the hypothesis raised above, we followed three approaches: 1) we examined whether the other closely related manakin taxa could predict the distribution model developed for C. boliviana and, conversely, whether C. boliviana could predict the distribution models developed for the other closely related manakin taxa; 2) we compared the ecological niches of all manakin taxa considered, using the following niche metrics proposed by Broennimann et al. [26]: overlap, equivalency and similarity; and finally 3) we assessed whether the ecological niches changed more or less than expected based on phylogenetic distances.
Materials and methods
We include a checklist (S1 Table) describing the details of the species distribution models we conducted; we followed the guidelines by Feng et al. [27].
Species occurrence data
We obtained occurrence data for the following manakin taxa with Central and South American distributions: Chiroxiphia boliviana, C. pareola (three of its subspecies were treated separately: C. p. pareola, C. p. regina and C. p. napensis), C. caudata, C. lanceolata, C. linearis, Antilophia galeata and A. bokermanni. Occurrence data included personal sources (personal observations for C. boliviana and C. pareola regina), records kindly provided by M. Anciães (for A. galeata, C. caudata, C. lanceolata and C. linearis), I. Areta (several records for C. boliviana in southern Bolivia and northern Argentina), J. P. Gomez (several records for C.
lanceolata in Colombia), some records from specimens deposited at the Colección Boliviana de Fauna (CBF) not reported in other museums, and records from citizen science and natural history museums available on the internet through eBird, GBIF (Global Biodiversity Information Facility, http://www.gbif.org) and ORNIS (provider institutions included: Kansas University Natural History Museum, Macaulay Library, Yale University Peabody Museum, American Museum of Natural History, Smithsonian Institution, Louisiana State University Museum of Natural Science and Cornell Lab of Ornithology; accession dates: December 2015 and January 2016; see S1 Table for more details). Location data were first mapped in ArcGIS 10.4 to inspect for georeferencing errors and to avoid duplication; we also discarded obviously misplaced localities [28]. In general, we tried to use only records that were at least 1 km apart, to reduce sampling bias; however, in a few cases we used record locations that were closer because we wanted a complete representation of each species' range. In total, we used 542 records (temporal range of records: 1871-2016): 66 records for C. boliviana, 16 for C. pareola napensis, 17 for C. pareola regina, 74 for C. pareola pareola, 146 for C. caudata, 93 for C. lanceolata, 81 for C. linearis, 40 for Antilophia galeata, and 9 records for A. bokermanni. This study is the most exhaustive conducted so far in terms of geographic representation for these genera. Anciães and Peterson [19,29], for example, used fewer than 10 occurrence locations to model the distribution of C. boliviana. They also considered all subspecies of C. pareola together; however, these subspecies differ in body size and, to a lesser extent, in male coloration, and they have relatively well-defined geographic distributions, typically separated by rivers [24]. In a recent study, Silva et al. [25] used multilocus DNA sequences from all species and subspecies of Chiroxiphia and Antilophia to infer phylogenetic relationships, and they found two divergent clades within one of the subspecies (C. p. pareola), on the northern and southern sides of the Amazon river. Given the substantial differences among subspecies, they could be actual separate species (especially C. p. regina and C. p. napensis; [25]), hence our decision to treat them as separate units in our analyses.
Environmental data
Initially, we considered 23 environmental variables to define ecologically suitable locations for our study species. These variables included: 16 bioclimatic variables that described annual and seasonal temperature and rainfall trends (WorldClim version 1.4, [30]), three that described topography (slope, eastness and northness, obtained from DIVA-GIS; [31]) and four that described vegetation (derived from NDVI [Normalized Difference Vegetation Index], taken as a measure of the reflectance of Earth's surface vegetation and representative of leaf area index; [32]). Environmental conditions, especially temperature and rainfall, are major determinants of species distributions at macroscales, and remotely sensed indices, such as NDVI, can complement and improve niche models [33]. Eastness and northness were obtained by transforming aspect; these variables range from 1 (east and northward, respectively) to -1 (west and southward, respectively).
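As an illustration of this transformation, here is a minimal sketch (ours, not the authors' code) under the assumption that aspect is stored in degrees clockwise from north, as is typical of GIS-derived rasters:

import numpy as np

def aspect_to_eastness_northness(aspect_deg):
    # Decompose aspect (degrees clockwise from north) into
    # eastness = sin(aspect) and northness = cos(aspect), both in [-1, 1].
    # Flat cells are often coded as -1 or NoData and should be masked first.
    rad = np.deg2rad(aspect_deg)
    eastness = np.sin(rad)   # +1 due east, -1 due west
    northness = np.cos(rad)  # +1 due north, -1 due south
    return eastness, northness

aspect = np.array([[0.0, 90.0], [180.0, 270.0]])  # toy 2x2 aspect grid
e, n = aspect_to_eastness_northness(aspect)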
We used NDVI data from 9 years, January 2005 to December 2013, downloaded from the Copernicus Global Land Service program (available at http://land.copernicus.vgt.vito.be/PDF/portal/Application.html#Home). These NDVI measurements were derived from satellite-borne remote sensors (Top of Canopy SPOT/VEGETATION and PROBA-V data). We used the maximum and minimum monthly values out of the 9 years to calculate the following vegetation layers: overall maximum NDVI, overall minimum NDVI, mean annual NDVI and coefficient of variation NDVI. Accession and download from these sources were done in January 2014. All the environmental variables were in raster format and were prepared in ArcGIS 10.3 to align in geographic space using the WGS84 datum, and to match in spatial extent and cell size (~1 km² cell size, or 0.00833 decimal degrees); following preparation, environmental layers were converted to ASCII raster format for later spatial analyses. To reduce the number of environmental variables, we followed the methods of Parra et al. [32]: we plotted 1,000 random points within the geographic study area (from southern Mexico to northern Argentina), extracted the associated environmental values, and computed a correlation matrix from them (S2 Table). To reduce multi-collinearity, we removed variables that had a coefficient of correlation > 0.8 with other environmental variables (S2 Table). Thus, we used 13 environmental layers to construct the species distribution models (SDMs): annualpp (annual precipitation), maxtwarmmo (maximum temperature of warmest month), meantdryqua (mean temperature of driest quarter), ppcoldqua (precipitation of coldest quarter), ppdryqua (precipitation of driest quarter), ppwarmqua (precipitation of warmest quarter), ppseason (precipitation seasonality: standard deviation × 100), tseasoncv (temperature seasonality), overall maximum NDVI (maxndvi), coefficient of variation NDVI (cvndvi), eastness, northness and slope. We selected these variables because they have been shown to be important for the ecology of bird populations. Precipitation variables (i.e., the amount and timing of rainfall) significantly affect the demography, survival and abundance not only of Neotropical birds [34] but also of tropical rainforest Australian birds [35]. Similarly, seasonality in both temperature and precipitation was fundamental in determining the phylogenetic composition of hummingbird communities in Ecuador [36]. Further, climate variables, together with vegetation productivity variables such as NDVI, have proved to be important determinants of seasonal niches of long-distance migratory birds [37].
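A minimal sketch of this correlation-based filtering step (ours, for illustration only; the variable names and random data are hypothetical stand-ins for values extracted at the 1,000 random points):

import numpy as np

def drop_correlated(values, names, threshold=0.8):
    # values: (n_points, n_vars) array of environmental values extracted at
    # random points; greedily drop one variable from every pair with |r| > threshold.
    corr = np.corrcoef(values, rowvar=False)
    keep = []
    for j in range(len(names)):
        if all(abs(corr[j, k]) <= threshold for k in keep):
            keep.append(j)
    return [names[j] for j in keep]

rng = np.random.default_rng(0)
vals = rng.normal(size=(1000, 4))
vals[:, 3] = vals[:, 0] + 0.01 * rng.normal(size=1000)  # near-duplicate variable
print(drop_correlated(vals, ["bio1", "bio12", "slope", "bio1_copy"]))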
Species Distribution Models (SDMs)
All SDMs were performed in Maxent 3.3.3 [38] (version 2013), which uses the principle of maximum entropy (finding the distribution that is closest to uniform) to estimate a set of functions relating environmental variables and habitat suitability, and thereby a species' potential distribution [39]. Maxent is designed to work well with presence-only data, performs well with small datasets [38,40] and has been tested extensively, proving to be a robust machine-learning technique [41,42, S1 Table]. Models were run using the default regularization values (regularization penalizes the use of too many model parameters; it forces Maxent to focus on the most important features and avoids overfitting; [38,41]). We also chose the logistic model output. The logistic model output is a transformation of the relative occurrence rate, which describes the relative probability of presence [43]; it is a continuous surface of values ranging from 0 to 1 (high values indicate a high probability of occurrence; [44]). Given that specificity (the proportion of cells correctly predicted as absence cells in relation to all absence cells) cannot be calculated with presence-only data, a threshold of predicted probability was selected: the resulting models were converted to presence-absence using a 10th-percentile training presence threshold (this retains the top 90% of training samples; [45]). With the resulting rasters, we used ArcGIS 10.7 to make maps of the discrete and continuous relative suitability ranges of each species. For these maps, we also used two layers: a global country boundaries layer [31] and a Digital Elevation Model (DEM) raster [46].
To develop models for each species and subspecies, we first randomly partitioned each species' occurrence locations into two data sets: 75% used as training data (to formulate the model parameters) and 25% as test data (to assess the accuracy of the model) [47]. We then set Maxent to generate 10,000 background points at random from the study space for each taxon (see details of models in S1 Table). To test the accuracy of the models, we used the Area Under the Curve (AUC) of the Receiver Operating Characteristic (ROC) plot. When absence data are not available, AUC scores represent the ability of the model to distinguish presence from background data [38]. The AUC can range from 0 to 1.0; a value of 0.5 can be interpreted as random prediction, and values above 0.5 indicate performance better than random [47]. Additionally, we performed the partial-ROC analysis [48], which considers the portion of the ROC curve that lies within the predictive range of the modeling algorithm and within the range of models acceptable in terms of a pre-specified omission error (these results are expressed as ratios; [49]). We calculated these ratios in R [50] using ENMGadgets [51]. Values of AUC ratios depart from unity as the model's ROC curve improves with respect to random expectations; significance is assessed by means of bootstrapping [49].
To test the hypothesis that the ecological niche in manakin species is conserved across evolutionary time, we considered the following: if allopatric sister species are nearly identical in environmental space, then the ecological niche is fairly conserved and ecological divergence (in relation to the parameters examined) has not been a major factor in speciation [5]. If, alternatively, allopatric sister species segregate in environmental space, it would suggest low niche conservatism and that ecologically mediated selection may have had a role in speciation. We followed two approaches: (1) we used the record locations of all other taxa (C. caudata, C. lanceolata, C. linearis, C. p. napensis, C. p. regina, C. p. pareola, A. galeata and A. bokermanni) as independent (testing) data for evaluating the SDMs of C. boliviana; and (2) we used C. boliviana's occurrence locations as test data for evaluating the models of the other taxa. High AUC values would suggest conservatism of climatic tolerances; low AUC values would suggest low niche conservatism, perhaps indicating that climatic factors were more important in species' divergence.
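The evaluation just described can be sketched as follows (an illustrative simplification, not the partial-ROC bootstrap of [48,49]; the beta-distributed suitability scores are made-up stand-ins for Maxent output):

import numpy as np
from sklearn.metrics import roc_auc_score

def evaluate_sdm(pres_scores, bg_scores):
    # AUC of presence vs. background suitability (presence-only proxy),
    # plus the 10th-percentile training presence threshold used to
    # binarize the continuous logistic output.
    y = np.concatenate([np.ones_like(pres_scores), np.zeros_like(bg_scores)])
    s = np.concatenate([pres_scores, bg_scores])
    auc = roc_auc_score(y, s)
    threshold = np.percentile(pres_scores, 10)  # retains top 90% of training presences
    return auc, threshold

rng = np.random.default_rng(1)
pres = rng.beta(5, 2, size=60)     # toy suitability at presence points
bg = rng.beta(2, 5, size=10_000)   # toy suitability at background points
auc, thr = evaluate_sdm(pres, bg)
binary_map = pres >= thr           # presence-absence conversion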
Comparison of niches
We calculated niche overlap, niche equivalency and niche similarity between taxa, following Broennimann et al. [26]; analyses were performed in R [50]. This framework quantifies niche overlap between two species, or between any taxonomic, geographical or temporal groups of occurrences (called taxa or entities). It is important to highlight that these analyses use only the species' occurrence locations and spatial climatic data to characterize the ecological niche [26], whereas Maxent uses presence records to predict probable occurrence locations over a landscape. According to Broennimann et al. [26], the environmental space is defined by the first two axes of a PCA; it is divided into a grid of r × r cells (we set this resolution r to 100), in which each cell corresponds to a unique vector of environmental conditions present at one or more sites in geographic space. A kernel density function is used to determine a smoothed density of occurrences in each cell [26]. Niche overlap is calculated with the D metric [52,53]; it varies between 0 and 1, where 0 means no overlap and 1 means complete niche overlap [26]. The niche equivalency test determines whether the niche overlap is constant when randomly reallocating the occurrences of both species between their two ranges. The niche similarity test, on the other hand, addresses whether the environmental niche occupied by one taxon is more similar to the one occupied by another than expected by chance (see [26] and references therein). In summary, the equivalency test asks whether two niches are identical: it randomly pools the occurrences of both species and reallocates them many times while calculating the D metric; the similarity test asks whether one species' niche model predicts the occurrence of the other [54].
Niche overlap and phylogeny
Niche conservatism predicts an increase in climatic niche differentiation (i.e., lower niche overlap) between species with increasing phylogenetic distance [55]. Therefore, we tested whether pairwise niche differences correlated with phylogenetic distance by correlating the matrix of niche overlap values with a matrix of patristic distances. The patristic distance is defined as the sum of the lengths of the branches that link two taxa in an evolutionary tree [56]; it is based on the inferred number of substitutions per site. We obtained patristic distances from: a) a time-calibrated tree from Silva et al. [25], and b) the concatenated tree from Leite et al. (in revision), in which the branches were made ultrametric using non-parametric rate smoothing (hereafter referred to as the Silva and Leite phylogenies). Both trees were kindly shared by the authors. The phylogeny by Silva et al. [25] included 11 taxa (all 9 taxa we considered plus C. p. atlantica and two varieties of C. p. pareola, C. p. pareola N and C. p. pareola S). For the purpose of our study, we used only data from the 9 taxa considered in our niche analyses, and for C. p. pareola we used C. p. pareola S. The phylogeny by Leite et al. included all species but no subspecies; therefore, in order for the two matrices to have the same number of taxa, we used 3 niche overlap matrices, each containing data for one subspecies (C. p. napensis, C. p. regina or C. p. pareola). The correlation between each matrix of patristic distances and the niche overlap matrix was examined with a Mantel test.
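For illustration, a minimal sketch of the two quantities at the core of this analysis, Schoener's D on a shared environmental grid and a permutation-based Mantel test (our simplification; the published analyses used kernel-smoothed densities in R [26,50]):

import numpy as np

def schoener_D(dens1, dens2):
    # Schoener's D between two occurrence densities on the same environmental
    # grid (each normalized to sum to 1): D = 1 - 0.5 * sum|p1 - p2|.
    p1 = dens1 / dens1.sum()
    p2 = dens2 / dens2.sum()
    return 1.0 - 0.5 * np.abs(p1 - p2).sum()

def mantel(dist_a, dist_b, n_perm=9999, seed=0):
    # Mantel test: Pearson r between off-diagonal entries of two distance
    # matrices; two-sided significance from row/column permutations of one matrix.
    iu = np.triu_indices_from(dist_a, k=1)
    a, b = dist_a[iu], dist_b[iu]
    r_obs = np.corrcoef(a, b)[0, 1]
    rng = np.random.default_rng(seed)
    hits = 0
    n = dist_a.shape[0]
    for _ in range(n_perm):
        p = rng.permutation(n)
        r = np.corrcoef(dist_a[np.ix_(p, p)][iu], b)[0, 1]
        if abs(r) >= abs(r_obs):
            hits += 1
    return r_obs, (hits + 1) / (n_perm + 1)

# niche distances (e.g., 1 - D) would be correlated against patristic distances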
Additionally, we graphed the relationship between the matrices, first with scatter plots and second by drawing separate trees that used the pairwise niche overlap distances to estimate branch lengths for each of the tree topologies (i.e., the Silva and the Leite phylogenies), using unweighted least squares and constraining branch lengths to be non-negative. In essence, if the difference in niches between two species were changing at a "neutral" rate, then the pairwise niche distance would be proportional to the genetic distance; if instead we obtained very different branch lengths from the phylogenies, it would suggest that pairwise niche distances are not changing by drift but due to other processes.
Species Distribution Models (SDMs)
All the species distribution models had high training AUC values (> 0.90) and test AUC values (> 0.86), indicating performance much better than random (S1 Table). Mean AUC ratios were higher than 1.8 for most taxa but were lower for the subspecies of C. pareola (range: 1.55 to 1.69) (S3 Table). The binary projected distributions for most species largely covered their known ranges (Fig 2, see also S1 Fig). However, for some species the predicted suitability range was much larger than the published range. For example, for C. caudata and A. galeata the models also predicted suitable areas along the eastern slope of the Andes, and for C. linearis the model predicted suitable areas on the western coast of Ecuador. Our models overpredicted the potential distributions of all subspecies of C. pareola as well (Fig 2, see also S1 Fig). The environmental variables that best explained each species' distribution are listed in S3 Table.
When other taxa were used as test data for evaluating C. boliviana's SDM, the highest test AUC values were for C. caudata and A. galeata (Table 1). High values indicate performance better than random, and our results suggest niche convergence between C. boliviana, C. caudata and A. galeata. For all subspecies of C. pareola, however, the resulting test AUC values were lower than 0.5, which indicates that training data from C. boliviana had little ability to predict the ecological niches of C. pareola's subspecies. Consequently, there appears to be environmental niche differentiation between C. boliviana and the subspecies of C. pareola. AUC values for C. lanceolata, C. linearis and A. bokermanni were only slightly above 0.5, which suggests performance scarcely better than random (Table 1). Similar results were obtained when using C. boliviana's presence records to evaluate how well the other taxa's SDMs predicted C. boliviana. We found greater predictive ability, as indicated by higher test AUC values, between C. boliviana and C. caudata and between C. boliviana and A. galeata. Thus, these results also suggest low niche conservatism between C. pareola's subspecies and C. boliviana. AUC values for C. lanceolata, C. linearis and A. bokermanni were again only marginally above 0.5, suggesting performance not appreciably better than random (Table 1).
Niche comparisons
C. boliviana vs. other taxa. C. boliviana had high niche overlap only with C. caudata (D = 0.62, Table 2). The hypothesis of niche equivalency between these species could not be rejected, suggesting that they occupy environments that are more equivalent than expected by chance. However, niche similarity was rejected (this test examines whether the environmental niche occupied by C. caudata is more similar to the one occupied by C. boliviana than expected by chance, and vice versa) (Table 2).
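Before turning to those comparisons in environmental space, the following sketch illustrates the shared environmental PCA underlying them (our simplification of the PCA-env approach of [26]; the inputs are hypothetical arrays of variable values extracted at background and occurrence points):

import numpy as np

def shared_env_pca(background_env, env_sp1, env_sp2):
    # PCA calibrated on the background environment of the whole study area;
    # both species' occurrence environments are projected onto the first
    # two components, giving a common environmental space.
    mu = background_env.mean(axis=0)
    sd = background_env.std(axis=0)          # assumes no constant variables
    z = (background_env - mu) / sd
    _, s, vt = np.linalg.svd(z, full_matrices=False)
    explained = s ** 2 / (s ** 2).sum()      # variance explained per axis
    axes = vt[:2].T                          # variable loadings, first two PCs
    project = lambda env: ((env - mu) / sd) @ axes
    return project(env_sp1), project(env_sp2), explained[:2]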
When comparing both species' ranges in environmental space, the first component of the PCA explained 29% of the variation and represented a gradient of decreasing precipitation of the coldest and driest quarters and increasing precipitation seasonality (S2 Fig, S4 Table). This makes sense given that both species inhabit ecoregions with high precipitation variability [24]. The ecological niche of C. boliviana in comparison with C. pareola's subspecies showed low niche overlap, and the hypotheses of niche equivalency and niche similarity were rejected (Table 2). The first component explained a large portion of the variation in all three comparisons (30-45%; S4 Table, S2 Fig). In the comparisons with C. p. regina and C. p. napensis, the first axis was defined by decreasing annual precipitation and increasing precipitation seasonality; precipitation of the driest quarter was also important in the case of C. p. napensis, and mean temperature of the driest quarter in the case of C. p. regina (S2 Fig, S4 Table). In the comparison with C. p. pareola, the first axis represented a gradient of decreasing precipitation of the coldest quarter and mean temperature of the driest quarter, and increasing temperature seasonality (S4 Table, S2 Fig). In all three comparisons, the second axis explained between 12 and 17.5% of the variation, and the contributions of environmental variables were diverse (S4 Table, S2 Fig).
In comparisons with other taxa, C. boliviana had marginally moderate niche overlap with C. linearis and C. lanceolata (D = 0.42 and D = 0.39, respectively; Table 2). However, the hypotheses of niche equivalency and similarity were rejected. The PCA that described the environmental niche of C. boliviana and C. linearis explained 46.8% of the variation; its first component was defined by increasing precipitation seasonality, mean temperature of the driest quarter and overall maximum NDVI (S2 Fig, S4 Table). The second component described a gradient of decreasing annual precipitation and precipitation of the coldest quarter, and increasing temperature seasonality (S2 Fig, S4 Table).
Table 1. Test AUC values and AUC ratios (partial AUC) from species distribution models using two approaches: (I) using other taxa as test data for evaluating C. boliviana; and (II) using C. boliviana as test data for evaluating other taxa (values of more than 0.7 are in bold). Refer to S3 Table.
In the rest of the comparisons (Table 2), two cases stand out: 1) C. lanceolata vs. C. linearis and 2) C. lanceolata vs. C. p. pareola, because they had high values of niche overlap (D = 0.65 and D = 0.77, respectively) and the hypotheses of niche equivalency could not be rejected. Species in the first comparison inhabit both dry and humid lowland forests, and their life histories share similarities [24]. Species in the second comparison are distributed in all types of lowland forest in the north of South America (Fig 2, S1 Fig).
Niche overlap and phylogeny
Mantel tests showed that there was no significant relationship between niche differentiation and phylogenetic distance, neither for the Silva phylogeny (Mantel r = 0.084,
P = 0.25), nor for the Leite phylogeny (niche overlap with C. p. napensis: Mantel r = -0.126, P = 0.73; with C. p. regina: Mantel r = -0.153, P = 0.80; with C. p. pareola: Mantel r = -0.167, P = 0.79) (Fig 3). Similarly, the least-squares trees showed that the degree of niche overlap resulted in branch lengths that were not consistent with the phylogeny (S3 Fig), suggesting that niche distances were not changing by drift but rather due to other processes.
Table 2. Analyses of niche overlap, niche equivalency and niche similarity of Chiroxiphia and Antilophia manakins, following Broennimann et al. [26]. Niche overlap measures the level of intersection between two species' ranges; niche equivalency measures whether the niche overlap is constant when occurrences of both species are randomly reallocated between their two ranges; and niche similarity asks whether one species' niche can predict the occurrence of the other.
Discussion
Through the use of varied approaches, our study revealed that niche divergence may have been a major process in the diversification of taxa in this clade of manakins. It confirmed some results reported by Anciães and Peterson [19,29]: that C. boliviana occurs in montane humid forests at higher elevations than other manakins, and that its ecological niche differs significantly from that of C. pareola, its sister species according to the phylogeny they used [19]. However, we give further insights into the differentiation of C. boliviana's ecological niche. The three approaches we followed showed, first, that the environmental conditions in which C. boliviana is distributed are comparable to those of C. caudata and A. galeata; second, that especially between C. boliviana and C. caudata there was high niche equivalency and a high level of intersection between their ranges (i.e., niche overlap); and third, that the ecological niches in the clade formed by Chiroxiphia and Antilophia segregated more than expected given their phylogenetic distances, suggesting niche divergence rather than niche conservatism. We propose that while allopatric speciation may have been important in the speciation of C. boliviana, ecologically mediated selection cannot be ruled out as a factor. Ecological selection occurs when new environmental conditions appear, including geographic heterogeneity; such ecologically based divergent selection can create genetic diversification from the original population and therefore speciation [9,57,58]. Further, we propose that ecological selection may also have had an important role in speciation of the other Chiroxiphia and Antilophia manakins.
Here, our main objective was to compare C. boliviana's niche with those of other South American manakin taxa at a landscape, coarse spatial resolution, to allow comparison among species with very different geographic range sizes. Given that C. boliviana is the species that reaches the highest elevations in its genus, we wanted to gain some insight into the environmental conditions that characterize its ecological niche. We believe these types of comparisons are very useful in general because, under climate change, historical envelopes are expected to shift upslope and species distributions are expected to follow; this could have significant effects on avian communities [59]. Most of the models developed here over-predicted the geographic distributions of species. The model of C. boliviana, for example, primarily predicted its known historical range along the eastern slope of the Andes, but it also predicted suitable areas in southeastern Brazil, in the Atlantic forest. The models for C.
caudata and A. galeata predicted their known ranges as well, but also suitable areas along the eastern slope of the Andes, particularly for C. caudata. These results highlight the resemblance between the ecological niches of C. boliviana and C. caudata (see below). Furthermore, the geographic distribution was largely over-predicted for all subspecies of C. pareola, perhaps as a consequence of modelling species with such wide distributions [60]. Over-prediction in our models might also reflect ecological differentiation of these taxa in dimensions that we did not examine [5]. Climate variables describe the fundamental niche and therefore act at large scales, whereas other aspects of the ecological niche of a species (e.g., vegetation, distribution of nesting or food resources, distribution of leks; [61,62]) and divergent selection pressures are manifested at much finer spatial scales than climatic variation. Important aspects describing the realized niche of a species, such as biotic interactions, are overlooked when distributions are modeled at such large geographic scales [7,17], although they can be very important in determining distributions. Freeman [63], for example, studying sister species pairs of tropical montane birds, showed that competitive interactions upon secondary contact are a common mechanism driving elevational divergence. We did not consider accessible areas over relevant time periods when selecting the geographic extent for model calibration, which can result in an overestimation of niche conditions [64].
Several studies on a broad range of organisms have examined the ability of ecological niche models to reveal information about niche evolution and differentiation [5][6][65][66][67][68][69][70][71]. Using methodology similar to ours, many of these show niche conservatism over evolutionary time between sister taxon pairs (e.g., in birds [70,71]; in birds, mammals and butterflies [65]; in salamanders [6]; in plants [68]), though others show niche divergence (e.g., in birds [66]; in lizards [67]). Niches of C. caudata and C. boliviana showed important resemblances in environmental space and demonstrated high niche overlap and equivalency; each species could predict the distribution of the other to a reasonable degree. Both species inhabit topographically diverse areas with great environmental heterogeneity (this study and [19]). C. caudata is found in the understory of the southern coastal Atlantic Forest in Brazil, occurring in lowland and montane evergreen forests as well as secondary forests [24,72]. Comparatively, C. boliviana inhabits semi-deciduous to humid montane/hill forests at 600-2600 m a.s.l., where it is found both in forest interior and at the edges of primary and secondary forests along the eastern slope of the southern Andes [24,73]. These results show that even though C. boliviana and C. caudata occur in areas with different climate characteristics, their respective ranges were more similar than expected by chance. Likewise, Rice et al. [66] examined similar questions in pairs of Aphelocoma jays and found low predictability and low niche similarity between closely related species; Zink [14] explored the role of niche conservatism and divergence in shaping species ranges and found a lack of niche divergence between sister species of aridland birds; and Jiguet et al. [74] studied two sister species of cotingas and found that, even with their similar niches, one species could not predict the other.
Studies comparing niches between subspecies have found both niche similarities (e.g., between eastern and western subspecies of Passerina ciris in North America; [70]) and divergences (e.g., between subspecies of the woodpecker Colaptes auratus and of the warbler Setophaga coronata; [75]). Analogous questions, assessing whether ecological niches of sister species can be predicted over space and time, have been explored with plants [68,76]. Phylogenetic niche conservatism (PNC) refers to the tendency of lineages to retain ancestral ecological characteristics over time; however, Pyron et al. [58] argue that if populations are experiencing rapid ecological change, selection for their current niche (PNC) may actually result in niche divergence. They proposed a theoretical framework discussing the mechanisms by which PNC can act as a fundamental driving force in speciation [58]. This process can lead to three potential patterns: a) niche constraints (speciation occurs by internal mechanisms), b) niche conservatism (similarity of ecological niches over evolutionary timescales), and c) niche divergence (geographic and ecological variation are large; local adaptation leads populations to diverge from their ancestral niche as they track their instantaneous niche). Some tests for PNC have been proposed using distribution models [58,77]. If sister species pairs are less similar than expected under a null model on the phylogeny, it would be indicative of PNC due to directional ecological selection driving speciation; on the other hand, if species are more similar than expected given their phylogenetic relationships, it would be indicative of PNC due to stabilizing selection. This is essentially what we tested by comparing the ecological niches of species pairs in light of their phylogenetic distances and relationships. The phylogeny used by Anciães and Peterson [19] reported C. boliviana as the sister species of C. pareola. We found that these two species (with any or all of the subspecies) segregated in environmental space more than expected given their phylogenetic distance, suggesting that ecologically mediated selection may have had an important role in speciation. If we instead consider C. boliviana as sister to Antilophia, as the Silva phylogeny indicates, these taxa also segregated in environmental space more than expected given their phylogenetic distance (the Leite phylogeny suggests C. boliviana is sister to all other ingroup species, which makes a direct comparison harder). We did not find a significant relationship between niche overlap and phylogenetic distance (regardless of which phylogeny we used), though the slight negative trend might suggest ecological selection and an increase of ecological differences over evolutionary time, a pattern more consistent with niche divergence. Previous research has examined the phylogenetic relationships between sister species of vertebrates (i.e., birds and mammals) in geographically separated ecoregions of South America, and many studies have found taxonomic affinities between regions that could explain the niche similarities we found between C. boliviana and C. caudata. These studies include comparisons between the Itatiaia highlands of southeastern Brazil and the Bolivian Andes (e.g., [78]), between the Amazon and Atlantic forests (e.g., [9,[79][80][81]), between the tropical Andes and the Amazon (e.g., [82]), and between seasonally dry tropical forests (SDTFs) (e.g., the Caatinga and inter-Andean valleys in Ecuador, Peru and Bolivia; [83]).
For instance, Sick [78] suggested that there was a band of continuous vegetation extending between the Andes and southeastern Brazil, which served as a colonization corridor for many plant and bird species (e.g., Scytalopus novacapitalis, Caprimulgus longirostris, Schizoeaca moreirae). Combining phylogenetic and distributional data, Batalha-Filho et al. [81] examined taxa of New World suboscines with disjunct Amazonian/Atlantic forest distributions with the objective of depicting historical connections between these biomes. They report that the Atlantic and Amazonian forests were connected in the past, and they hypothesize different pathways for the dispersal of organisms. Their study considered three Chiroxiphia species (C. boliviana, C. caudata and C. pareola) and found that C. boliviana is more closely related to C. caudata than to C. pareola; they also estimated a recent time of split (4.17 mya) between C. boliviana and C. caudata. The Batalha-Filho et al. [81] study, combined with our study and the two phylogenies we used, suggests a lack of agreement on the placement of C. boliviana within the clade. The phylogeny by Silva et al. [25] found that C. boliviana is more closely related to Antilophia than to other Chiroxiphia; however, phenotypic and behavioral differences suggest otherwise. On the other hand, the Leite maximum likelihood phylogeny places C. boliviana sister to Antilophia and to the other Chiroxiphia species (though there was some uncertainty about this relationship). Overall, this study has given us insights into the ecological niche of C. boliviana in comparison to other closely related manakins. It has depicted relevant ecological niche differences and similarities among manakin taxa, and it sets the stage for further examining niche divergence in relation to morphological and molecular divergence.
S1 Table. Checklist of the species distribution modeling. We followed the guidelines provided by Feng et al. [27].
S3 Table. Contributions (percentage of total) of environmental variables (the highest values are in bold) and AUC values for each distribution model developed. In this analysis, we used 75% of the occurrence points for training and 25% for testing the models. Environmental variables: annualpp (annual precipitation), ppcoldqua (precipitation of coldest quarter), ppdryqua (precipitation of driest quarter), ppwarmqua (precipitation of warmest quarter), ppseason (precipitation seasonality: standard deviation × 100), maxtwarmmo (maximum temperature of warmest month), meantdryqua (mean temperature of driest quarter), tseasoncv (temperature seasonality), maxndvi (overall maximum NDVI), cvndvi (coefficient of variation NDVI), eastness, northness and slope. Manakin species: Cbol (C.
9,328.2
2021-01-13T00:00:00.000
[ "Biology", "Environmental Science" ]
Amplitude Ratios and Neural Network Quantum States
Neural Network Quantum States (NQS) represent quantum wavefunctions by artificial neural networks. Here we study the wavefunction access provided by the NQS defined in [Science, \textbf{355}, 6325, pp. 602-606 (2017)] and relate it to results from distribution testing. This leads to improved distribution testing algorithms for such NQS. It also motivates an independent definition of a wavefunction access model: amplitude ratio access. We compare it to the sample and the sample-and-query access models previously considered in the study of dequantization of quantum algorithms. First, we show that amplitude ratio access is strictly stronger than sample access. Second, we argue that amplitude ratio access is strictly weaker than sample-and-query access, but also show that it retains many of its simulation capabilities. Interestingly, we show such a separation only under computational assumptions. Lastly, we use the connection to distribution testing algorithms to produce an NQS with just three nodes that does not encode a valid wavefunction and cannot be sampled from.
• the ability to compute expectation values of sparse observables.
We give evidence that SQ is a strictly stronger access model than AR and show that AR is a strictly stronger model than PCOND. We derive a robust version of the fidelity estimator and show how to estimate sparse observables with AR access.
NQS postselection gadgets: We show how to postselect the Born distribution of an NQS by changing its network structure. We call such a transformation an NQS postselection gadget and use it to give an NQS with only three nodes (and polynomially bounded weights and biases) that does not encode a valid wavefunction. As the result implies that the NQS distribution cannot be sampled from, it can be understood as a counterpart to the best known hardness-of-sampling result for Restricted Boltzmann Machines, which shows that a certain distribution cannot be represented (and hence sampled from) with a polynomially sized RBM [20]. We briefly discuss other possible applications of the gadgets.
Neural Network Quantum States (NQS)
Ref. [6] proposed the NQS representation of quantum wavefunctions by a hidden Markov model, largely inspired by Restricted Boltzmann Machines (RBM) [23,12]. We briefly review it here. Let v ∈ {−1, +1}^n and define ψ(v) = f_θ(v)/Z_θ, where:

f_θ(v) = exp(Σ_{i=1}^n a_i v_i) ∏_{j=1}^m 2 cosh(b_j + Σ_{i=1}^n v_i W_{ij})    (1)

and Z_θ := Σ_v |f_θ(v)|². The parameters

θ := (a, b, W)    (2)

are all complex-valued, a ∈ C^n, b ∈ C^m, W ∈ C^{n×m}, and fully specify the model. We denote ‖θ‖_∞ = max(‖a‖_∞, ‖b‖_∞, ‖W‖_∞) and assume that ‖θ‖_∞ ≤ poly(n). Note that Z_θ := Σ_v |f_θ(v)|² sums over all configurations v and that there is a priori no simple way to evaluate it. In contrast, the numerator f_θ(v) in Eq. 1 can be evaluated to machine precision in time polynomial in m and n. This follows by observing that each of the m factors only depends on b_j + Σ_{i=1}^n v_i W_{ij}, which has at most n terms. See Fig. 1. The set of parameters θ is usually found by variational optimization [6]. This relies on sampling from the Born distribution |ψ_θ(v)|² and gradually updating the network parameters. Sampling from the Born distribution is usually done by "thermalizing" the model with Markov Chain Monte Carlo, or by Gibbs sampling from the Born distribution. The same method is then used to sample from the Born distribution of a trained model. There is generally no guarantee that the Markov chains, during either the training or the testing phase, converge to the target distribution rapidly for a given θ.
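To make the model concrete, here is a minimal numerical sketch (ours, not code from Ref. [6]) of evaluating f_θ(v) via the closed form of Eq. 1 and of a single-spin-flip Metropolis chain targeting the Born distribution; the parameter values and chain length are illustrative assumptions:

import numpy as np

def log_f_theta(v, a, b, W):
    # log of the unnormalized amplitude in Eq. 1:
    # f(v) = exp(a.v) * prod_j 2*cosh(b_j + sum_i v_i W_ij), complex parameters.
    return v @ a + np.sum(np.log(2.0 * np.cosh(b + v @ W)))

def metropolis_born_sample(a, b, W, n, steps=10_000, seed=0):
    # Single-spin-flip Metropolis chain targeting |f(v)|^2 / Z; as noted
    # above, rapid convergence is not guaranteed for arbitrary theta.
    rng = np.random.default_rng(seed)
    v = rng.choice([-1, 1], size=n)
    logp = 2.0 * np.real(log_f_theta(v, a, b, W))   # log|f(v)|^2
    for _ in range(steps):
        w = v.copy()
        w[rng.integers(n)] *= -1                    # flip one visible node
        logp_new = 2.0 * np.real(log_f_theta(w, a, b, W))
        if np.log(rng.random()) < logp_new - logp:  # Metropolis acceptance
            v, logp = w, logp_new
    return v

# toy instance: n = 4 visible, m = 3 hidden nodes, random complex parameters
rng = np.random.default_rng(1)
n, m = 4, 3
a = rng.normal(size=n) + 1j * rng.normal(size=n)
b = rng.normal(size=m) + 1j * rng.normal(size=m)
W = rng.normal(size=(n, m)) + 1j * rng.normal(size=(n, m))
print(metropolis_born_sample(a, b, W, n))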
Remarkably, we show in Sec. 4.2 that there exist parameters for which no such Markov chain converges.
Figure 1: An NQS is composed of hidden (h) and visible (v) nodes arranged in a bipartite graph. Every node can be in a state +1 or −1. Each edge (i, j) of the graph carries a complex-valued interaction strength W_ij and every node carries a local field a_i (for visible) or b_j (for hidden). The model encodes the unnormalized wavefunction as a "marginal amplitude" over hidden nodes (Eq. 1). It can be sampled from using Gibbs sampling.
NQS and Restricted Boltzmann Machines (RBMs): NQS were inspired by Restricted Boltzmann Machines (RBMs) [6] and, while the two models share many similarities, there are important differences. An RBM represents a distribution p_θ(v) for v ∈ {−1, +1}^n by a marginal over the Gibbs-Boltzmann distribution of an Ising model on a bipartite graph. The model is, in the simplest setting, defined by a set of real-valued weights and biases (local fields). The output distribution is represented as the thermal distribution marginalized over the hidden nodes:

p_θ(v) ∝ Σ_{h ∈ {−1,+1}^m} exp(Σ_i a_i v_i + Σ_j b_j h_j + Σ_{i,j} v_i W_{ij} h_j)    (3)

Even though Eq. 1 and Eq. 3 look very similar, an NQS is not straightforwardly represented by a marginal distribution. The model instead implicitly defines a Born distribution:

p_θ(v) = |f_θ(v)|²/Z_θ, with f_θ(v) = Σ_{h ∈ {−1,+1}^m} exp(Σ_i a_i v_i + Σ_j b_j h_j + Σ_{i,j} v_i W_{ij} h_j)    (4)

This allows for interference between the summands. Because the network parameters can be complex-valued, such interference can be destructive. This means that Eq. 4 can evaluate to zero. In comparison, the marginal sum for an RBM in Eq. 3 is lower-bounded by 2^m exp(−‖a‖_∞ n), which is large for m ≫ n and small ‖a‖_∞. In such a setting, the RBM cannot faithfully represent zeros in the output distribution. We show in Sec. 4.2 that this implies that there exist parameters for which the NQS does not define a valid distribution.
Approximations
Here we introduce some notions of approximation that will be used throughout the work. We also state and justify the key assumption (Assumption 1), which ensures that the notion of amplitude ratios is well-defined.
Additive approximations: Let ε ≥ 0. An ε-additive approximation R̃ to a real number R is a real number that satisfies:

|R̃ − R| ≤ ε

Relative approximations: Let ε > 0. An ε-relative approximation R̃ to a non-negative real number R ∈ R⁺₀ is a real number that satisfies:

(1 + ε)^{−1} R ≤ R̃ ≤ (1 + ε) R

Lemma 1. Fix ε > 0. Let Q̃ ∈ R and R̃ ∈ R be ε-relative approximations to Q, R respectively. Then Q̃/R̃ is a 3ε-relative approximation to Q/R, and Q̃R̃ is a 3ε-relative approximation to QR.
Proof. First note that:

(1 + ε)^{−2} Q/R ≤ Q̃/R̃ ≤ (1 + ε)² Q/R

Set 1 + ε′ = (1 + ε)² as the degree of relative approximation to the ratio, from which ε′ = 2ε + ε² ≤ 3ε. Given an ε-relative approximation Q̃ to Q, 1/Q̃ is an ε-relative approximation to 1/Q. This means that Q̃R̃ is a 3ε-relative approximation to QR.
Complex numbers: We will require a similar notion of approximation for complex numbers. To simplify our analysis, we use the following convention: we assume that complex numbers are stored in polar form as e^{iα} R := (α, R) for α ∈ [0, 2π) and R ∈ R⁺₀, and that the phase α is stored exactly. By an ε-relative approximation to a complex number C = e^{iα} R, we mean a number C̃ = e^{iα̃} R̃ that satisfies:

(1 + ε)^{−1} R ≤ R̃ ≤ (1 + ε) R and α̃ = α

Lemma 1 works for this notion of approximation if we assume that the result of adding two phases is always mapped back to [0, 2π).
Machine precision: Machine precision is an upper bound on the relative error due to rounding in floating-point arithmetic: numbers are represented with a finite number of bits, which leads to usually insignificant additive errors. These errors usually imply good relative approximation.
We define this more precisely now:
Lemma 2. Let R ∈ R⁺ and ε ∈ [0, 1). Assuming ε ≤ R, an ε²/2-additive approximation R̃ to R is also an ε-relative approximation to R.
Proof. This follows from the definition of relative approximation: since ε ≤ R and ε < 1, an additive error of ε²/2 is at most an ε/(1 + ε) fraction of R, which is what the relative bounds require.
Throughout this work, we make the simplifying assumption that by evaluation to machine precision we mean evaluation to ε-relative error in poly(log(1/ε)) time, and we justify this shortly. We will often require evaluation of some quantities to machine precision in poly(n, log(1/ε)) time, where n is the input size of the problem. From this, we bind the error parameter to the input size and simply assume that standard functions, such as exp or cosh, can be efficiently evaluated to 2^{−poly(n)} error for arguments with magnitude bounded by some polynomial poly(n) (similarly to Eq. 1). The following is an important consequence of Lemma 2:
Corollary 1. If Q and R can be efficiently evaluated to machine precision, then so can 1/Q, QR and Q/R.
The key simplifying assumption
To make use of Corollary 1, we will often need the following assumption regarding wavefunctions:
Assumption 1. |ψ(v)| ≥ 2^{−poly(n)} or ψ(v) = 0, for some sufficiently large polynomial poly(n).
This always holds for any wavefunction represented by an NQS, because for ‖θ‖_∞ ≤ poly₁(n) we have |f_θ(v)| ≥ 2^{−poly₂(n)} or f_θ(v) = 0, for a suitable polynomial (perhaps distinct from poly₁(n)), and Z_θ ≤ 2^{poly(n)}. The condition is not automatically guaranteed for other families of wavefunctions, and this may become problematic for the access models that we study in the next section. The assumption also guarantees that a machine-precision approximation to ψ(v) is always representable by poly(n) bits.
Amplitude Ratios
We first compare the NQS wavefunction access model to the pair-cond (PCOND) query access to distributions from Refs. [7,3]. PCOND queries are strictly more powerful than sampling, and we show that they can be simulated efficiently for any NQS distribution. This gives improved algorithms for distribution testing. It also leads to a modification of PCOND for quantum wavefunctions. We define this as amplitude ratio (AR) access and study it independently of NQS. We compare AR to the sample-and-query (SQ) access used in dequantization [25] and probabilistic simulation [27]. We argue that AR is a weaker access model than SQ with normalized queries, but that it retains many of the classical simulation techniques of SQ.
We first observe that amplitude ratios of an NQS can be computed efficiently: for ‖θ‖_∞ ≤ poly(n), the ratio ψ(i)/ψ(j) can be evaluated to machine precision in polynomial time.
Proof. Observe that ψ(i)/ψ(j) = f_θ(i)/f_θ(j). The claim follows from the closed form of f_θ (see Eq. 1) and from noting that each of the m factors only depends on b_j + Σ_i v_i W_{ij}, which has at most n terms. Since we assumed that ‖θ‖_∞ ≤ poly(n) below Eq. 2, f_θ(v) can be efficiently evaluated to machine precision (Sec. 2.3). The result follows from Corollary 1.
There is an analogy between amplitude comparison and the pair-cond (PCOND) oracle access used in conditional distribution testing [3].
Definition 1 (PCOND [3]). Let p be a probability distribution over Ω. The PCOND oracle accepts an input set S that is either S = Ω or S = {i, j} for some i, j ∈ Ω. It returns an element i ∈ S with probability p(i)/p(S), where p(S) = Σ_{i∈S} p(i).
Refs. [3,7] show that PCOND queries lead to significant complexity improvements for some distribution testing tasks.
Table 1. Distribution testing problems (e.g., is p_θ(v) ε-close to the uniform distribution?) with the number of random samples and of PCOND queries they require, following Refs. [3,7].
Theorem 1. Given the ability to sample from the Born distribution of an NQS with ‖θ‖_∞ ≤ poly(n), the PCOND oracle for that distribution can be simulated efficiently.
To show that NQS allows PCOND queries, fix Ω = {−1, 1}^n in the definition of PCOND.
1. On input S = Ω, output a sample from the Born distribution encoded by the NQS.
2. On input S = {i, j} for i, j ∈ Ω, compute

r = |f_θ(i)|² / (|f_θ(i)|² + |f_θ(j)|²)    (12)

and return i or j according to an r-biased coin flip (if both |f_θ(i)|² and |f_θ(j)|² are zero, return one of i, j uniformly at random); a sketch of this pair query follows below.
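A minimal sketch of the pair query just described (our illustration; i and j are assumed to be ±1 configuration arrays):

import numpy as np

def pcond_pair_query(i, j, a, b, W, rng):
    # Answer a PCOND query on S = {i, j} for an NQS Born distribution:
    # return i with probability r = |f(i)|^2 / (|f(i)|^2 + |f(j)|^2) (Eq. 12),
    # computed in the log domain to avoid overflow.
    log_f = lambda v: v @ a + np.sum(np.log(2.0 * np.cosh(b + v @ W)))
    li = 2.0 * np.real(log_f(np.asarray(i)))   # log|f(i)|^2
    lj = 2.0 * np.real(log_f(np.asarray(j)))   # log|f(j)|^2
    if np.isneginf(li) and np.isneginf(lj):
        return i if rng.random() < 0.5 else j  # both amplitudes zero: u.a.r.
    r = 1.0 / (1.0 + np.exp(lj - li))
    return i if rng.random() < r else j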
The ratio in Eq. 12 is not computed exactly, but we assume that the deviation from its exact value would only be observed after exponentially many coin flips in the input size. Since we are interested in algorithms that make at most polynomially many PCOND queries, we neglect this. Theorem 1 enables efficient implementation of algorithms for NQS distribution testing; see Tab. 1. The algorithms test the total variation distance of two NQS Born distributions and have an exponential advantage in runtime compared to sampling algorithms.
Procedure compare
The key algorithmic tool used in Ref. [3] is a procedure compare that uses PCOND queries for estimating probability ratios. Given that the ratio f_θ(i)/f_θ(j) is not too large or too small, the procedure compare outputs a 1/poly(n)-additive approximation of it with high probability of success using poly(n) many PCOND queries [3]. For an NQS, this ratio can be computed efficiently to machine precision, which offers further simplifications of the algorithms. This motivates the definition of the amplitude ratio wavefunction access model introduced in the next section.
PCOND and RBM: PCOND can also be instantiated for RBMs. The PCOND algorithms of Ref. [3] apply with little modification to RBMs and imply significant speedups for some distribution testing tasks. The results of Ref. [3] were presented as part of a theoretical analysis in conditional property testing and do not have runtimes that would make them immediately practical. No natural and efficient instantiation of the PCOND oracle (such as the one in the context of RBMs) was known to the authors as of mid 2021 [2]. It is therefore possible that the algorithms could be optimized and perhaps used in practice.
Previous quantum information work on PCOND: PCOND access was studied from the quantum computing viewpoint by Sardharwalla, Strelchuk and Jozsa in Ref. [22], but the AR definition (introduced below) is new. The difference between their definition and AR is that their quantum extension of the PCOND oracle (PQCOND) provides access to conditional probabilities associated with the underlying distribution; their oracles are defined at the level of quantum states, while AR access is always defined with respect to some fixed basis/wavefunction. The authors give a PQCOND version of the compare procedure from Ref. [3] and show that PQCOND queries yield polynomial improvements upon many of the PCOND distribution testing results. They also derive results on boolean distribution testing and quantum spectrum testing. There does not seem to be an obvious connection between AR access and PQCOND, but it would be interesting to understand this better.
Amplitude Ratio (AR) Access
Because NQS allow for computation of amplitude ratios to high precision, they should offer stronger access to the wavefunction than PCOND. We study this as a wavefunction access model, independently of NQS.
Definition 2 (Exact AR). Let ψ : Ω → C be a wavefunction over Ω. The AR oracle accepts as an input either an ordered pair S = (i, j) or S = Ω. If S = Ω, it returns a random sample from the Born distribution |ψ(v)|². If S = (i, j), it returns the ratio ψ(i)/ψ(j). If the ratio diverges, it returns a special symbol 'DIV', and it conventionally returns 1 for ψ(i) = ψ(j) = 0.
Definition 2 assumes that the queries are answered exactly. This becomes problematic if ψ(i)/ψ(j) is an irrational number, as the query result is then an infinitely long output. This issue can be dealt with as follows:
Definition 3 (AR).
Let ψ : Ω → C be a wavefunction over Ω subject to Assumption 1. For ε ∈ [0, 1), the AR(ε) oracle accepts as an input either an ordered pair S = (i, j) or S = Ω. If S = Ω, it returns a random sample from the Born distribution |ψ(v)|² over v ∈ {−1, 1}^n. If S = (i, j), it returns an ε-relative approximation to ψ(i)/ψ(j). If the ratio diverges, it returns 'DIV', and it conventionally returns 1 for ψ(i) = ψ(j) = 0. We write AR := AR(ε) if ε scales as 2^{−poly(n)} for some polynomial poly(n).
SQ vs AR
We now compare AR with another type of access to quantum wavefunctions, the sample-and-query (SQ) access. SQ was defined in Ref. [25], but a closely related notion, computational tractability, with additional computational requirements, was previously used in Ref. [27]. There is a difference between the two definitions: Ref. [25] defines SQ access without normalizing the queries but subsequently assumes knowledge of the normalization factor (see for example Prop. 4.2 of [25]), while Ref. [27] assumes normalization (as well as efficiency) in the definition of computationally tractable states (Ref. [27], Def. 1). Tang's definition of SQ also does not treat the underlying object as a wavefunction, but more generally as a real-valued vector with ℓ₂-norm sampling access. This presentation puts less emphasis on the need for the normalization factor. Here we use the SQ access model assuming that the queries yield normalized amplitudes, but impose no efficiency constraints.
Definition 4 (Exact SQ). Let ψ : Ω → C be a wavefunction over Ω. The wavefunction has SQ access if ψ(i) can be computed for any i ∈ {−1, +1}^n and its Born distribution |ψ(i)|² can be sampled from.
The above definition has the same problem as exact AR (Def. 2): if the result of a query is an irrational number, it will not have bounded size. We update it as follows:
Definition 5 (SQ). Let ψ : Ω → C be a wavefunction over Ω subject to Assumption 1. For ε ∈ [0, 1), the SQ(ε) oracle returns a random sample from the Born distribution |ψ(v)|² on a sampling query, and an ε-relative approximation to ψ(i) on a query input i ∈ Ω.
AR and unnormalized SQ
Tang used SQ with unnormalized queries in [25] but subsequently assumed knowledge of the normalizing factor of the wavefunction throughout the work. Without knowledge of the normalization factor, the only information about the vector available through such access comes from amplitude ratios and sampling. This way, the definition is essentially the same as AR; so why bother with AR? The key reason for defining AR is to emphasize that the normalization factor of the wavefunction is simply unavailable, aside from its empirical estimate by sampling. It seems problematic to guarantee this (at least somewhat) rigorously: does evaluation of the state up to an "arbitrary" normalization factor always prevent one from using it? That subtlety aside, one can also view AR as unnormalized SQ access and the rest of this section as a comparison between the normalized and unnormalized variants of SQ.
AR is not stronger than SQ
AR can be simulated by SQ: a ratio query on (i, j) is answered by two amplitude queries followed by a division, which by Lemma 1 yields a 3ε-relative approximation to ψ(i)/ψ(j); the sampling queries coincide in the two models.
Evidence that AR is weaker than SQ: To show that AR access is in some sense weaker than SQ, we show a conditional separation between variants of the two models under efficiency constraints. SQ is related to the concept of computationally tractable (CT) states [27]: a wavefunction, subject to Assumption 1, is computationally tractable (CT) if both queries in Def. 5 can be implemented by a poly(n)-time randomized algorithm. This can be seen as SQ with an efficient classical sampler and an efficient classical algorithm for the amplitude queries. The key capability of CT states is an efficient algorithm that, given two CT wavefunctions ψ(v) and φ(v), approximates ⟨ψ|φ⟩ in polynomial time to inverse-polynomial precision (see Theorem 3 and Lemma 3 in Ref.
The key capability of CT states is an efficient algorithm that, given two CT wavefunctions ψ(v) and φ(v), approximates ⟨ψ|φ⟩ in polynomial time to inverse-polynomial precision (see Theorem 3 and Lemma 3 in Ref. [27], or equivalently Prop. 4.8 in Ref. [25]). This enables estimation of constant-local bounded observables on CT states or simulation of sparse quantum circuits. Techniques related to the CT framework were used in dequantization algorithms in Ref. [25] and in quantum algorithm analysis in Ref. [11]. Analogously to CT states, we define amplitude ratio (AR) states and show that their fidelities and expectation values of constant-local observables can also be efficiently computed. They subsume CT states, which suggests that the CT requirement can be relaxed to AR in many applications. We show that, subject to Assumption 1, all CT states are AR states. Our evidence that SQ access is somewhat stronger than AR access is the fact that, at machine precision, not all AR states are CT unless #P = FBPP. This is shown using the separation between (exact) counting and uniform sampling by Jerrum, Valiant and Vazirani (Sec. 4 of [15]).

Theorem 2 (There is an exact AR state that is not exact-CT). The proof uses the observation that uniform sampling of solutions to a given boolean formula over n variables in disjunctive normal form (DNF) is easy, while their exact enumeration is #P-complete (Sec. 4 of Ref. [14]). We consider a quantum state that is the uniform superposition over satisfying assignments of the boolean formula and show that its Born distribution can be easily sampled from. The normalization factor of such a state counts the number of satisfying solutions of the formula, which is #P-complete to compute exactly. We show that a good relative approximation to the amplitude determines this quantity.

Given a boolean formula F in DNF over n variables, interpret v ∈ {−1, +1}^n as an assignment to its variables: if v_i = +1, then the i-th variable is true, and if v_i = −1, the i-th variable is false. Define the DNF formula-"state" as

    ψ_DNF(v) := F(v) / √Z,

where F(v) ∈ {0, 1} is the truth value of F on the assignment v and Z := |{v : F(v) = 1}|. Notice that Z is the number of variable assignments that satisfy the formula. For any input v, the predicate |ψ_DNF(v)| > 0 can be tested by plugging the variable assignment into the formula. Any non-zero amplitude evaluates to 1/√Z, from which it is possible to compute the amplitude ratio as required by exact AR. The Born distribution |ψ_DNF(v)|² is the uniform distribution over the satisfying assignments of the boolean formula. This distribution can be sampled from exactly by a polynomial-time randomized algorithm described in Sec. 4 of Ref. [15]. It works as follows. For a DNF boolean formula F = F_1 ∨ F_2 ∨ ... ∨ F_m in n variables, where F_i is a conjunction of literals for each i ∈ [1, m], let S_j ⊆ {−1, +1}^n be the set of satisfying assignments of F_j. Note that |S_j| is easy to compute, because F_j is satisfied exactly when it fixes the subset of variables involved in it, while the remaining variables can take arbitrary values. Let S = ∪_j S_j be the set of all satisfying assignments of F. The aim is to sample uniformly over S, which is achieved by Algorithm 1: repeatedly pick an index j with probability |S_j| / Σ_i |S_i| and then a uniformly random a ∈ S_j; with probability 1/N(a), where N(a) is the number of clauses that a satisfies, output a and halt. With some probability the algorithm does not halt in a given round; if it does not halt, rerun the loop. The output of the algorithm is a uniformly random satisfying variable assignment. This implies that ψ_DNF(v) is an exact-AR state.
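The following Python sketch implements this rejection scheme; the clause encoding (a dict mapping variable index to required value) is our own illustrative convention.

```python
import random

def sample_satisfying_assignment(clauses, n, rng=random):
    """Uniformly sample a satisfying assignment of a DNF formula
    F = F_1 v ... v F_m, following the rejection scheme sketched above.

    Each clause is a dict {variable index: required value in {-1, +1}};
    |S_j| = 2**(n - len(F_j)) because a clause fixes the variables it
    mentions and leaves the remaining ones free.
    """
    sizes = [2 ** (n - len(c)) for c in clauses]
    while True:
        # Pick clause j with probability |S_j| / sum_i |S_i| ...
        j = rng.choices(range(len(clauses)), weights=sizes, k=1)[0]
        # ... and a uniform member of S_j: fixed bits from the clause,
        # free bits uniform.
        a = tuple(clauses[j].get(v, rng.choice((-1, 1))) for v in range(n))
        # Accept with probability 1/N(a); every satisfying assignment
        # is then output with the same probability per round.
        cover = sum(all(a[v] == val for v, val in c.items()) for c in clauses)
        if rng.random() < 1.0 / cover:
            return a

# Example: (x1 AND NOT x2) OR (x2 AND x3) over n = 3 variables.
print(sample_satisfying_assignment([{0: 1, 1: -1}, {1: 1, 2: 1}], 3))
```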
The family of states ψ_DNF is not exact-CT unless #P = FBPP, because ψ_DNF(v) = 1/√Z for any satisfying variable assignment v, and evaluating this exactly is a #P-complete problem, because Z is the number of satisfying solutions of the boolean formula. So unless FBPP = #P, ψ_DNF is an exact-AR state that is not exact-CT.

It remains to show that ψ_DNF is not CT. To do this, we show that CT can compute Z to sufficient accuracy to determine it exactly. First note that Z ≤ 2^n. We want to choose ε so that the ε-relative approximations to Z and Z + 1 can be distinguished for any 0 ≤ Z ≤ 2^n. This happens if (1 + ε)Z < (1 + ε)^(−1)(Z + 1). Since ε < 1, we have (1 + ε)² < 1 + 3ε, so (1 + 3ε)Z < Z + 1 implies (1 + ε)²Z < Z + 1; the former holds for all Z ≤ 2^n once 3ε·2^n < 1. We can therefore choose ε ≤ 2^(−n−2) to recover Z exactly. It follows that if ψ_DNF were CT, Z could be computed exactly by a polynomial-time randomized algorithm (FBPP). This problem is however #P-complete, which gives a contradiction unless #P = FBPP.

We remark that the above argument won't work for ε = 1/poly(n). The reason is that there is a polynomial-time randomized algorithm for approximating the number of satisfying assignments of a DNF formula to ε relative error that outputs the solution with high probability in poly(n, 1/ε) time [17]. This is a worse approximation than what is assumed in the definition of AR states, which shows that one has to use a different argument to separate CT(ε) from AR(ε) for ε scaling as 1/poly(n).

A similar argument leads to a potential separation of SQ and AR in terms of their query complexity:

Lemma 5. Given SQ access to ψ_DNF from the previous theorem, a single query suffices to approximate Z to machine precision.

Proof (sketch). Given ψ_DNF and SQ access to it, the SQ algorithm draws a sample v from |ψ_DNF(v)|² and evaluates ψ_DNF(v) = 1/√Z, which gives a machine-precision approximation to Z.

Conjecture 1. Z for ψ_DNF can be query-efficiently estimated by AR to no better than 1/poly(n) relative error.

The reasoning behind the conjecture is the following. Given AR access to ψ_DNF, any amplitude ratio query on a pair of nonzero amplitudes evaluates to 1. Any amplitude ratio query on a pair of one zero and one nonzero amplitude gives either 0 or 'DIV'. By finding a v with zero amplitude and a single w with a nonzero amplitude, AR can detect whether ψ_DNF(z) > 0 for any z ∈ {−1, +1}^n. Possibly the best way to approximate Z with this access seems to be variants of importance/nested sampling (see for example [17,24,8,13]), which at best lead to poly(n)-sample algorithms that estimate Z to 1/poly(n) relative error. It would be extremely surprising if these methods were not asymptotically optimal. I don't have a proof though.

Theorem 3. Assuming Conjecture 1, there is a task that requires just one SQ query, but at least poly(n)-many AR queries.

AR is stronger than sample access

This is shown by noting that AR access implies PCOND access and then referencing the known results from conditional distribution testing that separate PCOND and sample access: AR access is stronger than PCOND access, and PCOND is strictly stronger than sample access (Tab. 1). It is worth noting that the algorithms of [2] work even if the AR ratios can be approximated only to 1/poly(n) additive error. It may therefore be interesting to study weaker variants of AR.

Lemma 7. AR is stronger than PCOND.

Proof (sketch). AR can decide whether |ψ(i)/ψ(j)|² ≤ 2^(−n) in a single query, but the same task requires exponentially many PCOND queries. This is because the PCOND model can estimate the ratio only by sampling, which is limited by the usual concentration bounds. These imply a lower bound of exponentially many queries.
One can object that the above comparison is rather unfair, because AR computes the amplitude ratio to a high degree of precision with a single query, while PCOND can only do so query-efficiently to 1/poly(n) additive error. We strengthen the above lemma to a separation from a version of high-precision PCOND that can query for ratios of the Born distribution (Lemma 8): there is a pair of states ψ and φ whose overlap can be computed with 2 AR queries, yet the states have the same, uniform, Born distributions, so all PCOND ratio queries evaluate to 1. The PCOND access model therefore cannot evaluate the overlap of ψ and φ.

Summary

The results of Sec. 3.3 can be summarized as

    sample ≺ PCOND ≺ AR ≺* SQ,

since PCOND access is separated from sampling access to a Born distribution of a wavefunction by Tab. 1, AR is separated from PCOND access by Lemma 8, and SQ is almost surely (that's why the star) separated from AR by Theorems 2 and 3. By separation, we mean that there exists at least one problem that can be query-efficiently solved by one of the classes, but not by the other.

AR and probabilistic simulation

We now show that AR states retain many simulation capabilities of CT states. Most of the results follow from standard algorithms used with NQS that implicitly used AR access. We improve some of them, for example by a robustification of the AR fidelity estimator, to give the closest possible analogues of the previous results for CT states [27].

AR fidelity estimator: Given two amplitude ratio (AR) states ψ and φ, there is a randomized algorithm that approximates their fidelity |⟨ψ|φ⟩|² in polynomial time to inverse-polynomial precision. Medvidovic and Carleo use the following estimator for NQS in Ref. [21]:

    |⟨ψ|φ⟩|² = E_{x∼|φ(x)|²}[ ψ(x)/φ(x) ] · E_{y∼|ψ(y)|²}[ φ(y)/ψ(y) ].

Every term in the summation uses two AR ratio queries, which can be seen from the regrouping

    (ψ(x)/φ(x)) · (φ(y)/ψ(y)) = (ψ(x)/ψ(y)) · (φ(y)/φ(x)).

This can be operationally understood as follows: sample (x, y) ∼ |φ(x)|²|ψ(y)|² (which is a valid product distribution) and compute the product of AR ratios on ψ for (x, y) and φ for (y, x). We now show that the estimator has finite variance and give a robust version of it with fast concentration around the mean. Set

    G(x, y) := (ψ(x)/ψ(y)) · (φ(y)/φ(x)),

and notice that

    E_{(x,y)∼|φ(x)|²|ψ(y)|²}[ G(x, y) ] = ⟨φ|ψ⟩⟨ψ|φ⟩ = |⟨ψ|φ⟩|².

From this, the variance becomes

    σ²[G] = E[|G|²] − |E[G]|² = 1 − |⟨ψ|φ⟩|⁴ ≤ 1,

using E[|G|²] = Σ_{x,y} |ψ(x)|²|φ(y)|² = 1. While the main utility of G(x, y) is a simplification of the variance analysis, it could also allow for additional cancellation between f_ψ(x) and f_φ(y) that could not be exploited in the product-of-means estimator of Ref. [21].

Robust AR fidelity estimator: Despite the estimator having finite variance, the random variable G(x, y) is unbounded, because it contains a ratio of wavefunctions evaluated at two distinct points and may explode if either f_ψ(y) or f_φ(x) is close to zero. If we are unlucky enough to hit such an outlier in the empirical estimation, it can significantly skew the statistics. It is therefore desirable to use an estimator that is less affected by outliers; such estimators are often called robust. We show how to make the above estimator robust using the median-of-means amplification.

Theorem 4 (Median-of-means estimator). Let k, ℓ be two integers. Define the empirical mean Ḡ as Ḡ = Σ_{i=1}^{k} G_i / k. Compute ℓ such empirical means and use their median as the estimator. Then

    Pr[ |median − E[G]| ≥ 2σ[G]/√k ] ≤ e^(−ℓ/8).

Proof. We have σ²(Ḡ) = σ²[G]/k. By the Chebyshev inequality,

    Pr[ |Ḡ − E[G]| ≥ 2σ[G]/√k ] ≤ 1/4.

The median-of-means condition in Eq. 24 is violated only if the majority of the empirical means violate the Chebyshev condition in Eq. 25. The probability of this happening is at most Pr[ Bin(ℓ, 1/4) ≥ ℓ/2 ], where the inequality follows by monotonicity. We can bound this by e^(−ℓ/8), where we used a tail bound on the binomial distribution.
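A minimal Python sketch of the estimator of Theorem 4, with the parameters used in Corollary 2; draw_G is a placeholder for one evaluation of G(x, y) via two AR ratio queries, and we assume a real-valued estimate (for complex G one would take medians componentwise).

```python
import random
import statistics

def median_of_means(draw_G, eps, n, rng=random):
    """Median-of-means amplification of Theorem 4 with k = 4/eps**2
    samples per mean and l = 8n means, as in Corollary 2.

    `draw_G(rng)` should sample (x, y) ~ |phi(x)|^2 |psi(y)|^2 and
    return (psi(x)/psi(y)) * (phi(y)/phi(x)) via two AR ratio queries.
    """
    k = max(1, int(4 / eps ** 2))
    l = max(1, 8 * n)
    means = [sum(draw_G(rng) for _ in range(k)) / k for _ in range(l)]
    return statistics.median(means)
```

The total number of AR queries is 2kl = 64n/ε², matching Corollary 2; the median step converts the Chebyshev-level 1/4 failure probability of each mean into the exponentially small e^(−ℓ/8).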
This technique is commonly used in computer science (see e.g. [15] or a recent review [18]) and was previously used with SQ access/CT states by Tang [25], where it was presented as a standard technique. An alternative way to make the SQ estimators robust was used by Van den Nest in Ref. [27], but it does not work for AR. A corollary of Theorem 4 is a polynomial-time robust estimator of the overlap:

Corollary 2 (Robust AR fidelity estimator). There is an algorithm that estimates |⟨ψ|φ⟩|² to ε-additive error with probability 1 − e^(−n) using 64n/ε² AR queries.

Proof. Use the median-of-means estimator of Theorem 4. One evaluation of the random variable G(x, y) in Eq. 21 costs two AR queries. Use the median of ℓ empirical means of G(x, y) (each over k evaluations of the random variable) as the estimator of the overlap. Setting k = 4/ε² and ℓ = 8n in Theorem 4 gives

    Pr[ |median − |⟨ψ|φ⟩|²| ≥ ε ] ≤ e^(−n),

which follows from 2σ[G]/√k = σ[G]·ε ≤ ε, using σ[G] ≤ 1. The overall number of AR queries that achieves this is at most 2ℓk = 64n/ε².

We briefly compare this estimator to the CT estimator of Ref. [27]. The algorithms achieve the same goal, and their asymptotic query complexities in ε and n are the same. The above fidelity estimator, however, does not require computation of the amplitudes, but only of amplitude ratios, which is a computationally easier problem, as argued in Theorem 2.

Estimating sparse observables: Given AR access, there is an algorithm for estimating the expectation value of a (hermitian) observable O expanded in the same basis as the wavefunction. The algorithm is well known in computational physics as local observable estimation:

    ⟨ψ|O|ψ⟩ = E_{j∼|ψ(j)|²}[ X(j) ],    X(j) := Σ_k O_{jk} ψ(k)/ψ(j).

This can be interpreted as sampling from the Born distribution |ψ(j)|² and querying the f_ψ(k)/f_ψ(j) AR ratios for every k that appears in the inner sum X(j). This means that if O has at most poly(n) non-zero entries in each row, we can compute this estimator in polynomial time. As previously, we bound the variance of this estimator. We have E[X] = ⟨ψ|O|ψ⟩ and

    E[|X|²] = Σ_j |Σ_k O_{jk} ψ(k)|² = ⟨ψ|O²|ψ⟩.

(The cancellation of |ψ(j)|² in E[|X|²] is problematic for all j with ψ(j) = 0, because the random variable becomes unbounded on values outside the support of |ψ(j)|². The expectation value would then depend on the values that the observable O_{jk} takes on samples outside the support of |ψ(j)|², which is undesirable. Let Ω be the domain of |ψ(j)|² and Σ := {j ∈ Ω : |ψ(j)|² > 0} be its support. To alleviate this, it is more natural to define E[|X|²] := Σ_{j∈Σ} |ψ(j)|² |X(j)|², so that the cancellation of |ψ(j)|² is well defined; we can write this as Σ_{j∈Σ} |Σ_k O_{jk} ψ(k)|². Irrespective of the definition of E[|X|²], the inequality below holds. I want to thank Giuseppe Carleo for pointing out this subtlety.)

We have ⟨ψ|O²|ψ⟩ = Σ_λ λ² |⟨λ|ψ⟩|² ≤ λ²_max, where λ_max is the largest eigenvalue of O. For hermitian matrices the largest eigenvalue coincides with the operator norm ‖O‖ = λ_max, from which we have ⟨ψ|O²|ψ⟩ ≤ ‖O‖². It follows that σ²[X] ≤ ‖O‖².

Theorem 5 (Estimating sparse observables with AR). There is an algorithm that estimates ⟨ψ|O|ψ⟩ for ‖O‖ ≤ 1 to ε-additive error with probability at least 1 − e^(−n) using at most 32sn/ε² AR queries, where s is the row-sparsity of O.

Proof. Use the median-of-means estimator of Theorem 4. One evaluation of the random variable X costs at most s queries, where s is the row-sparsity of O. The rest follows as in Corollary 2.
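A sketch of the corresponding sampling routine in Python; sample, ar_ratio and row are assumed placeholders for the AR sampling query, the AR pair query, and the sparse-row access to O, respectively. Wrapping the plain mean below in the median-of-means routine shown earlier yields the guarantees of Theorem 5.

```python
def local_estimate(sample, ar_ratio, row, num_samples):
    """Estimate <psi|O|psi> with AR access, as described above.

    `sample()`       draws j ~ |psi(j)|^2 (the AR sampling query),
    `ar_ratio(k, j)` returns psi(k)/psi(j) (the AR pair query),
    `row(j)`         yields the nonzero entries (k, O_jk) of row j of O,
    so each evaluation of X costs at most s ratio queries for an
    s-sparse row.
    """
    total = 0.0
    for _ in range(num_samples):
        j = sample()
        x = sum(o_jk * ar_ratio(k, j) for k, o_jk in row(j))
        # For hermitian O the expectation is real, so keep the real part.
        total += complex(x).real
    return total / num_samples
```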
AR and dequantization?

SQ access was studied in dequantization, and it is natural to ask whether AR can lead to some improvements in that framework. We outline mostly negative results for improvements of Ref. [25] using AR. Proposition 4.2 of Ref. [25] gives an algorithm for estimating the inner product ⟨x, y⟩ of two real vectors x and y, using the knowledge of the normalization constant of one of the vectors and with an error that depends on the normalization factors of both. Assuming ⟨x, y⟩ ≥ 0 (i.e., for non-negative vectors), the AR fidelity algorithm gives a good estimator of the inner product without knowledge of the normalizing factors. Proposition 4.3 of Ref. [25] crucially depends on computing a rejection sampling filter, which can be computed with O(k²) AR ratio queries to V, assuming that the entire matrix has been normalized with the same (possibly unknown) normalization factor. This assumption is most likely too strong to be useful in the context of Tang's algorithm. Lastly, the modified version of the FKV algorithm [9] (Algorithm 2 in [25]) crucially relies on sampling from the distribution induced by the norms of the matrix columns (normalization factors). There does not seem to be a simple way to circumvent this with AR access. It would be really interesting to see whether some of these limitations could be avoided in different dequantization algorithms.

Postselection Gadgets

Here we explore a different way of accessing the wavefunction that can be implemented with an NQS: postselection gadgets. Postselection gadgets are maps between NQSs that allow for a different set of conditional queries to the Born distribution encoded in the NQS, called subcube conditional queries in Ref. [4]. In contrast to PCOND, postselection gadgets cannot be instantiated efficiently for an arbitrary NQS. We use this property to show that there is an NQS with just three nodes that does not encode a valid wavefunction and cannot be sampled from, a counterpart to similar results for RBMs [19,20]. It is possible that the gadgets may have applications beyond this, but the analysis seems to be beyond the reach of the techniques known to the author.

Postselection gadgets: Let |ψ_θ(v|r)|² := p_θ(v|r), where v ∈ {−1, 1}^n and r ∈ {−1, 1, ∗}^n, be the distribution |ψ_θ(v)|² := p_θ(v) conditioned on the event that, at every non-∗ position of r, the corresponding bit of v is fixed to the bit of r. A postselection gadget is a function that transforms an NQS θ to another NQS θ̃ (possibly with additional nodes), such that |ψ_θ(v|r)|² = |ψ_θ̃(v)|².

Theorem 6. For every NQS θ and every r ∈ {−1, 1, ∗}^n, a postselection gadget exists; it appends one hidden node per non-∗ position of r.

Proof. The following proof uses the notation of Sec. 2.1. For every non-∗ bit of r, introduce a hidden node g and attach it to the corresponding visible node. Set the bias on this hidden node to i(π/4) and couple it to the visible node with strength −i(π/4)r_i (see Fig. 2). This gives

    f_θ̃(v) = f_θ(v) · Π_{i : r_i ≠ ∗} 2 cosh( i(π/4)(1 − r_i v_i) ).   (41)

Let F be the event that v_1 = r_1, v_2 = r_2, ..., v_k = r_k (an event F ⊆ Ω is a subset of the domain Ω of the probability distribution). Because 2 cosh(iπ/2) = 0, we have that

    p_θ̃(v) ∝ 1_F(v) · p_θ(v).   (42)

Let E ⊆ {−1, +1}^n be an arbitrary event. We have

    p_θ̃(E) = p_θ̃(E ∩ F),   (43)

and from Eq. 42 it follows that

    p_θ̃(E ∩ F) = p_θ(E ∩ F)/p_θ(F) = p_θ(E|F).   (44)

Hence, from Eq. 43, p_θ̃(E) = p_θ(E|F). Thus, the output distribution of the augmented state is exactly the conditional of the original distribution. See Fig. 2. Note that the function θ → θ̃ is easy to compute.

Postselection gadgets allow for sampling from a subset of the conditional distributions of the distribution encoded into the NQS. Additional examples of postselection gadgets can be found in the appendix.
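A small numerical check of the gadget factor in Eq. 41, written in Python; it verifies that a single gadget node annihilates exactly the amplitudes violating the postselection event, and that stacking two contradictory gadgets kills every amplitude (which anticipates the construction of the next section).

```python
import cmath

def gadget_factor(v_bit, r_bit):
    """Factor 2*cosh(i*pi/4 - i*(pi/4)*r*v) contributed by one gadget
    hidden node with bias i*pi/4 and coupling -i*(pi/4)*r, as in Eq. 41."""
    return 2 * cmath.cosh(1j * cmath.pi / 4 * (1 - r_bit * v_bit))

for r in (-1, 1):
    for v in (-1, 1):
        f = gadget_factor(v, r)
        # Nonzero (= 2) iff the visible bit matches r; 2*cosh(i*pi/2) = 0.
        assert (abs(f) > 1e-12) == (v == r)

# Two gadgets fixing the same visible node to +1 and to -1 zero out
# every amplitude -- the "zero-valued" NQS discussed below.
assert all(abs(gadget_factor(v, +1) * gadget_factor(v, -1)) < 1e-12
           for v in (-1, 1))
```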
The construction almost trivially extends to Deep Boltzmann Machines [10,5]. While the analysis was inspired by its RBM counterpart by Long and Servedio in Ref. [19], the gadget construction given here does not straightforwardly extend to RBMs.

Not all NQS encode valid quantum states

We show that many NQS do not encode valid quantum states. This follows from the fact that the postselection gadget allows postselection on probability-zero events. Any NQS can be modified by adding two hidden nodes g_1 and g_2, as in Sec. 4, that fix the value of the visible node v_1 to 1 and to −1, respectively. Eq. 41 then gives f_θ̃(v) = 0 for every v, which means that the encoded "wavefunction" is identically zero. The resulting NQS then does not encode any wavefunction, because it cannot be normalized. The smallest NQS for which this works has two hidden nodes and one visible node. Such an NQS naturally cannot be sampled from by any algorithm.

Figure 3: An NQS that does not encode a valid quantum state can be constructed by appending two auxiliary nodes, each of which fixes the value of v_1 to +1 and −1, respectively.

There does not seem to be an analogous simple construction for restricted Boltzmann machines with real-valued coefficients. The reason is, as shown earlier, that RBMs cannot encode zeroes in the output probability: each outcome probability is lower-bounded by 2^m exp(−‖a‖_∞ n), and the contribution of any "gadget" hidden node to the output RBM probability is a factor of cosh(x) for some real x, which is lower-bounded by 1. We remark that the existence of such "zero-valued" NQS is not significant for applications in which the network is trained by sampling from the output distributions of a sequence of NQSs. Some care must however be taken when encoding quantum states directly.

Other applications?

The postselection gadget can in principle be used for postselection without retries. However, even when the original NQS θ can be easily sampled from, it may be (and it is very likely in some cases) that the Gibbs sampling algorithm for the modified NQS θ̃ is no longer efficient. Still, one toy example that may suggest that this could be more efficient than resampling (but that also seems useless) is the case in which the encoded distribution is a product distribution: fixing output bits of such a distribution using gadgets will not lead to any slowdown. In the case where the sampling algorithm remains efficient for the appropriate sequence of conditionals, one can also obtain a very crude multiplicative approximation of the normalizing factor of the NQS wavefunction, essentially by retracing the RBM algorithm presented in Ref. [19]. The question of characterizing which conditional gadgets allow for efficient sampling, however, remains wide open. Lastly, Ref. [16] used a similar gadget construction to simulate universal random circuits with NQSs. The postselection gadgets from Thm. 6 can be seen as an extension of their result. See Appendix A for additional postselection gadgets and Appendix B for encoding of Pauli gates.

Discussion

We studied the access model offered by neural network quantum states (NQS), which, along with connections to previous results from conditional distribution testing, motivated the definition of the amplitude ratio (AR) access model. We related AR to sample and query (SQ) access and showed that it retains some of the simulation capabilities of SQ. We gave some evidence that AR may be weaker than SQ and showed that existing results in distribution testing imply that AR is stronger than sample access.
We then considered alternative access to the NQS wavefunctions by means of subcube conditional queries and showed that even small NQS may not encode valid distributions. Our work leaves several questions open:

• It would be interesting to further explore the connections between AR and dequantization and to understand whether it is possible to meaningfully relax the SQ normalization requirement.

• Both definitions of CT and AR states assume the existence of a classical randomized sampler for the Born distribution. One may thus ask whether there are any nontrivial states in a quantum-classical generalization of CT and AR states, where one can sample efficiently using a quantum algorithm but still estimate the ratios or amplitudes classically. Such states may be useful for the construction of quantum algorithms based on the conditional property testing results of Refs. [3,7]. It might, for example, be interesting to understand how this interacts with some of the known supremacy results in which approximating a target amplitude is known to be #P-hard, yet there is an efficient quantum algorithm that samples the output [1]. It may be that, while the amplitudes are hard to compute, approximating their ratios remains tractable.

• It would also be interesting to understand, perhaps numerically, whether and for which problems the NQS postselection gadgets could provide an advantage over postselection by resampling in some of the applications of NQS.

Acknowledgements

I want to sincerely thank Ashley Montanaro and Noah Linden for their help. I also want to thank Giuseppe Carleo, Srini Arunachalam, Sergii Strelchuk, James Stokes and Juani Bermejo-Vega for their suggestions and discussion.
Nowhere-Zero 3-Flows in Signed Graphs

Tutte observed that every nowhere-zero k-flow on a plane graph gives rise to a k-vertex-coloring of its dual, and vice versa. Thus nowhere-zero integer flow and graph coloring can be viewed as dual concepts. Jaeger further showed that if a graph G has a face-k-colorable 2-cell embedding in some orientable surface, then it has a nowhere-zero k-flow. However, if the surface is nonorientable, then a face-k-coloring corresponds to a nowhere-zero k-flow in a signed graph arising from G. Graphs embedded in orientable surfaces are therefore the special case in which the corresponding signs are all positive. In this paper, we prove that if an 8-edge-connected signed graph admits a nowhere-zero integer flow, then it has a nowhere-zero 3-flow. Our result extends Thomassen's 3-flow theorem on 8-edge-connected graphs to the family of all 8-edge-connected signed graphs, and it also improves Zhu's 3-flow theorem on 11-edge-connected signed graphs.

Introduction. Graphs considered in this paper may have multiple edges and loops unless otherwise stated. Let G = (V, E) be a graph and let k be a positive integer. An ordered pair (D, f) is called a k-flow of G if D = (V, A) is an orientation of G and f : A → {0, ±1, ..., ±(k − 1)} is an assignment of flows such that, for each v ∈ V,

    Σ_{e ∈ E⁺(v)} f(e) = Σ_{e ∈ E⁻(v)} f(e),

where E⁺(v) is the set of all arcs leaving vertex v in D and E⁻(v) is the set of all arcs entering vertex v. We say that the k-flow (D, f) is nowhere-zero if f(e) ≠ 0 for every e ∈ A.

The concept of nowhere-zero integer flow was introduced by Tutte in 1954, and the theory of integer flows provides an interesting way to extend theorems about region-coloring planar graphs to general graphs [12,13] (see also [15]). Tutte observed that every nowhere-zero k-flow on a plane graph gives rise to a k-vertex-coloring of its dual, and vice versa. Thus nowhere-zero integer flow and graph coloring can be viewed as dual concepts, and the above observation of Tutte is often referred to as the duality theorem. One of the major open problems in this research area is Tutte's 3-flow conjecture, which is exactly the dual version of Grötzsch's 3-color theorem on planar graphs [3,4]. Thomassen [11] made a breakthrough on this conjecture by establishing the following weaker version.

Theorem 1.1 (Thomassen [11]). Every 8-edge-connected graph has a nowhere-zero 3-flow.

As proved by Kochol [7], a minimum counterexample to the 3-flow conjecture is 5-edge-connected. Therefore, the above theorem is actually just one step away from a resolution.

The aforementioned duality theorem cannot be extended directly to embedded graphs. (See DeVos et al. [2] for an asymptotic version.) Nevertheless, Jaeger [5] showed that if a graph G has a face-k-colorable 2-cell embedding in some orientable surface, then it has a nowhere-zero k-flow. Interestingly, if the surface is nonorientable, then this coloring corresponds to a nowhere-zero k-flow in a signed graph arising from G. It is due to their great theoretical interest that integer flows in signed graphs have also been the subject of extensive research.
Let us define a few terms before proceeding. A signed graph is a pair (G, σ), where G is a graph and σ : E(G) → {1, −1} is a signature of G. An edge e is called positive if σ(e) = 1 and negative otherwise. Each edge e = xy of a signed graph (G, σ) is composed of two half-edges h_x and h_y, where h_x is incident with x and h_y is incident with y. An orientation D of (G, σ) assigns every half-edge a direction in the following way: if e = xy is positive, then h_x and h_y are directed both from x to y, or both from y to x (see Figure 1); if e = xy is negative, then the directions of h_x and h_y are opposite. (There are two possibilities: (1) h_x is directed toward x and h_y is directed toward y; (2) h_x is directed away from x and h_y is directed away from y. See Figure 1.) A negative edge e = xy is called a source edge if e is directed toward both x and y, and a sink edge otherwise. In the literature, an oriented signed graph is also called a bidirected graph. If all edges of (G, σ) are positive, then a signed graph is equivalent to a graph, so we can view signed graphs as generalizations of graphs.

The concept of nowhere-zero integer flow in graphs carries over naturally to signed graphs, and the following is a well-known conjecture on integer flows in signed graphs.

Conjecture (Bouchet). Every signed graph admitting a nowhere-zero integer flow has a nowhere-zero 6-flow.

Despite tremendous research effort, this conjecture remains open; Xu and Zhang [14] confirmed it for 6-edge-connected signed graphs. In [10], Raspaud and Zhu established that every 4-edge-connected signed graph has a nowhere-zero 4-flow provided it admits a nowhere-zero integer flow. Based on Theorem 1.2, Zhu [16] proved the following.

Theorem 1.3 (Zhu [16]). Every 11-edge-connected signed graph admitting a nowhere-zero integer flow has a nowhere-zero 3-flow.

What is the least edge-connectivity that can guarantee the existence of nowhere-zero 3-flows in signed graphs? Zhu posed this as an open question in [16]. With the motivation to improve the bound in Theorem 1.3 and to extend the setting of Theorem 1.1, we establish the following main result in this paper.

Theorem 1.4. Every 8-edge-connected signed graph admitting a nowhere-zero integer flow has a nowhere-zero 3-flow.

It is worthwhile pointing out that the assertion no longer holds if 8 is replaced by 4: let (G, σ) be the signed graph with three vertices in which each pair of vertices is connected by precisely one positive edge and precisely one negative edge. Clearly, G is 4-edge-connected and has a nowhere-zero 4-flow. Nevertheless, it is routine to check that G admits no nowhere-zero 3-flow.
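The routine check can be automated; the following brute-force Python sketch fixes a reference orientation (a positive edge uv contributes +x at u and −x at v, a negative edge +x at both ends) and enumerates all flow values in {±1, ±2}, which covers all bidirected orientations since reversing an edge corresponds to negating its value.

```python
from itertools import product

# Three vertices; each pair joined by one positive and one negative edge.
edges = [(u, v, sign) for (u, v) in ((0, 1), (1, 2), (0, 2))
         for sign in (+1, -1)]

def conserves(values):
    """Check flow conservation at every vertex for the chosen incidences."""
    net = [0, 0, 0]
    for (u, v, sign), x in zip(edges, values):
        net[u] += x
        net[v] += -x if sign == +1 else x
    return all(b == 0 for b in net)

exists = any(conserves(vals) for vals in product((-2, -1, 1, 2), repeat=6))
print("nowhere-zero 3-flow exists:", exists)  # prints False, per the text
```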
Operations. In this section we introduce some operations on signed graphs that will be employed in subsequent proofs.

Flipping. Let (G, σ) be a signed graph and let A be a subset of V(G). Define

    σ'(e) = −σ(e) if e ∈ [A, Ā], and σ'(e) = σ(e) otherwise,

where Ā = V(G) \ A and [A, Ā] is the cut in G consisting of all edges between A and Ā. We say that the signed graph (G, σ') is obtained from (G, σ) by flipping all edges in [A, Ā]. Two signed graphs (G, σ) and (G, σ') are called equivalent if one can be obtained from the other by flipping all edges in a cut. The following two lemmas are well-known facts in graph theory (see [10] and [16]); in particular, the flipping operation does not affect the existence of a nowhere-zero integer flow in a signed graph.

Lemma 2.1. Let (G, σ) and (G, σ') be two equivalent signed graphs and let k be a positive integer. Then (G, σ) has a nowhere-zero k-flow if and only if so does (G, σ').

Throughout, we use n(G, σ) to denote the minimum number of negative edges contained in a signed graph equivalent to (G, σ).

Contraction. Let (G, σ) be a signed graph and let A be a subset of V(G). The signed graph obtained from (G, σ) by contracting A, denoted by (G/A, σ), is the graph arising from (G, σ) by identifying all vertices in A to a single vertex, in which each edge of G with both ends in A becomes a loop, and each edge has the same sign as in (G, σ). Since the sign of a loop is not affected by a flipping operation, the following statement holds.

Lifting. Let (G, σ) be a signed graph and let xy, xz be two edges of G, and let (G', σ') be obtained from (G, σ) by deleting xy and xz and adding a new edge yz with σ'(yz) = σ(xy)σ(xz). We say that the signed graph (G', σ') is obtained from (G, σ) by lifting xy and xz; see Figure 2 for an illustration. Note that x, y, z are not necessarily distinct in this definition. An orientation of (G', σ') can be extended naturally to an orientation of (G, σ) by orienting the two half-edges incident with x as follows: one enters x and the other leaves x; see Figure 2.

Let G be a graph and let x, y be two distinct vertices of G. The local edge-connectivity of G between x and y, denoted by λ_G(x, y), is the maximum number of edge-disjoint paths connecting x and y in G. The following theorem of Mader [9] asserts that local edge-connectivity is preserved under some lifting operation.

Theorem 2.5 (Mader [9]). Let G be a connected loopless graph and let v_0 be a vertex of degree at least 4 such that no edge incident with v_0 is a cut-edge of G. Then G contains two edges v_0v_1 and v_0v_2 such that λ_H(x, y) = λ_G(x, y) for any two vertices x, y different from v_0, where H is the graph obtained from G by lifting v_0v_1 and v_0v_2.

As shown by Tutte [12], a graph G admits a modulo 3-orientation if and only if it has a nowhere-zero 3-flow; this equivalence can be further extended to signed graphs. The remainder of this paper is devoted to a proof of Theorem 3.1. The proof proceeds by induction on |V(G)| + |E(G)|; to make the induction work, we need a generalized concept of graph orientation and a set function from [8], which is a variant of the one introduced by Thomassen in [11].

Let G be a loopless graph. A mapping β : V(G) → Z₃ is called a Z₃-boundary of G if Σ_{v∈V(G)} β(v) ≡ 0 (mod 3). Given a Z₃-boundary β, for each v ∈ V(G) let τ(v) be the integer of smallest absolute value with τ(v) ≡ β(v) (mod 3) and τ(v) ≡ d(v) (mod 2). This mapping τ can be further extended to any nonempty A ⊆ V(G) analogously, where β(A) ≡ Σ_{v∈A} β(v) (mod 3). Since d(A) and τ(A) have the same parity, the following inequality holds.

Lemma 3.3 (Lovász et al. [8]). If d(A) ≥ 6, then d(A) ≥ 4 + |τ(A)|.

Theorem 1.2 is an immediate corollary of the following result, which was derived by refining Thomassen's technique [11] and will be used in our proof.

Theorem 3.4 (Lovász et al. [8]). Let G be a loopless graph, let β be a Z₃-boundary of G, let z₀ ∈ V(G), and let D(z₀) be a preorientation of the set E(z₀) of all edges incident with z₀. Assume that d(z₀) ≤ 4 + |τ(z₀)| and that d(A) ≥ 4 + |τ(A)| for every nonempty A ⊆ V(G) \ {z₀} with Ā ≠ {z₀}. Then D(z₀) can be extended to a β-orientation D of the entire graph G.

When restricted to the disjoint union of an isolated vertex z₀ and a 6-edge-connected loopless graph, the preceding theorem yields the following statement.

Proof. Let m be the number of negative edges of (G, σ). Set r = 1 if m = 2 and r = 0 if m = 3. Let H be the graph obtained from G by first orienting r negative edges as sink edges and the remaining m − r negative edges as source edges, then inserting a new vertex into each negative edge, and finally identifying all these newly inserted vertices to a single vertex z₀. Let G' = H if m = 2, and let G' be obtained from H by replacing one arc leaving z₀ with two parallel arcs entering z₀ if m = 3. Therefore, by Theorem 3.4, the preorientation of the arcs incident with z₀ can be extended to a modulo 3-orientation of the entire graph G', which clearly yields a modulo 3-orientation of (G, σ).
Lemma 3.7. Let G be a loopless graph, let β be a Z₃-boundary of G, let z₀ ∈ V(G), let D(z₀) be a preorientation of the set E(z₀) of all edges incident with z₀, and let S ⊆ V(G) \ {z₀} with |S| ≤ 2.

Let p be the integer in Z₃ with β(z₀) − d(z₀) + 1 ≡ 2p (mod 3), and let q = 7 − d(z₀) − p. Then q ≥ 0 and p + q ≥ 2, as d(z₀) ≤ 5. Let G' be obtained from G by adding a set P of p arcs from S to z₀ and adding a set Q of q arcs from z₀ to S such that each vertex in S has degree at least six in G'. (Such a G' is available because |S| ≤ 2.) Let β'(z₀) be the integer in Z₃ with β'(z₀) ≡ β(z₀) + q − p (mod 3). By the definitions of p and q, we obtain β'(z₀) ≡ (d(z₀) − 1 + 2p) + (7 − d(z₀) − p) − p ≡ 0 (mod 3), so β'(z₀) = 0. For each vertex v ≠ z₀, let P(v) (resp., Q(v)) be the set of all arcs in P (resp., Q) incident with v, and let β'(v) be the integer in Z₃ with β'(v) ≡ β(v) + |P(v)| − |Q(v)| (mod 3).

Assume on the contrary that (G, σ) is a smallest counterexample and, subject to this, that the number of negative edges in (G, σ) is minimum.

Let G' be the loopless graph (with no signature) obtained from the signed graph (G/Ā, σ) by first deleting all negative edges and then deleting all loops incident with z₀, the vertex arising from contracting Ā. We orient all edges between A and z₀ in G' as follows: suppose edge xz₀ in G' with x ∈ A corresponds to edge v_A y in G with y ∈ Ā; then the direction of xz₀ in G' is exactly the same as the direction of v_A y in D'. For convenience, we denote this preorientation of the edges incident with z₀ by D(z₀). Let p(z₀) (resp., q(z₀)) be the number of resulting arcs entering (resp., leaving) z₀; we define β'(z₀) to be the integer in Z₃ with β'(z₀) ≡ q(z₀) − p(z₀) (mod 3). Let F₁ be the set of all negative edges of G with both ends in A. Recall (8). We orient all edges in F₁ as sink edges if k(A, σ) ≡ 2 (mod 3), and orient all edges in F₁ as source edges otherwise. Let F₂ be the set of all negative edges between A and Ā in G; for each edge f ∈ F₂, we orient it as in D'. Set F = F₁ ∪ F₂. For each v ∈ A, let p(v) (resp., q(v)) be the number of half-arcs entering (resp., leaving) v in F; we define β'(v) to be the integer in Z₃ with β'(v) ≡ p(v) − q(v) (mod 3). We propose to show that

(9) β' is a Z₃-boundary of G'.

To justify this, let p₁ (resp., q₁) be the number of positive edges directed from A to Ā (resp., from Ā to A) in D', and let p₂ (resp., q₂) be the number of source (resp., sink) edges between A and Ā in D'. Note that

(10) p₁ = p(z₀) and q₁ = q(z₀).

Since d⁺_{D'}(v_A) ≡ d⁻_{D'}(v_A) (mod 3), the following equality holds.
Orientations: Modulo and beyond.Let (G, σ) be a signed graph.For each A ⊆ V (G), the degree of A, denoted by d(A), is the number of edges between A and Ā; we write d(A) = d(a) if A = {a}.(Notice that the contribution to d(a) made by any loop incident with a, if any, is zero.)For each orientation D of (G, σ), let d + D (v) (resp., d − D (v)) denote the number of half-arcs leaving (resp., entering) a vertex v; we may drop the subscript D if there is no danger of confusion.Note that, by definition, each loop incident with v (if any) contributes two to d + D (v) + d − D (v), so d(v) < d + D (v) + d − D (v) if such a loop exists.Downloaded 03/17/15 to 147.8.204.164.Redistribution subject to SIAM license or copyright; see http://www.siam.org/journals/ojsa.php For each A ⊆ V (G ), we use d (A) and τ (A) to denote the degree of A in G and the value of the set function at A, respectively.If m = 2, then d (z 0 ) = 4 ≤ 4 + |τ (z 0 )|.If m = 3, then d (z 0 ) = 7.So |τ (z 0 )| = 3 by definition and thus d (z 0 ) = 4 + |τ (z 0 )|.Hence the inequality d (z 0 ) ≤ 4 + |τ (z 0 )| holds in either case.By Lemma 3.3, we have d then D(z 0 ) can be extended to a β-orientation D of the entire graph G. Proof.By definition, d(z 0 ) and τ (z 0 ) have the same parity, so |τ (z 0 )| ≥ 1 if d(z 0 ) = 5.Hence, d(z 0 ) ≤ 4 + |τ (z 0 )|.If S = ∅, then the statement follows instantly from Theorem 3.4.Thus we may assume S = ∅. and τ (A) denote the degree of A in G and the value of the set function at A, respectively.Since d (z 0 ) = 7 and β (z 0 ) = 0, we have |τ(z 0 )| = 3.So d (z 0 ) = 4 + |τ (z 0 )|.Since d (v) ≥ 6 for each v ∈ S, from Lemma 3.3 it follows that d (v) ≥ 4 + |τ (v)|.Therefore, by Theorem 3.4, the preorientation of the arcs incident with z 0 can be extended to a β -orientation of the entire graph G , which clearly yields a β-orientation of (G, σ).
The Physiological Role of Irisin in the Regulation of Muscle Glucose Homeostasis

Irisin is a myokine that primarily targets adipose tissue, where it increases energy expenditure and contributes to the beneficial effects of exercise through the browning of white adipose tissue. As our knowledge has deepened in recent years, muscle has been found to be a major target organ for irisin as well. Several studies have attempted to characterize the role of irisin in muscle in improving glucose metabolism through mechanisms such as reducing insulin resistance. Although these reports are intriguing, some contradictory results make it difficult to grasp the whole picture of the action of irisin on muscle. In this review, we attempt to organize the current knowledge of the role of irisin in muscle glucose metabolism. We discuss the direct effects of irisin on glucose metabolism in three types of muscle, that is, skeletal muscle, smooth muscle, and the myocardium. We also describe irisin's effects on mitochondria and its interactions with other hormones. Furthermore, to consider the relationship between the irisin-induced improvement of glucose metabolism in muscle and systemic disorders of glucose metabolism, we review the results of animal interventional studies and human clinical studies.

Introduction

Insulin resistance and abnormal insulin secretion are thought to be the major mechanisms of type 2 diabetes (T2DM) onset. Although there is debate about the fundamental cause of T2DM, in general, insulin resistance is thought to precede insulin deficiency in the early stages of onset, and hyperglycemia develops when the relative lack of insulin exceeds a threshold. T2DM can be said to be a disease that includes various pathological conditions caused by hyperglycemia. It has long been believed that sedentary behavior, commonly seen in subjects with T2DM, is associated with many deleterious health outcomes. Obesity, because of the associated sedentary behavior, is one of the most important modifiable risk factors for the prevention of T2DM. Accordingly, efforts to prevent T2DM development and treat its associated consequences should focus on lifestyle modifications that eliminate a lack of exercise [1]. It is widely known that regular exercise has benefits for the treatment of patients with T2DM, such as improved body-weight control, better blood glucose levels, better-regulated blood pressure, and fewer complications [2,3]. Various prescriptions for exercise therapy are being tested, and some particular types of exercise, such as aerobic and resistance training, have been shown to be effective for the treatment of T2DM [4][5][6].

Muscle falls into three distinctly different types: myocardium, skeletal muscle, and smooth muscle. More than half of the body's weight is made up of muscle; that is, muscle is the largest organ of the body. Muscle is also the largest site of insulin-stimulated glycogen synthesis for glucose storage. In addition, it has recently come to be recognized as a secretory organ capable of releasing various myokines [7]. Myokines regulate multi-organ metabolism, angiogenesis and growth through autocrine, paracrine and endocrine signaling [8]. Some myokines are induced by exercise, and exercise-induced myokines can have beneficial biological effects, for example, anti-inflammatory effects in both acute inflammation and chronic low-grade inflammation [9].
Gene expression in muscle and serum levels of myokines show unique patterns of change immediately after the start of exercise, suggesting that the exercise-induced release of myokines may play an important role in coordinating metabolism, leading to a beneficial effect on T2DM treatment [10]. The impact of exercise on myokine function is not yet fully understood. However, it has been reported that exercise induces crosstalk between muscle and adipose tissue via myokines [8], induces interactions between myokines and other cytokines [11], and controls the systemic inflammatory response [11]. The myokine secretome contains many cytokines that act on various tissues, such as adipose tissue, liver, pancreas, and brain [12][13][14][15]. Among them, irisin is a novel myokine produced by the release of the proteolytically cleaved extracellular portion of the fibronectin type III domain-containing protein 5 (FNDC5) [16]. Irisin is secreted in response to exercise and increases energy expenditure by promoting the browning of white adipose tissue (WAT) [17][18][19]. In mice fed a high-fat diet (HFD), the overexpression of FNDC5 increased the serum levels of irisin, slightly reduced body weight, and, most prominently, improved hyperglycemia and hyperinsulinemia, suggesting an improvement in the insulin resistance of the mice [16]. Skeletal muscle also communicates with the pancreatic islet through irisin, regulating insulin secretion [20]. Thus, irisin has attracted a great deal of attention as a therapeutic target for metabolic diseases, including obesity, dyslipidemia, T2DM, and arterial hypertension. Based on these findings on FNDC5 in metabolic regulation, together with the exercise-induced nature of irisin and the possibility that muscle itself can be irisin's target organ, researchers have started to examine the role of irisin in the exercise-induced effects on muscle glucose metabolism [16,21,22]. The aim of this review is to highlight the emerging knowledge about irisin in glucose homeostasis in the three types of muscle, in vitro and in vivo, under metabolic stresses such as high lipid levels/hyperlipidemia and high glucose levels/hyperglycemia.

Synthesis and Secretion of Irisin

Irisin was first described in 2012 as a myokine of transgenic mice overexpressing Ppargc1a (peroxisome proliferator-activated receptor gamma coactivator 1α; PGC1α), a transcription cofactor that plays a pivotal role in the regulation of energy metabolism [16]. PGC1α stimulates the expression of FNDC5 to increase the synthesis of membrane-bound FNDC5. FNDC5 is a 209-residue protein with an N-terminal 29-residue signal sequence, followed by a putative fibronectin III (FNIII) domain, a linking peptide, a transmembrane domain, and a 39-residue cytoplasmic segment (Figure 1). The proteolytically cleaved protein is modified by glycosylation and dimerization, and the segment is then released into circulation as irisin, which consists of 112 amino acids [23][24][25]. The 112-amino acid sequence is identical in humans and mice [16,26]. Previous research has provided preliminary evidence that irisin is expressed not only in mammalian muscular tissues, such as cardiac muscle, skeletal muscle, and smooth muscle (tongue, rectum, etc.), but also in the pancreas, liver, and adipose tissue, organs with important functions in systemic glucose metabolism regulation [27][28][29][30]. Therefore, it can be said that irisin belongs to the group of regulatory molecules such as adipocytokines/adipokines [30][31][32].
Irisin induces the expression of uncoupling protein 1 (UCP1) and thereby increases energy expenditure in WAT through adipocyte browning [33]. Furthermore, irisin is expected to show protective effects against the pathogenesis of harmful complications of obesity, such as dyslipidemia, T2DM, and arterial hypertension [34][35][36]. With these findings, as mentioned above, irisin has attracted substantial interest as a novel remedy for these metabolic disorders. Of note, palmitate (PA), or high ambient glucose, inhibited the expression of FNDC5 by human primary muscle cells in vitro [37]. However, FNDC5 expression is generally higher in the muscle cells of individuals with T2DM than in those who are non-diabetic [37]. On the other hand, short-term (4 h) exposure of myotubes to PA could induce irisin secretion without affecting FNDC5 [20]. Accordingly, HFD is able to acutely increase irisin serum concentration [20]. These findings suggest that additional unknown factors are engaged in the lipid/glucose-mediated regulation of FNDC5 expression. Future research is expected to disclose the factors involved in this mechanism.

The major factors in the pathogenesis of T2DM are insulin resistance, a deteriorated insulin secretory capacity, and a genetic background associated with excess energy intake and physical inactivity. Physical exercise, which directly protects muscle glucose metabolism and attenuates insulin resistance [38][39][40], may restore the impaired insulin secretory capacity [41] and rebuild glycemic control [42]. As mentioned above, irisin is induced by physical exercise, and given the biological activities of irisin, it is reasonable to presume that irisin is involved in the protective effects of physical training on muscular glucose metabolism. Skeletal muscle plays a well-studied role in regulating glucose homeostasis, and skeletal muscle insulin resistance plays a pivotal role in the pathogenesis of T2DM [43]. By accounting for approximately 50% of the mass of the whole body, muscle makes up a large part of the body's capacity for glycogen storage. Under resting conditions, about 80% of blood glucose is metabolized by the brain and peripheral tissues in an insulin-independent manner. However, after insulin stimulation, skeletal muscle accounts for almost 80% of glucose utilization [44]. Glycogen is the storage form of carbohydrates in mammals. In humans, the majority of glycogen is stored in skeletal muscle and, to a lesser extent, the liver. Glycogen storage in skeletal muscle is limited by feedback-mediated inhibition of glycogen synthase (GS), which prevents excess accumulation of glycogen. De novo lipid synthesis can take the place of glycogenesis when glycogen stores are filled [45], and this accelerated lipid synthesis will lead to ectopic fat accumulation and eventual insulin resistance [46]. Irisin improves glucose homeostasis by increasing glycogenesis via phosphatidylinositol 3-kinase (PI3K)/Akt/glycogen synthase kinase-3 (GSK3)-mediated glycogen synthase (GS) activation, while reducing gluconeogenesis via the downregulation of PI3K/Akt/forkhead box transcription factor O1 (FOXO1)-mediated phosphoenolpyruvate carboxykinase (PEPCK) and glucose-6-phosphatase (G6Pase) (Figure 2) [47]. The portion of the other types of muscle, that is, smooth muscle and myocardium, is far smaller than that of skeletal muscle. However, the glucose metabolism of these smaller muscles markedly synergizes with local changes in metabolic syndrome [48,49]. Therefore, it is also meaningful to consider the action of irisin on these smaller muscles.
Muscle dysfunction as a factor in metabolic disorders is far more diverse than previously thought. Recently, the interaction between muscle and pancreas has been attracting attention as a predisposing factor for the regulation of insulin secretion. In the context of this muscle-pancreas interaction, irisin is considered to restore impaired glucose-induced insulin secretion by pancreatic β-cells [20,50]. Considering the importance of muscle in glucose metabolism, developing a blueprint for the regulation of muscle metabolism by myokines will enable the acquisition of further knowledge about the role of this novel myokine in the development and prevention of metabolic disorders [51].

Skeletal Muscle

Muscle tissue, along with adipose tissue, is considered to be the main target organ for irisin in regulating glucose homeostasis [52][53][54][55]. In this context, several studies have described that irisin mimicked or reinforced insulin actions in skeletal muscle in vitro and in vivo. That is, the treatment of primary human skeletal muscle cells and the C2C12 myoblast cell line with recombinant irisin for 1 h or longer significantly increased the uptake of glucose [56,57]. Similarly, the overexpression of irisin in C2C12 cells showed a promoting effect on glucose uptake and glycogen accumulation in the cell [57]. In vivo, soleus muscle isolated from irisin-treated (0.1 mg/kg, 4 i.p. injections/week, for 5 weeks) HFD mice contained higher glycogen levels than that of control mice, an effect mediated by the stimulation of glucose transporter type 4 (GLUT4) translocation to the skeletal muscle cell membranes; conversely, decreased irisin secretion contributes to muscle insulin resistance [54,57,58]. Furthermore, the irisin-overexpressing C2C12 cells had a significantly higher basal insulin receptor (IR) phosphorylation level than the empty vector-transfected control cells [57]. It has been found that irisin also influences glucose metabolism in skeletal muscle at the level of gene expression. After 6 h of irisin treatment of primary human skeletal muscle cells, the expression of genes that participate in glucose transport and lipid metabolism, such as GLUT4, hexokinase 2 (HK2), and peroxisome proliferator-activated receptor alpha (PPARA), was upregulated, whereas the expression of genes related to glycogenolysis (glycogen phosphorylase; PYGM) or gluconeogenesis (phosphoenolpyruvate carboxykinase 1; PCK1) was suppressed [59]. These changes in skeletal muscle glucose metabolism at various levels were triggered by declines in intracellular and intra-mitochondrial ATP, which led to increased phosphorylation of AMP-activated protein kinase (AMPK) and the activation of its downstream kinases, such as mitogen-activated protein kinase (MAPK), Erk1/2, and p38 [57,60]. A number of papers have confirmed the importance of the AMPK signaling pathway for the effects of irisin on skeletal muscle glucose metabolism [56,57]. Recombinant irisin augmented glucose uptake via AMPK activation in differentiated L6 muscle cells [58]. The activation of AMPK was preceded by the induction of reactive oxygen species (ROS) and the activation of p38 MAPK, which led to the translocation of GLUT4 to the outer membranes of these cells [58,61,62]. The treatment of irisin-overexpressing C2C12 cells with compound C, a reversible AMPK inhibitor, suppressed the activity of the IR signaling pathway [57].
Similarly, the irisin-enhanced glucose uptake in C2C12 cells cultured in medium containing high ambient glucose and PA was abolished after the inhibition of AMPK signaling with AMPKα2 siRNA [62]. The treatment or overexpression of irisin in the C2C12 cell line can attenuate PA-induced insulin resistance by stimulating the phosphorylation of Akt and Erk [53,57].

Metformin (Met) is a biguanide antihyperglycemic drug that is traditionally used for the management of T2DM [63]. The therapeutic effects of Met are based on a combination of improved peripheral uptake and utilization of glucose, a decreased hepatic glucose output, a decreased rate of intestinal carbohydrate absorption, and enhanced insulin sensitivity [64,65]. In skeletal muscle, Met increases glucose uptake through its activation of AMPK [66,67]. Met is also known to promote irisin release from murine skeletal muscle independently of AMPK activation [68], and plasma irisin levels provide clinically relevant information about the effectiveness of Met treatment in T2DM patients [49]. Interaction with irisin in skeletal muscle via AMPK signaling may be one of the mechanisms of action of Met as a therapeutic drug for T2DM.

As mentioned above, it seems plausible to consider irisin a regulator of glucose metabolism in skeletal muscle. To put it another way, glucose seems to be a critical factor in regulating irisin synthesis in skeletal muscle. For example, in human studies, myotubes isolated from patients with T2DM expressed higher FNDC5 levels than those from healthy controls [69]. In these patients, a euglycemic-hyperinsulinemic clamp showed unchanged irisin levels in circulation [70]. Furthermore, the treatment of cultured muscle cells with glucose can significantly reduce FNDC5 expression [71]. This negative effect of glucose on FNDC5 expression is more prominent in myotubes isolated from patients with T2DM than in those from healthy controls [72]. These findings suggest that glucose is a critical suppressor of irisin synthesis in skeletal muscle, especially in patients with T2DM [70,71]. It is expected that the details of the involvement of irisin in glucose metabolism in skeletal muscle will be clarified by further research.

Smooth Muscle

There is limited information on the action of irisin on smooth muscle compared to skeletal muscle, and no report regarding the involvement of irisin in smooth muscle glucose metabolism has been published so far. Although not directly related to glucose metabolism, a report demonstrates that platelet-derived growth factor (PDGF)-induced fibrotic phenotype modulation of rat vascular smooth muscle is prevented by irisin through the suppression of the signal transducer and activator of transcription 3 (STAT3) signaling pathway, suggesting that irisin has a function in maintaining a healthy phenotype of smooth muscle cells [72]. It has been reported that the STAT3 pathway induces insulin resistance and the disruption of glucose metabolism in some cells and tissues, such as lung, kidney, and muscle [73][74][75][76]. There is also a report showing that intimal hyperplasia can be attenuated by inhibiting the activity of the BB isoform of PDGF (PDGF-BB)-induced Janus kinase 2 (JAK2)/STAT3 signaling in vascular smooth muscle cells [77]. Taken together, PDGF-STAT3 signaling may contribute to glucose metabolism in smooth muscle cells as well.
However, there are reports that the conditional knockout of STAT3 in muscle does not prevent HFD-induced insulin resistance, and STAT3 variants are not associated with obesity or insulin resistance in female twins [78][79][80]. Further research is needed on the details of the relationships among smooth muscle health, the PDGF/STAT3 pathway, and glucose homeostasis. Pioglitazone (PIO), a PPARγ agonist that improves glycemic control in T2DM through its insulin-sensitizing action, was shown to inhibit vascular smooth muscle cell proliferation, and the inhibitory effect was mediated by AMPK activation and/or diminished PDGF-induced mechanistic target of rapamycin (mTOR) activity [81]. Membrane-bound PDGF-BB transfer by endothelial cell-derived extracellular vesicles could account for vascular smooth muscle cell resistance to apoptosis in the hyperglycemic environment of patients with T2DM [82]. PDGF-BB specifically induced smooth muscle cell migration and proliferation through PI3K-dependent Akt activation, Erk activation, ROS generation, nuclear factor-κB (NF-κB) and activator protein-1 (AP-1) activation, microRNA (miR)-221 and miR-222 induction, reversion-inducing cysteine-rich protein with kazal motifs (RECK) suppression, and matrix metalloproteinase (MMP2 and 9) activation [83]. According to these studies, it is evident that various unidentified factors are involved in the action of PDGF. As previously mentioned, information on irisin, smooth muscle, and its glucose metabolism is currently very limited, and this would be an interesting topic for future research.

Myocardium

It has been reported that, depending on various conditions, rat cardiac muscle may produce more irisin than skeletal muscle in response to an exercise load [84]. This finding raises the possibility that cardiac muscle may be another main source of irisin besides skeletal muscle. It also suggests that myocardium-produced irisin can exert endocrine, paracrine, and autocrine functions in cardiac muscle as well as in skeletal muscle. Among the various myocardial substrates, glucose accounts for less than 25% of energy generation under ordinary conditions, while fatty acid oxidation generates the majority of energy [85]. However, glucose is unique among myocardial substrates because a small amount of ATP is obtained by substrate-level phosphorylation during glycolysis even in stressful environments, such as hypoxia or ischemia. ATP obtained from glycolysis in the extramitochondrial compartment may be especially critical for the maintenance or restoration of ionic homeostasis. The requirement for glucose to maintain cardiac function becomes more pronounced in the presence of metabolic stress [86]. Therefore, it is important to maintain normal glucose metabolism to sustain the health of the myocardium under stress. Considering the action of irisin on skeletal muscle, it is expected to have a similar effect on glucose metabolism in the myocardium, but so far, no reports have been published on the direct action of irisin on myocardial glucose metabolism. Nevertheless, a few reports show that irisin has a protective effect on the myocardium in a hyperglycemic environment, with observations that may be highly relevant [87,88].
As another example of indirect evidence for effects of irisin on cardiac glucose metabolism, in an in vitro study, 500 μM of PA induced insulin resistance in the H9c2 cardiomyoblast cell line, while co-treatment with 200 ng/mL of irisin reversed it and significantly increased cellular insulin-stimulated glucose consumption by inhibiting autophagy through the PI3K/Akt signaling pathway [89]. Recently, it has been revealed that autophagy plays a pivotal role in diabetes and its cardiac complications [90][91][92]. Autophagy is a cellular catabolic process that facilitates the lysosomal degradation and recycling of intracellular misfolded proteins and injured organelles [93]. It is involved in the maintenance of various physiological responses and plays a dual role, inducing both cytoprotection and cell death [94,95]. In the last few years, irisin's autophagy-regulating function has been attracting attention as one of its most pleiotropic and favorable properties [96,97]. During the last decade, several studies have described the relationship between autophagy and insulin resistance in cardiac tissue and other organs. However, results and conclusions from these studies have been inconsistent [89,[98][99][100]. The downregulation of autophagy, particularly of autophagy-related 7 (Atg7) expression levels, was observed in both genetic and dietary models of obesity [101], and in vivo and in vitro suppression of Atg7 led to impaired insulin signaling. In contrast, suppressed mTOR signaling and augmented autophagy in adipocytes from obese patients with T2DM have been described [102]. Conversely, there is a report showing that autophagy is not involved in the development of insulin resistance in skeletal muscle [103]. In addition, excessive autophagy activation is associated with PA-induced cardiomyocyte insulin resistance [104]. Taken together, these findings may indicate that maintaining normal cellular insulin signaling requires keeping autophagy levels stable. The relationship between autophagy and glucose metabolism is an interesting issue, but there is still room for further investigation. As mentioned above, irisin is generally regarded as a regulator of autophagy, and this function of irisin is thought to improve the integrity of cells and tissues [105]. However, currently, there is no clear answer as to how irisin regulates autophagy in the heart or how it attenuates insulin resistance in cardiac muscle. Further innovative reports are needed regarding the relationship between irisin and autophagy.
Effects of Irisin on Mitochondria to Preserve Muscle Glucose Homeostasis
As described briefly above, irisin preserves the mitochondrial transmembrane potential in an AMPK signaling-dependent manner and stimulates mitochondrial biogenesis by upregulating the genetic expression of Tfam (mitochondrial transcription factor A), Ppargc1a, and Nrf1 (nuclear respiratory factor 1), as well as the gene and protein levels of UCP3 and GLUT4 in C2C12 cells [53]. This maintenance of mitochondrial health is associated with the increased resistance of cells to hyperglycemic stress environments [53,58,61]. Mitochondria play a major role in supporting skeletal muscle function not only by producing ATP to meet energy demands but also by regulating cellular apoptosis and calcium retention [106,107]. Drastic changes in the mitochondrial proteome that downregulate mitochondrial metabolic processes have been observed in the skeletal muscle of diabetic patients [108,109].
HFD-induced diabetic mice showed mitochondrial dysfunction that inhibited myoblast differentiation [110]. C2C12 myoblasts exposed to high ambient glucose (15 mM) and/or hyperlipidemic (0.25 mM PA) conditions for 2 h showed increased mitochondrial fragmentation and membrane potential as well as elevated ROS production compared to control cells in normoglycemic (5.6 mM glucose) conditions [111]. Autophagy then removes damaged, metabolically overloaded mitochondria to protect the skeletal muscle from insulin resistance in obesity and T2DM [112]. Given these findings, mitochondrial maladaptation to metabolic stress, such as hyperglycemia, can be a critical factor in disturbances of glucose metabolism in skeletal muscle. However, there is also a report showing that mitochondria are functionally intact in insulin-resistant skeletal muscle from a T2DM non-obese rat model [113], so further verification is necessary on this matter. Exercise is an effective nonpharmacological remedy that induces beneficial mitochondrial adaptations, increasing mitochondrial quality and content [114]. The exercise-induced mitochondrial adaptations in skeletal muscle act on PGC1α, which activates the downstream factor FNDC5 in skeletal muscle cells [115]. This intriguing relationship between FNDC5/irisin and the mitochondrial genes and proteins that regulate mitochondrial function has recently been reported [116,117].
Interactions of Irisin and Other Hormones
The effects of irisin on skeletal muscle and the interaction of irisin with other hormones were well described in a previously published review [60]. Notably, irisin induced a significant increase in levels of betatrophin (also known as angiopoietin-like protein 8) in obese mice [118]. In mice, betatrophin is produced by the liver, WAT, and brown adipose tissue (BAT), while in humans, the liver is the major producing organ [119]. Betatrophin affects glucose homeostasis and lipid metabolism [120]. Accordingly, a PGC1α-irisin-betatrophin pathway has been proposed to regulate glucose homeostasis. According to this theory, exercise-induced PGC1α stimulates FNDC5 expression and consequently increases irisin release from muscle cells, and irisin then acts on muscle in a paracrine or autocrine manner to reduce insulin resistance directly and/or indirectly through betatrophin. However, some studies could not reproduce these previous results, and the role of betatrophin in glucose homeostasis, and even the existence of such an axis, remains controversial [121]. Leptin participates in glucose homeostasis together with irisin. Leptin stimulates FNDC5 expression in myotubes while downregulating irisin secretion and FNDC5 expression in subcutaneous adipose tissue (SAT) [121]. Leptin can also induce irisin-dependent myogenesis and inhibit the browning of adipocytes by downregulating UCP1 [122]. Interactions between other adipokines, such as adiponectin or resistin, and irisin have also been described. For example, a positive association between serum levels of irisin and adiponectin has been described in obese patients [123], while a negative relationship of irisin with resistin has been found in exercise training [124]. Of note, studies associating irisin concentrations with adipokines are still scarce and contradictory. Some reports describe a correlation between the expression levels of irisin and leptin, whereas others find none [125,126]. A cohort study on children has reported no correlation between the levels of irisin and resistin [127].
Several studies have described the interaction between leptin and irisin. Leptin increased the expression of FNDC5 in the skeletal muscle of mice while decreasing FNDC5 expression in SAT via the downregulation of PGC1α. Co-treatment with leptin and irisin downregulated irisin-induced fat browning of subcutaneous adipocytes [128]. Thus, further characterization of the relation between irisin and adipokines, a potential factor involved in cardiometabolic risk, is needed in the future. Finally, so far, there is not much available information on the relationship between irisin and other hormones involved in glucose metabolism, such as adrenaline, cortisol, growth hormone (GH), and incretins. Diurnal fluctuations are observed in the blood level of irisin, so the possibility that irisin and cortisol/growth hormone are mutually regulated cannot be ruled out, as these hormones follow a specific circadian circulating pattern [129]. However, serum levels of irisin in individuals across a wide range of body mass index (BMI), including patients with anorexia nervosa or those with obesity, show no relation to levels of cortisol, TSH, C-reactive protein, or ghrelin [130]. As mentioned above, a possible relation of irisin with these hormones has not been described in detail yet, and it is premature to discuss it in depth. Only a few studies have reported the role of irisin in insulin signaling. In these reports, in vitro C2C12 myotubes treated with PA showed increased insulin resistance via the suppression of Akt and/or MAPK (Erk1/2 and p38) phosphorylation, and this suppression was partially reversed by irisin, indicating a protective effect of irisin on insulin signaling in muscle [57,58]. Moreover, several studies described a direct correlation between fasting levels of irisin and insulin but not between their postprandial levels [130][131][132]. Conversely, insulin did not alter irisin levels in patients with T2DM and obesity in a euglycemic-hyperinsulinemic clamp [107]. Due to its modalities of secretion and its pancreatic and extra-pancreatic effects, irisin could be considered an incretin-like hormone, with an action similar to that of glucagon-like peptide-1 (GLP-1), which retains substantial insulinotropic activity in diabetic patients [133]. This similarity between irisin and incretin has been discussed but not yet established. Future studies should focus on irisin's insulinotropic effect and on any possible interactions between irisin and insulin that might affect glucose metabolism.
Interventional Animal Studies
In the very first report introducing irisin, BALB/c mice fed an HFD for 20 weeks were injected intravenously with FNDC5-expressing adenoviral particles [17]. After 10 days, these mice had body weights similar to the control mice; however, glucose levels and fasting levels of insulin after intraperitoneal glucose infusion were significantly reduced (~50%), suggesting that irisin can attenuate systemic insulin resistance. Regarding the autocrine physiological effects of irisin on muscles, the in vivo treatment of mice with irisin resulted in an increase in muscle mass and strength [134]. In the study, 5-week-old C57BL/6J mice were injected twice weekly with 2.5 μg/g body weight of irisin intraperitoneally (IP) for 4 weeks, and changes in the weight and cross-sectional area (CSA) of muscles were evaluated (quadriceps, M. biceps femoris, M. tibialis anterior, and M.
extensor digitorum longus), along with some biochemical/histochemical markers. With these data, the authors of the paper proposed that irisin injection leads to an increase in the activation of satellite cells and reduces protein degradation through the downregulation of atrogin-1 and muscle ring-finger protein-1 (MuRF-1), resulting in a partial rescue of muscular atrophy. As an investigation of the potential autocrine role of irisin in skeletal muscle glucose metabolism, Yang et al. showed that HFD-induced diabetic C57BL/6 mice developed muscular impairment of insulin signaling and, in combination with their in vitro data, proposed that extrinsic irisin reverses the insulin resistance of the myocytes [55]. Moreover, Farrash et al. reported that the electrotransfer of FNDC5-harboring vectors into rat hindlimb muscle (M. tibialis cranialis) resulted in increased muscle glycogen, along with enhanced glycogen synthase 1 (GS1) gene expression [135]. In addition, GLUT4 protein tended to increase in the muscle [135]. However, glucose uptake by the muscle was unchanged, so the short-term in vivo effects of irisin on muscle glucose uptake remained undefined in that study.
Human Studies
A number of clinical studies regarding the relation between irisin and systemic glucose metabolism have been published. For example, Park et al. reported that serum irisin levels are associated with an increased risk of metabolic syndrome in humans, indicating either increased irisin secretion by adipose/muscle tissue or a compensatory increase of irisin to overcome an underlying irisin resistance [136], similar to the well-documented leptin resistance [137]. Irisin resistance is generally defined as the inability of endogenous or exogenous irisin to promote the expected beneficial metabolic outcomes, such as stimulation of energy expenditure, due to multiple molecular, neural, environmental, and behavioral mechanisms. María et al. proposed that in individuals with obesity, FNDC5 expression in muscle was significantly decreased in association with T2DM, and FNDC5 expression in muscle was significantly associated with FNDC5 and UCP1 expression in visceral adipose tissue [133]. In most clinical studies, irisin levels of patients with pre-diabetes or T2DM have been reported to be lower than those of controls [134,138,139]. The factor responsible for the low secretion of irisin in T2DM has not yet been identified, although some studies have suggested that chronic hyperglycemia and hyperlipidemia are possible causes [37,70]. Accordingly, the levels of irisin in the blood could be an important factor in the changes observed in metabolic health and disease [140]. Taken together, although there seems to be no doubt that irisin is associated with insulin resistance, there is no consensus on the link between irisin and metabolic syndrome. Furthermore, no human studies have been published on the mechanism by which the effect of irisin on muscle glucose metabolism leads to systemic obesity and insulin resistance. This lack of literature is probably due to the difficulty of evaluating glucose metabolism in the living body. Larger prospective studies with innovations in research technology are therefore needed to clarify these issues.
Applicability of Irisin in the Treatment of Diabetic Complications
T2DM, especially with its major complications (neuropathy, retinopathy, and nephropathy), is known to be associated with an increased risk of loss of mobility and strength, which in turn complicates disease control. Sarcopenia, a comorbid condition of T2DM, is a loss of muscle mass associated with a loss of strength and/or performance, resulting in worse morbidity and quality of life in patients [146]. Currently, practical treatments are limited to indirect means, such as dietary prophylaxis and exercise therapy. With the increasing prevalence of sarcopenia in T2DM, there is a need for new interventions that effectively counter the loss of skeletal muscle mass. Considering the direct effects of irisin in preserving the health of skeletal muscle, irisin may also have potential as a treatment for sarcopenia. Furthermore, diabetic foot ulceration (DFU) occurs in up to one-quarter of people with T2DM and is one of the most common causes of lower limb amputation [147]. Wounds of diabetic patients usually show abnormally slow healing, and this delayed healing is thought to be due to a combination of factors, including macrovascular and microvascular disease [148]. Angiogenesis, the formation of new blood vessels from pre-existing vessels, is a crucial process for wound healing and is seriously impaired in diabetic wounds [149]. Irisin improved cardiac function and reduced the infarct area in post-myocardial infarct mouse hearts, and this therapeutic effect was associated with its pro-angiogenic effects [150]. Based on these findings, it is possible that irisin may also have a therapeutic effect on DFU through a mechanism other than the normalization of muscle glucose metabolism. The protective effect of irisin was partly due to the reduction of oxidative stress (through a decrease in intracellular ROS levels and an increase in total antioxidant capacity) and the suppression of inflammatory markers such as NF-κB, cyclooxygenase 2, p38 MAPK, tumor necrosis factor (TNF), and IL-6 [151,152]. Taken together, irisin not only keeps muscle glucose metabolism healthy in a hyperglycemic and high-lipid environment but also helps maintain a healthy tissue oxidant/antioxidant balance and suppresses inflammation, so it could be a potential therapy not only for T2DM but also for many of its complications.
Conclusions
Muscle, one of the major targets of insulin, is one of the first tissues to develop insulin resistance in obesity, diabetes, and other disorders of glucose metabolism. Considering the function of muscle as an endocrine organ that secretes a variety of myokines involved in maintaining glucose homeostasis in response to nutritional status and exercise, it is reasonable to expect that the development of insulin resistance in muscle has a great effect on its secretory function, or vice versa. Since muscle is also the major tissue in which insulin stimulates glucose uptake and removes excess glucose from the blood, it plays a central role in glucose metabolism throughout the body, so changes in the muscular secretome may have an impact not only on the local muscle but also on systemic glucose homeostasis. Irisin, which has been known to be involved in the regulation of energy expenditure, seems to be a strong candidate for the treatment of metabolic disorders. In fact, its potential as a therapy has been suggested by numerous in vivo and in vitro experiments.
Through its functions in muscle, irisin contributes to normoglycemia (Figure 3). Elucidating the physiology of irisin in the maintenance of muscle and systemic glucose homeostasis and understanding its mechanisms of action are critical for developing treatments for metabolic diseases, such as obesity and T2DM, by pharmacologically mimicking the effects of exercise. Based on current knowledge, trials to evaluate the usefulness of irisin as a therapeutic agent in humans appear to be premature. Many reports have not reproduced previous findings, partly because non-physiological levels of irisin were used in these studies, as many of them were done before it became possible to accurately measure blood levels of irisin. Furthermore, inconsistencies in the data highlight the necessity of better design for both basic and clinical studies. In recent years, the accuracy of the irisin assay has improved, and the accumulation of physiological information on irisin, such as its concentration in circulation, has also progressed. Thus, it is expected that the accuracy and consistency of irisin research will improve in the future.
Funding: The study was supported by National Heart, Lung, and Blood Institute Grants (R01 HL089405 and R01 HL115265) and National Institute of General Medical Sciences (GM 141339).
Figure legend: Irisin augments insulin-induced phosphatidylinositol 3-kinase (PI3K)/Akt signaling activity. The activated Akt promotes glucose transporter type 4 (GLUT4) translocation to the membrane, which leads to an increase of glucose inflow into the cell. For glycogen synthesis, the activated Akt inhibits GSK3 activity and subsequently activates glycogen synthase (GS) to enhance glycogen synthesis. Conversely, activated Akt inhibits forkhead box transcription factor O1 (FOXO1) and downregulates the gene expression of phosphoenolpyruvate carboxykinase (PEPCK) and glucose-6-phosphatase (G6Pase), which leads to a decrease in gluconeogenesis. IRS: insulin receptor substrate. Thin red arrows indicate promotion; thin blue arrows indicate suppression.
Figure 3. Irisin is primarily secreted by skeletal and cardiac muscle (and possibly by smooth muscle) during exercise (blue arrows). Irisin returns to muscles via blood or in an autocrine manner (red arrows), leading to changes in their handling of glucose homeostasis. The effects of irisin on muscles favor states of normoglycemia. Black arrows pointing up indicate promotion and black arrows pointing down indicate suppression.
8,272.6
2021-08-13T00:00:00.000
[ "Medicine", "Biology" ]
Identification and validation of a ferroptosis-related genes based prognostic signature for prostate cancer
Ferroptosis, an iron-dependent form of selective cell death, is involved in the development of many cancers. However, a systematic analysis of ferroptosis-related genes (FRGs) in prostate cancer (PCa) remains to be performed. In our research, we collected the mRNA expression profiles and clinical information of PCa patients from the TCGA and MSKCC databases. Univariate, LASSO, and multivariate Cox regression methods were used to construct a prognostic signature in the TCGA cohort. Seven FRGs, AKR1C3, ALOXE3, ATP5MC3, CARS1, MT1G, PTGS2, and TFRC, were included to establish the risk model, which was validated in the MSKCC dataset. Subsequently, we found that the high-risk group was strongly correlated with copy number alteration load, tumor mutation burden, immune cell infiltration, mRNAsi, immunotherapy response, and bicalutamide response. Finally, it was identified that overexpression of TFRC could induce proliferation and invasion in PCa cell lines in vitro. These results demonstrated that this risk model based on recurrence-free survival (RFS) could accurately predict prognosis in PCa patients, suggesting that FRGs are promising prognostic biomarkers and drug target genes for PCa patients.
Introduction
Prostate cancer (PCa) is one of the most common male malignancies in the world and causes the second-highest number of cancer-related deaths in Western countries [1]. Since the 1990s, prostate-specific antigen (PSA) has been used as the standard test for PCa detection. However, there were no significant differences in mortality between PSA-screened patients and those without screening [2][3][4]. PSA is also regarded as a significant prognostic marker for PCa. Among patients who underwent radical prostatectomy and radiotherapy, 27%-53% experienced a return of PSA, which is defined as biochemical recurrence (BCR) [5]. BCR can lead to progression to the advanced castration-resistant prostate cancer (CRPC) stage and results in an increased risk of distant metastases, prostate cancer-specific mortality, and overall mortality [6,7]. Hence, it is of great significance to identify novel prognostic biomarkers for PCa. Ferroptosis, a newly identified form of regulated cell death (RCD) characterized by iron accumulation and lipid peroxidation, is distinct from other forms of RCD (necroptosis, apoptosis, or autophagic cell death) [8]. Recently, emerging evidence has suggested that ferroptosis is related to cancer initiation, progression, and drug sensitivity [9][10][11]. Application of ferroptosis inducers could help overcome drug resistance and inhibit cancer progression or metastasis. For example, Tang et al. reported that knockdown of metallothionein-1G (MT-1G) enhances the sensitivity of sorafenib in hepatocellular carcinoma through promotion of ferroptosis [10]. Shi's group indicated that cysteine dioxygenase 1 (CDO1) suppression increased cellular glutathione (GSH) levels, inhibited reactive oxygen species (ROS) generation, and decreased lipid peroxidation in erastin-treated gastric cancer cells [12]. The role of ferroptosis in PCa has drawn attention in recent years. Butler et al.'s study revealed that knockdown of DECR1 in PCa cells inhibits tumor cell proliferation and migration via accumulating cellular polyunsaturated fatty acids (PUFAs), enhancing mitochondrial oxidative stress and lipid peroxidation, and promoting ferroptosis [13].
In Pan's research, pannexin 2 (PANX2) was clarified as a new marker that regulates ferroptosis via the Nrf2 signaling pathway and accelerates cancer cell proliferation and invasion in PCa [14]. Though preliminary evidence has identified several markers correlated with ferroptosis in PCa, the association between other ferroptosis-related genes (FRGs) and PCa patient prognosis still remains largely unknown. In our research, we firstly collected the mRNA expression profiles of 40 ferroptosis-related genes and the clinical data of PCa patients from the TCGA database.
2.1. Data collection
The RNA-seq (FPKM value) data of 499 PCa and 52 normal prostate tissues with related clinical data were collected from the TCGA website (https://portal.gdc.cancer.gov/repository). The MSKCC data, which included integrated genomic profiling of 218 prostate tumors, were enrolled as the validation cohort and downloaded from the GEO dataset (https://www.ncbi.nlm.nih.gov/geo/query/). All datasets used in this study are publicly available.
2.2. Gene Signature Building
Forty genes, verified in the research of Stockwell et al. [15] and provided in Supplementary Table S1, were included in this study. Univariate Cox regression was performed to select RFS-related genes, and genes with P values less than 0.05 were retained. Then the LASSO method was applied to minimize the risk of overfitting. Finally, multiple stepwise Cox regression was utilized to establish the risk model. The risk score was calculated as (expr_gene1 × coefficient_gene1) + (expr_gene2 × coefficient_gene2) + ⋯ + (expr_gene7 × coefficient_gene7); a minimal sketch of this pipeline is given after the Methods subsections below.
2.3. Clustering, genetic alterations, functional enrichment analysis
Consensus clustering based on ferroptosis genes was performed using the "ConsensusClusterPlus" package [16]. The "clusterProfiler" R package [17] was applied to perform GO and KEGG analyses based on the differentially expressed genes (DEGs) (|log2FC| ≥ 1, FDR < 0.05) between different risk groups. The infiltration scores of 28 immune cell types were calculated with ssGSEA in the "GSVA" R package [18]. The annotated gene set file is provided in Supplementary Table S2. The genetic alterations of the selected genes in TCGA patients were acquired from cBioPortal [19,20]. The relationships between mRNA expression level and Gleason score, lymph node metastasis, and methylation levels were acquired from UALCAN [21].
2.4. Copy number variation (CNV) load, tumor mutation burden (TMB), neoantigens, tumor stemness and clonal score
Based on the median value of the risk score, the TCGA patients were divided into two risk groups. The CNV load at the focal and arm levels was calculated based on the GISTIC 2.0 results, which are freely available from Broad Firehose (https://gdac.broadinstitute.org/). The tumor mutation burden (TMB) of each patient was calculated as the total number of non-synonymous mutations per megabase. Tumor neoantigens, which can be recognized by neoantigen-specific T cell receptors (TCRs) and may play critical roles in the T-cell-mediated antitumor immune response, were analyzed based on the results from TCIA (https://tcia.at/). Clonality is a critical character of tumors, so we also analyzed the clonality of PCa patients from the TCIA dataset. Moreover, considering that tumor stemness is also one of the basic traits of tumors, mRNAsi data were analyzed based on the report of Robertson [22].
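To make the three-step signature construction of Section 2.2 concrete, the following is a minimal R sketch, not the authors' code: it assumes a samples × genes matrix expr holding the 40 FRG expression values and vectors rfs_time/rfs_status holding the recurrence-free survival data; all object names are hypothetical.

library(survival)
library(glmnet)

# Step 1: univariate Cox regression, keeping genes with P < 0.05
keep <- sapply(colnames(expr), function(g) {
  fit <- coxph(Surv(rfs_time, rfs_status) ~ expr[, g])
  summary(fit)$coefficients[, "Pr(>|z|)"] < 0.05
})
cand <- colnames(expr)[keep]

# Step 2: LASSO-penalized Cox regression to limit overfitting
cvfit <- cv.glmnet(expr[, cand], Surv(rfs_time, rfs_status), family = "cox")
cf  <- coef(cvfit, s = "lambda.min")
sel <- rownames(cf)[as.numeric(cf) != 0]

# Step 3: stepwise multivariate Cox on the LASSO-selected genes
df    <- data.frame(expr[, sel])
final <- step(coxph(Surv(rfs_time, rfs_status) ~ ., data = df), direction = "both")

# Risk score = sum over hub genes of (expression x Cox coefficient)
risk_score <- as.matrix(expr[, names(coef(final))]) %*% coef(final)

Patients can then be split at the median risk score into the high- and low-risk groups used throughout the paper.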
2.5. Prediction of immunotherapy and chemotherapy responses
In order to assess the clinical response to immune therapy in PCa patients, the Tumor Immune Dysfunction and Exclusion (TIDE) algorithm [23] was applied. The chemotherapy response to three common drugs for each PCa patient in the TCGA and MSKCC datasets was estimated based on the Genomics of Drug Sensitivity in Cancer (GDSC, https://www.cancerrxgene.org) using the 'pRRophetic' R package [24]; a sketch of this step is given at the end of this section.
2.6. Cell culture, TFRC stably overexpressing cell lines and reagents
The HEK293T and prostate cancer cells C4-2, PC-3, LNCaP, and 22RV1 were purchased from the American Type Culture Collection (ATCC, Manassas, VA). HEK293T cells were cultivated in DMEM, while prostate cancer cells were cultured in RPMI 1640. All cells were cultured in the same humidified atmosphere (37 °C with 5% CO2). 5-Azacytidine (Sigma, #A2385) was used for the treatment of 22RV1 and LNCaP cells. The TFRC overexpression plasmid was designed and synthesized by Genomeditech (Shanghai, China). Next, the TFRC overexpression plasmid, psPAX2, and pMD2.G were mixed and added to HEK293T cells to package lentivirus. After 48 h of incubation, lentivirus supernatants were obtained, filtered, and used for infecting cells.
2.7. Western blot analysis
After being washed 3 times with cold PBS buffer, cells were lysed using RIPA buffer and kept on ice. We loaded equal amounts (40 µg) of the different protein samples onto an SDS-PAGE gel and then transferred the proteins to PVDF membranes. The membranes were blocked in skim milk (5%) for 1 h and incubated at 4 °C with primary antibodies overnight. The next day, we incubated the PVDF membranes with HRP-conjugated secondary antibodies (mouse or rabbit) for 1 h at room temperature. After 3 washes in TBST, the blots were visualized with an ECL system (Thermo Fisher Scientific).
2.8. MTT and colony formation assays
MTT reagent was added to each well, and the plate was incubated at 37 °C for 2 h. After the incubation, we removed the medium and dissolved the formazan crystals in DMSO. The absorbance of each well was measured at 490 nm. Colony formation: 22RV1 and LNCaP cells were seeded (1×10³ per well) in 6-well plates and allowed to grow for 12 days. Next, the culture medium was removed, and the cells were washed in cold PBS 3 times. Then we fixed the cells using 95% ethanol for 15 min. After that, the cells were stained with 1% crystal violet for 20 min. The colonies were counted and photographed using a microscope.
2.9. Cell invasion assay
After overexpressing TFRC, we used transwell chambers (8 μm pore size, Corning, MA, USA) to detect the migration and invasion of 22RV1 and LNCaP cells. The upper chamber was pre-coated with Matrigel (Corning, USA) for invasion assays but not for migration assays. 10×10⁴ cells in serum-free media were seeded in the upper chamber, while 600 μL of 10% FBS culture media was added to the lower chamber. After 12 h (migration) or 24 h (invasion), we used a cotton swab to remove the cells left on the upper surface of the chambers and fixed the cells on the lower surface of the filters in 100% methanol for 15 min. Then, we stained them in 0.1% crystal violet solution for 20 min. The cell numbers were counted and averaged across 5 randomly chosen fields using a microscope.
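As a companion to Sections 2.3 and 2.5, here is a hedged R sketch of the immune-infiltration scoring and drug-response estimation; it is an assumed implementation, not the authors' script. expr is a genes × samples expression matrix, immune_sets a named list of the 28 immune cell gene sets (Supplementary Table S2), and the drug name must match the GDSC training data shipped with pRRophetic; function signatures follow the package versions current at the time of the study and may differ in later releases.

library(GSVA)
# ssGSEA infiltration scores for the 28 immune cell types (Section 2.3)
ssgsea_scores <- gsva(expr, immune_sets, method = "ssgsea")

library(pRRophetic)
# Estimated IC50 from tumor expression, trained on GDSC cell-line data (Section 2.5)
ic50 <- pRRopheticPredict(testMatrix = expr, drug = "Bicalutamide", selection = 1)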
Statistics
All data analyses in this study were performed on the R platform (v.4.0.2, https://cran.r-project.org/) or in GraphPad 8.0. Comparisons of mRNA expression between PCa tissues and adjacent nontumorous samples, and of CNV, TMB, mRNAsi, clonal score, chemotherapy response, and immunotherapy response across different risk groups, were performed with the Wilcoxon rank-sum test. The RFS of different groups was compared using the Kaplan-Meier method with the log-rank test. The DEGs between the high- and low-risk groups were identified using the "limma" R package [25]. The MTT, invasion, and migration results were tested via Student's t test. P < 0.05 was considered statistically significant.
Clustering, construction and validation of the FRGs prognostic model
According to the mRNA expression levels of the 40 FRGs, the TCGA cohort could be divided into two clusters (Figure 2A). Moreover, patients in cluster 1 had significantly poorer RFS than those in cluster 2 (Figure 2B), which suggested that FRGs could be related to outcome differences among PCa patients. Therefore, exploration of the prognostic FRGs was warranted. Firstly, the univariate Cox regression method was applied to select the prognosis-associated FRGs, which indicated that 17 genes were correlated with PCa RFS. In order to minimize the risk of overfitting, the LASSO method was then utilized to choose the hub genes (Figure 2C). To further identify the FRGs with the greatest prognostic value, we conducted multiple stepwise Cox regression and chose seven hub FRGs to construct the prognostic model for PCa patients (Figure 2D). Moreover, each of the seven hub genes had genetic alterations in more than 1% of patients, for example TFRC (4%) and ALOXE3 (4%) (Figure 2E). The K-M plot demonstrated that the high-risk group had unfavorable RFS compared with the low-risk group (P < 0.0001, Figure 3A). Moreover, the areas under the ROC curve (AUC) at 1, 3, and 5 years were 0.741, 0.729, and 0.736, respectively (Figure 3B). The RFS status of PRAD patients and the heatmap and barplot of these seven genes are shown in Figure 3C-E. Furthermore, the results of univariate and multivariate Cox regression analyses demonstrated that the risk score was an independent prognostic factor for PCa patients in the TCGA cohort (HR = 1.11, 95% CI: 1.08-1.15, P < 0.001, Table 2). In order to verify the stability of this risk model, the MSKCC cohort was included to test its predictive value. The results were consistent with those of the TCGA dataset (HR = 1.76, 95% CI: 1.43-2.17, P < 0.001, Figure 4A-E and Table 2).
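The survival comparisons and time-dependent AUCs reported above can be reproduced along the following lines; this is a sketch under the same assumptions as before (risk_score, rfs_time, and rfs_status as defined earlier), with the timeROC package chosen here for the 1-/3-/5-year AUCs as one reasonable option rather than the authors' documented choice.

library(survival)
library(timeROC)

risk_group <- ifelse(risk_score > median(risk_score), "high", "low")
survdiff(Surv(rfs_time, rfs_status) ~ risk_group)       # log-rank test
plot(survfit(Surv(rfs_time, rfs_status) ~ risk_group))  # Kaplan-Meier curves

# Time-dependent AUC at 1, 3, and 5 years (cf. the reported 0.741, 0.729, 0.736)
roc <- timeROC(T = rfs_time, delta = rfs_status,
               marker = as.numeric(risk_score), cause = 1,
               times = c(1, 3, 5) * 365.25)
roc$AUC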
Functional analyses in the TCGA and MSKCC cohorts
To further clarify the biological functions and pathways of the FRGs, the DEGs between the high-risk and low-risk groups were analyzed in the TCGA-PCa cohort, where 664 genes met the cutoff (adjusted P < 0.05, |log2FC| > 1), including 419 upregulated and 245 downregulated genes (Figure 5A). Then GO and KEGG enrichment analyses were performed using the clusterProfiler R package (a sketch follows at the end of this section). Interestingly, most of the GO terms were enriched in immune-related functions (Figure 5B), such as humoral immune response, regulation of humoral immune response, B cell mediated immunity, immunoglobulin complex, antigen binding, and growth factor activity. The KEGG terms were closely associated with several metabolic processes (Figure 5C), such as ascorbate and aldarate metabolism and steroid hormone biosynthesis, and especially drug metabolism and metabolism of xenobiotics by cytochrome P450, which are essential pathways for the full utilization of many drugs. Considering that the risk score was strongly associated with the immune response, the infiltration status of the 28 immune cell types was calculated using the ssGSEA method. The activated CD4 T cell, CD56dim natural killer cell, mast cell, memory B cell, neutrophil, regulatory T cell, and type 17 T helper cell were significantly distinct across the low- and high-risk groups in the TCGA dataset (P < 0.05, Figure 5D). In the MSKCC dataset, the CD56dim natural killer cell, central memory CD8 T cell, natural killer cell, natural killer T cell, and type 17 T helper cell were significantly different between the low- and high-risk groups (P < 0.05, Figure 5F). Risk scores were also associated with regulatory T cell, type 17 T helper cell, CD56bright natural killer cell, and neutrophil (Figure 5E&5G), strongly suggesting that FRGs may regulate the progression of PCa via immune pathways.
3.4. The distinctions in TMB, CNV, cancer stemness index and sensitivity to immuno-/chemotherapy between the high- and low-risk groups
According to the GO and KEGG enrichment results, the risk score was closely associated with immune processes and drug metabolism pathways. In order to detect whether the risk score was correlated with the immune response and chemotherapy, the CNV, TMB, cancer stemness index, and clonal score were analyzed.
Overexpressing TFRC facilitates proliferation, migration and invasion in PCa cell lines
Considering that the risk model was strongly associated with the RFS of PCa, the hub genes may have substantial effects on biological functions. Therefore, we chose TFRC, the gene with a high genetic alteration rate, to test this hypothesis. Firstly, we measured the baseline protein levels of TFRC in the PCa cells PC-3, LNCaP, 22RV1, and C4-2 by western blot. The expression of TFRC was relatively lower in LNCaP and 22RV1 cells (Figure 7A). Thus, we used these two cell lines to overexpress TFRC in later experiments.
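For the DEG selection and enrichment analysis described at the start of this section, a minimal R sketch using limma and clusterProfiler could look as follows; it is illustrative only, and assumes expr (genes × samples, log scale), risk_group as above, and Entrez gene identifiers for enrichKEGG.

library(limma)
design <- model.matrix(~ 0 + factor(risk_group, levels = c("low", "high")))
colnames(design) <- c("low", "high")
fit <- lmFit(expr, design)
fit <- eBayes(contrasts.fit(fit, makeContrasts(high - low, levels = design)))
deg <- topTable(fit, number = Inf, p.value = 0.05, lfc = 1)  # adj. P < 0.05, |log2FC| >= 1

library(clusterProfiler)
library(org.Hs.eg.db)
go   <- enrichGO(rownames(deg), OrgDb = org.Hs.eg.db, keyType = "ENTREZID", ont = "ALL")
kegg <- enrichKEGG(rownames(deg), organism = "hsa")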
Interestingly, the mRNA expression of TFRC was decreased in PCa tissues (Figure 1C&1E) compared to normal samples, but increased in tissues with high Gleason scores and lymph node metastasis (Supplementary S1A&1B). Several studies have reported that methylated CpG sites in promoters and distal regulatory regions can block transcription initiation by inhibiting the binding of transcription factors, thereby changing the mRNA expression of the host genes [27][28][29]. We therefore explored the methylation levels of TFRC, and the results showed that TFRC had high methylation levels in PCa tissues (Supplementary S1C), which were negatively associated with the mRNA expression of TFRC (Supplementary S1D). Therefore, we treated the PCa cells with 5-azacytidine (a widely used DNA methylation inhibitor [30]) and found that the protein level of TFRC was increased after cells were incubated with 0.5 μM 5-azacytidine for 24 h (Figure 7B). Next, we overexpressed TFRC in 22RV1 and LNCaP cells (Figure 7C-D). The effects of TFRC on PCa cells were evaluated by MTT, colony formation, and transwell assays. As shown in Figure 7E-F, upregulation of TFRC induced proliferation in PCa cells. In parallel, the transwell assays indicated that TFRC promotes the migrative and invasive activities of the LNCaP and 22RV1 cell lines (Figure 7G).
Discussion
Selective induction of cancer cell death is among the most effective therapeutic approaches for tumors [31]. Several studies have reported that ferroptosis, a common form of selectively induced cell death, plays a pivotal role in the process of tumorigenesis [32,33]. However, a systematic analysis in prostate cancer had yet to be performed. In this study, we collected RNA-seq data to investigate variations in the mRNA expression profiles of ferroptosis-related genes in prostate cancer. On the basis of univariate Cox, LASSO regression, and multivariate Cox analyses, a signature of seven ferroptosis-related genes was identified. It has been widely reported that the seven hub genes, AKR1C3, ALOXE3, ATP5MC3, CARS1, MT1G, PTGS2, and TFRC, are involved in the development of several diseases. AKR1C3, a crucial androgenic enzyme, can reprogram AR signaling in advanced prostate cancer [34] and is implicated in the production of aromatase substrates in breast cancer [35]. ALOXE3, epidermal LOX type 3, converts fatty acid substrates through R-hydroperoxides to specific epoxyalcohol derivatives [36] and is involved in late epidermal differentiation [37] and ichthyosis [38]. ATP5MC3, also named ATP5G3 [39], encodes a subunit of mitochondrial ATP synthase and is associated with overall survival in clear cell renal carcinoma patients [40]. CARS1, cysteinyl-tRNA synthetase 1, associated with tRNA function, has been found to be associated with the development of inflammatory myofibroblastic tumor [41] and kidney cancer [42].
CARS1 knockdown has been shown to suppress ferroptosis induced by cysteine deprivation and to promote the transsulfuration pathway [43]. MT1G, a small-molecular-weight protein with high affinity for zinc ions, can inhibit proliferation by increasing the stability and transcriptional activity of p53 [44]. PTGS2, also known as cyclooxygenase 2, is responsible for prostanoid biosynthesis [45]. Several studies have demonstrated that PTGS2 induces cancer stem cell-like activity and promotes proliferation, angiogenesis, and metastasis of cancer cells [46][47][48]. TFRC is a cell surface receptor necessary for cellular iron uptake [49]. Notably, except for PTGS2 and MT1G, few of these genes have ever been reported in PCa. Here, in this study, these seven genes were demonstrated to correlate with the RFS of PCa. Based on the mRNA expression profiles of these seven hub genes, the ferroptosis-related risk model was constructed in the TCGA PCa cohort. The KM plot and ROC curve showed that this risk score could readily distinguish patients with different RFS. Moreover, the results were validated in the MSKCC external validation dataset. To further explore the detailed mechanism of the FRGs, the PCa patients were divided into two groups according to the risk score. There were 664 DEGs between these two groups. Interestingly, these DEGs were enriched in immune-related GO terms and metabolism-related KEGG terms (Figure 5B-5C). Subsequently, the infiltration status of 28 immune cell types was calculated based on the ssGSEA method, and the results showed that several immune cells were associated with the risk score (Figure 5D-5G), such as central memory CD8 T cell and CD56bright natural killer cell, which suggested that this risk model may have implications for immunotherapy. Then, immunotherapy-related signatures, such as CNV load, TMB, mRNAsi, clonal score, and TIDE score, were included for further investigation. These results indicated that the high-risk group was correlated with CNV load, TMB, neoantigens, mRNAsi, and clonal score (Figure 6A-6F), which suggested that the high-risk group may have a better response to immune therapy. The TIDE score in the high-risk group was also increased, which likewise suggested a better outcome for immune therapy (Figure 6G&6I). Besides, considering that most KEGG terms between the low- and high-risk groups were enriched in metabolism, especially in drug metabolism (Figure 5C), we estimated the IC50 for three commonly used drugs. The results showed an increase in the high-risk group for bicalutamide (Figure 6H-J), which implied that this group may be resistant to bicalutamide. In a word, the FRGs may affect RFS via regulation of the drug response; however, high-risk patients may improve their situation through immune therapy, which may provide a potential strategy for the individualized treatment of PCa patients. Finally, we chose TFRC, the gene with a high genetic alteration rate, for further validation.
Considering that the mRNA level of TFRC is low in PCa patients compared with normal tissues, we decided to overexpress TFRC to detect its effect on PCa cells. Based on the baseline protein expression, the LNCaP and 22RV1 cell lines were selected (Figure 7A, Figure 7C and Figure 7D). MTT, colony formation, and invasion assays showed that TFRC overexpression could significantly increase proliferation (Figure 7E, 7F) and invasion (Figure 7G) in PCa cells. Moreover, we also added 5-Aza to reduce the DNA methylation of TFRC, which demonstrated that the TFRC gene may be hypermethylated, inducing its relatively lower expression in PRAD tissues despite its rising trend along with the Gleason score. This suggests that TFRC may play an oncogenic role in PCa, but the detailed mechanism needs to be explored further. In summary, our study systematically analyzed the expression of FRGs and their potential prognostic ability in PCa. The risk model of seven FRGs was established in TCGA and validated in the MSKCC dataset. Moreover, we found that the high-risk group was correlated with high CNV load, TMB, neoantigens, mRNAsi, clonal score, and immune therapy response, but with a high estimated IC50 for bicalutamide. Finally, the impacts of overexpressing TFRC on proliferation and invasion in PCa cell lines were tested. Our research has several limitations. First, the risk model in our study was established and validated entirely on public databases; to confirm its clinical significance, more prospective real-world data are needed. Second, we only preliminarily explored the effect of TFRC overexpression on biological functions; the specific mechanism needs to be investigated further in vivo and in vitro.
Conclusions
In conclusion, a ferroptosis-related risk model was established, which was strongly associated with aberrant CNV load, TMB, mRNAsi, neoantigens, clonal score, and immuno-/chemotherapy responses. Simultaneously, TFRC was validated as an oncogenic factor in prostate cell lines. Our research provides new insights into personalized therapy for PCa patients. In our research, we collected the mRNA expression profiles of 40 ferroptosis-related genes and the clinical data of PCa patients from the TCGA database. We then evaluated their differential expression in different-risk PCa samples and investigated the enriched pathways and biological roles. Moreover, we chose the hub gene transferrin receptor (TFRC) for further experiments in vitro. As a result, we constructed an FRGs-based prognostic model in the PCa TCGA dataset and validated it in another dataset, and we verified the function of the TFRC gene, which might strengthen our understanding of PCa.
Figure 1. The landscape of ferroptosis-related genes (FRG) in prostate cancer. (A) Heatmap of 40 FRGs between 499 prostate cancer and normal tissues. (B) Heatmap of 40 FRGs between 52 tumor and adjacent normal pairs. (C) mRNA expression levels of FRGs in all prostate cancer and normal tissues. (D) The PPI network among the FRGs, acquired from the STRING database. (E) mRNA expression levels of FRGs in prostate cancer and paired tissues. (F) The correlation among FRGs in prostate cancer. * P < 0.05, ** P < 0.01, *** P < 0.001, ns, not significant.
Figure 2. RFS of PRAD patients in cluster 1/2 subgroups and risk signature with 7 FRGs. (A) Consensus clustering matrix for k = 2. (B) Kaplan-Meier curves of DFS. (C) The cross-validation fit curve calculated by the LASSO regression method. (D) The coefficients of seven hub FRGs estimated by multivariate Cox regression. (E) Genetic alterations of seven hub FRGs.
Figure 3.
Risk score based on seven hub FRGs in the TCGA PRAD cohort. (A) Survival analysis according to risk score. (B) ROC analysis. (C) Survival status of patients. (D-E) Heatmap and barplot of the seven hub genes between the high- and low-risk groups.
Figure 4. Risk score based on seven hub FRGs in the MSKCC PRAD cohort. (A) Survival analysis according to risk score. (B) ROC analysis. (C) Survival status of patients. (D-E) Heatmap and barplot of the seven hub genes between the high- and low-risk groups.
Figure 5. Potential biological pathways affected by FRGs. (A) The differentially expressed genes (DEGs) between the high-risk and low-risk groups. (B) The Gene Ontology (GO) enrichment of DEGs. (C) The KEGG enrichment of the high- and low-risk groups. (D-E) Comparison and correlations of the ssGSEA scores between different risk groups and risk scores in the TCGA dataset; (F-G) the same in the MSKCC dataset.
Figure 6. The correlations of different risk groups with copy number alterations, tumor mutation burden, neoantigens, tumor stemness, clonal status, and immuno-/chemotherapy response. (A) Arm-level copy number amplification and deletion. (B) Focal-level copy number amplification and deletion. (C) Tumor mutation burden difference. (D) Neoantigens. (E) Tumor stemness difference represented by the mRNAsi. (F) Clonal status. (G-H) Immunotherapy response based on the TIDE website, and estimated IC50 indicating the efficiency of chemotherapy in TCGA. (I-J) Immunotherapy response and estimated IC50 in the MSKCC dataset.
Figure 7. TFRC expression influenced by 5-azacytidine, and overexpression of TFRC promotes proliferation, migration, and invasion in the PCa cells 22RV1 and LNCaP. (A) The protein levels of TFRC in PCa cells (C4-2, LNCaP, 22RV1, and PC-3) were detected by western blot. (B) 5-Azacytidine inhibited DNA methylation and increased the expression of TFRC in LNCaP and 22RV1 cells. (C, D) The efficiency of the TFRC-overexpressing lentivirus was confirmed using fluorescence microscopy and western blot. (E, F) The MTT and colony formation assays indicated that TFRC promotes cell proliferation in LNCaP and 22RV1. (G, H) Transwell assays suggested that TFRC promotes cell migration and invasion in LNCaP and 22RV1.
Supplementary Figure S1. The associations between TFRC mRNA expression levels and Gleason score (A), nodal metastasis (B), and methylation levels (C-D) in TCGA PCa patients.
5,574.8
2020-10-17T00:00:00.000
[ "Medicine", "Biology" ]
Acylated Anthocyanins from Red Cabbage and Purple Sweet Potato Can Bind Metal Ions and Produce Stable Blue Colors
Red cabbage (RC) and purple sweet potato (PSP) are naturally rich in acylated cyanidin glycosides that can bind metal ions and develop intramolecular π-stacking interactions between the cyanidin chromophore and the phenolic acyl residues. In this work, a large set of RC and PSP anthocyanins was investigated for its coloring properties in the presence of iron and aluminum ions. Although relatively modest, the structural differences between RC and PSP anthocyanins, i.e., the acylation site at the external glucose of the sophorosyl moiety (C2-OH for RC vs. C6-OH for PSP) and the presence of coordinating acyl groups (caffeoyl) in PSP anthocyanins only, made a large difference in the color expressed by their metal complexes. For instance, the Al3+-induced bathochromic shifts for RC anthocyanins reached ca. 50 nm at pH 6 and pH 7, vs. at best ca. 20 nm for PSP anthocyanins. With Fe2+ (quickly oxidized to Fe3+ in the complexes), the bathochromic shifts for RC anthocyanins were higher, i.e., up to ca. 90 nm at pH 7 and 110 nm at pH 5.7. A kinetic analysis at different metal/ligand molar ratios combined with an investigation by high-resolution mass spectrometry suggested the formation of metal–anthocyanin complexes of 1:1, 1:2, and 1:3 stoichiometries. Contrary to predictions based on steric hindrance, acylation by noncoordinating acyl residues favored metal binding and resulted in complexes having much higher molar absorption coefficients. Moreover, the competition between metal binding and water addition to the free ligands (leading to colorless forms) was less severe, although very dependent on the acylation site(s). Overall, anthocyanins from purple sweet potato, and even more from red cabbage, have a strong potential for development as food colorants expressing red to blue hues depending on pH and metal ion.
Introduction
The color of red cabbage (RC) and purple sweet potato (PSP) is due to closely related anthocyanins displaying a cyanidin or peonidin (3′-O-methylcyanidin) 3-O-sophoroside-5-O-glucoside structure [1,2]. Cyanidin derivatives (the major RC pigments) are especially interesting colorants owing to their ability to bind metal ions (via their catechol B-ring) in neutral or mildly acidic solution. Indeed, metal-induced cyanidin deprotonation leads to a quinonoid chromophore that can express intense purple to blue colors [1,3]. Another important consequence of metal-anthocyanin binding is increased color stability. Indeed, through the electrophilic flavylium ion (main colored form in acidic solution), anthocyanins, unlike their metal chelates, are vulnerable to water addition with the concomitant reversible formation of colorless forms (hemiketal and chalcones) [4]. A remarkable feature of RC anthocyanins is that the sophorosyl moiety is typically acylated by one or two residue(s) of p-hydroxycinnamic acid (HCA = p-coumaric, ferulic, caffeic, or sinapic acid). Phenolic acyl groups are known to favor folded conformations in which the anthocyanidin (chromophore) and the acyl residues develop π-stacking interactions. Like metal binding, this phenomenon, called intramolecular copigmentation, causes a bathochromic shift (BS) in the visible absorption band and protects the chromophore against water addition [5][6][7]. The combination of π-stacking interactions and metal binding is actually required to achieve maximal blue color stability with anthocyanins.
Indeed, phenolic acyl groups stacked onto the cyanidin nucleus could either directly participate in metal binding (e.g., caffeic acid residues through their catechol ring) or at least strengthen metal binding by building a hydrophobic pocket around the metal-cyanidin complex. Recently, a remarkable RC anthocyanin (called pigment B or PB), displaying a single ideally located sinapoyl residue, was shown to form an aluminum(III) complex of 1:3 stoichiometry in which the three PB ligands in octahedral coordination to Al3+ adopt a chiral arrangement, causing an intense positive Cotton effect in the visible part of its circular dichroism spectrum [8]. The Al(PB)3 complex is strongly stabilized by the π-stacking interactions taking place between each cyanidin chromophore and the sinapoyl residue of an adjacent ligand. Moreover, this original supramolecular structure imposes a large torsion angle around the bond connecting the B- and C-rings of the three cyanidin nuclei. This unique combination of structural characteristics results in an intense vibrant blue color of high stability, making PB and its metal complexes potential lead compounds for the replacement of artificial blue colorants by natural alternatives. In this work, a selection of acylated cyanidin glycosides from red cabbage (including PB) and purple sweet potato is revisited for its affinity for aluminum and iron ions. In particular, the influence of the acyl groups on the binding kinetics, color stability, and rate of oxidative degradation of the complexes is systematically addressed. Indeed, the addition of iron ions was shown to accelerate the oxidative degradation of nonacylated cyanidin glycosides, while the presence of phenolic acyl groups tends to cancel this effect [9]. In this work, the conditions (pH, metal type, and concentration) permitting the optimal development of a blue color are also explored.
Results and Discussion
Metal-anthocyanin binding is of great importance for plants, not only because it is an efficient way to express blue colors to attract pollinating insects [6], but also as a detoxification mechanism against metal excess [10], which can operate with a variety of metal ions (e.g., Fe, Al, Pb, Cd, Mo, Mg, Ni, and V in corn roots). With Cu2+, the binding is followed by Cu2+ reduction and anthocyanin oxidation [11]. However, quantitative physicochemical investigations of metal-anthocyanin binding are scarce. Such approaches have to address the structural transformations of anthocyanins in aqueous solution, the kinetics of metal binding and its stoichiometry, the critical influence of pH, and the possible influence of phenolic acyl groups and even of the selected buffer (depending on its own affinity for metal ions). This is the specific focus of this work, based on a large series of diversely acylated cyanidin glycosides and two of the most important metals (Al, Fe) in terms of blue color development [6].
The Color and Spectral Properties of the Metal Complexes
Anthocyanins under their flavylium form (AH+) are typically diacids undergoing a first proton loss from C7-OH (pKa1 ≈ 4), followed by a second one from C4′-OH around neutrality [12]. Thus, at pH 7, cyanidin glycosides from RC or PSP are a mixture of neutral (A7) and anionic (A4′7) bases (pKa2 = 7.0-7.3 [7,13]), in agreement with the broad absorption band observed (Figure 1).
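The diacid speciation just described can be made quantitative: given pKa1 and pKa2, the mole fractions of the flavylium ion (AH+), neutral base (A7), and anionic base (A4′7) at any pH follow directly from the two acid-base equilibria. The short R function below is a minimal numerical sketch (not taken from the paper); it deliberately ignores the competing hydration of AH+ to colorless forms.

anthocyanin_fractions <- function(pH, pKa1 = 4.0, pKa2 = 7.2) {
  h  <- 10^(-pH)
  K1 <- 10^(-pKa1)
  K2 <- 10^(-pKa2)
  denom <- h^2 + K1 * h + K1 * K2
  c(flavylium    = h^2     / denom,   # AH+ (red, acidic form)
    neutral_base = K1 * h  / denom,   # A7
    anionic_base = K1 * K2 / denom)   # A4'7
}

anthocyanin_fractions(pH = 7)  # comparable amounts of A7 and A4'7, as stated above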
From the experimental spectra at pH 5-7, the pKa values, and the spectrum of the pure flavylium ion (pH 1), the spectra of the pure neutral and anionic bases can be calculated [13]. Compared to the neutral base, the anionic base not only has a much higher λmax, but also a higher molar absorption coefficient at λmax (Figure 1). The major PSP anthocyanins are acylated peonidin glycosides, which do not bind metal ions through their chromophore. However, cyanidin glycosides are also present in PSP. Unlike the RC anthocyanins, the HCA residues of the PSP anthocyanins are only located at the primary C6-OH positions of the sophorosyl moiety (Scheme 1). In particular, the acyl residue borne by Glc-2 (R3) is expected to be more mobile than its homolog in red cabbage (R2 = sinapoyl). This happens to make a large difference in terms of color variation: the Al3+-induced bathochromic shifts for PB (red cabbage) are ca. 50 nm at pH 6 and pH 7 vs. at best ca. 20 nm for P4′ (R3 = feruloyl, Table 1). The flexible acyl residue of P4′ is probably much less apt to develop π-stacking interactions with the cyanidin nucleus than the more rigid sinapoyl residue of PB. Consistently, it has been demonstrated that RC anthocyanins having a single HCA residue at R1 (P1-P3) are much more susceptible to water addition than PB [8] as a consequence of the latter adopting folded conformations in which the cyanidin and HCA moieties are in molecular contact. However, PSP pigments have a specific advantage over RC pigments: the presence of caffeoyl residues at Glc-1 and/or Glc-2, which themselves can bind metal ions.
Figure 1. UV-VIS spectra of Pigment 5 (50 µM) at pH 7, its pure anionic base (A−, calculated), and its Fe2+ and Al3+ complexes (1 equiv.). Right: color patches from the L*a*b* coordinates calculated from the visible spectra.
Table 1. Spectral characteristics of the metal complexes of red cabbage and purple sweet potato anthocyanins (1 equiv. metal ion). ∆λmax = λmax(+metal) − λmax(no metal). ∆A = A(+metal) − A(no metal).
Scheme 1. Structure of the RC and PSP anthocyanins studied. When present, hydroxycinnamoyl residues R1, R2, and R3 are p-coumaroyl (pC), feruloyl (Fl), caffeoyl (Cf), and/or sinapoyl (Sp). Note: a) from red cabbage; b) from purple sweet potato.
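The spectral decomposition invoked above (calculating the pure neutral- and anionic-base spectra from mixed spectra recorded at pH 5-7, the pKa values, and the pure flavylium spectrum at pH 1) amounts to a linear least-squares unmixing. A hypothetical R sketch, reusing anthocyanin_fractions from the previous block, is given here; spectra is a wavelength × pH absorbance matrix and all names are illustrative.

unmix_base_spectra <- function(spectra, pH_values, flavylium_spectrum,
                               pKa1 = 4.0, pKa2 = 7.2) {
  # Mole fractions of the three species at each pH (rows = pH values)
  frac <- t(sapply(pH_values, anthocyanin_fractions, pKa1 = pKa1, pKa2 = pKa2))
  # Subtract the known flavylium contribution at each pH
  residual <- spectra - outer(flavylium_spectrum, frac[, "flavylium"])
  # Least-squares solve residual ~ S %*% t(frac[, bases]) for the two base spectra S
  S <- t(qr.solve(frac[, c("neutral_base", "anionic_base")], t(residual)))
  colnames(S) <- c("A7", "A4'7")
  S
}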
Our recent work [13] suggests that such pigments, in which the cyanidin and caffeoyl units tend to stack onto each other, can actually sequester Fe2+ and Al3+ by the simultaneous involvement of both catechol rings, thereby largely increasing the metal-induced bathochromic shift. This was confirmed in the current work: the bathochromic shifts induced by Al3+ (1 equiv.) at pH 7 were 8, 21, and 22 nm for P4′ (R3 = feruloyl), P6′ (R1 = caffeoyl), and P7 (R1 = R3 = caffeoyl), respectively (Figure S1A, Table 1). It was also clear that the caffeoyl residue at R1 was critical to promote the bluing effect, while the other one at R3 was not. By contrast, P9b (R1 = caffeoyl, R3 = feruloyl) was clearly less efficient (BS = 16 nm), possibly pointing to a less-favorable binding because of the relatively bulky feruloyl residue.

Scheme 1. Structure of the RC and PSP anthocyanins studied. When present, hydroxycinnamoyl residues R1, R2, and R3 are p-coumaroyl (pC), feruloyl (Fl), caffeoyl (Cf), and/or sinapoyl (Sp). Note: a) from red cabbage; b) from purple sweet potato.

The pH dependence of the λmax values in the pH range 6-8 for the free forms (Table S1) was consistent with the conversion of the neutral base (A7) to the anionic base (A4′7, proton loss from C4′-OH), in full agreement with the corresponding pKa2 value of the RC and PSP anthocyanins [7,13].
As Al3+ also triggers proton loss from C4′-OH, the bathochromic shift (BS) induced in the free forms by increasing the pH from 6 to 8 should be close to the one accompanying Al3+ binding at pH 6. Additionally, only small BSs were expected upon Al3+ binding at pH 8. While the latter prediction was well verified experimentally for all PSP pigments (Al3+-induced BS at pH 8 < 10 nm, Table S1), the former was not: indeed, the pH-induced BSs (ca. 50 nm) were much larger than the Al3+-induced BSs at pH 6 (from 16 nm for P4′ to 29 nm for P9b). Moreover, bathochromism was clearly observed in the complexes' visible band when the pH was increased from 6 to 8 (BS = 25-33 nm, Table S1). Hence, it can be proposed that C7-OH in the complexes was not dissociated at pH 6, and lost its proton when the pH was raised to 8 (Scheme 2). This proposal is consistent with the theoretical visible spectrum calculated for the Al(PB)3 complex, which, while involving 3 ligands undissociated at C7-OH, was found fully consistent with the experimental spectrum [8]. Moreover, comparing the neutral base of the 7-O-β-D-glucosyloxy-4′-hydroxyflavylium ion (proton loss from C4′-OH) with that of its 4′-O-β-D-glucosyloxy-7-hydroxyflavylium regioisomer (proton loss from C7-OH) also showed that the former displayed a λmax value that was 20 nm higher [14] and a molar absorption coefficient almost 3 times as large. Finally, DFT calculations on pyranoanthocyanins [15] confirmed that the neutral base formed by proton loss from C4′-OH (A4′, a very minor species) displayed an intense absorption band at higher wavelengths than the major tautomer (proton loss from C7-OH). Overall, turning A7 into A4′ upon metal binding is actually expected to promote both bathochromism and hyperchromism.

Scheme 2. Metal-anthocyanin binding.
Red cabbage anthocyanins with R2 = sinapoyl, whether mono- or diacylated, were distinct from the PSP anthocyanins and the other RC pigments in that the Al3+-induced BSs were larger, especially at pH 7 (36 and 52 nm for P5 and PB, respectively, vs. barely 5 and 8 nm for P2 and P4′, respectively) (Table 1, Figures 1 and S1B). The λmax values of the complexes were even higher than that of the anionic base. Thus, even if proton loss from C7-OH was not complete at pH 7 for the Al3+ complexes, the strong π-stacking interactions occurring between the cyanidin nucleus and R2 = sinapoyl and the concomitant torsion imposed between the B- and C-rings [8] effectively turned the color to an intense cyan hue (Figure 1), close to the one expressed by the major synthetic blue food colorants Brilliant Blue and indigotine.

With Fe2+, the BSs were even larger: at pH 7, 87 and 76 nm for P5 and PB, respectively, vs. 25 nm for P2 (Table 1, Figures 1 and S1B). As already reported for other Fe2+-polyphenol complexes [16,17], bound Fe2+ was rapidly autoxidized to Fe3+. This reaction was: (a) promoted by the higher affinity of catechols for Fe3+ (vs. Fe2+); (b) confirmed by the similarity of the final spectra, whether Fe2+ or Fe3+ was added [13]; and (c) consistent with the broad absorption band of the iron complexes and their high λmax, which both suggest ligand-to-Fe3+ charge transfer. However, iron autoxidation clearly followed metal binding. Indeed, despite the higher intrinsic affinity of catechols for Fe3+, anthocyanins bound Fe2+ much more rapidly in our model (Figures 2 and S2) [13], as competition between anthocyanins and the phosphate anions for the metal was much less severe with Fe2+.

While the visible band of the Al3+ complexes of the RC anthocyanins showed the same pH dependence as for the PSP anthocyanins, the visible band of their Fe3+ complexes was remarkably insensitive to pH in the subgroup with R2 = sinapoyl (Figure S1B). In particular, the BS featuring iron-PB binding at pH 5.68 hit a record high of ca. 110 nm, i.e., twice as much as with Al3+ at the same pH. Hence, it can be proposed that C7-OH in the iron complexes is dissociated even at low pH (Scheme 2).

The colorimetric data of the PSP cyanidin glycosides and their Al3+ complexes (Table S2) provide additional evidence of the bluing effect induced by raising the pH from 6 to 8, or by adding increasing Al3+ concentrations at a given pH. For comparison, a hue angle of 207.3° was recorded for the PB-Al3+ complex at pH 7, i.e., a close match for that of the synthetic triarylcarbonium colorant Brilliant Blue (h° = 209.4), regarded as a reference for a vibrant cyan hue in the confectionery industry [8]. No such match was observed with the PSP anthocyanins, and the best result recorded, i.e., the P6′-Al3+ complex at pH 8 (h° = 221.7), remained off target. However, it confirmed that a single caffeoyl residue at R1 was sufficient to promote a strong bluing effect.
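The hue angles quoted above derive directly from the a* and b* coordinates of the CIELAB space; a minimal sketch with illustrative values only:

```python
import math

def hue_angle(a_star, b_star):
    """CIELAB hue angle h (degrees, 0-360) from the a*, b* coordinates."""
    return math.degrees(math.atan2(b_star, a_star)) % 360.0

# Illustrative values only: a vibrant cyan such as the PB-Al3+ complex
# (h ~ 207) lies in the green-blue quadrant (a* < 0, b* < 0).
print(f"{hue_angle(-40.0, -22.0):.1f}")  # -> 208.8
```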
Kinetic Analysis and Stoichiometry of Metal Binding

When small volumes of concentrated Fe2+ or Al3+ aqueous solutions were added immediately after diluting the RC pigments into pH 6-8 phosphate buffers, a relatively fast metal-anthocyanin binding occurred (Figures 2 and S2). Contrary to predictions based on steric hindrance, the observed trend was that acylation by noncoordinating HCA residues favored metal binding. For instance, nonacylated PA weakly bound Al3+ at pH 7 and 8 (weak spectral changes preventing the kinetic analysis), and strong Al3+-P2 binding only occurred at pH 8. By contrast, P5 strongly bound Al3+ at both pHs.
The minimal metal/ligand molar ratio to reach full binding (saturation of the visible band of the complex) is an indicator of the complex's stoichiometry. This ratio lay between 1/3 and 2/3 for PA, PB, P2, and P5 (Figures 2 and S3), as already observed with iron [9]. Thus, mixtures of 1:1, 1:2, and 1:3 complexes were expected in variable proportions according to the metal/ligand molar ratio. Consistently, in our recent work, the Al3+(PB)3 and Al3+(P6)3 complexes were evidenced by high-resolution mass spectrometry in a dilute ammonium acetate buffer, but not the Al3+(P3)3 homolog [8]. This was confirmed in the present work (Table S3). Under the same conditions, 1:1 and 1:2 iron-anthocyanin complexes were detected with P5 and PB (Table 2, Figure S4). With monoacylated P2, only the 1:1 complex was detected, whatever the M/L molar ratio between 1/6 and 1.
The detection of 1:3 complexes is more challenging, as the corresponding ions must bear at least 3 charges for the m/z ratio to fall below 1500, the upper limit of detection. Satisfactory agreement between experimental and theoretical m/z values was observed for the main ions and their major isotopes. Moreover, the HRMS data were consistent with iron having a +3 oxidation state in the complexes. For instance, the FeP5 monocation (exp. m/z 1208.2307, 1209.2352, 1210.2398) is proposed to be [P5 − 3H+ + Fe3+]+ (P5 referring to the flavylium cation, theoretical m/z 1208.2302, 1209.2335, 1210.2362). The corresponding complex involving Fe2+ would be [P5 − 2H+ + Fe2+]+ (m/z 1209.2381, 1210.2414, 1211.2440). Despite the possible match of the experimental spectrum with isotopes of the Fe2+ complex having one or two 13C-atoms, the intense signal at m/z 1208.2307 (Figure S4) clearly required an Fe3+ ion.

Table 2. Ions detected for the iron complexes of pigments P5 = Cya-3-(Fl)Glc-2-(Sp)Glc-3-Glc and PB = Cya-3-Glc-2-(Sp)Glc-3-Glc from red cabbage (metal/ligand molar ratio = 1).

Unexpectedly, varying the M/L molar ratio between 1/6 and 1 did not strongly impact the signal intensity of the 1:1 complex relative to the free ligand (Table S4). Moreover, the signal intensities of the 1:1, 1:2, and 1:3 complexes could not be compared due to the charge-specific ion sensitivity of the MS detector.
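A quick exact-mass check of the two competing assignments above (a sketch; the only quantity introduced here is the hydrogen atomic mass) shows why the Fe3+ formulation is required:

```python
# Exact-mass bookkeeping for the FeP5 monocation (masses in u; the
# theoretical m/z value is the one quoted in the text).
m_H = 1.0078250319   # hydrogen atom = m(H+) + m(e-)

mz_Fe3 = 1208.2302   # theoretical [P5 - 3H+ + Fe3+]+, z = 1
# The Fe2+ complex removes one proton fewer and carries one extra electron
# (Fe2+ vs. Fe3+), i.e., it is heavier by exactly one H atom:
mz_Fe2 = mz_Fe3 + m_H
print(f"[P5 - 2H+ + Fe2+]+ expected at m/z {mz_Fe2:.4f}")  # -> 1209.2380
# Close to the 1209.2381 quoted above; the intense experimental signal at
# m/z 1208.2307 therefore points to bound Fe(III), not Fe(II).
```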
A simple model assuming stepwise 1:1, 1:2, and 1:3 binding was tested to account for the kinetics of metal binding as a function of the metal concentration. To keep the number of adjustable parameters to a minimum, the rate constants of the first, second, and third steps were assumed to be 3k (3 available binding sites), 2k (2 available binding sites), and k (1 available binding site), respectively. Moreover, the molar absorption coefficients of the ML, ML2, and ML3 complexes (M = metal, L = ligand) were assumed to be ε, 2ε, and 3ε, respectively. Satisfactory curve-fittings of the A(670 nm) vs. time curves for different M/L molar ratios were thus obtained, leading to optimized values for parameters k and ε, and permitting the plotting of the time dependence of the concentrations of the 3 complexes (Table 3, Figure S5).

Table 3. Kinetic analyses of metal-anthocyanin binding according to the stepwise formation of ML, ML2, and ML3 complexes (25 °C, monitoring at 670 nm). MLn: metal-ligand complex having a 1:n metal/ligand stoichiometry. For simplicity, the successive binding steps are described by rate constants 3k, 2k, and k, respectively, and each bound ligand by the same molar absorption coefficient ε. Some repetitions are shown. Values between brackets are standard deviations for the curve-fitting procedure.

For a given pigment and metal ion, the optimized ε values were fairly stable when the metal concentration was varied, which was satisfactory. This was less true for the optimized k values, which suggests that the model of independent binding steps was too simple. With the P5-Al3+ pair at pH 7, much better curve-fittings were actually obtained by implementing 3 optimizable rate constants in the model. Despite its crude approximations, the model offered a simple way to quantitatively assess the influence of pH, metal type, and acylation pattern on the rate of metal-anthocyanin binding. Overall, binding was faster at pH 8 than at pH 7, and faster with Fe2+ than with Al3+. Moreover, under our conditions, acylation did not impede metal binding. Most importantly, pigments having R2 = sinapoyl (PB, P5) formed complexes with much higher molar absorption coefficients (typically, by a factor of 2-3), which was an obvious advantage for color development.
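To make the fitting procedure concrete, the following sketch (Python with SciPy; concentrations and initial guesses are illustrative, and this is not the Scientist implementation actually used in this work) integrates the stepwise scheme and exposes the absorbance function that a least-squares fit of k and ε would call:

```python
import numpy as np
from scipy.integrate import solve_ivp

def stepwise_binding(t, y, k):
    """M + L -> ML (rate 3k), ML + L -> ML2 (2k), ML2 + L -> ML3 (k);
    y = [M, L, ML, ML2, ML3], with statistical factors for 3 sites."""
    M, L, ML, ML2, ML3 = y
    r1, r2, r3 = 3 * k * M * L, 2 * k * ML * L, k * ML2 * L
    return [-r1, -(r1 + r2 + r3), r1 - r2, r2 - r3, r3]

def absorbance(t, k, eps, M0, L0):
    """A(670 nm) with molar absorption eps, 2*eps, 3*eps for ML, ML2, ML3
    (1 cm pathlength), as assumed in the model above."""
    sol = solve_ivp(stepwise_binding, (0.0, t[-1]), [M0, L0, 0.0, 0.0, 0.0],
                    t_eval=t, args=(k,), rtol=1e-8, atol=1e-12)
    return eps * (sol.y[2] + 2 * sol.y[3] + 3 * sol.y[4])

# Fitting k and eps to an experimental A(670 nm) trace (t_data, A_data)
# could then proceed with, e.g., scipy.optimize.curve_fit:
#   f = lambda t, k, eps: absorbance(t, k, eps, M0=25e-6, L0=50e-6)
#   popt, pcov = curve_fit(f, t_data, A_data, p0=(1e3, 1e4))
```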
Competition between Metal Binding and Water Addition

Metal binding is accompanied by the removal of the B-ring's phenolic protons. In other words, protons and metal ions compete for the O-atoms of the B-ring. Hence, lowering the pH gradually destabilizes the complexes, and a minimal pH for the onset of metal binding is expected. Also, a weaker stability of the complexes could mean reversibility in their formation and thus competition with water addition to the free form (flavylium ion), which leads to color loss.

The influence of pH on iron binding was investigated with the two isomers PB and P3. For a given pH, the UV-VIS spectra were recorded immediately after pigment addition to buffer in the absence of Fe2+ and after maximal binding in the presence of Fe2+. A rise in visible absorbance in the range 600-700 nm, which is typical of metal binding, could be perceived with both pigments at pH ≥ 3 (data not shown). Although P3 and PB could not be clearly distinguished by the pH for the onset of iron binding, it was obvious that the hyperchromic and bathochromic shifts in mildly acidic solution (pH 4.24) were much more spectacular with PB than with P3 (Figure 3A,C). Part of the interpretation is rooted in the higher susceptibility of P3 to color loss by reversible water addition, which was both faster and more complete than for PB (Figure 3B). Indeed, when the pigment and Fe2+ (1 equiv.) were added to the acetate buffer, a sharp drop of visible absorbance at 530 nm (free pigment) was observed with P3 over the first minute, whereas the increase of visible absorbance at 670 nm (iron complex) was negligible (Figure 3A). In a second phase, the onset of iron binding occurred, and A(670 nm) slowly increased. In this case, the first step corresponding to flavylium hydration (fast) was clearly decoupled from the second step (slow) of iron binding. By contrast, with PB, the drop of A(530 nm) over the first minute was limited and accompanied by a rise of A(670 nm), which was then amplified along the second phase. This is evidence that hydration and metal binding now compete from the beginning. It is interesting to note that, even though hydration was faster than metal binding in the case of P3, the higher affinity of the colored forms (vs. colorless forms) for the metal ion eventually permitted the reversal of the hydration equilibrium and the slow development of metal binding. However, even with PB, the intensity of the complex's band in the pH range 3-6 remained much lower than in neutral solution, where it reached saturation even in the presence of substoichiometric iron concentrations. As the λmax of the iron-PB complex's visible band was pH-independent, the spectrum at pH 7 (Figure S1B) could be used to calculate the percentage of iron complex at equilibrium at pH 4.24: 47%. A similar calculation with P3 gave only 24%.

Attempts to fit the spectral changes observed at 530 nm (free form) and 670 nm (iron complex) to a simple kinetic model, assuming competition with water addition and reversible iron binding, failed. However, a more sophisticated scheme assuming reversible Fe2+ binding by the colored forms, followed by irreversible autoxidation of bound Fe2+ with concomitant formation of a Fe3+ complex (in equilibrium with the free species), provided excellent curve-fittings and acceptable rate constants for the different steps (Table S5). Overall, PB bound Fe2+ twice as rapidly as P3 did, and the Fe2+-PB complex seemed less susceptible to autoxidation than the Fe2+-P3 complex. Finally, the blue color development was much more intense with PB, as the molar absorption coefficient of the PB-Fe3+ complex in the blue domain was ca. twice as large as that of the P3-Fe3+ complex. These remarkable improvements only reflected the shift of the single sinapoyl residue from the 6 position of Glc-1 to the 2 position of Glc-2. This is a spectacular example of the crucial importance of strong anthocyanidin-hydroxycinnamoyl π-stacking interactions in the development of vibrant blue colors.

Long-Term Stability of the Metal Complexes

Acylation promoted a moderate increase in color stability upon heating at 50 °C, again with an advantage conferred on pigments with R2 = sinapoyl (Table S6, Figure S6). This trend was hugely emphasized after Fe2+ addition (0.6 equiv.), a clear indication that the sinapoyl residue on Glc-2 was very efficient for the long-term stabilization of the metal complexes. Again, the comparison between the P3 and PB isomers was striking: a 25% color loss of the Fe2+-P3 complex was reached in 15-20 min vs. 4.5 h for the Fe2+-PB complex.
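As a rough comparison (assuming, purely for illustration, first-order color decay and taking 17.5 min as the midpoint of the 15-20 min range), these 25%-loss times translate into rate constants and half-lives as follows:

```python
import math

# Hypothetical first-order treatment of the 25%-loss times quoted above.
def k_from_t25(t25_min):
    return -math.log(0.75) / t25_min  # min^-1

for name, t25 in (("Fe-P3 complex", 17.5), ("Fe-PB complex", 270.0)):
    k = k_from_t25(t25)
    print(f"{name}: k = {k:.2e} min^-1, t1/2 = {math.log(2) / k:.0f} min")
# -> roughly a 15-fold difference in decay rate between the two isomers.
```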
As already observed [9], the spectroscopic titration of the residual pigment (after acidification to pH 1 for total dissociation of the iron complex and total conversion of the colorless forms into the flavylium ion) provided a more contrasted picture: in the absence of added iron, acylation offered no protection, and this so-called "acylation paradox" [9] could be interpreted by assuming that the colorless forms (hemiketal and chalcones) were much less susceptible to oxidative degradation than the electron-rich anionic base. In other words, being more vulnerable to reversible water addition, nonacylated PA and anthocyanins having a single HCA residue at R1 (P1, P3) were protected against autoxidation at the cost of losing their color. However, after Fe2+ addition, autoxidation was strongly accelerated for PA because the corresponding iron complex was not stable enough, and the protection offered by iron binding was significant for P4 and PB only. In our recent work, adding Fe2+ was also shown to increase the yield of some major degradation products of PA and P1 [18]. In summary, strong π-stacking interactions within the complexes ensured an efficient iron sequestration, thus preventing the pro-oxidant activity of loosely bound iron ions [16]. The tight metal binding also explained the total inhibition of intramolecular acyl transfer, normally occurring when solutions of P4 or PB are heated [18].

Materials and Methods

Red cabbage and purple sweet potato anthocyanins were isolated by preparative LC according to already-published procedures [1,2]. Their structures are presented in Scheme 1.

Red Cabbage Anthocyanins

Metal-anthocyanin binding experiments were carried out according to previously reported procedures [13]. Fresh 5 mM solutions of Fe2+ and Al3+ were respectively prepared from FeSO4·7H2O and AlCl3·6H2O (Sigma-Aldrich, St-Quentin Fallavier, France) in 1 mM aqueous HCl. Concentrated stock solutions of pigment (5 mM) were prepared in 50 mM aqueous HCl. Absorption spectra were recorded on an Agilent 8453 diode-array spectrometer in thermostated and magnetically stirred quartz cuvettes (pathlength = 1 cm). The following solutions were added directly to the cuvette in this order: 2 mL of 10 mM phosphate buffer (pH 7 or 8), 20 µL of anthocyanin stock solution and, after a few seconds (negligible formation of colorless forms), a small volume of the 5 mM Fe2+ or Al3+ solution (final iron/anthocyanin molar ratio = 1 or 2). The full UV-VIS spectra were recorded in kinetic mode for 1 to 2 min. For optimal sensitivity, the detection in the visible range was set at 550 or 610 nm for Al3+ (close to the complex's λmax) and at 670 nm for Fe2+ (charge-transfer contribution of the Fe3+ complexes). The hyperchromic and bathochromic shifts were calculated from the initial (free ligand) and final (metal complex) spectra as (Amax,f − Amax,0)/Amax,0 and λmax,f − λmax,0, respectively (a short sketch of this calculation is given after this section). Binding experiments were also carried out in the pH range 2-6 (50 mM acetate buffer) to determine the pH for the onset of metal binding and to investigate the competition with water addition to the flavylium ion.

Purple Sweet Potato Anthocyanins

The anthocyanin isolates were diluted to a 50 µM concentration in buffers of pH 6 (0.1 M sodium acetate), 7, and 8 (0.25 M TRIS). Small volumes of concentrated Al2(SO4)3 solutions were then added to reach metal/anthocyanin ratios of 0.5, 1, and 5. Samples were equilibrated for 30 min at room temperature in the dark prior to analysis (in triplicate). UV-VIS spectra were collected from 380 to 700 nm using 300 µL samples in poly-D-lysine-coated polystyrene 96-well plates with a SpectraMax 190 Microplate Reader (Molecular Devices, Sunnyvale, CA, USA).
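The hyperchromic/bathochromic shift calculation described above for the red cabbage pigments amounts to comparing the band maxima of the initial (free ligand) and final (metal complex) spectra. A minimal sketch, with the wavelength grid and spectra supplied by the caller:

```python
# Hyperchromic shift (relative absorbance gain at the band maximum) and
# bathochromic shift (displacement of lambda_max, in nm) between the
# free-ligand and metal-complex spectra.
import numpy as np

def binding_shifts(wl, abs_free, abs_complex):
    i0, i1 = int(np.argmax(abs_free)), int(np.argmax(abs_complex))
    hyper = (abs_complex[i1] - abs_free[i0]) / abs_free[i0]
    batho = wl[i1] - wl[i0]
    return hyper, batho
```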
Kinetic Analyses

The kinetic curves were analyzed with the Scientist software (Micromath, St. Louis, MO, USA). Sets of differential equations characteristic of the different kinetic processes (metal binding, water addition to the flavylium ion) were implemented in the models, as well as the initial concentrations. Optimized values for the adjustable parameters (rate constants, molar absorption coefficients) and their standard deviations are reported.

Colorimetric Data

Color characteristics were expressed in the L*a*b* coordinates. L* corresponds to the light intensity, varying from 0 (no light) to 100. Parameters a* and b* quantify the contribution of four colors: green (−a*), red (+a*), blue (−b*) and yellow (+b*). With RC anthocyanins, the method to generate the L*a*b* coordinates and the corresponding color patches was as described in our recent work [13]. With PSP anthocyanins, the colorimetric data were generated as previously reported [2].

High-Resolution Mass Spectrometry (HRMS)

Stock solutions of anthocyanins P2, PB, and P5 were diluted to 0.1 mM in a 50 mM ammonium acetate buffer at pH 7. A 0.5 µL volume was injected in the same solvent (flow injection analysis) over 1 min into an Orbitrap Exploris 480 mass spectrometer (Thermo Fisher Scientific, Waltham, MA, USA) equipped with an H-ESI source. A static spray voltage of 3.5 kV in positive mode and of 2.5 kV in negative mode was applied, with an ion-transfer tube temperature of 280 °C, a vaporizer temperature of 300 °C, and a N2 sheath gas pressure of 40 psi. The full scan was recorded at a resolution of 24 × 10^4 and corrected with an internal mass calibrant. Ions were searched between m/z 400 and 1600. The elution lasted ca. 0.3 min and an average spectrum was determined with the Freestyle 1.6 software based on the 20-30 spectra of intensities higher than 10%. A targeted detection of the Fe2+, Fe3+, and Al3+ complexes of stoichiometries 1:1, 1:2, and 1:3 and with charges ranging from −3 to +4 was carried out. Based on proposed raw formulae, the theoretical
Research on Stability Prediction of the Crankshaft CNC Tangential Point Tracing Grinding

As the key part of internal combustion engines, the crankshaft with high-efficiency and high-accuracy processing has always been the target of engine manufacturers' pursuit. Grinding is used to obtain the ultimate dimensional accuracy and surface finish in crankshaft machining. Grinding of the main journals and the pin journals can be accomplished in a single clamping operation by CNC Tangential Point Tracing grinding technology. However, chatter in the grinding process is harmful to precision and surface quality. In this paper, a stability lobe diagram is developed to predict grinding chatter. First, the dynamic model of the Tangential Point Tracing grinding system is established. Then the limit formula of the critical grinding depth is calculated and the stability lobe diagram of the grinding system is presented. Finally, validation experiments are carried out on a crankshaft grinding machine and the results are consistent with the calculation.

Introduction

The crankshaft is one of the key components of engines in the automotive industry, and its rotation is the power source of the engine. It consists of two important parts: the crankshaft main journal and the crankpin. The crankshaft main journal is mounted on the cylinder, while the crankpin is connected to the big-end hole of the connecting rod, whose other end hole is connected to the cylinder piston. It is a typical slider-crank mechanism and it turns the reciprocating motion of the connecting rod into rotating motion. The quality of the crankshaft determines the performance of the engine. The crankshaft of an engine is shown in Figure 1.

The traditional crankshaft grinding process for the main journal is similar to cylindrical grinding. The crankpin is adjusted to the grinding center by an eccentric fixture, and each crankshaft needs a special fixture. This entails long auxiliary hours and low processing precision while the clamp is adjusted by the operator [1].

Nowadays, the crankshaft is machined by the method of CNC Tangential Point Tracing grinding. Grinding of the main journals and the pin journals can be accomplished in a single clamping operation. This grinding method can avoid the positioning error caused by multiple loadings and save adjustment time. Machining flexibility, accuracy, and efficiency are also improved. CNC Tangential Point Tracing grinding of crankshafts is a high-technology process, and it was called oscillating grinding [2] or chasing the pin [3] by some scholars. The Tangential Point Tracing grinding mode is composed of linkage between the workpiece rotation (C axis) and the reciprocating motion of the grinding wheel (X axis) to achieve eccentric circle machining.

Chatter in the grinding process can result in increased tolerance of dimension and position, and in surface roughness and waviness, which seriously affect the dimensional accuracy and surface finish of the crankshaft. Chatter in machining is usually accompanied by considerable noise [4].
There are many measures that can effectively control chatter, such as using the drive to improve the dynamic stiffness and damping of the grinding machine system to reduce the regenerative phase [5], but these methods need to change the structure of the machine tools and they are not suitable for users. The stable region and the unstable region can be visually described by the stability lobe diagram [6,7]. In order to avoid grinding chatter, Tangential Point Tracing grinding was set as the research object. The stability lobe chart was used to visually describe the stable and unstable grinding regions. By predicting the grinding stability, the machining quality and the machining efficiency of the grinding process are ensured [8].

The Tangential Point Tracing Grinding

With the increased demands of industry, the Tangential Point Tracing grinding process has been developed to machine nonround shaped parts such as crankshafts and camshafts, allowing reduction of nonproductive time and reclamping inaccuracies [9].

The grinding point moves along the surface of the crankpin, while the grinding wheel is always tangent to the crankpin in the grinding process. Figure 2 illustrates the concept of Tangential Point Tracing grinding [3]. The crankpin rotates around the C axis, followed by the horizontal movement of the wheel head along the X axis. All of the main journals and crankpins of a concentrically clamped crankshaft can be machined in one fixture, which improves the efficiency and accuracy of the products.

Figure 3 is the schematic of the Tangential Point Tracing grinding motion model. The crankpin is rotated around the point O1, and the wheel follows the movement. In Figure 3, O1 is the crankshaft turning center and the origin of the rectangular coordinate system, O2 denotes the center of the crankpin, and O3 is the center of the grinding wheel. The eccentric distance of the crankpin is e; r and R correspond to the radius of the crankpin and of the grinding wheel, respectively; φ and ψ denote the rotation angle of the crankshaft and the angle between O2O3 and the X axis; and (x, y) describes the position of the tangent point in the coordinate system. The trajectory equation of the tangent point coordinates can be expressed as follows:

x = e cos φ + r cos ψ, y = e sin φ − r sin ψ. (1)

The movement equation of the wheel center is as follows:

x_w = e cos φ + (r + R) cos ψ. (2)

(A numerical sketch of this geometry is given below.)
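The following Python sketch evaluates the geometry of Eqs. (1) and (2), using the sign conventions adopted above (wheel center on the X axis, so e sin φ = (r + R) sin ψ). The dimensions are illustrative placeholders, not the data of an actual machine.

```python
# Numerical sketch of the Tangential Point Tracing geometry:
# for each crankshaft angle phi, compute the wheel-center abscissa
# x_w (Eq. 2) and the tangent-point coordinates (Eq. 1).
import numpy as np

e, r, R = 40.0, 24.0, 300.0             # eccentricity, crankpin and wheel radii (mm), assumed
phi = np.linspace(0.0, 2*np.pi, 361)    # crankshaft rotation angle

psi = np.arcsin(e*np.sin(phi)/(R + r))  # angle of line O2O3 with the X axis
x_w = e*np.cos(phi) + (R + r)*np.cos(psi)   # wheel-center reciprocation, Eq. (2)
x_t = e*np.cos(phi) + r*np.cos(psi)         # tangent point, Eq. (1)
y_t = e*np.sin(phi) - r*np.sin(psi)

print(f"wheel stroke: {x_w.max() - x_w.min():.2f} mm")
```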
The Dynamic Model of Tangential Point Tracing Grinding System

3.1. Modeling and Formulation. The crankshaft is supported by the centers of the headstock and tailstock when it is being machined. In order to reduce the deformation, the main journals are supported by center rests. The headstock provides the low-speed rotational drive to the crankshaft. The crankpin moves around the point O1 in the plane while the grinding wheel makes a reciprocating movement in the X direction, as shown in Figures 3 and 4(a). In this study, a simplified dynamic model of the Tangential Point Tracing grinding machine and its analysis are presented in Figure 4(c).

According to Newton's law of motion, four discrete-mass dynamic equations are written for the system. For computer analysis and calculation, they are assembled into matrix form with the mass matrix M, the damping matrix C, and the stiffness matrix K of the system; with the generalized coordinates q and generalized forces F, the kinetic equation can be expressed as

M q̈(t) + C q̇(t) + K q(t) = F(t).

The Crankshaft Stiffness Is Different in the Circumferential Direction. The stiffness of the crankshaft varies not only along the axial direction but also in the radial direction. This leads to crankpin deformation in the circumferential direction and affects the accuracy of the crankpin. In cylindrical grinding, the directions of the normal grinding force F_n and the tangential grinding force F_t are invariable, but they change in Tangential Point Tracing grinding. The grinding force is shown in Figure 5, where the angle of the crankpin is φ. The relationship between the normal grinding force and the tangential grinding force can be written as

F_t = μ F_n,

where μ is the friction coefficient between the contact surface of the grinding wheel and the crankpin.

Calculation of Critical Grinding Depth

Adjustment of the grinding process and modification of the machine tool structure are two approaches to avoid chatter. In the first approach, the stability lobe diagram that predicts the onset of chatter is used to determine the critical grinding depth and spindle speed to eliminate or minimize chatter behavior in machining [10]. It is important to study chatter mechanisms to predict the critical grinding depth. We need to know the grinding chatter boundaries and growth rates, which is helpful to design a grinding process without chatter according to the chatter boundaries.

In order to simplify the model of the crankshaft-wheel grinding system, the kinematic differential equation of the dynamic model is expressed as [11]

m q̈(t) + c q̇(t) + k q(t) = ±F(t). (11)

The dynamic grinding force is usually proportional to the material removal rate, giving the following formula:

F(t) = k_g b [h + q_0(t) − q(t)], (12)

where q_0(t) is the ripple amplitude of the workpiece (or wheel) surface at moment t. In this model, only the delay effect from the workpiece is considered. The delay time is assumed to be constant and equal to the workpiece's rotation period T [12]:

q_0(t) = q(t − T). (13)

After combining (11) and (13), we obtain a delay-differential equation; applying the Laplace transformation to it yields the dynamic grinding process transfer function, where q_0(s) is the input and q(s) is the output. Setting the divisor to zero gives the characteristic equation of regenerative chatter of the crankshaft (or wheel) system. According to the first discrimination method for Lyapunov stability, writing s = σ + jω, the system is in the critical state of stability when σ = 0. According to Euler's equation, e^(−jωT) = cos(ωT) − j sin(ωT) is substituted into the characteristic equation. After rearranging (18), the resulting expression is substituted into (23). Equating the real parts of (21) and (27) gives (29). Taking the partial derivative of (29) with respect to ω and letting the result approach zero gives (30); bringing (30) into (29), we obtain the critical grinding depth. It is shown that the presented approach can be used to predict the crankshaft grinding stability [5,13]. The grinding wheel wears easily, and the regenerative chatter of both the workpiece and the grinding wheel should be considered, which yields both a critical grinding depth for the crankshaft and a critical depth for grinding wheel wear.
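The derivation above can be turned into a small numerical routine. The sketch below covers the single-regeneration, single-degree-of-freedom case with the force model of Eq. (12): the critical depth follows from the real part of the receptance, and the lobe speeds from the phase condition of the characteristic equation. The modal data and grinding coefficients are invented placeholders (the paper's values are in Table 2), so the output only illustrates the shape of the lobes.

```python
# Sketch of a stability lobe computation for the single-DOF regenerative
# model: characteristic equation 1 + kg*b*G(jw)*(1 - exp(-jwT)) = 0.
import numpy as np

m, c, k = 120.0, 2.0e3, 2.0e8      # modal mass (kg), damping (N*s/m), stiffness (N/m), assumed
kg, b = 5.0e10, 0.02               # grinding force coefficient (N/m^2) and contact width (m), assumed

wn = np.sqrt(k/m)
w = np.linspace(1.001*wn, 2.5*wn, 4000)   # chatter frequencies where Re(G) < 0
G = 1.0/(k - m*w**2 + 1j*c*w)             # receptance of the flexible structure

h_lim = -1.0/(2.0*kg*b*G.real)            # critical grinding depth (m)
# phase condition: tan(w*T/2) = -Re(G)/Im(G), one branch per lobe number n
half = np.arctan2(-G.real, G.imag)
for n in range(1, 5):
    T = 2.0*(half + n*np.pi)/w            # rotation period (s) on lobe n
    rpm = 60.0/T                          # (rpm, h_lim) pairs trace one lobe each
```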
Experimental Study

The complicated phenomena in engineering cannot be explained by theoretical analysis alone, and the accuracy of the theoretical analysis results needs to be verified by experiment.

The Test Experiment of Crankshaft Circumferential Stiffness Change. As the crankshaft is an elongated shaft of complex shape, the crankpin deforms during the grinding process. The center frame is generally used in the machining of the crankshaft to eliminate the influence of gravity, so the grinding force is the main cause of crankpin deformation.

In order to further study the influence of the grinding force on the deformation of the crankpin, we take the crankshaft of a D06A-101-30 diesel engine as the experimental object. A constant vertical force of 200 N was applied to the number 1 crankpin to simulate the crankshaft being stressed by a normal grinding force of 200 N. In the experiment, two laser displacement sensors were used to measure the deformation of the crankpin in two directions, as shown in Figure 6. First, the crankshaft is rotated without the load, and the position of the crankpin is measured every 5 degrees starting from a crankpin angle of 0 degrees. Then, under the 200 N load, the above measurement procedure is repeated and another group of position data is obtained. The difference between the two groups is the deformation in the radial direction of the crankpin under the 200 N grinding force. Finally, we obtain the deformation and stiffness of the crankpin as in Table 1 and Figure 7:

k = F/Δr,

where k and F represent the stiffness of the crankpin and the grinding force, respectively, and Δr denotes the deformation in the radial direction of the crankpin.

According to the experimental results and the structural characteristics of the crankshaft, we fit its stiffness with a cosine curve, as shown in Figure 8 (a fitting sketch is given at the end of this section), and thus obtain the expression for the stiffness of the crankshaft.

If we plug the parameters of Table 2 into the above formulas, we obtain the stability lobe diagram shown in Figure 9. In the diagram, above the lobe line is the unstable region of the grinding system, and below the lobe line is the stable region. The crankshaft is a shaft of complex shape and its circumferential stiffness varies [14,15]. The diagram shows that accounting for the uncertainty or variability of the process parameters can influence the stability boundary, and a fuzzy stable region is formed [16]. As the grinding wheel speed increases, the limit grinding depth also tends to increase. Therefore, by increasing the grinding wheel speed and keeping the grinding depth less than the critical depth, we can keep the process in a stable state and improve the processing efficiency.
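As a brief illustration of the cosine fit to the measured circumferential stiffness mentioned above (Figure 8), the sketch below fits an assumed double-frequency cosine model to synthetic stand-in data; the model form and all numbers are placeholders, not the measured values of Table 1.

```python
# Sketch of the cosine fit to the crankpin circumferential stiffness,
# from deformation measured every 5 degrees under a 200 N load.
import numpy as np
from scipy.optimize import curve_fit

def stiffness(phi_deg, k0, k1, phi0):
    # assumed model: k(phi) = k0 + k1*cos(2*phi + phi0)
    phi = np.deg2rad(phi_deg)
    return k0 + k1*np.cos(2*phi + phi0)

phi_meas = np.arange(0.0, 360.0, 5.0)            # 5-degree steps, as in the test
# deformation (mm) under the 200 N load; synthetic stand-in for Table 1
dr_meas = 200.0/stiffness(phi_meas, 2.0e4, 4.0e3, 0.3)
k_meas = 200.0/dr_meas                           # k = F/dr (N/mm)
popt, _ = curve_fit(stiffness, phi_meas, k_meas, p0=[1.5e4, 1.0e3, 0.0])
k0, k1, phi0 = popt                              # fitted stiffness expression
```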
Experimental Results Contrast. In order to verify the correctness of the stability limit diagram, we carried out the relevant grinding experiments [17-19]. The specific parameters of the workpiece and grinding wheel are shown in Table 3. We applied a Bruel & Kjaer 4366 accelerometer to the grinding machine in order to detect chatter vibrations. The sensor was mounted on the tailstock center (Figure 11). In the process of grinding, the acceleration amplitude of regenerative chatter increases rapidly with the grinding time. Figures 12 and 13 display the normal and the chatter signals.

The crankshaft Tangential Point Tracing grinding setup is shown in Figure 10. Each test was measured more than three times during grinding, so that the signals produced were sufficient to obtain information. The crankshaft rotation speed is set to 6 r/min and the speed of the grinding wheel is 1250 r/min, 1450 r/min, and 1650 r/min in the grinding experiments. At each rotational speed, the boundary value of chatter is determined by continuously changing the grinding depth. The chatter points lie essentially on or near the stability limit curve, so the experimental results are consistent with the predictions of the stability limit diagram. Therefore, it is proved that the prediction method is effective and reliable. Chatter marks (vibration waviness) can be seen with the naked eye on the crankpin surface, as in Figure 14. Roundness measurements show that the instability caused by regenerative chatter degrades the roundness of the workpiece, as shown in Figure 15.

Conclusions

To study how to avoid chatter in crankshaft Tangential Point Tracing grinding, a stability lobe diagram has been developed based on the dynamic model to predict chatter, and some conclusions have been drawn as follows: (1) The dynamic equation of the grinding system has been constructed through the dynamic analysis of the grinding system. (2) An expression for the critical crankshaft grinding depth has been developed based upon the work of Altintas and Budak [5] and Stepan [13]. (3) Through the experimental study, the law of crankshaft rigidity is obtained. (4) The stability of the grinding system can be predicted by using the method of drawing the stability diagram. (5) Experimental results show that the prediction method is consistent with the experimental data.

Nomenclature

k2: The equivalent stiffness of the grinding stiffness and the contact stiffness of the crankshaft and the grinding wheel, k2 = k_g k_c/(k_g + k_c)
k3: The equivalent support stiffness of the grinding wheel system in the direction of the generalized coordinates
c3: The equivalent damping of the grinding wheel system in the direction of the generalized coordinate system
m1: The mass of the crankshaft
m2: The mass of the grinding wheel
k5: The equivalent stiffness of the support system of the grinding wheel in the direction of the generalized coordinates
c5: The equivalent damping of the support system of the grinding wheel in the direction of the generalized coordinates
F1, F2: The forces of the grinding system in the two directions of the generalized system
x1 and x2: The displacements of the crankshaft in the horizontal and vertical directions
x3: The displacement of the wheel in the horizontal direction
φ: The rotation angle of the crankpin
ψ: The angle between O2O3 and the X axis (Figure 3)
e: The eccentric distance of the crankpin
R: The radius of the grinding wheel
r: The radius of the crankpin
F_n: The normal grinding force
F_t: The tangential grinding force
M_a: The additional couple
μ: The friction coefficient between the contact surface of the grinding wheel and the crankpin
F(t): The dynamic grinding force
t: The time
h_w: The grinding depth of the workpiece
h_s: The grinding depth of the wheel (h_w and h_s are opposite in direction, the grinding depth of the workpiece being assumed positive)
k_g: The grinding force coefficient of the workpiece (or grinding wheel)
h: The grinding depth of the workpiece (or grinding wheel)
b: The grinding contact width
q_0(t): The vibration pattern of the surface of the workpiece (or wheel) at the moment t
T: The rotation period of the workpiece (or wheel)
ωn²: The square of the natural frequency of the system
ζ: The system equivalent damping
λ: The frequency ratio

Figure 4: A simplified dynamic model of the Tangential Point Tracing grinding machine. Figure 8: Crankpin stiffness from the experiment and the fitting curve. Figure 11: The position of the acceleration sensors. Table 1: The deformation and stiffness of the crankpin. Table 2: The parameters of the grinding process. Table 3: The parameters of the workpiece and the grinding wheel.
Dysosma versipellis Extract Inhibits Esophageal Cancer Progression through the Wnt Signaling Pathway

Objective. In this study, we aim to investigate the effect of Dysosma versipellis extract on the biological behavior of esophageal cancer cells and its underlying mechanisms. Methods. A total of 30 BALB/C nude mice (class SPF) were equally and randomly divided into the control group, model group, and Dysosma versipellis group. CP-C esophageal cancer cells were subcutaneously injected into the model group as well as the Dysosma versipellis group, and the same amount of normal saline into the control group, in order to compare the tumorigenesis of the nude mice of the three groups. Wnt, β-catenin, and p-GSK3β/GSK3β expression in tumor tissues was detected using Western blot. CP-C cells in logarithmic growth were selected and divided into 4 groups, including the control group, podophyllotoxin group, Wnt activator group, and combined group (mixture of podophyllotoxin and Wnt activator). The cell viability, apoptosis, and invasion ability, and the Wnt, β-catenin, and p-GSK3β/GSK3β expression levels of CP-C cells in each group were detected via MTT assay, flow cytometry, transwell assay, and Western blot, respectively. Results. The tumorigenesis rates of the control group, model group, and Dysosma versipellis group were 0%, 90% (1 tumor-free mouse), and 80% (2 tumor-free mice), respectively. The tumor mass in the Dysosma versipellis group was significantly less than that in the model group. Based on the results of Western blot, Wnt, β-catenin, and p-GSK3β/GSK3β expression of the Dysosma versipellis group was lower than that of the control group. The in vitro viability test indicated a significant difference in cell viability among the four groups. The cell viability level in 3 groups, namely the combined group, blank group, and Wnt activator group, was higher than that in the podophyllotoxin group at each time point. The in vitro apoptosis assay revealed significant differences in cell apoptosis among the four groups. The cell apoptosis rate was higher in the podophyllotoxin group compared to the remaining three groups. The Wnt activator group showed the lowest cell apoptosis rate. The in vitro invasion assay demonstrated that the numbers of transmembrane cells in the 3 groups, namely the combined group, blank group, and Wnt activator group, were higher than in the podophyllotoxin group. The results of Western blot showed that the podophyllotoxin group had lower levels of Wnt, β-catenin, and p-GSK3β/GSK3β expression compared to the other 3 groups. Conclusion. Podophyllotoxin in Dysosma versipellis has an excellent antiesophageal cancer effect and is able to inhibit the cell viability and invasion ability and promote the apoptosis of esophageal cancer cells by inhibiting the Wnt signaling pathway, and it could potentially be used in the future clinical treatment of esophageal cancer.

Introduction

Esophageal cancer is one of the most common epithelial malignancies in the digestive system, mainly composed of squamous cell carcinoma and adenocarcinoma, which affects more than 450,000 people worldwide [1]. It ranks eighth among the most common cancers and sixth among life-threatening cancers [2]. Despite the progress made in radiotherapy, chemotherapy, neoadjuvant therapy, and immunotherapy, patients with esophageal cancer have a poor prognosis, with a 15-25% overall 5-year survival [3].
The esophageal cancer patients with poor prognosis are associated with advanced diagnosis and metastatic tendency, even if the tumor is superficial [4]. In recent years, many scholars have determined the therapeutic effect of Chinese herbal extracts on tumors, with less toxicity and fewer side effects [5,6], among which Dysosma versipellis has gained great popularity as an antitumor drug. Khaled et al. [7] have documented that the Dysosma versipellis extract deoxypodophyllotoxin can inhibit the proliferation of breast cancer cells by interfering with the cell cycle to regulate proteins such as cyclin B1, CDC25c, and CDK1, as well as destroying the cytoskeleton and inducing cell cycle arrest in G2/M. This regulatory mechanism is similar to the results reported by Juan et al. [8], that is, in G2/M, podophyllotoxone extracted from Dysosma versipellis inhibits the proliferation of prostate cancer cells by blocking the cell cycle. Podophyllotoxin, another extract of Dysosma versipellis, shows excellent activity against drug-sensitive and drug-resistant or even multidrug-resistant cancer cells by inhibiting tubulin polymerization. Podophyllotoxin and its derivatives are mainly used as strong antiviral drugs and antitumor drugs [9,10]. However, few reports have been published on the effect of podophyllotoxin in esophageal cancer, and its specific role in esophageal cancer remains unclear.

Wnt signaling represents one of a series of pathways, including Notch/Delta, Hedgehog, transforming growth factor β/bone morphogenetic protein, and Hippo. Wnt signaling has become a basic growth control pathway in various fields, from cancer and development to early animal evolution [11]. It is associated with inducing cell proliferation and forming growth tissue, playing a role as a directional growth factor in this process [10,12,13]. Reportedly, inhibiting typical Wnt signaling pathways could promote apoptosis in esophageal cancer cells [14,15], which is assumed to be the underlying mechanism for the antitumor effect of Dysosma versipellis extract in our study. This study explored the effect of the Dysosma versipellis extract podophyllotoxin on the biological behavior of esophageal cancer cells and analyzed its underlying mechanism, aiming to provide experimental evidence for a new drug for esophageal cancer treatment in clinical research.

Experimental Subjects. A total of 30 SPF-grade BALB/C nude mice (Vital River Laboratories, SCXK (Shanghai) 2017-0011), aged 6-8 weeks and weighing 16-19 g, were housed for 7 days at a humidity of 60-80% and a temperature of 22 ± 2°C, with free access to food and water under alternating 12 h light/dark conditions. All animal experiments were conducted under the approval of the Ethics Committee of our hospital and in accordance with the Guide for the Care and Use of Laboratory Animals. Extensive efforts were made to ensure minimal suffering of the animals included in the study. The human esophageal cancer cell line CP-C (CP-94251) was purchased from ATCC; the item number was ATCC CRL-4029.

Extraction of Dysosma versipellis. The Dysosma versipellis (20 kg, No. SJ-JC14619) was purchased from Shanghai Jichun Industrial Co., Ltd. The specific procedures for the extraction method were based on Xu et al. [16], comprising extraction, chromatography, and separation. With the concentration adjusted to 20 μM, the extract was identified and confirmed as podophyllotoxin by the Guangzhou Fuda Detection Center.

Subcutaneous Transplantation Tumor.
A total of 30 BALB/C nude mice (class SPF) were equally and randomly divided into the control group, model group, and Dysosma versipellis group. CP-C esophageal cancer cells (2 × 10^6 cells in 100 μl, at a concentration of 2 × 10^7/ml) were subcutaneously injected into the model group as well as the Dysosma versipellis group, and the same volume of normal saline into the control group. After 7 days, the Dysosma versipellis group was intraperitoneally injected with 2 mL of Dysosma versipellis extract at a concentration of 20 μM once a week. All subjects were treated with Dysosma versipellis extract on days 8, 15, 22, 29, 36, 43, 50, and 57 (8 times in total) and were sacrificed by cervical dislocation after feeding for 60 days. Tumor mass was weighed using an electronic scale (Beijing Jinda Sunshine Technology Co., Ltd.).

Cell Culture Intervention. The esophageal cancer cell medium, composed of 90% high-glucose DMEM (containing 4 mM L-glutamine and sodium pyruvate) and 10% fetal bovine serum, was supplied by North Nano Biological Co., Ltd., and the cells were cultured at 37°C in 95% air and 5% carbon dioxide. CP-C cells in logarithmic growth were selected and divided into 4 groups: control group, podophyllotoxin group, Wnt activator group, and combined group (mixture of podophyllotoxin and Wnt activator). The Wnt activator methyl vanillate was purchased from CSNpharm, with item number CSN23594, in a volume of 5 mL.

In Vitro Cell Viability Experiment Using MTT. Esophageal cancer cells were made into a 4 × 10^6 cells/mL cell suspension, one for each array, and routinely inoculated. The cells were treated with varying concentrations of podophyllotoxin (0.01, 0.1, 0.2, 0.4, and 0.8 μM) for 48 h to evaluate the IC50 of podophyllotoxin at 48 h (a curve-fitting sketch of this estimation is given after this section). The constituents of the samples were as follows. Control group: 200 μL cell suspension; podophyllotoxin group: 20 µL podophyllotoxin at a concentration of 0.2 μM + 180 µL cell suspension; Wnt activator group: 20 μL methyl vanillate + 180 μL cell suspension; combined group: 10 μL podophyllotoxin at a concentration of 0.2 μM + 10 μL methyl vanillate + 180 μL cell suspension. Samples in each group were cultured in the 96-well plate for 12 h, 24 h, 48 h, and 72 h, respectively; then, 20 μL MTT (5 mg/mL) solution was added, the supernatant containing impurities was removed, dimethyl sulfoxide solution was added to the samples, which were placed on a horizontal shaking table for 10 min, and the absorbance at a wavelength of 570 nm was measured by a microplate reader (Shanghai Spark Biotechnology Co., Ltd.). The MTT assay kit was purchased from Beijing Equation Jiahong Technology Co., Ltd.

Cell Apoptosis Experiment Using Flow Cytometry. Esophageal cancer cells were digested with 0.25% trypsin and washed twice with PBS, and 100 μL binding buffer was added to prepare a suspension of 1 × 10^6 cells/mL, followed by the addition of 5 μl Annexin V and 5 μl propidium iodide (PI) (Shanghai Yisheng Biotechnology Co., Ltd., 40302ES20). Then, samples were incubated at room temperature for 5 min in the dark and detected using the FC500MCL flow cytometry system (FACS Canto II, USA). The independent experiment for each sample was repeated 3 times to take the average.

In Vitro Cell Invasion Experiment Using Transwell Assays. A cell suspension (5 × 10^5/mL) was prepared, with 100 μL of cells seeded into the chamber of a transwell coated with Matrigel; the number of penetrating cells was counted after 24 hours, and 3 independent experiments were performed simultaneously. The transwell chamber was purchased from Shanghai Shengbo Biomedical Technology Co., Ltd.
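The IC50 estimation from the MTT dose-response data described above can be sketched with a standard four-parameter logistic (Hill) fit; the viability values below are illustrative placeholders, not the measured data.

```python
# Sketch of IC50 estimation from MTT dose-response data via a
# four-parameter logistic (Hill) model.
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, top, bottom, ic50, slope):
    return bottom + (top - bottom)/(1.0 + (conc/ic50)**slope)

conc = np.array([0.01, 0.1, 0.2, 0.4, 0.8])            # podophyllotoxin, uM
viability = np.array([0.95, 0.70, 0.50, 0.30, 0.15])   # fraction of control (illustrative)

popt, _ = curve_fit(hill, conc, viability, p0=[1.0, 0.0, 0.2, 1.0])
print(f"estimated IC50 = {popt[2]:.2f} uM")            # ~0.2 uM, as reported
```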
Western Blot. The protein in tissues or cells was extracted by the repeated freeze-thaw method, and the protein concentration was determined via the BCA method. With the protein concentration adjusted to 4 μg/μL, proteins were separated by 12% polyacrylamide gel electrophoresis at an initial voltage of 90 V; once the samples had moved to the appropriate position of the separation gel, the voltage was elevated to 120 V. After electrophoresis, membrane transfer was performed at a constant voltage of 100 V for 100 min, with blocking at 37°C for 60 min. Then, the transfer membrane was blocked for 2 h using 5% skim milk powder. For the immune reaction, the membrane was reacted with the primary antibody (1 : 1000) and incubated at 4°C overnight, washed three times with PBS buffer for 5 min each time the next day, and then incubated with the secondary antibody (1 : 1000) at room temperature for 1 h. Final developing and fixing were carried out using ECL luminescent reagent. The gray value of each band was quantified and analyzed using Quantity One software. The relative expression level of the protein was calculated as the target band gray value divided by the band gray value of the internal reference. The BCA protein kit, ECL luminescence kit, and trypsin were purchased from Thermo Scientific™, with product numbers 23250, 35055, and 90058, respectively. The primary antibodies, including rabbit anti-Wnt, anti-β-catenin, anti-GSK3β, anti-p-GSK3β, and β-actin, were purchased from Abcam, Cambridge, UK, with catalog items ab219412, ab32572, ab75814, ab32391, and ab8226, respectively. The goat anti-rabbit IgG (ab6721, Abcam) was selected as the secondary antibody.

Statistical Approach. All statistical analyses in this research were performed using SPSS 19.0 (Asia Analytics, formerly SPSS China), in which measurement data are presented as mean ± standard deviation. Student's t-test was applied in group comparisons. ANOVA for repeated measures was employed in the comparison within groups at different time points. Single-factor analysis of variance was applied in multigroup comparisons, and LSD was used in the post hoc test. P < 0.05 was considered statistically significant.

Activation of the Wnt Signaling Pathway in Mice with Subcutaneous Tumorigenesis of Esophageal Cancer Cells. According to the results of Western blot, the expression level of Wnt in the tumor tissue of the model group (n = 9) was higher than that in the Dysosma versipellis group (n = 8). The expression level of β-catenin in the tumor tissue of the model group (n = 9) and the Dysosma versipellis group (n = 8) was (0.870 ± 0.012) and (0.701 ± 0.017), respectively. Compared to the Dysosma versipellis group (n = 8), a higher expression level of p-GSK3β/GSK3β was revealed in the model group (n = 9). The expression levels of Wnt, β-catenin, and p-GSK3β/GSK3β in the tumor tissue of the Dysosma versipellis group were lower than those in the control group (P < 0.05, Figure 2).

Dysosma versipellis Extracts Inhibit Esophageal Cancer Cell Viability. According to the MTT assays, cell viability decreased as the concentration of podophyllotoxin increased (Figure 3(a)), and the IC50 of podophyllotoxin in esophageal cancer cells was 0.2 μM, which was used in the subsequent experiments in our study. Significant differences in cell viability were exhibited among the four groups. The viability of esophageal cancer cells in the Wnt activator group was the highest.
The podophyllotoxin group revealed the lowest viability at all time points compared to the other three groups (P < 0.05, Figure 3(b)), suggesting that podophyllotoxin can effectively inhibit, while Wnt activators can effectively promote, the viability of esophageal cancer cells.

Dysosma versipellis Extracts Induce Esophageal Cancer Cell Apoptosis. According to the in vitro apoptosis assays, significant differences in cell apoptosis were exhibited among the four groups. The cell apoptosis rate in the Wnt activator group, control group, and combined group was lower than that in the podophyllotoxin group (P < 0.05, Figure 4). In addition, the Wnt activator group showed the lowest cell apoptosis rate. The data suggested that podophyllotoxin can effectively promote, while Wnt activators can effectively inhibit, the apoptosis of esophageal cancer cells.

Dysosma versipellis Extracts Inhibit Esophageal Cancer Cell Invasion. According to the in vitro invasion assays, significant differences in the numbers of transmembrane cells were exhibited among the four groups. It was found that the podophyllotoxin group showed lower numbers of transmembrane cells compared to the remaining three groups (P < 0.05, Figure 5). The Wnt activator group showed the highest level. These findings indicated that podophyllotoxin can effectively inhibit, while Wnt activators can effectively promote, the invasion of esophageal cancer cells.

Dysosma versipellis Extracts Inhibit the Activation of the Wnt Signaling Pathway in Esophageal Cancer. According to the results of Western blot, significant differences in the expression levels of Wnt, β-catenin, and p-GSK3β/GSK3β were exhibited among the four groups. The Wnt, β-catenin, and p-GSK3β/GSK3β expression in the combined group, blank group, and Wnt activator group showed a higher level than in the podophyllotoxin group (P < 0.05, Figure 6). Podophyllotoxin can effectively inhibit the activation of the Wnt signaling pathway.

Discussion

In recent years, plant active ingredients with little toxicity from Chinese herbal medicines, which are frequently used for clinical treatment due to their ability to inhibit the growth and proliferation of cancer cells, have drawn great attention among the public [17,18]. Dysosma versipellis is a podophyllum plant of the barberry family unique to China, which features excellent biological activity and serves as a new anticancer drug [19]. Yet, few reports have been published on its function in malignant tumors, let alone esophageal cancer. Given that Dysosma versipellis is a valuable and rare species in China, identifying its active ingredients is of great significance for its rational application. Podophyllotoxin is an active constituent of Dysosma versipellis. Xu et al. [16] extracted 15 active constituents of Dysosma versipellis and determined that podophyllotoxin has the strongest antiproliferative effect and is capable of inducing tumor cell apoptosis in prostate cancer, breast cancer, and gastric cancer, as well as in mouse embryo fibroblasts. In this study, we extracted podophyllotoxin from Dysosma versipellis according to the method of Xu and investigated its effect on the growth of esophageal cancer using a subcutaneous transplantation experiment. The result showed that the growth of subcutaneous esophageal cancer tumors in mice was significantly inhibited after the intervention of podophyllotoxin, suggesting antitumor activity of podophyllotoxin in esophageal cancer.
We further analyzed the effect of podophyllotoxin on the biological behavior of esophageal cancer cells and discovered that podophyllotoxin can inhibit cell viability and invasion and promote the apoptosis of esophageal cancer cells. Guerram et al. [20] also documented that deoxypodophyllotoxin has strong in vitro cytotoxic activity and is able to exert an antiglioma effect by inducing cell cycle arrest.

At present, there are few reports on the antitumor mechanism of podophyllotoxin, while the Wnt signaling pathway has been reported to be closely associated with the development of esophageal cancer in many studies. Cao et al. [21] reported that naked cuticle homolog 2 can inhibit the proliferation, colony formation, and cell invasion and migration, as well as induce G1/S point arrest, of esophageal cancer cells by inhibiting the activation of the Wnt signaling pathway. Zhang et al. [22] also confirmed that DACT2 can inhibit the growth of esophageal cancer via the Wnt signaling pathway. As a result, we hypothesized that regulation of the Wnt signaling pathway is also a mechanism of podophyllotoxin against esophageal cancer. According to the results of several experiments, compared with the intervention of podophyllotoxin alone, the joint use of podophyllotoxin and a Wnt signaling activator could restore cell proliferation and invasion ability to some extent and significantly reduce the level of apoptosis in esophageal cancer. However, compared with esophageal cancer cells without any intervention, the viability and invasion ability of esophageal cancer cells in the combined group were inhibited and the levels of apoptosis were increased, suggesting that Wnt signaling activators can antagonize the cytotoxicity of podophyllotoxin, and that inhibiting the Wnt signaling pathway is one of the mechanisms by which podophyllotoxin exerts cytotoxicity. However, there are no similar reports at present. We expect more similar studies in the future to verify our result.

Furthermore, another mechanism of Dysosma versipellis extract was investigated in the study of Liang [23], in which researchers extracted kaempferol from Dysosma versipellis and determined that kaempferol could inhibit angiogenesis by suppressing the expression of vascular endothelial growth factor receptor 2 in human umbilical vein endothelial cells. It is well known that inhibition of angiogenesis plays an extremely important role in treating tumors, given that it is able to block the blood supply of tumor tissues, thereby "starving" the tumor cells to "death" [24,25]. We would like to see the same effect of podophyllotoxin determined in future studies.

However, there still exist some limitations in this study. We only verified the therapeutic effect of Dysosma versipellis extracts on esophageal cancer in cell and animal models, and further clinical validation remains to be investigated. Besides, the investigation of Wnt activators resisting the antitumor effects of Dysosma versipellis extracts on esophageal cancer should be seriously considered in further studies, since we proposed that the antitumor effects of Dysosma versipellis extracts are achieved by regulating the Wnt signaling pathway. Moreover, in vitro experiments based on multiple types of esophageal cancer cells should be performed to avoid single-cell bias during result interpretation. In summary, podophyllotoxin in Dysosma versipellis has an excellent antiesophageal cancer effect.
It is able to inhibit the cell viability and invasion of esophageal cancer cells and to promote their apoptosis by inhibiting the Wnt signaling pathway, and it could potentially be used in the future clinical treatment of esophageal cancer.

Data Availability

The data used to support the findings of this study are included within the article.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Authors' Contributions

The authors Yanchun Pu and Ping Jinse contributed equally to the work.
Sharpening the Axe: Identifying and Closing Gaps Within the Training Space of the South African Private Security Industry

The ever-present threat of crime in South Africa continues to drive the rise in demand for Private Security Industry (PSI) services amongst various governmental institutions, businesses and citizens seeking to ensure their safety. This rise in demand for Private Security Services (PSS) persistently presents new challenges to the Private Security Industry Regulatory Authority (PSiRA), such as poor security training standards and the deployment of untrained security officers attached to Private Security Companies (PSCs), which negatively impacts the professionalism of the security industry. This study was guided by the following objective: identifying and closing the gaps within the South African training space of the PSI. This qualitative study was guided by an exploratory research design. The judgemental sampling technique was adopted to sample 40 participants confined to the Gauteng (GP), KwaZulu-Natal (KZN) and Western Cape (WC) Provinces. The selected relevant stakeholders were attached to the South African Police Service (SAPS), the Department of Labour (DoL), the Department of Home Affairs (DoH), the National Prosecuting Authority (NPA) and the legal fraternity, together with PSI directors and security officers, who formed part of the Focus Group Discussions (FGDs) and hybrid semi-structured interviews. The findings highlighted that the majority of the participants agreed that the private security industry training space is affected by security service providers letting PSiRA down by not training properly but rather engaging in criminal activity through the selling and buying of PSiRA security certificates. Some of the challenges mentioned include poor training standards, non-compliance with legislated training standards, unqualified security training facilitators and outdated security Grades, misuse of security equipment leading to serious injuries and death, and corruption in the industry, some of it perpetrated by PSiRA inspectors, the very people who are supposed to keep the industry in check. This study recommends that, in order to overcome the challenges in the security training space, PSiRA needs to do away with the outdated security Grades by developing a new policy framework which will enable the creation of a new security-training curriculum, and revise the training methods to suit the ever-changing security industry.

INTRODUCTION AND PROBLEM FORMULATION

The protection of life and property and the enforcement of social rules through trained men have always been a priority and were considered a profession even for the earliest man on earth. These needs have remained imperative to this very day and have contributed to the development of contemporary high-tech private security and modern law enforcement throughout the world (Sefalafala and Webster, 2013). The Private Security Industry (PSI) continues to be an essential component of modern society and a strong influencer of the structure and function of modern policing. Private security has evolved into a multi-faceted profession with multiple specialities, employing millions of people, contributing greatly to South Africa's Gross Domestic Product (GDP), and demonstrating a high probability of sustained growth in the future (Govender, 2013; Sefalafala and Webster, 2013). It is an inarguable fact that security guards are the sentinels of private security companies; they are the enforcers of policies and laws on client premises.
They are the key elements of peacekeeping inside company and client premises, as they prevent theft and other crimes from happening. For them to execute these duties well, security guards need proper training. Professional training provides security personnel with the right knowledge, skills, and instincts crucial to the company's safety and survival. In today's modern world, security guards need to be a cut above the rest through professional and quality training. The protection of lives and property is an awesome societal responsibility, and the public interest demands that persons entrusted with such responsibilities be competent, well-trained, and of good moral character (Nemeth, 2012). Security personnel should undergo intense training programmes to ensure both the safety of the organisation and their own survival.

According to Berg and Gabi (2011), low standards negatively impact the professionalism of the PSI, as well as accountability. Officers are the frontline of contact with the public and, if not properly trained, may misuse their mandate and violate the human rights of members of the public. However, due to the poor training of security officers in South Africa, many challenges linked to poor training have emerged, such as non-compliance, unethical conduct, excessive use of force, assaults, injuries and murder of innocent civilians at the hands of security guards, corruption and criminality.

The reviewed studies also discovered the rise of bogus training centres in the PSI, where people attempt to register with the Authority using fraudulent documents. Several arrests were made by the Authority following information provided regarding the bogus training centres and the individuals responsible for manufacturing fake training certificates, matric certificates, as well as fake identity documents. According to Independent Online (IOL, 2020), a suspect was arrested by the Mtubatuba SAPS, acting on information provided by an informant, after being caught trying to register with a fake security training certificate at the PSiRA office in Durban, KZN Province. These challenges caused huge dents in PSiRA's normative regulatory system, leaving it unable to adequately attend to the industry's major challenges and fully professionalise the industry (Berg and Howell, 2017).

Efforts to upgrade and professionalise the industry in South Africa have been made through the enactment of various pieces of legislation and the establishment of an independent regulatory body by the State, PSiRA (Sefalafala and Webster, 2013). However, even with such stringent efforts to professionalise the industry, challenges in the security training space continue to delay and impede the effectiveness of the enacted legislation. Some of the impeding factors were mentioned during an interview with Mr. Van Staden, the Chief Operations Officer (COO) of PSiRA, who stated that "the industry is very much interested in having access to cheaper labour in order to render the services because they have got the inherent situation where clients see security as a grudge purchase; they don't really want to pay for it, but they require it because there are other pressures, maybe from their insurance houses. So, they are obliged to contract a private security to render a service that's needed, but it's still considered a grudge purchase and they want to get that service as cheap as possible."
This contributes to the deployment of untrained security officers by private security companies so as to quickly meet clients' security needs. Between April 2018 and March 2019, compliance inspections revealed close to 238 businesses deploying unregistered security officers and 206 businesses deploying untrained security officers, involving 1 792 unregistered security officers and 2 618 untrained security officers (PSiRA Annual Report, 2018/2019). The highest of these incidences were mainly reported in GP, WC and KZN, after inspections of 6 253 PSC business premises by PSiRA inspectors.

The problem under research concerns the challenges encountered in the training of security officers in the PSI, which lead to security companies deploying untrained security officers in the GP, KZN and WC Provinces. Cobalt Security Services (2020) mentions that, in the event of a security breach, security guards should be the first line of defence; however, if they are untrained, it is going to be difficult for them to neutralise the threat effectively without causing harm to themselves or others. According to the PSiRA 2018/2019 Report, about 2 078 untrained security guards working in GP, WC and KZN were discovered through inspections conducted during this financial period.

Figure 1 above illustrates the number of untrained security officers discovered through inspections conducted during the 2018/2019 financial year. It is evident from Figure 1 that untrained security officers do operate in the industry and that their presence can have adverse outcomes. Due to a lack of training, they can engage in bad practices, which could have serious consequences, especially when such practices relate to the use of force and firearms, which can result in serious injuries and sometimes death (Sefalafala and Webster, 2013). The PSiRA Act (No. 56 of 2001), Chapter 3, Sub-section 23(c), outlines some of the requirements for registration for people wishing to become security officers, one of them being to comply with the relevant training requirements prescribed for registration as a security service provider. This criterion is clearly meant to deter the entry of all those who are considered unfit to work in the industry, but untrained security officers still slip through this vetting process. The training of security guards has been a contentious issue in South Africa, and poor training has had a huge impact on the professionalisation of the industry (Nemeth, 2012).

Challenges Affecting the Private Security Training Space

Training for private security officers in the PSI is a concern for civil liberties and the reputation of the profession. Critics argue that there is a lack of training and education standards in the PSI (Berg and Gabi, 2011; Loader and White, 2015). The effects of the poor training of private security guards go a long way, as their lack of skills could result in poor handling of security equipment such as firearms, pepper spray, handcuffs, baton sticks or security dogs. Untrained security guards who carry firearms assume an increased risk of injury to themselves and others, which can result in serious injuries or death. The gaps in the PSI training space could be explained by the minimal recruitment requirements and the lack of proper training standards (Abudu et al., 2013). This gets even more complicated when security officers work in mass public spaces such as shopping malls or universities.
Training improves tactical and operational competence, thus indirectly contributing to improved security and professional conduct by security officers. The lack of proper ethics, training, and educational readiness results in a probable scarcity of skilled and compliant security practitioners. The promotion of these traits and professional characteristics could curb a plethora of common private security enforcement problems, including the unnecessary use of force, false imprisonment, false arrest assertions, civilian death, improper or illegal search and seizure techniques, the proliferation of lawsuits, the misuse of weaponry, and the abuse of authority. Many PSI personnel are only temporary or part-time employees who are often underpaid and untrained for their work. The protection of lives and property is a huge societal responsibility, and the public interest demands that persons entrusted with such responsibilities be competent, well trained, and of good moral character (Nemeth, 2012). Professionalisation of the private security industry remains a hard-to-reach target without commitment to good training standards within the PSI (Nemeth, 2017).

Ndungu (2020) notes that training is one of the main private security areas that need regulatory strengthening and improvement. Many security officers are conscientious and proficient in their work, while at the same time others are inadequately trained and thus present a potential hazard to the public, especially in crowd-control situations, and to their employer. The training of security officers is important for the development of appropriate social and ethical behaviour, for the proper use of the authority entrusted to them and of security equipment such as firearms, handcuffs, security dogs and baton sticks, and to prepare security officers for the potential threats they face when dealing with security incidents.

One other challenge affecting the PSI training space is the lack of a distinct role between PSiRA and the Safety & Security Sector Education and Training Authority [SASSETA]. There is no collaboration between the two entities. The training offered by PSiRA is done in a week, and it does not deliver the desired competence compared to the SASSETA equivalent. The National Qualifications Framework [NQF] uses a credit-bearing system which determines the duration and quality of a course. When one looks into the security officer training Grade E offered by PSiRA, there is a big gap: PSiRA offers a 5-day course, while the same programme on the NQF system, offered by SASSETA, runs for a minimum duration of 3 months, which is not quick, convenient or cost-effective for security learners, who opt for the PSiRA Grades so they can quickly get employment. This does not guarantee the quality of the training, as training can be rushed and certain key aspects of security can be ignored in the PSiRA Grades. The government has a broader policy in terms of developing occupations through the Sector Education and Training Authorities [SETAs]. Although PSiRA is a regulator, it does not participate significantly enough in ensuring that the right qualifications are being developed by the QCTO in this particular environment. PSiRA is a self-funding organisation, so it derives its revenue from holding on to the security training of security officers; it does not really partner with the SETA because it does not want the SETA to take over the quality assurance of training, since it would lose the revenue emanating from submitted course reports (PSiRA, 2018).
The other challenge in the PSI training space concerns the Firearms Act and PSiRA's enforcement department. Berg and Gabi (2011) observed that firearms were not being effectively monitored, resulting in firearms being issued to and used by untrained, unqualified and unlicensed staff. There was no database for registering PSC firearms, a lack of effective inspection of firearm serial numbers, and a dependence of PSiRA on SAPS in dealing with misconduct. SAPS is legally able to conduct firearm inspections according to the Firearms Act, 2000. The PSCs have negative perceptions and suspect the firearms controls of being just another money-making scheme by the authorities. Other challenges exposed in the PSI training space, according to Provost (2017), emerge from the unequal buying power between the rich and the poor, which exacerbates disparities between wealthy clients, who are protected by increasingly sophisticated systems, and the poor, who may need to resort to informal and sometimes illegal means to secure their safety. This fuels the use of untrained security officers and unlicensed firearms to provide very cheap, sub-standard private security services to those willing to pay less, which tarnishes the image of the private security industry. Berg and Gabi (2011) and Loader and White (2015) agree that there is a lack of adequate oversight in the regulation of training in the security industry. According to the North Atlantic Treaty Organisation [NATO] - Democratic Control of Armed Forces [DCAF] (2010), the lack of oversight mechanisms and the absence of adequate rules and regulations can create an atmosphere prone to corrupt tendencies by untrained private security officers, leading to loss of revenue and other resources. When untrained security officers easily gain access to firearms during their duty, whether registered or illegal, it becomes easy for them to take the law into their own hands and act maliciously, which could lead to serious accidents or deaths (Tracey, 2011). While the private security industry is a vast crime prevention and reduction resource, it will for the most part remain only a potential resource until steps are taken to improve the quality of private security training.
Effectiveness of PSiRA's Strategies in Ensuring Quality Training of Security Officers within the Industry
PSiRA establishes the minimum mandatory standards of training for private security personnel, which is an important part of any regulation (PSiRA, 2018). Training is conducted by private security training service providers who are accredited by PSiRA to provide security training using PSiRA-approved training content. Standardisation of private security training levels the playing field for PSCs and mitigates the downward competitive forces which often create a 'race to the bottom' in which training quality suffers. One of the objectives of the private security regulator is to promote high standards in the training of security service providers and prospective security providers. The security training centres play a major role in the development of the security industry itself, and PSiRA mandates them, in terms of their accreditation, to fulfil this development. The skills development part of this is inherently challenging, as not all training centres are accredited by PSiRA. Training service providers are required by law to be registered and accredited with PSiRA before they can offer any training services.
Therefore, PSiRA determines the minimum statutory training standards for the industry; accredits training centres and instructors to present PSiRA statutory courses, which include Security Grades E to A, Assets in Transit, Reaction Services, Event Security and Dog Handlers; evaluates and processes course reports; liaises with the South African Qualifications Authority (SAQA), the QCTO and SASSETA in respect of the development of NQF qualifications and programmes for all categories or classes of security service providers; and handles recognition of prior learning (PSiRA, 2018). Security training service providers can be accredited and inspected for evaluation if they submit the following (PSiRA, 2018):
• a completed Accreditation Application form (PSiRA Form 47 A);
• proof of registration with PSiRA in the form of a copy of the registration certificate;
• proof of payment of the prescribed accreditation fee and a receipt for the settlement of annual fees;
• a lease agreement (for the infrastructure approved for training purposes) with a 6-month validity period, together with a signed confirmation letter on the official letterhead of the training centre;
• a fire department letter or affidavit;
• a signed confirmation letter (on the official letterhead) for the instructor(s), with instructor certificates and employment contracts;
• the policies and procedures prescribed for the management and administration of training;
• proof of a telephone line and proof of a fax line.
One of the requisite elements that give effect to PSiRA's mandate in ensuring a well-regulated PSI training space is the development and implementation of a compliance and enforcement strategy. Part of enforcement is to deter bad behaviour. Issues of ethics, moral degradation and threats posed to the general public by the private security industry can no longer be ignored (Berg, 2017). Compliance is a state of accordance between a professional member's behaviour or products on the one side, and predefined explicit rules, procedures, conventions, standards, guidelines, principles, legislation or other norms on the other (Foorthuis and Bos, 2011). PSiRA ensures compliance with legislation by actively monitoring and investigating the affairs of security training service providers and security officers through the inspectors of its Law Enforcement department. PSiRA determines and enforces minimum standards of occupational conduct and training in respect of security service providers. In this regard, the Authority has the dual responsibility of determining minimum standards of training and occupational conduct in respect of security service providers and of enforcing such standards (PSiRA, 2018). The PSI uses firearms as one of its primary means of deterring crime, self-protection and safeguarding clients. Firearms therefore need to be registered and regulated, which is done through the Firearms Control Act (No. 60 of 2000) and the Regulations of 2004. The firearms control legislation requires all security personnel to be trained at an accredited training facility and to acquire a competency certificate before being issued with a firearm by a PSC (the Firearms Control Act, No. 60 of 2000). Compliance inspections can be regulatory, training, infrastructure, or accreditation inspections. PSiRA is also involved in a number of operations with other state agencies such as SAPS, the Department of Labour [DEL], the DoH and the Firearms Control Registry.
The focus of these operations is compliance with the PSiRA Act, 2001, with checks for the deployment of unregistered and untrained security officers and illegal immigrants, as well as compliance with the Firearms Control Act, 2000. About 53 such operations were conducted in the 2018-2019 financial year. Other inspections conducted by PSiRA to ensure compliance in the private security industry included 2 298 improper conduct investigations, 673 improper conduct investigation dockets pertaining to exploitation of labour, 1 056 criminal investigations, 1 498 firearm inspections and 1 885 charge sheets and summonses, alongside complaints, help desk queries and prosecutions (PSiRA, 2019). The competency certificate is renewed every five years; to qualify, one needs to be a South African citizen, demonstrate knowledge of the firearms legislation, be mentally stable and not inclined to violence, not have been convicted within or outside South Africa of any offences outlined by the Act, and have completed a prescribed test on knowledge of the Act, as well as training and tests on handling a firearm and on using a firearm in the course of security duties (the Firearms Control Act, No. 60 of 2000). The Act places the onus on PSCs to keep a register of all their firearms and to make provision for their storage and/or transportation.
Recommended Approaches to Address Challenges in the Security Training Space which can Help with Professionalising the Industry
Training typically has specific goals of improving one's competence, capacity, performance, and knowledge (United Nations Office on Drugs and Crime [UNODC], 2014). Regulatory bodies throughout the United States (US) have been placing heightened emphasis on education and training as part of the minimum qualifications of an applicant. One of the most important issues in raising the standards of the private security industry is the training of security personnel. There are mandatory training standards that must be met for security personnel to be registered with PSiRA (Mccrie, 2017). PSiRA requires security personnel to undergo at least 40 hours of training per security Grade, while in Hungary basic security officer training is mandated at 320 hours (UNODC, 2014). Mandatory training standards can help to prevent the risk of security officers acting in ways inappropriate to public safety and the prevention of crime, as well as enhancing efforts to increase professionalisation and the delivery of quality services (UNODC, 2014). Berg (2017) believes that training must be monitored for compliance purposes, not commercialised as a revenue stream by self-funding training colleges, which results in the security industry attracting and being flooded by school-leavers desperate for employment opportunities, damaging the image and reputation of the PSI. PSiRA (2019) strongly maintains that diversification of industry training needs to be enabled by lifting the moratorium on new applications from security training providers, which would allow security training providers to open new branches in different provinces. PSiRA does not tolerate unscrupulous security training providers and therefore strives to ensure fair participation within the industry. In August 2018, the number of accredited security training providers stood at 480. Thirteen capacity-building workshops were conducted in all the provinces.
PSiRA should enter into a Memorandum of Understanding (MoU) with the Technical and Vocational Education and Training (TVET) Council, security training providers, the DoE, SASSETA, the QCTO, SAQA and the NQF to gain traction in the TVET landscape. The MoUs would enable PSiRA to establish and recognise these structures as assessment centres for credible assessment in certifying and accrediting prospective and constituent security officers (PSiRA, 2019). According to PSiRA (2019), it is recommended that by the year 2030, PSiRA should properly vet and screen security training provider applicants, increase training and skills development, and accredit new training centres so as to provide an equal opportunity for all businesses interested in providing security training to apply for accreditation. PSiRA should increase the national footprint of training centres in the country in order to improve accessibility for prospective persons interested in a career in the PSI (PSiRA, 2019). PSiRA needs to take considerable steps to review the training curriculum of all security courses in order to improve the competency and skills required to render private security training services in increasingly challenging external environments.
METHODS AND MATERIALS
The objective of this qualitative study was to identify and close the existing gaps within the South African PSI training space. This was accomplished through the application of an exploratory research design. The judgemental sampling method was employed to select 40 participants drawn from the following stakeholders: the SAPS, the DoL, the DoH, the NPA, legal fraternity personnel, PSI directors and security officers from the GP, KZN and WC Provinces. The targeted sample was subjected to FGDs and hybrid semi-structured interviews. The collected data was analysed through Thematic Content Analysis. Ethical clearance was obtained from the targeted PSIs, while adhering to PSiRA research ethical principles in the service. Moreover, the Tshwane University of Technology (TUT) policy on research ethics also guided this study.
RESULTS AND DISCUSSION
Awareness of the training challenges in the private security industry, and addressing them, is important for PSiRA if it is to successfully professionalise the PSI. For the purposes of this study, various statements were formulated by participants with the intention of describing the predominant training challenges affecting the industry, the deployment of untrained security officers, and their impact on the overall image of the security industry at large.
Demographic Characteristics of the Selected Participants
Biographical information includes data about an individual's experiences, age, gender, or skills, which are strong indicators that set individuals apart. As initially explained, there were 40 participants in this study. The biographical data of these participants are presented according to the targeted provinces, participant number, work experience, age, and gender, as depicted in Table 1. The results from Table 1 show that 5 participants for the FGDs were from GP Province; their work experience ranged from 13 to 35 years, their ages ranged from 40 to 60, and they were predominantly male, with 1/5 female participants. Moreover, the results show that 5 participants were from KZN Province; their work experience ranged from 15 to 35 years, their ages ranged from 40 to 60, and they were largely male, with 1/5 female participants.
The results further reveal that 5 participants were from WC Province; their work experience ranged from 14 to 22 years, their ages ranged from 40 to 60 years, and 2/5 of the participants were female. The results from Table 2 show that 8 participants were from GP Province; their work experience ranged from 4 to 27 years, their ages ranged from 30 to 60, and they were predominantly male. Moreover, the results show that 6 participants were from KZN Province; their work experience ranged from 5 to 19 years, their ages ranged from 30 to 80, and they were largely male. The results further reveal that 6 participants were from WC Province; their work experience ranged from 10 to 27 years, their ages ranged from 40 to 60 years, and 1/3 of the participants were female. The following section presents the qualitative findings from the FGDs and hybrid semi-structured interviews conducted with participants. It presents the analysis of their verbal responses during the interviews and FGDs, presented as themes. Themes arise from the engagement of a particular researcher with the text, as the researcher seeks to address a particular research question. Themes are, therefore, pragmatic tools to help the researcher produce an account of the data. A code in qualitative inquiry is often a word or short phrase symbolically assigning a summative, salient, essence-capturing and evocative attribute to a portion of language-based or visual data (Saldana, 2015). The content and thematic analysis performed on the FGD and hybrid semi-structured interview transcripts yielded the themes discussed below. It is worth mentioning that there are overlaps between some of the emerged study themes:
• Theme 1: Challenges affecting the private security training space.
• Theme 2: Effectiveness of PSiRA's strategies in ensuring quality training of security officers within the industry.
• Theme 3: Recommended approaches to address challenges in the security training space which can help with professionalising the industry.
Theme 1: Factors Affecting the Private Security Training Space
The PSI continues to undergo various regulatory changes in a bid to professionalise it; however, it continues to face challenges and many factors working against PSiRA's professionalisation efforts. To answer the research question: What are the main challenges affecting the private security industry training space? The participants provided information about their observations and experiences concerning challenges within the security training space in the GP, KZN and WC Provinces. From the findings of this study, the following sub-themes emerged in an endeavour to illuminate the challenges in the security training space: Training, Corruption and Criminality.
Theme 2: Effectiveness of PSiRA's Strategies in Ensuring Quality Training of Security Officers within the Private Security Industry
In terms of section 4 of the PSiRA Act, 2001, the Authority must take such steps as may be necessary to develop and maintain standards (including training standards) and regulate the practices of security service providers. The Authority must take steps that may be expedient or necessary in connection with the training of security service providers to ensure a high quality of training, in particular the determination and accreditation of qualifications required by security service providers to perform particular types of security services, and take reasonable steps to verify the authenticity of training presented by persons for the purposes of the Act.
To answer this research question: How effective are PSiRA's strategies in ensuring quality training of security officers within the industry? The participants were requested to provide their experiences and observations about the strategies of PSiRA. From the analysis, the following sub-themes emerged as strategies being used by PSiRA to ensure quality training: Transformative strategies, Strategic collaboration with stakeholders, and Law enforcement.
Theme 3: Recommended Approaches to Address Challenges in the Security Training Space which can Help to Professionalise the Private Security Industry
This question was also posed to the selected participants: What recommended approaches can address the challenges in the private security training space which can help to professionalise the industry? The participants were requested to provide possible corrective approaches and measures that PSiRA could implement to advance the professionalisation of the private security industry. Six codes emerged from the analysis of the participants' responses, grouped as: updating the training curriculum and method of training, empowering and training more inspectors, and building relationships and working closely with education sector stakeholders.
CONCLUSIONS AND RECOMMENDATIONS
The following conclusions can be drawn from the results of the study. Training security officers leads to them becoming more attentive to detail and more alert while on the job. With increased alertness, security officers will be better able to recognise and report incidents, which will also increase the efficiency of communication. Security guard training will teach the importance of communication that is clear, concise, and easy to understand, and its role in effective security operations. Properly training security officers improves their judgement, allowing them to more easily handle circumstances in a responsible and appropriate manner while under duress. Having the training to make difficult decisions during stressful circumstances will not only increase the efficiency of the security guard's performance, but will also optimise the safety of the security guard. Despite the persistent challenges in the private security training space, if all stakeholders (security training providers, the Department of Education (DoE), SASSETA, the QCTO, SAQA and the NQF) worked together with PSiRA to address them, they would effectively professionalise the private security industry. To overcome the challenges in the private security training space, the following recommendations are made:
• Update the training curriculum through a policy framework to ensure that training material is relevant and up to date, ensuring quality training. The PSI keeps growing and evolving; with that growth and evolution, the curriculum should be responsive and incorporate new changes, so that security training can produce security officers who are competitive and proactive in crime fighting. Some of the proposed changes with regard to training include setting minimum training instructor standards and changing the method of training from an instructor-based approach to a more flexible and interactive student-centric approach.
• Empower and train more inspectors.
Importantly, trained PSiRA inspectors could address many of the challenges in the training space through regular inspections of security training service providers, not only enforcing the legislation but also being capacitated to teach and train security service providers on security training issues and concerns.
• Build relationships and collaborative partnerships with education sector stakeholders. PSiRA is advised to forge working relationships and collaborative partnerships with key stakeholders in the education and skills development sector, such as the Department of Education, SASSETA, the QCTO, the NQF and SAQA, to assist in addressing the challenges in the security training space. These stakeholders could assist PSiRA with the development of new qualifications, thus further developing the private security industry and ensuring the sustainability of the PSI.
7,453.6
2021-10-21T00:00:00.000
[ "Computer Science" ]
A missense mutation in the agouti signaling protein gene (ASIP) is associated with the no light points coat phenotype in donkeys
Background: Seven donkey breeds are recognized by the French studbook and are characterized by a black, bay or grey coat colour including light cream-to-white points (LP). Occasionally, Normand bay donkeys give birth to dark foals that lack LP and display the no light points (NLP) pattern. This pattern is more frequent and officially recognized in American miniature donkeys. The LP (or pangare) phenotype resembles that of the light bellied agouti pattern in mouse, while the NLP pattern resembles the mammalian recessive black phenotype; both phenotypes are associated with the agouti signaling protein gene (ASIP).
Findings: We used a panel of 127 donkeys to identify a recessive missense c.349T>C variant in ASIP that was shown to be in complete association with the NLP phenotype. This variant results in a cysteine-to-arginine substitution at position 117 of the ASIP protein. This cysteine is highly conserved among vertebrate ASIP proteins and was previously shown by mutagenesis experiments to lie within a functional site. Altogether, our results strongly support that the identified mutation is causative of the NLP phenotype.
Conclusions: We propose to name the c.[349T>C] allele in donkeys the a^nlp allele, which enlarges the panel of coat colour alleles in donkeys and of ASIP recessive loss-of-function alleles in animals.
Electronic supplementary material: The online version of this article (doi:10.1186/s12711-015-0112-x) contains supplementary material, which is available to authorized users.
Background
Mutations in the gene ASIP (agouti signaling protein) result in various coat patterns in domestic mammals (http://omia.angis.org.au), including mouse (www.informatics.jax.org), dog [1], cat [2], rabbit [3], horse [4], sheep [5-8], rat [9] and alpaca [10]. Only a few coat colours, patterns and textures have been described in domestic donkeys (Equus asinus). In donkeys, the coat colour can be white or coloured, i.e. black, bay, grey and red, with or without white spotting; hair texture is variable and includes the longhair phenotype, in addition to the common shorthair phenotype. Recently, we started to investigate the molecular aetiology of these phenotypes and identified three underlying loss-of-function mutations in the MC1R (melanocortin 1 receptor) and FGF5 (fibroblast growth factor 5) genes that are responsible, respectively, for the red colour and the longhair phenotype in donkeys [11,12]. Most coloured donkeys are born with a pangare or light points (LP) pattern that combines cream-to-grey-white hair on the belly, around the muzzle and around the eyes. In the American miniature donkey breed, all coat colours and patterns are admissible, and foals with a no light points (NLP) coat are often obtained from LP breeding stock (Figure 1). This has led breeders to suspect a recessive inheritance pattern for the NLP trait. For the seven French donkey breeds (Pyrenean, Berry Black, Poitou, Cotentin, Provence, Bourbonnais and Normand), the NLP pattern is not officially recognized; however, dark NLP donkeys are occasionally born to bay Normand parents (Figure 1).
Animals and ethics statement
One hundred and twenty-seven donkeys from six breeds were included in the study.
They were all sampled in France between September 2012 and October 2014 and included the Normand (n = 35), Provence (n = 14), Poitou (n = 13), Pyrenean (n = 13) and Berry Black (n = 2) breeds and miniature donkeys (n = 50). All donkeys were included at the owners' request. Pictures and hair samples were sent directly by owners or collected by a veterinarian (MA). All animals were client-owned donkeys on which no harmful invasive procedure was performed; thus, according to the legal definitions in Europe (Subject 5f of Article 1, Chapter I of Directive 2010/63/UE of the European Parliament and of the Council), no animal experiment was carried out.
DNA extraction
DNA was extracted from hair roots using a Maxwell® 16 Instrument (Promega Corporation, Madison, USA), according to the manufacturer's protocol.
Accession numbers
Genomic coding sequences of ASIP from bay LP and NLP Normand donkeys were submitted to GenBank. Accession numbers are KJ126712 for the LP allele and KP717040 for the NLP mutant allele.
Findings
Because whole-genome mapping tools are still lacking for the donkey, we decided to screen directly for variants that affect ASIP function in two NLP and two LP control Normand donkeys that originated from a comprehensive panel of 127 donkeys from six breeds. The Ensembl equine ASIP genomic sequence was used to design three sets of intronic primers [See Additional file 1: Table S1] that allowed successful amplification of the three exons of the donkey ASIP gene. We then sequenced the three ASIP exonic amplicons and performed pair-wise base-to-base comparisons of the sequences between the LP and NLP donkeys and a horse reference sequence; we found that the coding sequences and the sequences covering intron-exon boundaries were highly conserved between horse and donkey, and detected only two variants between the donkey sequences and the bay horse reference sequence [See Additional file 1: Table S2]. Only the c.349T>C SNP (single nucleotide polymorphism) produced a substitution, p.(Cys117Arg), and was consistent with the recessive mode of inheritance of the NLP pattern. Indeed, both NLP donkeys were homozygous C/C for the mutant allele of the c.349T>C SNP, while one control donkey was heterozygous C/T, and the other control donkey and the bay horse were homozygous T/T for the reference allele [See Additional file 1: Table S2]. The second, non-coding variant was located in the 3'UTR (untranslated region) of the ASIP gene and was not associated with the NLP phenotype [See Additional file 1: Table S2]. PROVEAN, PolyPhen-2 and SNAP predicted that the p.(Cys117Arg) substitution was deleterious. Hence, the complete cohort of 127 donkeys was genotyped for SNP c.349T>C. The nine NLP donkeys, including three NLP Normand and six NLP miniature donkeys, were all homozygous C/C, while the 118 LP donkeys were either homozygous T/T (n = 104) or heterozygous C/T (n = 14). The three NLP Normand donkeys were born to a single male mated with three females, and all four parents were heterozygous C/T. The complete concordance between the recessively inherited NLP pattern and the c.349T>C variant (Table 1) supported our hypothesis that this SNP is associated with the NLP trait (chi-square test, p = 1.86 × 10^-29).
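As an aside, the genotype counts above are sufficient to reproduce a concordance test of this kind. The following is a minimal sketch (ours, not the authors' code) that tabulates the reported genotypes and runs a Pearson chi-square test of independence; depending on the exact statistic the authors used, the resulting p-value may differ slightly from the reported 1.86 × 10^-29.

```python
# Minimal sketch of the genotype-phenotype concordance test.
# Counts are taken from the text: all 9 NLP donkeys are C/C, while the
# 118 LP donkeys are either C/T (n = 14) or T/T (n = 104).
import numpy as np
from scipy.stats import chi2_contingency

#                  C/C  C/T  T/T
counts = np.array([[9,    0,   0],    # NLP phenotype
                   [0,   14, 104]])   # LP phenotype

chi2, p, dof, expected = chi2_contingency(counts)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.2e}")
```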
To estimate the functional importance of the cysteine at position 117 of donkey ASIP, we aligned the donkey ASIP protein sequence with the ASIP sequences of nine vertebrates and found that it was fully conserved (Figure 2). This result confirmed the 100% conservation previously reported for the 10 cysteine residues of the C-terminal Cys-rich domain of ASIP [18-21], the functional role of which had been investigated by mutagenesis experiments. Perry and collaborators reported that, in mouse, 13 mutated ASIP proteins displayed a partial (n = 4) or a total (n = 9) loss of activity [21]. In particular, they found that eight of the 10 cysteines located in the Cys-rich C-terminal tail of ASIP, including the murine cysteine 113 that corresponds to the donkey cysteine 117, were critical for protein activity [21]. Altogether, these results strongly support that, in donkeys, the ASIP cysteine 117 plays an essential role in ASIP function. In conclusion, the complete association between the c.349T>C mutation and the NLP phenotype and its inheritance pattern, on the one hand, and the high probability that the resulting substitution of the conserved cysteine 117 residue leads to loss of function of the mutated protein, on the other hand, support that this mutation is responsible for the NLP phenotype in donkeys. We thus propose to name the c.[349T>C] allele, which can be easily detected with a DNA test, the a^nlp allele in donkeys.
Additional file
Additional file 1: Table S1. PCR and sequencing primers. Sequences and PCR temperatures for the intronic primers that were used to amplify and sequence the three ASIP coding exons. Table S2. Genomic variants in ASIP identified between donkey sequences and the horse reference sequence. Sequence variants in ASIP identified between two NLP donkeys, two control donkeys and the horse reference sequence (coding sequences and 5' end of the 3'UTR).
1,899.8
2015-04-08T00:00:00.000
[ "Biology" ]
Studying the Evolution of Scientific Topics and their Relationships
We propose a study of the development of scientific topics through time, as well as the relations between them, within the scientific field of computational linguistics and across subfields. We use topic modeling to analyze scientific texts published in the ACL Anthology, and introduce a categorization of topics in our field into 3 types: tasks, algorithms, and data. In order to understand how topics emerge, evolve, and gradually disappear over time, we analyze the evolution of these topics across time through several case studies. We further include in our analysis papers published in NeurIPS, and try to understand whether there was any influence between topics in this conference focused on neural methods and computational linguistics conferences, as well as measure the divergence over time between conferences in terms of the topics approached. We additionally look at the relationships between topics, categorizing them into types of competing or cooperating topics.
Introduction
Scientific fields progress through innovation. Science functions under the premise that, when new and better topics appear in research, they overtake the old ones and contribute to shaping the progress of the research field (Kuhn, 2012). Nevertheless, scientific topics evolve interdependently (the appearance and popularity of one topic may affect the popularity of another), and oftentimes the focus of research in a certain field is also influenced by topics in other related subfields. We propose a multidimensional approach for studying scientific topics and their evolution, by analyzing our field of research, computational linguistics, from several points of view: we look at the parallel evolution of topics in computational linguistics and their popularity over time, as well as how they relate to each other, engaging in cooperating or competing relationships. We also extend this perspective by considering the interplay of topics within a field, as well as the context in which they appear, and how the same topic is portrayed in different subfields, with a focus on the mutual influence between ideas in computational linguistics and those in the related field of neural networks. Among studies that track the evolution of topics in scientific texts, Hall et al. (2008) focused on scientific text in computational linguistics, analyzing papers published in ACL, EMNLP and COLING between 1978 and 2006. The authors identify increasing and decreasing trends up to 2006, and make predictions about the subsequent evolution of the field. We continue the analysis, including articles published up until the end of 2018, and uncover current shifts and trends that may not have been predictable 15 years ago, such as the rise of neural network methods. In our work, we study topics across three types: tasks, algorithms and data. Moreover, our aim is to further and complement previous explorations of topics in computational linguistics not only by extending the analysis to recent years, but also by looking at relations between topics within and across fields. We analyze texts from four top computational linguistics conferences (adding NAACL to the three conferences analyzed by Hall et al. (2008)). We additionally propose an exploration of topics across conferences and subfields, and include in our analysis papers published in the Conference on Neural Information Processing Systems (NeurIPS), which is a machine learning conference focused on neural networks.
Considering that in recent years neural networks have come to dominate the methods used in computational linguistics, we try to understand how topics approached in computational linguistics relate to those in the more focused field of neural networks, and whether and how they migrate between these conferences. Our analysis of topic relationships within computational linguistics is inspired by Tan et al. (2017), in which the authors propose a way to classify topic relationships into four types, based on their co-occurrence in text and the degree of correlation between their popularity over time. In our paper, we extend this and take a deeper look at the relations existing between topics in scientific text. We propose interpretations of topic relationships in the context of a scientific domain, and report interesting findings on how these types of relationships manifest between scientific topics, discovering, for example, which algorithms in computational linguistics are compatible with certain tasks (such as neural machine translation and RNNs), or finding pairs of topics that represent algorithms which have replaced one another over the history of computational linguistics (such as statistical machine translation and neural machine translation).
Previous work
Multiple previous studies have looked at the evolution of topics through time, analyzing texts of various genres, from news (Michel et al., 2011; Rule et al., 2015) to emails (Wang and McCallum, 2006) to scientific articles (Hall et al., 2008; Prabhakaran et al., 2016; Griffiths and Steyvers, 2004; Blei and Lafferty, 2006; Anderson et al., 2012). Popular choices for representing topics include topic models, to which some studies add variations specific to tracking trends over time, such as the continuous-time model proposed by Wang and McCallum (2006), the generative model proposed by Bolelli et al. (2009a,b), or the dynamic topic model (Blei and Lafferty, 2006). Hall et al. (2008) use an approach based on topic modeling, and focus on scientific texts in computational linguistics, analyzing papers published in ACL, EMNLP and COLING between 1978 and 2006. Gollapalli and Li (2015) use topic models and keyphrase extraction to compare topics in ACL and EMNLP. In other studies on scientific articles, topic representations are enriched with additional features such as citations (He et al., 2009). Citations and citation networks have been leveraged extensively in previous studies for tracking scientific topics (Shibata et al., 2008, 2009; Jurgens et al., 2018), analyzing the structure of the scientific community (Leicht et al., 2007), summarizing scientific papers (Qazvinian and Radev, 2008), or summarizing entire technical topics. Other authors make use of rhetorical framing to predict the patterns present in the development of scientific topics (Prabhakaran et al., 2016). Not as many studies attempt to provide in-depth systematic analyses of the relations between topics within a field or across fields, independently of the publications where they occur. Zhang et al. (2017) introduce a learning technique to identify evolutionary relationships (e.g., topic evolution, fusion, death, and novelty) between scientific topics. Grudin (2009) studies the particular relationship between the field of AI and Human-Computer Interaction. Shi et al. (2010) propose a temporal comparison of grant proposals and academic publications, in an attempt to understand which precedes the other and how they influence each other.
In one of the most extensive studies on the topic (Tan et al., 2017), the authors propose a systematic way of classifying relations between topics into four types of cooperating or competing topics, based on their patterns of co-occurrence and prevalence correlation: friendships, arms-race, head-to-head, and tryst. We build on this framework in our analysis of the field in the following sections.
Dataset
Our study focuses on topics in computational linguistics and their evolution. For exploring this topic, we make use of articles published in the ACL Anthology (Bird et al., 2008) from its inception. We collect all papers published in four top conferences, ACL, EMNLP, COLING and NAACL, over time, obtaining a total of 14,737 computational linguistics articles overall. We will further refer to the set of computational linguistics conferences we considered by using the general term ACL+. For the second stage of our study, we additionally use articles published at the NeurIPS conference, from which we collect all articles published since 1994, in total 6,520 articles. Table 1 shows the number of articles for each time period (across 5-year time spans) for the ACL+ conferences and NeurIPS. In Figure 1 we show the number of papers published as a time series, computed separately for each of the conferences considered. We make our collected dataset, as well as the code used for our experiments, publicly available.

Table 1: Number of articles per time period.
Period       ACL+   NeurIPS
pre-1980      374      -
1980-1985     332      -
1986-1990     729      -
1991-1995     609     157
1996-2000    1108     842
2001-2005     950     767
2006-2010    3456    1449
2011-2015    3432    1091
2016-2018    3747    2214

Representation of ideas
We base our study on the premise proposed by Kuhn (2012) that science proceeds by shifting from one paradigm to another, viewing the evolution of science as a series of topics that follow and replace one another. Furthermore, we assume that these shifts in topics are directly reflected in shifts at the level of the vocabulary employed in the articles that discuss them. Based on this assumption, we choose to represent topics using unsupervised topic modeling, which treats documents as bags of words generated by one or more topics. We choose to measure the topics' evolution over time post-hoc, using a classical topic model and monitoring the change in topic prevalence over time. While dynamic topic models (Blei and Lafferty, 2006) allow temporal information to be included in the generated labels themselves, they impose additional constraints on the time periods (for example, assuming the changes between consecutive years are the same). We design our representation of topics starting from the observation that computational linguistics research can generally be described as comprising a set of research tasks, which researchers aim to solve by employing appropriate algorithms, usually assisted by the use of datasets. Based on this assumption, we propose that topics in computational linguistics can naturally be categorized into 3 types: tasks, algorithms and data. As such, we propose a notion of scientific topic in our field which consists of both a topic and its category or type; in this view, a topic in computational linguistics can be defined as a pair
$$\langle \mathit{type}_c, \mathit{topic}_t \rangle, \quad \mathit{type}_c \in \{\text{task}, \text{algorithm}, \text{data}\}, \quad \mathit{topic}_t \in T,$$
with $T$ representing the list of topics generated by the Latent Dirichlet Allocation model (LDA) (Blei et al., 2003).
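To make the typed-topic notion concrete, the following minimal sketch (ours; the category and topic-index values are purely illustrative) shows one way this representation could be encoded:

```python
# Minimal sketch of the typed-topic representation: a topic is a pair of
# a category and an LDA topic index. The example values are illustrative.
from typing import NamedTuple

class ScientificTopic(NamedTuple):
    category: str   # one of "task", "algorithm", "data"
    topic_id: int   # index of the topic in the LDA topic list T

summarization = ScientificTopic(category="task", topic_id=17)
rnns = ScientificTopic(category="algorithm", topic_id=42)
```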
These topic categories can be useful beyond our field and application, for example in question answering systems or paper recommendation systems (Augenstein et al., 2017; Park and Caragea, 2020; Luan et al., 2018; QasemiZadeh and Schumann, 2016). In order to identify the topics occurring in our corpus of scientific texts, we first train an LDA model on the full texts extracted from computational linguistics articles, and use it to extract a set of 100 topics which we will use to analyze the evolution of the field in the next stages of our study. We use the Mallet implementation of LDA, with parameters set to 100 topics and 100 training passes. The asymmetric prior distribution was learned directly from the data. The resulting model has a topic coherence score of 0.484 according to the C_V coherence measure. In order to maximize the quality of the produced topics, we first label the obtained sequences with POS tags and select only words with POS tags corresponding to content words (nouns, verbs, adjectives, and adverbs), discarding the rest. We lowercase and lemmatize the texts, and we extract bigrams and trigrams using PMI scores to select words which occur together with high probability, adding them to our vocabulary and document representations. On the collection of articles published in the ACL Anthology, preprocessed as described above, we train the topic model to extract 100 topics. We do not intervene with significant changes on the output of the model, and only add minor corrections through manual curation: we remove 10 of the extracted topics which we do not consider to represent coherent or interesting ideas, and merge a few topics which were redundant. We are left with a total of 82 topics. We then manually label each topic with one of the three proposed categories (task / algorithm / data) and obtain the final list of topics occurring in our corpus. Each topic can be assigned one or more types: we obtain 53 topics labelled as tasks, 33 topics labelled as algorithms, and 7 topics that fall into the data category. Some topics belonging to the task type are, for example, morphology, event extraction, or summarization. Topics such as recurrent neural networks or topic models fall under the category of algorithms, whereas lexicons and parallel corpora are categorized as data (or resources). A few topics refer to inherently connected tasks and algorithms; we label those with both types, as is the case for neural machine translation or statistical machine translation. The appendix lists the entire set of extracted topics, along with the top 10 keywords relevant for each, as well as their types. When topics were merged, the lists of keywords relevant for each topic were merged into one larger list. After having generated our list of topics, we further extract for each paper a list of relevant topics, considering only those which are present in the topic distribution for that document with a probability greater than 0.01. After this step, we are left with almost 13 relevant topics per article, on average. Finally, we measure the prevalence of a topic in a given year as the empirical probability of its occurrence relative to the total number of documents published in that year:
$$P(t \mid y) = \frac{1}{C_y} \sum_{d} \mathbb{I}[t_d = y] \, P(t \mid d),$$
where $\mathbb{I}$ is the indicator function, $t_d$ is the year in which document $d$ was published, and $C_y$ represents the total number of documents written in a year $y$. The conditional probability of a topic given a document, $P(t \mid d)$, is thus equal to 1 if the topic is present in the document and 0 otherwise.
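The following minimal sketch illustrates the computation just defined; the input names (doc_topic_probs, doc_year) are hypothetical stand-ins for the LDA output and the papers' metadata.

```python
# Minimal sketch of the yearly topic prevalence P(t|y) defined above.
from collections import defaultdict

THRESHOLD = 0.01  # a topic is relevant to a paper above this probability

def yearly_prevalence(doc_topic_probs, doc_year, topic):
    """P(t|y): fraction of documents from year y that contain `topic`."""
    present = defaultdict(int)  # documents containing the topic, per year
    totals = defaultdict(int)   # C_y: total documents per year
    for doc, probs in doc_topic_probs.items():
        year = doc_year[doc]
        totals[year] += 1
        if probs[topic] > THRESHOLD:  # P(t|d) = 1 iff the topic is relevant
            present[year] += 1
    return {year: present[year] / totals[year] for year in totals}
```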
Figure 2 illustrates the distribution of topics across the computational linguistics corpus for each of the 3 topic types.
Selected topics and trends
In order to narrow our focus to subsets of topics worthy of interesting insights, we propose a few ways to select topics that stand out, and comment on their development over time; several case studies will be presented in the following subsections. We also look into the most influential authors for each topic. We consider citations as an indicator of the influence of an author over a topic, and we thus measure the influence of each author for a topic by counting all occurrences of citations referring to the given author (regardless of the topic of the cited article) in all documents in our collection where the topic is present. Table 2 shows the top 5 most influential authors, ranked by number of citations, for a selection of topics.
Confirming and refuting predictions
We first confront our findings with the predictions made in previous studies which looked at the evolution of scientific ideas in computational linguistics. Hall et al. (2008) identify a list of topics which were then on an increasing trend in 2006: classification, probabilistic models, statistical parsing, statistical machine translation and lexical semantics. We find among our topics those which best match their list, then analyze their evolution in order to see whether the predictions made then still hold today. Figure 3 shows the evolution of four of these topics until 2018: not all of the topics have maintained the same upward trend through 2018. Statistical machine translation and probabilistic models suffer a decrease in popularity after 2010; classification, though still very popular, has reached a plateau, while lexical semantics seems to still be on an increasing trend, though a less abrupt one.
Most prevalent topics
In our second case study we focus on the most prevalent topics overall, which we consider to be the ones that have received the greatest attention in computational linguistics research over time. To find these, we average the probability of occurrence of a topic over the years, obtaining for each topic an overall prevalence score:
$$\mathrm{Prev}(t) = \frac{1}{|Y|} \sum_{y \in Y} P(t \mid y).$$
Figure 4 shows the evolution of the top 5 most prevalent topics in ACL+ across time. Most of these were very popular in the earlier days of computational linguistics and started to decrease around 1990, such as the topics related to syntax. Complexity analysis has a steady evolution across time, maintaining a relatively flat trend.
Topics with largest variation
In our next analysis, we extract the topics which vary most in popularity over time, hoping to discover topics which stand out because of their dramatic evolution. We do this by considering the distribution of probabilities for each topic over the years and measuring its standard deviation, then selecting the topics whose standard deviation is highest.
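Both selection criteria reduce to simple statistics over the yearly prevalence series; a minimal sketch (ours, reusing yearly_prevalence from the earlier snippet, with a hypothetical prevalence dictionary) could look as follows:

```python
# Minimal sketch of the two topic-selection criteria: overall prevalence
# Prev(t) and the standard deviation of P(t|y) over the years.
import numpy as np

def overall_prevalence(series):
    """Prev(t): mean of P(t|y) over all years."""
    return float(np.mean(list(series.values())))

def variation(series):
    """Standard deviation of P(t|y) over the years."""
    return float(np.std(list(series.values())))

# `prevalence` is assumed to map each topic id to its P(t|y) series:
# most_prevalent = sorted(prevalence, key=lambda t: -overall_prevalence(prevalence[t]))
# most_variable  = sorted(prevalence, key=lambda t: -variation(prevalence[t]))
```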
The top 5 such topics and their evolution are illustrated in Figure 5. It seems that the most dramatic variations are related to recent increases in the popularity of certain topics, most of which relate to machine learning. The steep and constant increase in popularity of the learning topic is apparent. Among the first 5 topics which vary most dramatically in popularity over time, we find topics related to neural networks, which, although very recent relative to the entire history of computational linguistics, have quickly caught up in popularity, even surpassed more traditional topics in the field, and show an abrupt increase in popularity after 2010. We analyze topics related to neural networks in more detail in the following paragraphs.
Neural networks
In our final case study, we zoom in specifically on topics related to neural methods. These are shown in our previous results to be the stars of recent years in computational linguistics, showing an abrupt increase in popularity. The list of topics generated by our LDA model contains no fewer than four distinct topics related directly to neural networks, found in computational linguistics papers, which is already remarkable for such a recent topic. These are: neural networks, recurrent neural networks, neural machine translation and embeddings. To these we add for our analysis the topic of learning, as the general class of topics under which neural networks fall, and whose evolution we also expect to be affected by the popularity of neural networks. Furthermore, we compare the trends of neural-network-related topics in ACL+ to the same trends in a conference focused primarily on neural networks: NeurIPS. In order to achieve this, we use our LDA model trained on ACL+ papers to extract topics from NeurIPS papers. Figure 6 shows the evolution of topics related to neural methods in papers published in ACL+ and in NeurIPS, respectively. Papers in both ACL+ and NeurIPS show the same steep increase in recurrent neural networks and neural machine translation starting between 2010 and 2015. Learning has a clearly more stable evolution in NeurIPS, where it has been a very popular topic from the beginning, as compared to computational linguistics, where it sees a steady and still continuing increase. Interestingly, neural networks as a general topic evolve differently in NeurIPS and ACL+: while in computational linguistics they are a recent topic, with a sudden increase in popularity after 2010, in NeurIPS they were widely discussed from 1994, and suffered a decline up to 2010, when they started following the same upward trend.
Relationships between Topics
Methodology
We use measures of relatedness between topics on two dimensions, co-occurrence and prevalence correlation, to characterize relationships into four major types of relations, which will be described and interpreted in more detail in this section: friendship, head-to-head, arms-race and tryst. For categorizing pairs of topics into these types of relationships, we obtain co-occurrence scores for a pair of topics by computing the PMI score for the topics as they co-occur in documents, and compute the correlation score as the Pearson correlation between the time series represented by each topic's probability over time. We then split each of these two dimensions into two classes (positive/negative co-occurrence, and positive/negative correlation), obtaining the four types of relationships from their combinations. We first standardize the distributions of co-occurrence and correlation scores, then split the relations landscape into four parts, depending on where pairs are situated on the two axes: positive/negative co-occurrence and positive/negative correlation.
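A minimal sketch of this four-quadrant classification (ours; it assumes the PMI and Pearson scores for a topic pair have already been computed and standardized) is shown below. The relation names follow the text, as does the strength measure discussed next.

```python
# Minimal sketch of the four-quadrant relation classification and of the
# strength measure described in the text. Inputs are the standardized PMI
# co-occurrence score and the standardized Pearson prevalence correlation
# for a pair of topics.
def relation_type(cooccurrence: float, correlation: float) -> str:
    if cooccurrence > 0 and correlation > 0:
        return "friendship"    # co-occur and rise/fall together
    if cooccurrence > 0 and correlation < 0:
        return "tryst"         # co-occur, but one replaces the other
    if cooccurrence < 0 and correlation > 0:
        return "arms-race"     # parallel trends without co-occurrence
    return "head-to-head"      # neither co-occurrence nor correlation

def strength(cooccurrence: float, correlation: float) -> float:
    # strength of a relation: product of the two scores, in absolute value
    return abs(cooccurrence * correlation)
```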
We also compute a measure of the strength of each relationship between a pair of topics, which is simply the product of the two scores, in absolute value. Sorted by the average strength of the top 25 relations of each type, the relation types rank as follows: friendships > head-to-head > tryst > arms-race. Table 3 shows the top pairs of topics with the strongest relations for each relation type, as well as their strength. The appendix contains tables with the top 10 relations for each relation and topic type. We separately identify relations between different types of topics, and propose that some relations are more meaningful for certain topic pairings than others, depending on their types. For friendships, which describe cooperating topics, we focus on topic pairs of different types in order to discover which tasks go together with specific algorithms or datasets. For the other relation types (arms-race, head-to-head and tryst), we suggest that cross-type topic pairs are less meaningful, since these types of relations can be interpreted as occurring between competing topics; for these we focus instead on same-type topic pairs (tasks with tasks, algorithms with algorithms, data with data). In the tables presenting the top relationships for each type, we restrict our focus to topic pairs of types which can be meaningfully matched for each relation.
Friendships
Two topics are "friends" if they tend to co-occur in the same texts and are also correlated in their prevalence over time. These are topics which go together, or "cooperate": they are often found in the same documents and are used together in the analysis of a certain idea or area of interest. Figure 7 shows the strongest friendship relationships between a task and an algorithm, and between an algorithm and data, respectively. We discover, for example, that the neural machine translation task is most associated with the recurrent neural networks algorithm, and that for the task of statistical machine translation, parallel corpora are the most useful type of dataset.
Head-to-head
Topics in a head-to-head relationship do not tend to co-occur in the same documents, and are anti-correlated over time. These are topics which have nothing in common, or are even rivals. In Figure 8 we can see the strongest head-to-head relationships in our corpus between tasks and algorithms, respectively. One example is the relation between grammars and neural machine translation: these are rarely treated together in studies; more than that, while neural machine translation shows a recent increase in popularity, grammars are on a declining trend.
Arms race
An arms-race relation characterizes topics that are correlated in their usage over time, but do not tend to co-occur within the same documents. Topics in this type of relationship tend to evolve in a similar pattern over time, possibly with an underlying common cause, even though they are not directly related: such is the case of many algorithms which were widely used before being recently replaced by neural networks. Figure 9 shows two such pairs of topics, phonology with semantic role labelling, and topic models with dependency parsing, which show similar decreasing trends but are not referred to in the same articles.
Trysts
A tryst is a relationship between topics which tend to co-occur in the same texts, but are anti-correlated in prevalence over time.
According to our study, this is one of the most interesting relations occurring between scientific topics, and we propose that it is useful for discovering topics that are replaced by others: topics which share a common niche of the research field, but where, as one topic rises, the other declines. In Figure 10 we see two such relationships, which uncover interesting topic pairs. One is statistical machine translation versus neural machine translation, a case in which one topic in the machine translation subfield has recently replaced the other as the primary focus of researchers. A similar phenomenon may have occurred for data-typed topics related to language resources: while dictionaries are overall more studied, they are on a decreasing trend, and have now been surpassed in popularity by parallel corpora.
Relations between conferences
Conference divergence
In this part of our study we focus on the relations between conferences in computational linguistics. We compute the divergence between conferences using the Jensen-Shannon divergence applied to the topic distributions generated from the papers published in each conference. The Jensen-Shannon divergence is computed as the average of the KL divergences between each of the distributions and the average of the distributions. Its value is 0 for identical distributions and grows as the two distributions differ (it is bounded above by ln 2). Figure 11 shows the pairwise divergence over time between the computational linguistics conferences (panel a), as well as between the linguistics conferences and NeurIPS (panel b). The span of each pairwise divergence plot is limited to the span of the youngest conference in the pair; the values are smoothed using a rolling average with a window of 2 years. The plot reveals a decreasing trend for all conference pairs. ACL and COLING are the conferences with the oldest history, and show a steady but mild decrease in divergence throughout their evolution. The most similar conferences are shown to be ACL with EMNLP and with NAACL, which also show the steepest decrease in divergence. We further extend our study to contrast the computational linguistics conferences with NeurIPS. It is interesting to see that, even though computational linguistics and neural methods are technically distinct fields, the linguistics conferences still tend to converge with NeurIPS over time (although the absolute divergence between them is still considerably higher than among the computational linguistics conferences). The most similar conference to NeurIPS in terms of the topics approached seems to be EMNLP, which from its beginning was the closest to NeurIPS among all linguistics conferences. This is perhaps explained by the more applied character of EMNLP compared to the others. In contrast, COLING, the oldest and most linguistics-focused of the conferences, is the least similar to NeurIPS, although it too shows a tendency towards decreasing this gap.
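As a minimal sketch of the divergence computation (ours; p and q stand in for two conferences' normalized topic distributions for a given year):

```python
# Minimal sketch of the Jensen-Shannon divergence between the topic
# distributions of two conferences. SciPy returns the JS *distance*
# (the square root of the divergence), hence the squaring.
import numpy as np
from scipy.spatial.distance import jensenshannon

def js_divergence(p, q):
    return jensenshannon(p, q, base=np.e) ** 2

p = np.array([0.5, 0.3, 0.2])  # illustrative 3-topic distributions
q = np.array([0.2, 0.3, 0.5])
print(js_divergence(p, q))     # 0 for identical distributions, at most ln 2
```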
Synchronicity of topics across conferences
Next, we introduce a second measure of similarity between conferences, this time over particular topics, in order to understand whether conferences are synchronized in the topics they approach, and whether this depends on particular sets of topics. Similarly to the measure of correlation used in the topic relationship analysis, the correlation between conferences for a subset of topics T is computed as the average, over the topics in T, of the prevalence correlation of each topic over time, this time between its evolution in the two conferences (or sets of conferences) to be analyzed, where the correlation between two conferences c1 and c2 for a certain topic t is defined as:

corr_t(c1, c2) = Σ_y (P(t|y, c1) − P̄(t|c1)) (P(t|y, c2) − P̄(t|c2)) / sqrt( Σ_y (P(t|y, c1) − P̄(t|c1))² · Σ_y (P(t|y, c2) − P̄(t|c2))² )

where P̄(t|c) denotes the mean of P(t|y, c) over the years y. Using this measure we analyze how similarly topics appear in different conferences over time, and whether they follow similar trends or even influence each other. With an average correlation across all topics between NeurIPS and ACL+ of 0.71, this measure also shows a fairly similar evolution of topics between the conferences overall. We should note, however, that the topics used in the analysis were generated only from ACL+ papers, so topics exclusive to NeurIPS are not considered. We then rank the topics in our list by the correlation of their evolution in NeurIPS versus ACL+; 5 of the 10 topics with the most correlated evolution are shown in Table 4.

Neural topics in computational linguistics versus NeurIPS
Neural networks are an interesting subset of topics, which have very quickly become popular in computational linguistics, and are today central foci of both ACL+ and NeurIPS. The average correlation between the ACL Anthology and NeurIPS for topics related to neural methods (neural networks, RNNs, neural MT and embeddings) is 0.58, which, interestingly, is lower than the overall correlation across all topics. We try to understand whether these conferences are synchronized in the way they approach topics, and hope to understand, by comparing their evolution, whether they mutually influence each other, especially regarding topics which are relevant for both. In order to analyze this phenomenon, we compute the correlation between the evolution of topics, this time introducing an artificial lag for the papers in the ACL Anthology. The correlation of topic time series is computed using a lagged definition of topic probability, in which P(t|y, c) is replaced by P(t|y + l, c), where l is a lag factor. Figure 12 shows the correlation between the evolution of topics after applying lags ranging from −25 to 25 years, for the full set of topics, as well as for the subset of topics related to neural networks. If there is any asynchronicity in the way topics appear in the two fields, the lag corresponding to the best correlation should help us find the delay with which topics gain popularity in the two conferences comparatively. In our case, the optimal lag value across all topics is found to be exactly 0, whereas for neural topics the optimal lag is 1 year, showing a slight delay in the adoption of neural-method-related topics in ACL+. Overall, the ACL Anthology and NeurIPS seem fairly synchronized when it comes to innovation in this area. A sketch of these correlation computations follows below.
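The per-topic correlation and its lagged variant can be sketched as follows. This is an illustration consistent with the formula above, not the authors' code; in particular, the sign convention for the lag is an assumption made here for the example.

```python
import numpy as np

def topic_correlation(series_c1: np.ndarray, series_c2: np.ndarray) -> float:
    """Pearson correlation between the yearly prevalences P(t|y, c) of a
    single topic in two conferences (the per-topic term of the formula above)."""
    return float(np.corrcoef(series_c1, series_c2)[0, 1])

def lagged_correlation(series_c1: np.ndarray, series_c2: np.ndarray, lag: int) -> float:
    """Correlation after shifting the first series: year y of c1 is paired
    with year y - lag of c2, so a positive lag means c1 trails c2."""
    if lag > 0:
        a, b = series_c1[lag:], series_c2[:-lag]
    elif lag < 0:
        a, b = series_c1[:lag], series_c2[-lag:]
    else:
        a, b = series_c1, series_c2
    return float(np.corrcoef(a, b)[0, 1])

# Hypothetical yearly prevalences of one topic in two venues, where the
# second venue leads the first by one year:
acl = np.array([0.01, 0.02, 0.05, 0.10, 0.20, 0.30])
neurips = np.array([0.02, 0.05, 0.10, 0.20, 0.30, 0.35])
best = max(range(-3, 4), key=lambda l: lagged_correlation(acl, neurips, l))
print(best)  # expected: 1
```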
Conclusions
We presented in this article an analysis of the topics found in computational linguistics conferences. We enhanced topics with their types by categorizing them into tasks, algorithms, and data, and showed how the field has evolved, uncovering general trends as well as unforeseen ones such as the abrupt rise of neural network methods. We also identified the most influential authors for each topic, which can provide interesting insights, assuming that the authors most cited in discussions of an idea carry a large share of the responsibility for introducing and promoting it. A more sophisticated method for identifying influential authors could include a normalization factor based on the number of citations. We additionally included a study of relations between topics and between subfields, to gain insight into the interplay between topics within and across fields. Our analysis confirmed the strong cooperative relationship between certain tasks and algorithms, such as neural machine translation and recurrent neural networks, but also revealed some interesting, less obvious ways in which topics relate, automatically identifying topics which replace others in the preference of scientists in a subfield (such as the change in paradigm for machine translation). In a separate experiment, we zoomed in on the topic of neural networks, and compared the evolution of this topic in computational linguistics conferences to its parallel development in a conference dedicated to neural networks: NeurIPS. Through the various complementary analyses we performed, we try to contribute to answering the question of how scientific topics emerge and gain traction, by considering internal as well as external factors and the scientific context in which trends appear and evolve. In the future, we would like to explore predictive models of which research topics will gain popularity in upcoming years. It would also be interesting to explore the effect of extracting more fine-grained topics, which could help with identifying more subtle trends; at the technical level, this would involve controlling the level of noise when increasing the number of topics. We will also explore in more depth the properties of the emerging network of topic relations, and the types of topics involved. Exploring more complex topic structures could help model more sophisticated notions such as scientific paradigms.
A Full list of topics

- Domain adaptation (task): domain adaptation adapt cross data share weight distribution multi scenario
- Automata (algorithm): string transformation finite operation transducer stre match regular weight symbol
- Morphology (task): morphological arabic morpheme stem suffix morphology prefix root affix inflection
- Multi-word expressions (task): expression collocation literal metaphor idiom mwe multiword descriptor mwes compositional
- Sentiment analysis (task): sentiment negative positive opinion polarity lexicon subjective classification subjectivity neutral
- Trees (algorithm): tree node child root subtree parent forest leaf branch depth
- Reinforcement learning (algorithm): action agent dialog policy reward instruction environment goal human reinforcement
- SVMs (algorithm): kernel svm bag vector bow space reranke linear clue support
- Linear programming (algorithm): constraint solution variable solve inference constrain ilp hard linear soft
- Argument mining (task): claim essay argument stance email evidence support debate statement topic
- Topic models (algorithm): document topic lda collection distribution topical latent content coherence background
- Clustering (algorithm): cluster clustering group induce merge class partition gold induction centroid
- Language acquisition (task): student author learner simplification write native readability grade complex read
- Generation (task): generation generator content record surface realization choice plan selection component
- Named entity recognition (task): token joint ner span crf sequence labeling normalization pipeline crfs
- Discourse segmentation (task): segment segmentation boundary unit length break sequence segmenter segmented window
- Events/temporal (task): temporal anchor event tense expression interval causal date day reference
- Phonology (task): letter phoneme syllable pronunciation phonetic vowel phonological stress consonant sound
- Stylistics (task): emotion social gender age group emotional participant people relationship person
- Unification (task): grammar unification head formalism cat description hpsg sign definition constraint
- Language models (task): gram probability bigram lm perplexity trigram unigram estimate vocabulary smooth
- Textual entailment (task): entailment inference hypothesis game textual player rte premise entail team
- Biomedical (task): cue medical citation abstract patient scientific scope biomedical cite article
- Anaphora/coref. resolution (task): pronoun mention antecedent coreference resolution coreference resolution anaphor resolve anaphoric reference
- Dependency parsing (algorithm): dependency parser parse head treebank tree dependent projective arc accuracy
- Database/resources (data): template database logical hybrid variable city expression meaning sql equation
- Social media/web data (data): user response post comment message conversation thread interaction feedback reply
- Summarization (task): summary summarization document rouge compression content length extractive human duc
- Spelling correction (task): error edit correction spelling revision rate confusion preposition incorrect learner
- Evaluation/annotation (task): human metric paraphrase reference correlation quality automatic judgment judge rating annotation annotator annotate agreement annotated gold scheme guideline automatic manual
- Semantic role labelling (task): argument predicate role arg srl syntactic identification propbank labeling core
- Discourse (task): discourse relation coherence connective implicit unit explicit paragraph marker rhetorical
- Syntactic structure (task): noun adjective compound head modifier modifi nominal determiner proper adverb verb subject object class preposition verbal noun passive argument syntactic syntactic linguistic syntax grammatical construction structural lexical deep surface phenomenon
- Lexical semantics (task): similarity vector cosine distributional sim distance weight relatedness space lsa
- Learning (algorithm): weight log parameter objective loss update optimization linear optimize paramet
- Probabilistic models/distributions (algorithm): distribution probability sample variable latent prior parameter estimate inference generative
- Statistical MT (task,algorithm): alignment align link probability ibm aligned null correspondence aligner heuristic translation translate quality mt target statistical translator smt reference bilingual translation bleu reorder decode smt hypothesis side decoder target chinese
- Transfer learning (algorithm): target transfer projection mapping project side map direct ds auxiliary
- Speech recognition (task): speech recognition speaker asr speak utterance acoustic transcript transcription prosodic
- POS tagging (task): tag pos tagger chunk accuracy tagging speech unknown tagset sequence
- (unlabelled): treebank wsj fragment accuracy bracket pcfg probability np penn treebank head
- Lexicons (data): lexical lexicon item entry lex lexeme coverage associate derive substitution
- Constituent parsing (algorithm): clause constituent head relative coordination subject element position complement mark parse parser grammar chart parsing span tree syntactic stage
- Multilinguality (task): resource french spanish multilingual pivot german corpora italian dutch portuguese
- (unlabelled): candidate rank selection ranking denote weight framework ranker probability combination
- Embeddings (algorithm): vector embedding matrix embed space dimension vec dimensional mikolov tensor
- Plan-based dialogue (task,algorithm): dialogue utterance act speaker plan turn goal belief conversation request
- Question answering (task): question answer passage question answere match paragraph trec reason factoid relevant
- Event extraction (task): event trigger mention extraction document ace attack argument entity relevant
- Grammars (algorithm): grammar derivation symbol terminal production nonterminal free cfg adjoin string
- Logical forms (algorithm): formula logic interpretation logical scope operator theory proposition predicate expression
- Knowledge base (data): entity mention wikipedia link person kb article document page title
- Information extraction (task): pattern seed extraction acquire acquisition bootstrappe web relationship discover match
- Applications (task): user tool module interface component support file format display design
- Disambiguation (task): interpretation ambiguity ambiguous processing preference strategy disambiguation attachment mechanism heuristic
- Graphs/AMR (algorithm): graph edge node vertex graphs connect amr weight propagation link
- Neural networks (algorithm): network layer neural cnn architecture rnn vector deep hide embedding
- Narratives (task): story genre book expert worker movie narrative human collect crowdsource
- Ontologies (algorithm): concept attribute hierarchy ontology taxonomy conceptual hypernym relation hierarchical link
- Prediction (task): predict prediction accuracy regression predictor error linear predictive variable effect
- Quantitative analysis (algorithm): frequency count probability distribution estimate occurrence corpora association statistical log
- Vision/multimodal (task): image visual video caption multimodal modality fusion textual human modal
- Parallel corpora (data): parallel bilingual monolingual corpora cross lingual keyphrase comparable translation resource extraction
- Neural MT (task,algorithm): decoder encoder nmt sequence neural attention bleu decode rnn vocabulary
- Recurrent neural networks (algorithm): lstm attention vector memory embed mechanism embedding weight layer encode
- Complexity analysis (task): cost memory speed index fast run bit store key efficient
- Opinion mining (task): review aspect product rating restaurant opinion customer rationale hotel service
- Social media (data): tweet twitter social media user twitt message hashtag detection post microblog
- Transliteration (task): character chinese transliteration oov hindi unknown accuracy char urdu ctb
- Dictionaries (data): dictionary definition code entry link cod dictionarie analogy database bank
- Relation extraction (task): relation extraction triple relational tuple open express relationship distant supervision rel
- Historical linguistics (task): change family lemma cognate russian czech linguistic historical distance swedish
- Wordnet/disambiguation (task,algorithm): sense wordnet sens disambiguation synset wsd gloss disambiguate resource ambiguous
- Dependency parsing (algorithm): search transition stack beam prune action shift greedy partial configuration
- Information retrieval (algorithm): query search web retrieval document page relevant retrieve relevance engine
- Supertagging (algorithm): category ccg np derivation supertag composition lexical ambiguity supertagger ccgbank
- Asian languages (task): japanese expression korean bunsetsu particle accuracy wo marker element noun
- Classification (algorithm): class classification classifier accuracy classify svm binary decision classifi combination
- Sequence analysis (algorithm): sequence local position global distance length chain sequential gap permutation
- Frame semantics (algorithm): frame slot schema filler framenet fill intent element role slu
- Dynamic programming (algorithm): path factor ij lattice tuple cache length denote space dynamic
- News articles (data): article news company year political country day people issue market
- Scene description (task): object description property expression µi µi reference scene referent location spatial
- (unlabelled): oo, ooooo, uooo, oo, uu, uuuu
- (unlabelled): precision, recall, match, detection, filter, extraction, threshold, detect, confidence, identification
- (unlabelled): keyword, title, conference, computational linguistic, page, proceeding, tutorial, university, year, processing
- (unlabelled): german, read, incremental, reading, die, prime, processing, der, surprisal, field
8,895.6
2021-01-01T00:00:00.000
[ "Linguistics", "Computer Science" ]
Timing of the Saalian and Elsterian glacial cycles and the implications for Middle Pleistocene hominin presence in central Europe

By establishing a luminescence-based chronology for fluvial deposits preserved between the Elsterian and Saalian tills in central Germany, we obtained information on the timing of both the Middle Pleistocene glacial cycles and early human appearance in central Europe. The luminescence ages illustrate different climatically driven fluvial aggradation periods during the Saalian glacial cycle, spanning from 400 to 150 ka. The ages of sediments directly overlying the Elsterian till are approximately 400 ka and prove that the first extensive Fennoscandian ice sheet extension during the Quaternary correlates with MIS 12 and not with MIS 10. Furthermore, the 400 ka old fluvial units contain Lower Paleolithic stone artefacts that document the first human appearance in the region. In addition, we demonstrate that early MIS 8 is a potential date for the onset of the Middle Paleolithic in central Germany, as Middle Paleolithic stone artefacts are correlated with fluvial units deposited between 300 ka and 200 ka. However, the bulk of Middle Paleolithic sites in the region date to MIS 7. The fluvial units preserved directly under the till of the southernmost Saalian ice yield an age of about 150 ka, and enable a correlation of the Drenthe stage to late MIS 6.

Results

Stratigraphy. The lithology and facies architecture of the SMT, as well as the stratigraphical position of associated Paleolithic artefact finds, have been well documented in several studies 14,16,17,32,39,42,50,52,53. The very bottom part of the SMT, where preserved, is formed by the Corbicula fluvial unit, which contains warm-stage fossils 54,55. This points to an initial formation of the SMT under warm climate conditions, although reworking of the fossils must be considered. All overlying units of the SMT have been attributed to cold climate 56, as evidenced by the fluvial facies architecture and cryoturbation features, including several levels of ice wedges. Furthermore, mammal remains such as Mammuthus primigenius or Ovibos moschatus are suggestive of the formation of the SMT in a periglacial environment. Only a small number of infrared-radiofluorescence ages of SMT sediments, ranging from 306 ± 23 ka to 227 ± 15 ka 57, are available from different sites in eastern Germany; however, these ages were presented without stratigraphical context, making their interpretation challenging. The currently exposed lithostratigraphy of the SMT sites of Rehbach, Zwenkau and Schladebach/Wallendorf was documented during recent luminescence sampling campaigns. The site of Markkleeberg is no longer accessible, and luminescence samples were provided by the Freiberg (Saxony) dating laboratory. The lithological description of the site is based on the work of E. Miersch 58. The sedimentary units of all sites under investigation are outlined in Fig. 2, which also illustrates the stratigraphical position of stone artefacts.

Luminescence chronology. Schladebach. The luminescence samples taken from the very bottom part of the SMT exposed at Schladebach, which contain reworked clay- and organic-rich sediments and directly cap the Elsterian till, yield ages ranging from 387 ± 42 ka to 447 ± 52 ka (Fig. 2; Supplementary Table S2). The ages overlap within their error ranges and point to an initial period of fluvial aggradation of the SMT during MIS 11 or early MIS 10 59.
The decalcification of the basal gravels of the Schladebach/Wallendorf SMT and the presence of mollusks 60 support an onset of the fluvial aggradation during moderate climate conditions 33. These luminescence ages deliver a minimum age for the embedded Lower Paleolithic stone artefacts of approximately 400 ka. The pIRIR290 luminescence age estimates obtained from the upper and middle parts of the Schladebach section are 343 ± 42 ka, 360 ± 34 ka and 338 ± 31 ka. This age cluster points to the aggradation of several meters of sand and gravel during MIS 10.

Rehbach. The luminescence ages of Rehbach are indicative of four phases of fluvial aggradation. The ages obtained from the very bottom part of the SMT, preserved fragmentarily above the Elsterian till, are 423 ± 48 and 387 ± 48 ka, and are similar to the corresponding luminescence ages obtained at Schladebach. Therefore, at Rehbach the initial formation of the SMT also correlates to MIS 11/early MIS 10. The capping fluvial unit is dated to 239 ± 31 ka and correlates to late MIS 8 or MIS 7; a significant chronological gap of around 150 ka is therefore documented within the Rehbach sedimentary sequence. The central part of the SMT at Rehbach is dated to 173 ± 16 ka, suggesting a third period of fluvial aggradation during MIS 6. The top part of the SMT, which directly underlies the till of the southernmost ice sheet extension (Zeitz phase) during the Saalian glacial cycle, yields luminescence ages of 144 ± 13 ka and 160 ± 13 ka. These ages correlate to a later period of MIS 6, and constitute maximum ages for the till of the Zeitz phase (Drenthe).

Zwenkau. At Zwenkau, only approximately 3 meters of fluvial sand and gravel are exposed. The bottom portion was dated to 280 ± 45 ka (MIS 8). The capping unit, here discordantly underlying the Saalian till of the Zeitz phase, is dated to 185 ± 22 and 183 ± 21 ka. This suggests that the main part of the preserved SMT at Zwenkau dates to early MIS 6 and can be chronostratigraphically correlated to the middle part of the Rehbach section. Late MIS 6 fluvial sediments are not preserved at Zwenkau.

Markkleeberg. The archeologically important site of Markkleeberg is no longer accessible due to the closing of the former pit. The dating material from the SMT was taken by Matthias Krbetschek (1956-2012) from the University of Freiberg (Saxony) in the early 2000s. Some of the K-feldspar samples were kindly provided to us by Freiberg University. Unfortunately, no material was available from the bottom portion of the SMT, from which most of the stone artefacts derive. The only chronological information on this basal unit is based on preliminary infrared-radiofluorescence (IR-RF) ages 42,57,61 pointing to an aggradation around 250 ka (236 ± 23 ka) 61. However, a quality validation of this IR-RF age is not possible. The pIRIR290 luminescence ages from the top part of the SMT at Markkleeberg are 217 ± 24 ka (the sediments underlying the Markkleeberg silt and cryoturbation horizon), while the unit concordantly underlying the Saalian till yields luminescence ages of 164 ± 16 and 159 ± 17 ka. The pIRIR290 ages are in very good agreement with the IR-RF ages obtained from the same samples presented by Krbetschek et al. 61.

Archaeology. From the middle of the 20th century until today, more than 6700 Lower Paleolithic artefacts have been recovered from the base of the SMT sequence and the coarse gravel dump at the gravel pits of Schladebach and Wallendorf 32,33.
During luminescence sampling and additional surveys, a number of additional finds were documented, including 6 fragmented animal bones, 10 simple flake cores (Supplementary Figures S23-S25), 16 flakes and 3 tools (Fig. 3A), all of which derive from the exposed basal parts of the SMT sequence (see Supplementary Figures S19-S21). The artefacts are primarily of Lower Paleolithic character, though one flake has a slightly faceted platform (see Supplementary Tables S3 and S4), indicating a trend towards the Middle Paleolithic. This slight trend has already been observed in the earlier collections 32,33. The luminescence ages of approximately 400 ka (Fig. 2) indicate a human presence in central Germany during MIS 11 or early MIS 10 that is characterized by a Lower Paleolithic stone tool industry including sparse MP features 32,33 (Fig. 4). No knapping sites were documented in the basal gravels, either within our own or previous surveys. Instead, the artefacts are scattered along the whole basal SMT sequence. Sharp-edged artefacts are rare in the older collections 33. The artefacts recovered during our own survey (see Supplementary Tables S3 and S4) are a mixture of lightly edge-damaged pieces (e.g. Fig. 3A-7), edge-damaged flakes (e.g. Fig. 3A-2) and rolled artefacts (e.g. Fig. 3A-5). These indications show that the artefacts were predominantly reworked during the onset of the fluvial aggradation. Therefore, they are likely slightly older than the ages of the SMT at the Schladebach pit presented here. In other words, the ages for the sediment that contains the artefacts should be interpreted as a minimum age for the human presence at the site. As the basal sediments of the SMT suggest temperate climatic conditions 60, and the artefacts are found in sediments overlying Elsterian glacial deposits, an MIS 11 age for the artefacts can be inferred. Given the fact that the artefacts only occur within the basal layers of the sequence, and that the ages for the whole sequence are older than MIS 9 (except the age on top of the gravel accumulation; Fig. 2), an age younger than MIS 10 for the artefacts, and hence for the human presence, can be excluded.

(From the caption of Fig. 2:) The ages are listed in Supplementary Table S2. The green-colored ages are infrared-radiofluorescence ages formerly presented by Krbetschek et al. 61 (see Results). At Schladebach and Rehbach, the Saalian Main Terrace visibly caps the Elsterian till. Only the first Elsterian till is preserved, whereas the upper Elsterian till was eroded and only a boulder pavement remains as a till residuum. At Markkleeberg, only the boulder pavement is preserved. At Schladebach and Rehbach, fragments of the ice-dammed lake sediments of the Dehlitz-Leipzig warved clay 41 are preserved below the SMT. The SMT itself is mostly horizontally bedded but also shows some cross-bedding. The upper fluvial sand and gravel are characterized by permafrost features such as ice wedges, and at Rehbach and Markkleeberg the "Markkleeberg cryoturbation horizon" is preserved. The Markkleeberg cryoturbation horizon is a widespread, silt-rich unit that was not found at the Zwenkau outcrop. The SMT is concordantly capped by the Böhlen warved clays and the up to 2 m thick Saalian till (Drenthe). The fluvial sequence in Markkleeberg was formed by the Pleiße/Gösel river system.
The gravels of the SMT at Rehbach and Zwenkau were deposited by the Weiße Elster river, and their petrographic composition shows a high percentage of Nordic material (e.g. flint) as well as quartz, greywacke and chert (Kieselschiefer), documenting the catchment area of the Weiße Elster river in eastern Thuringia and the Vogtland. The SMT deposits at Schladebach correlate to the Saale-Unstrut river system 14. The gravel composition of these deposits is characterized by high percentages of limestone originating from the Triassic sediment formations located south of the section, in addition to Scandinavian rock components including flint.

In conclusion, Schladebach/Wallendorf is now the oldest directly dated archaeological site in central Germany 27 and is among the oldest sites in central Europe (the Biśnik cave in Poland provided some similar and some older dates, predating MIS 12 62). From the Middle Paleolithic site of Markkleeberg, we were able to date two of the three reported find layers 49 present in the SMT sequence (Fig. 2). The upper find horizon, where artefacts were potentially preserved in primary position, yields an age of about 160 ka. This is evidence for human presence in central Germany during cold conditions prior to the first Saalian ice advance. The upper part of the middle find horizon, where rolled artefacts in secondary context were excavated 49, has an age of approximately 220 ka. Additionally, the IR-RF age (236 ± 23 ka) from the bottom part of the sequence (Krbetschek et al. 61) indicates an MIS 7 age for these artefacts. The MP artefacts from the former open-cast brown coal mine of Zwenkau were recovered from the base of the exposed SMT sequence. The already published assemblages 50 were named after the former villages within the Zwenkau quarry, "Eythra" and "Bösdorf", which were destroyed by mining. The date for the base of the SMT presented here (Fig. 2) provides an age of 280 ± 45 ka for the artefacts, which are characterized by a sophisticated MP bifacial tool production system 50. In the area of the western slope, from which the luminescence samples derive, a bifacial scraper was found in stratigraphic position at the basal part of the SMT during earlier surveys (Fig. 3C; pers. comm. W. Bernhardt, November 20th 2017, who recovered and documented the artefact). The luminescence ages demonstrate the presence of humans with a fully developed MP stone tool industry between early MIS 8 and MIS 7 in central Germany (Fig. 4). During sampling for luminescence dating in the gravel pit of Rehbach, a typical MP Levallois core and a flake were found (Fig. 3B) within dislocated SMT basal gravels. Additionally, a damaged, potentially artificial flint flake was found in stratigraphic position within the approximately 240 ka old gravel unit (see Supplementary Figure S27), which makes a correlation of all artefacts to that unit likely. The age of around 240 ka for the basal fluvial sediments at Rehbach is younger than the age of around 280 ka obtained for the basal sedimentary sequence at Zwenkau, which is notable if the sites are part of the same fluvial terrace. Nevertheless, the age for the find-bearing basal part of the SMT at Rehbach of 239 ± 31 ka overlaps, within errors, with the age obtained for the basal part of the SMT at Zwenkau (Fig. 2; Fig. 4). This overlap confirms the MP human presence in central Germany around MIS 8-7.
Discussion and Conclusion

Implications of the SMT chronology for the timing of the Fennoscandian ice sheet extension, and the age of the Holsteinian. The ages of the lowermost gravel unit of the SMT, ranging from 447 ± 52 ka to 387 ± 48 ka, post-date the Elsterian till. This highlights the fact that the Elsterian glacial cycle cannot be correlated with MIS 10 and therefore correlates to MIS 12. These age estimates deliver the first resilient chronological control for the Elsterian glacial cycle from its type region and demonstrate that there is only a limited temporal gap between the end of the Middle Pleistocene Revolution and the onset of the major Middle Pleistocene glaciations in Europe. As the Elsterian glacial cycle is terminated by the Holsteinian interglacial, the ages presented here support the suggested correlation of the Holsteinian with MIS 11. The top part of the SMT yields luminescence ages ranging from 164 ± 16 ka to 144 ± 13 ka, and was deposited during later MIS 6. The stratigraphic position of these layers, directly under the Saalian till or Saalian warved clays, shows that the southernmost Saalian ice advance in the region occurred during late MIS 6, hence at the end of the Saalian glacial cycle. The new data deliver important supplementary ages to the chronological data recently published by Lang et al. 63 for the extension of the Saalian ice sheet into Germany. The pIRIR290 luminescence ages demonstrate that there might have been spatial and chronological variations in the Middle Pleistocene Fennoscandian ice sheet oscillations. For the region of northern Germany, Roskosch et al. 64 documented a higher number of Middle Pleistocene ice advances relative to our study area. More resilient chronological data on sediments correlating to ice sheet fluctuations along a European transect will be needed to better understand these spatial differences in the future.

Driving forces and timing of fluvial aggradation within the SMT. The luminescence age estimates of the SMT point to several periods of fluvial aggradation spanning from about 400 ka to 150 ka. The exposed fluvial sand and gravel cover a time span of approximately 250 ka, and the accumulation of the sedimentary sequences might be explained by the interplay of relatively rapid fluvial aggradation interrupted by periods of erosion or reduced fluvial activity.

MIS 11-10. At Rehbach and Schladebach, the very basal part of the exposed fluvial units contains gravel and sand, partly including reworked clay-rich and rarely organic deposits, which are indicative of the onset of increased fluvial activity, including the reworking of warm-stage (Holsteinian) deposits, during the MIS 11-MIS 10 transition. The dating precision does not allow for a separation between MIS 11 and early MIS 10. However, an increased mobilization of gravel and corresponding aggradation is more likely to correspond to the climatic shift towards colder climatic periods 65 than to an interglacial with presumably stabilized landscape surfaces. The pIRIR290 luminescence age estimates obtained from the upper and middle parts of the section at Schladebach overlap within error and are indicative of the aggradation of several meters of sand and gravel during MIS 10. Ice wedges inside the sedimentary sequence support the idea of fluvial activity at Schladebach under periglacial, cold climate conditions.
The luminescence ages do not allow us to distinguish between various aggradation periods during MIS 10, but the ice wedges at different elevation levels suggest a periodically stable terrace surface. Generally, several fluvial aggradation events during MIS 10 can be assumed. MIS 6 aggradation is shown at Rehbach, Zwenkau and Markkleeberg. The fluvial sequence at Rehbach represents the climatic shift from early to late MIS 6 in the middle to upper part of the sedimentary sequence. The deposits preserved in the middle part of the section, dated to 173 ± 16 ka, show no permafrost features such as ice wedges, whereas the upper part, dated to 144 ± 13 and 160 ± 13 ka, represents permafrost conditions towards the glacial maximum of MIS 6. Fluvial aggradation in the study area may have been driven mainly by climatic shifts towards colder climate conditions during the Middle Pleistocene. The most rapid fluvial aggradation documented in this study occurred during MIS 10 and MIS 6.

Interpretation of the ages for the Middle and Lower Paleolithic stone artefact assemblages, and the implications for human dispersal into central Germany. The first hominin occupation of Europe and the subsequent human dispersal are discussed in several studies (27,28,66,67 among others); however, evidence for sites older than 250 ka in central Europe is relatively sparse 27,28. Therefore, the Schladebach/Wallendorf site, directly dated in the present study to MIS 11 or early MIS 10 (Figs 2 and 4), adds an important data point to the ongoing debate about the human occupation of and dispersal into central Europe. Another important site potentially attributed to the MIS 11 interglacial in central Germany 36 is the Thuringian travertine site of Bilzingsleben 68-70 (for a summary of the geology, the age estimations and the research history, see the article by C. Pasda 35). Unfortunately, 230Th/U dating on micro samples has thus far been unsuccessful, largely due to strong weathering of the travertine, resulting in open-system behavior for U 71. Nevertheless, because U-series activity ratios yielded values close to radioactive equilibrium, it was suggested that the age of the site is likely ≥300 ka 71,72. Although the anthropogenic origin of the majority of the artefacts has recently been questioned 35, human cranial remains prove the presence of humans in Bilzingsleben 73. A recent study of the megafauna from the site 74 confirms the natural accumulation of many of the bones and also demonstrates ephemeral evidence for a human accumulation agent. We provide evidence here that Wallendorf/Schladebach, with more than 6700 stone artefacts 32,33, confirms the presence of humans in central Germany around MIS 11 and/or early MIS 10. These humans had a Lower Paleolithic stone tool industry, with few MP features (e.g. prepared cores 32,33) at that time. Schöningen in Lower Saxony 75-77 is the next younger Lower Paleolithic site, confirming human presence in central Germany during the MIS 9 interglacial 34,36,78 (Fig. 4). Although the MIS 9 ages based on 230Th/U dating of peat 78 were recently rejected due to U-series open-system behavior 19, we favor the MIS 9 interpretation for the find layer of Schöningen based on: (1) the interpretation of the geological succession following the Elsterian sediments 36, (2) a palynological record which is distinct from the Holsteinian 79, and (3) the recently published MIS 9 ages based on luminescence dating of heated flint 34.
Besides exceptional faunal remains, such as teeth and a humerus fragment of a saber-toothed cat, the site is famous for the preservation of wooden spears 75,80. These thrusting spears demonstrate the manufacture and use of wooden hunting weapons in the Lower Paleolithic. As in Schladebach/Wallendorf, the stone artefacts from Schöningen are attributed to the (late) Lower Paleolithic: prepared cores are not present, the tools and the blank production are based on rather thick and broad flakes and, although evidence for bifacial shaping is present, handaxes themselves are missing 81. For the following MIS 8, we have archaeological evidence in central Germany (Fig. 4) from Rehbach and Zwenkau. Previously, the lower find horizon of Markkleeberg was suggested to have an early MIS 8 age as well 49, but the IR-RF ages of M. Krbetschek (Fig. 2) indicate an MIS 7 age for the lower find horizon. The age we present for the base of the SMT in Zwenkau is suggestive of an onset of the MP in the region in early MIS 8; however, the relatively skewed De distribution of sample L-Eva 1614 (overdispersion = 31%) may indicate an age overestimation. Recent evidence from the site Kesselt-Op de Schans, Belgium, also suggests an age for the Lower to Middle Paleolithic transition in northwestern Europe at the end of MIS 9 and the onset of MIS 8, about 280 ka 31. This matches the range of our age from Zwenkau. However, given the results of a recent comparative study of the early MP record in western and northern Europe 28, there are only a few sites that have layers correlated with MIS 8, and there is little evidence for human presence during the MIS 8 Pleniglacial. The majority of the early MP sites in Europe are in fact associated with MIS 7 28,30. Considering this European evidence, an MIS 7 age for the occurrence of the fully developed MP in central Germany is more likely. This is underlined by the fact that the date for the basal SMT of Zwenkau overlaps in its lower error range with the MIS 7 age of Rehbach (which points to the MIS 8/MIS 7 boundary), as well as with the MIS 7 IR-RF ages for the basal find horizon of Markkleeberg. The hypothesis is reinforced by the Thuringian travertine site of Weimar-Ehringsdorf. Several studies 71,72,82 have dated the formation of the travertine in Weimar-Ehringsdorf to MIS 7. The artefact assemblage found in several layers within the travertine of Weimar-Ehringsdorf is attributed to the Middle Paleolithic 83-85. Importantly, several of the human skeletal remains have Neanderthal features 86; among them are the cranial remains of a female and the partial skeletal remains of a child 86. The clearly identified Neanderthal features 87 of the human remains indicate that, with the emergence of the fully developed MP during MIS 8(?)-MIS 7 in central Germany, the Neanderthal lineage was linked to this stone tool industry. For the MIS 6 cold stage (Fig. 4), we have evidence from the upper find layer of Markkleeberg that humans adapted to the cold-stage environment and were present in central Germany shortly before the major Saalian ice advance around 150 ka. The finds were described as sharp-edged in situ artefacts, preserved in gravel lenses within small fluvial channels 49. Therefore, we can infer that the finds were not reworked and/or transported from an older find horizon.
From the archaeological evidence in central Germany and the dates presented in this study, we can infer, among other things, that the Lower Paleolithic human presence started in MIS 11 and lasted until about MIS 9.

Materials and Methods

Materials. Schladebach/Wallendorf. The gravel pit of Schladebach, Saxony-Anhalt (51°18′22.66′′N, 12°6′17.28′′E), is situated about 20 km west of Leipzig. Here, the SMT gravels of the Saale-Unstrut river system are exploited by mining. About 3 km to the north are the former gravel pits of Wallendorf 32,33. Here, more than 6700 artefacts were recovered from the base of the same SMT sequence and from the SMT coarse gravel dump of the pits. The artefacts were described as having a Lower Paleolithic character 32,33. The flakes were detached from simple flake cores, with changing striking platforms and directions. Among the tools, notched pieces are the most numerous, followed by simple scrapers. Nevertheless, some MP traits were reported 32,33: a small number of cores can be classified as prepared or Levallois cores (unfortunately, no numbers are given in the cited publications), and some flakes show facetted platforms. Additionally, some bifacially worked tools, such as simple handaxe-like forms, occur in the assemblage. During a survey at the exposed SMT base at the Schladebach pit, which was conducted by the authors in addition to the luminescence sampling, 10 cores (Supplementary Figures S23-S25), 16 flakes, 2 scrapers, and a potentially notched tool (Fig. 3A) were recovered (see also Supplementary Tables S3 and S4). The luminescence samples were taken from the exposed SMT sequence of the active gravel pit.

Markkleeberg. Markkleeberg (51°16′15.29′′N, 12°24′5.35′′E), Saxony, is situated about 8 km south of the center of Leipzig, and is one of the most important sites for the early MP in Germany. The first artefacts were found at the end of the 19th and the beginning of the 20th century 88 in several gravel pits within the SMT sequence (Pleiße/Gösel river system). These finds were comprehensively published by Grahmann in 1955 48. Later, from the end of the 1970s until the beginning of the 1980s, about 4500 artefacts were recovered and partly excavated from the base of the fluvial sequence 47. The artefact assemblages from Markkleeberg can be characterized as typical MP, with prepared or Levallois cores, scrapers and bifacial tools such as handaxes 47-49. The site of Markkleeberg is no longer accessible. The luminescence samples were taken by M. Krbetschek during fieldwork campaigns between 1999 and 2001, and were made accessible to us by the Freiberg (Saxony) dating laboratory.

Zwenkau. The MP artefacts were recovered in a former brown coal open-cast mine named Zwenkau, Saxony (51°14′11.68′′N, 12°16′11.88′′E), approximately 14 km south-east of the center of Leipzig. Within the open-cast mine, the SMT sequence of the Elster river system was accessible. Because villages were formerly situated in the area of the mine, the artefacts are published under the site names "Eythra" and "Bösdorf" 50 (Fig. 3C). 1850 artefacts 50 were recovered during surveys of the base of the SMT. The high proportion of bifacial tools is a particularly significant feature of this collection: about 50% of all tools are bifaces such as handaxes, knives and bifacial scrapers. Levallois cores are present, but are not very common. The majority of the cores are opportunistic flake cores with several exploitation surfaces (pers. comm. W. Bernhardt, Schkeuditz, who recovered and documented the artefacts).
The luminescence samples derive from a part of the SMT sequence which is preserved in the western slope of the former open-cast brown coal mine. In this area, a bifacial scraper was found in the profile of the basal part of the SMT (Fig. 3C; pers. comm. W. Bernhardt, November 20th 2017, who recovered and documented the artefact).

Rehbach. The gravel pit of Rehbach, Saxony (51°16′11.43′′N, 12°16′59.46′′E), is situated approximately 4 km north-west of the former Zwenkau coal mine, within the gravels of the same SMT (Elster river system) sequence. During sampling for luminescence dating, we recovered a typical MP Levallois core and a flake (Fig. 3B) from dislocated SMT basal gravels. Another flint flake was found in stratigraphic position within the SMT sequence (Supplementary Figure S27). The luminescence samples were taken from the exposed SMT sequence of the active gravel pit.

Methods

Luminescence sample preparation and instrumental details. Luminescence samples and material for gamma spectrometry were taken at the same positions. Sample preparation followed the common steps, including the removal of carbonates and organic matter in HCl and H2O2. To separate feldspar from heavy minerals and quartz, either density separation using lithium heterotungstate or, for samples from Markkleeberg and Zwenkau, the flotation technique 89 was used. All samples from the Markkleeberg section were etched for 40 min using 10% HF to remove the alpha-ray-affected outer rim of the coarse feldspar grains. Finally, the sample material was mounted on steel discs (aliquots) using silicone spray. Equivalent dose (De) measurements were undertaken using an automated Risø TL-DA-20 reader. The feldspar signal was stimulated using IR light-emitting diodes emitting at 870 nm (145 mW/cm2) and detected in the blue-violet wavelength region. Irradiation was provided by a calibrated 90Sr/90Y beta source with a dose rate of ~0.24 Gy/s.

Dose rate determination. Dose rates were determined based on high-resolution germanium gamma-spectrometric analysis of the activities of uranium, thorium, potassium, and their daughter isotopes. All samples were measured on bulk material at the "Felsenkeller" laboratory (VKTA) in Dresden. Dose rate attenuation by moisture was accounted for using water content values of 15 ± 10%. The cosmic dose rate contribution to the ionization of minerals was based on Prescott and Hutton 90. The internal potassium content was assumed to be 12.5 ± 0.5% 91. To account for alpha efficiency, an a-value of 0.11 ± 0.02 92 was used for non-etched samples, and the dose rate conversion factors were taken from Guérin et al. 93. The resulting luminescence age is the equivalent dose divided by the environmental dose rate, as sketched below.
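The age calculation underlying the reported dates combines the two quantities described in this section: the equivalent dose De (in Gy) and the environmental dose rate (in Gy/ka). The following sketch shows this computation with first-order error propagation; the numbers are hypothetical, chosen only to be of the order of the ages reported above.

```python
import math

def luminescence_age(de_gy: float, de_err: float,
                     dose_rate_gy_ka: float, dr_err: float) -> tuple[float, float]:
    """Luminescence age in ka: equivalent dose De (Gy) divided by the
    environmental dose rate (Gy/ka), with relative uncertainties combined
    in quadrature as a first-order approximation."""
    age = de_gy / dose_rate_gy_ka
    rel_err = math.sqrt((de_err / de_gy) ** 2 + (dr_err / dose_rate_gy_ka) ** 2)
    return age, age * rel_err

# Hypothetical values of the order reported in this study:
age, err = luminescence_age(de_gy=1200.0, de_err=100.0,
                            dose_rate_gy_ka=3.0, dr_err=0.15)
print(f"{age:.0f} +/- {err:.0f} ka")  # ~400 +/- 39 ka
```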
Equivalent dose measurements. Luminescence dating was applied to coarse-grained K-feldspar using the pIRIR290 approach, similar to Thiel et al. 45. Prior to the detection of the elevated-temperature feldspar signal at 290 °C for 200 seconds, the IR50 signal, which is most affected by anomalous fading, was depleted by stimulating with infrared diodes for 100 seconds. The measurement protocol is shown in Supplementary Table S1. The pIRIR290 approach was chosen for its high signal stability (negligible fading) and its suitability for dating Middle Pleistocene sediments 46. For equivalent dose measurements, grain size fractions of 180-250 µm were used for material from the sites Rehbach, Zwenkau and Schladebach/Wallendorf. For the samples from Markkleeberg, the 90-160 µm fraction was used. All equivalent dose measurements were conducted using very small aliquots with a diameter between 0.5 mm and 1 mm. Hence, only very few grains were placed on each aliquot, making it possible to detect any skewness in the equivalent dose distribution due to insufficient bleaching. For each sample, 3-5 artificial doses were administered to construct the dose response curve. At the end of the measurement cycle of the pIRIR290 SAR approach, the first artificial dose was administered again to measure the recycling ratio as a quality criterion. For final equivalent dose estimation, only aliquots yielding recycling ratios within 10% of unity were accepted. Equivalent dose overdispersions are mostly >25% (see Supplementary Table S2) and the equivalent dose distributions indicate mostly sufficient bleaching (see Supplementary Figures S1-S18). Therefore, the Central Age Model 94 was used for age calculations. Dose recovery tests were conducted on samples L-Eva 1594, 1599, 1600 and 1612. For these tests, the sample material was bleached for 24 hrs under a solar simulator (UVA cube 400). The remaining dose residuals were then measured on 3 aliquots from each sample. A further 3 aliquots were irradiated with a known dose close to the assumed natural one, and the precision of the recovered dose was then tested. All residual-subtracted measured-to-given dose ratios are within ±10% of unity, at 1.08 ± 0.04 (L-Eva 1594), 1.09 ± 0.07 (L-Eva 1599), 0.93 ± 0.22 (L-Eva 1600) and 1.09 ± 0.02 (L-Eva 1612). The presented ages are not fading-corrected, as the obtained pIRIR290 fading rates are low and fading is interpreted to have only a negligible effect on the equivalent doses. Additionally, pIRIR225 measurements 43,44 were conducted on samples L-Eva 1594 and L-Eva 1597 (Rehbach gravel pit). The pIRIR225 ages are presented and discussed in the supplementary part of the paper.

Data Availability. The datasets generated during the current study are available from the corresponding author on reasonable request.
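As a supplement to the equivalent dose analysis described in the Methods, the following sketch illustrates the Central Age Model (ref. 94) used for the age calculations. It is an illustrative reimplementation with hypothetical input values, not the code used in this study.

```python
import numpy as np
from scipy.optimize import minimize

def central_age_model(de, de_err):
    """Central Age Model sketch: the log equivalent doses are modelled as
    normal with mean mu and variance sigma^2 + s_i^2, where s_i is the
    relative error of each aliquot's De and sigma is the overdispersion.
    The parameters are estimated by maximum likelihood."""
    de = np.asarray(de, dtype=float)
    z = np.log(de)                            # log doses
    s = np.asarray(de_err, dtype=float) / de  # relative errors ~ log-errors

    def neg_log_lik(params):
        mu, log_sigma = params
        var = np.exp(log_sigma) ** 2 + s ** 2
        return 0.5 * np.sum(np.log(var) + (z - mu) ** 2 / var)

    res = minimize(neg_log_lik, x0=[z.mean(), np.log(0.1)], method="Nelder-Mead")
    mu, sigma = res.x[0], np.exp(res.x[1])
    return float(np.exp(mu)), float(sigma)    # central De (Gy), overdispersion

# Hypothetical aliquot doses (Gy) with individual 1-sigma errors:
de = [950, 1100, 1020, 1250, 980]
err = [90, 120, 100, 140, 95]
central_de, od = central_age_model(de, err)
print(f"central De = {central_de:.0f} Gy, overdispersion = {od:.2f}")
```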
7,509.6
2018-03-23T00:00:00.000
[ "Geology", "History", "Environmental Science" ]
Facies variability of Pennsylvanian oil-saturated carbonate rocks (constraints from Bashkirian reservoirs of south-east Tatarstan)

One of the strategic priorities for old oil-producing regions is further prospecting for potentially promising hydrocarbon areas. One such exploration area is the Volga-Ural region. Its reservoirs consist of Carboniferous carbonate rocks, which contain highly viscous hydrocarbons and are characterized by a complex facies architecture and reservoir properties influenced by diagenesis. The high degree of facies variability in the studied area prevents reliable prediction of the distribution of potential reservoir rocks, not only between different areas but even within the same oil field. Based on textural and compositional features of the carbonate facies, five main facies associations were identified and characterized with respect to the depositional settings in the Bashkirian basin. The facies associations correspond to: distal middle ramp facies, open marine proximal middle ramp facies, high-energy inner-shoal facies, inner ramp facies of restricted lagoons, and facies affected by subaerial exposure. From west to east across the study area, the following trends in facies character are identified: 1) a decrease in open marine middle ramp facies and in the total thickness of the Bashkirian sections; 2) an increase in evidence of subaerial exposure; 3) a decrease in the proportion of potential reservoir rocks. A general shallowing of the depositional setting was identified in an eastward direction, where potentially promising reservoir facies of shallow high-energy environments were replaced by facies of restricted lagoons and facies affected by subaerial exposure and meteoric diagenesis (palaeosols, dissolution). The applied approach, based on detailed carbonate facies analysis, allows prediction of the distribution of potentially promising cross-sections within the region.

Keywords: Bashkirian carbonate facies, reservoirs, correlation

At the same time, according to the authors, little attention is paid to carbonate reservoirs as objects that have significant prospects for oil production and can ensure the region's energy stability in the near future. These reservoirs include the regionally oil-bearing carbonate rocks of the Bashkirian stage. The difficulty of their exploration lies in the high facies variability of the reservoir rocks in the studied area and the difficulty of correlating facies from section to section. Attempts at correlation using different methods of comparing sections have been carried out by various authors (Mukhametshin, 1982; Kochneva, Koskov, 2013; Galkin, Efimov, 2015; Kolchugin, Morozov et al., 2013, etc.). These authors used correlation techniques based on statistical analysis of the deposits and on comparing data from geophysical well surveys, in which core analysis of the studied sections was not assigned a significant role. The authors of this article believe that a lithological-facies principle of comparing sections, based on core analysis, should be the basis for the correlation of deposits in the area. Such an approach allows qualitative subdivision of the studied sections and establishes the full variety of lithotypes composing the sections, together with the patterns of their change both vertically (along the section) and horizontally (over the area). (Fig. 1 shows the location of the studied deposits, the main tectonic elements of the region, and a brief lithological-stratigraphic characteristic of the Carboniferous system.)
The object of study was core material from Middle Carboniferous units of the Bashkirian strata. The sections were studied from wells with the most complete core coverage, since drilling of the Bashkirian section is often incomplete and limited to the productive zone. The studied cores characterize deposits located along a west-east line, from deposits on the eastern side of the Melekesskaya depression to deposits located within the South Tatar anticline (Fig. 1). The boundaries of the Bashkirian strata were identified from well-log data. The upper boundary is also reliably detected by core analysis: by the change from Bashkirian limestones to carbonate-clay strata of the Verey horizon (Moscovian), as well as by the change in fossils (Khalymbadzha, 1962; Khvorova, 1958). The thicknesses of the Bashkirian sections in the studied area average about 40 meters; however, there is a general tendency for the thicknesses to decrease (with minor variations) from west to east, from the eastern side of the Melekesskaya depression to the South Tatar anticline. Thus, the thickness of the studied Bashkirian sections varied from 60 to 18 meters. On the western slope of the South Tatar anticline, the studied sections comprise the successions of the Kamsky and Cheremshansky horizons. These horizons unconformably overlie the Serpukhovian strata. At the top of the sections, the Bashkirian strata are unconformably overlain by Moscovian strata (Geology of Tatarstan, 2003). It is believed that the sections of the Melekesskaya depression are more complete. In the upper Bashkirian section, the Melekessky horizon, up to 12 m thick, covers the Cheremshansky horizon (Geology of Tatarstan, 2003). However, the occurrence of the Melekessky horizon is noted in sections of the axial part of the Melekesskaya depression, and on the eastern side of the depression its thickness may be smaller or it may disappear completely. In practice, the horizons are not distinguished, owing to the small volume of paleontological studies and the difficulty of comparing fragments of the section using well-log data. Traces of regional unconformity are found in the sections: there are brecciated limestones and the loss of a certain group of fossils, according to V.S. Gubareva (Gubareva et al., 1982). In industrial practice, special studies of fossils are rarely undertaken. The authors propose a methodology for identifying patterns of variability and for the correlation of deposits, based on the identification and tracking of facies over the area. The approach is based on a qualitative description of core material and the analysis of petrographic thin sections. The practical side of the study is the possibility of using the proposed methodology to track potentially promising reservoir rocks over the area and, conversely, to identify areas of low prospectivity.

Research methods

Macroscopic study of core samples. The studied sections were characterized by continuously sampled core material with an actual core yield of 90-100%. This allowed the authors to carry out a qualitative macroscopic description of the sections.

Optical microscopic studies. Petrographic analysis of thin sections was made using an Axio Imager A2 polarizing microscope. Analysis of the thin sections included determination of the mineral composition, identification of the microtexture and structure of the rocks and of fossil fragments, and determination of facies. The structural classification of Dunham (Dunham, 1962), used by the international community and, in recent years, by most oil companies in Russia, was chosen as the carbonate classification (a simplified sketch of the classification rule is given below).
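The Dunham classification mentioned above assigns a depositional texture from a few observable criteria (organic binding, presence of carbonate mud, grain versus mud support, grain abundance). The following is a highly simplified sketch of that decision rule, written here for illustration; it omits Dunham's crystalline category and all boundary subtleties.

```python
def dunham_class(organically_bound: bool, contains_mud: bool,
                 grain_supported: bool, grains_over_10pct: bool) -> str:
    """Simplified decision rule for the Dunham (1962) textural
    classification of depositional carbonate rocks."""
    if organically_bound:
        return "boundstone"   # components bound during deposition
    if not contains_mud:
        return "grainstone"   # grain-supported, mud-free
    if grain_supported:
        return "packstone"    # grain-supported, with mud
    return "wackestone" if grains_over_10pct else "mudstone"

# Facies C described below: well-sorted, mud-free, grain-supported ooid
# and bioclast sands classify as grainstones.
print(dunham_class(organically_bound=False, contains_mud=False,
                   grain_supported=True, grains_over_10pct=True))  # grainstone
```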
The methodology of lithological-facies reconstructions. A lithofacies model was used to determine the facies type of the identified lithotypes. This model of facies distribution determines the presence of lithotypes in various physical and geographical settings, controlled by the morphology of the coastline, changes in the depth of the water basin, the topography of the seafloor, distance from land, etc. The distribution of facies follows a certain pattern under conditions of increasing basin depth. The authors created a distribution scheme for the facies of the Bashkirian sea in the studied area (Fig. 2), based on the analysis of several models of marine carbonate precipitation (Immenhauser et al., 2004; Della Porta et al., 2004). For convenience, the detected facies are labelled with the Latin letters A, B, C, D and E. A detailed interpretation of the facies is given in the Results section.

Results

Based on analysis of the composition of the rocks, the Bashkirian successions were formed under the conditions of a gently sloping carbonate ramp (Proust et al., 1998). Carbonate precipitation took place in normal marine, low-latitude environments (Kolchugin, Immenhauser et al., 2016). The studied region can be defined as a transition zone between the inner and middle ramp with typical carbonate sedimentation (Kolchugin, Della Porta et al., 2017). The authors propose to distinguish five main facies types, which differ in lithological and paleontological composition. Facies C is represented by well-sorted grainstones with cross lamination and an abundance of ooids, bioclasts, fragments of various grains and intraclasts. This facies is characterized by intergrain porosity and isopachous rims of marine fibrous cement (Fig. 3c). Facies E is composed of various types of limestones: breccias (Fig. 4A), mudstones and wackestones, sometimes boundstones (Fig. 4B), and packstones. All of these show traces of secondary iron mineralization and recrystallization. The rocks are often characterized by karstification cracks (Fig. 4C), traces of subaerial leaching, and fragments of paleosols and calcrete. The breccias are often characterized by black pebbles (Fig. 4A). The authors selected the most typical sections characterizing the eastern side of the Melekesskaya depression and the western slope of the South Tatar anticline, and analyzed the variability of the rocks across the studied area. One of the westernmost sections is the Bashkirian section of the Akansky deposit, located on the eastern side of the Melekesskaya depression. The Novo-Elkhovsky deposit was selected as the easternmost section in the studied area. A significant number of sections were studied between these end-member sections along the west-east line. The sections of the Ivinsky and Demkinsky deposits were chosen as models for the intermediate area, since they were best characterized by core samples (Fig. 5). It was possible to give a detailed description of the rock types and to establish the boundaries between lithotypes. An important feature of all the studied sections is the presence of traces of subaerial exposure. In the sections these are marked as facies E and highlighted in red. The proportion of facies E varies from section to section and generally increases from west to east.
Another feature is a decrease in the total thickness of the Bashkirian sections. The thickness of the deposits is 45-60 m on the eastern side of the Melekesskaya depression and does not exceed 20-25 m on the western slope of the South Tatar anticline. Discussion. The variability of the Bashkirian successions from west to east is mostly caused by greater intraformational erosion of strata within the western slope of the South Tatar anticline. This is indicated by an increasing share of brecciated limestones in the sections, traces of subaerial exposure, and meteoric diagenesis. This type of diagenesis often explains the lack of effective porosity in grainstones, which would seem to be the most promising rocks as potential reservoirs: the pore space of such grainstones is almost completely filled by early diagenetic calcite. The periodic emergence of rocks above sea level plays a negative role in the preservation of primary high porosity. Meteoric waters change the physicochemical parameters of precipitation conditions and produce recrystallization of rocks and calcite cementation, filling the pore space with secondary calcite (Badiozmani, Mackenzie, 1977; Moore, 1989). The presence of reddish rock colors likewise indicates periodic emergence above sea level, being caused by iron oxides and hydroxides that act as markers of subaerial exposure (Fig. 4). Breccias containing black pebbles are found in sections of the western slope of the South Tatar anticline. The black color of the pebbles is caused by humic organic matter (fragments of ancient paleosols). This indicates a relatively long period of continental to subaerial environments in which soils could form. The Bashkirian basin was an epicontinental basin with extremely small depth differences. Periodic glacioeustatic oscillations of the marine basin drained some areas, leading to the erosion of previously accumulated carbonates. Bashkirian time was a time of active sea-level fluctuations produced by global glaciation (Bishop, Montañez et al., 2009; Mii, Grossman et al., 2001). Probably, glacioeustatic oscillations were the key factor in sea-level change. In the western sections, traces of erosion are captured only in the form of thin brecciated limestones. The eastern sections, in addition to brecciated limestones, contain limestones with traces of secondary iron mineralization, limestones with polygonal cracks of the early stages of karstification, and a meteoric type of diagenesis (Fig. 4). This indicates relatively deeper marine environments in the west of the studied region (the modern eastern side of the Melekesskaya depression) and shallower ones in the east (the modern western slope of the South Tatar anticline). Moreover, the authors do not exclude that even more characteristic tracers of subaerial exposure and sea-level change could simply have been eroded. Correlation of sections between deposits is a difficult task because of the high degree of facies variability in the studied area. However, such correlation is quite feasible, based on the frequency of certain facies and the patterns of their change along the section, as well as on tracking the intervals of subaerial exposure (Fig. 5). It seems that the intervals of subaerial exposure can be considered as benchmarks for the correlation of sections. Study of the Bashkirian sections shows that all of them have at least two intervals of subaerial exposure, in the middle and upper parts of the sections.
Probably, these were the most pronounced stages of subaerial environments. A larger number of such intervals can be found in the eastern sections, located on the western slope of the South Tatar anticline; this is caused by the shallower marine environments of carbonate precipitation there. Moreover, such "regional breaks" in sedimentation are well distinguished in the sections and can be used to compare strata. In addition to facies changes within the selected west-east profile, variability in productivity and oil saturation is noted. First of all, this is connected with the potential reservoir rocks, represented by packstones and grainstones, which thin toward the east. While the packstones almost disappear in the eastern sections, the grainstones lose porosity under conditions of subaerial diagenesis. The industrial productivity of such sections is lost, and the rocks often show no signs of oil saturation at all. Conclusions. Analysis of the composition of the studied sections and of their position within the studied area allows several conclusions to be drawn. 1. A decrease in the share of normal marine environments is observed from the west (the eastern side of the Melekesskaya depression) to the east (the western slope of the South Tatar anticline), together with an increase in the share of restricted lagoon facies and subaerial exposures. In the same direction, a general decrease in the thickness of the Bashkirian sections is observed. 2. The industrial productivity of the sections and the overall oil saturation of the rocks decrease from west to east in the studied area. This is due to two main factors: 1) the lithological and facies composition of the section, with a decrease in the share of potential reservoir rocks (packstones and grainstones); 2) the type of diagenesis of the carbonate sediments, whereby potentially promising reservoir properties were lost under subaerial conditions and the influence of meteoric diagenesis. 3. The high facies variability of the Bashkirian strata was caused by global glacioeustatic sea-level fluctuations, the amplitude of which could reach several tens of meters. As a result, a significant part of the sections could be thinned (by up to 10-15 m) due to erosion.
3,465
2020-06-30T00:00:00.000
[ "Geology", "Environmental Science" ]
Global Bayesian Analysis of the Higgs-boson Couplings. We present preliminary results of a Bayesian fit to the Wilson coefficients of the Standard Model gauge-invariant dimension-6 operators involving one or more Higgs fields, using data on electroweak precision observables and Higgs-boson signal strengths. Introduction. After a decades-long hunt, in the summer of 2012 the physics world erupted in excitement when both the ATLAS and CMS experiments at the Large Hadron Collider (LHC) at CERN announced their discovery of a particle that looked like the Higgs boson (H) [1,2]. With the help of two-and-a-half times more data and sophisticated experimental analyses, it is now confirmed that the newfound particle behaves, indeed, very much like the Standard Model (SM) Higgs boson. That this Higgs boson decays to SM gauge bosons is now established with high statistical significance. In fact, each of the decay channels H → γγ, H → W+W− and H → ZZ is by now a discovery channel. There is also good evidence of its non-universal couplings to fermions: the decays to τ+τ− and bb̄ final states have also been seen with good confidence. Since the Higgs-boson mass (m_H) has now been measured, its couplings to SM particles are completely predicted, except for the residual arbitrariness introduced by the Yukawa couplings to fermions, which are nevertheless very constrained by the precise measurement of fermion masses. This means that any deviation from the SM predictions will provide unambiguous evidence for New Physics (NP). Unfortunately, large deviations from the SM expectations are already ruled out (except possibly in the couplings to light fermions and/or H → Zγ). This, in conjunction with the absence of any other direct NP signal so far, leads us to expect a deviation at the level of no more than a few percent. Hence, a rigorous study of the Higgs-boson couplings in Run II of the LHC and also in the high-luminosity phase is mandatory. Although new particles at the TeV scale or below are perfectly allowed by the LHC data, it is interesting to study the sensitivity of the current Higgs-boson related measurements to short-distance physics assuming an effective field theory framework. The effect of heavy NP (beyond the reach of the LHC for direct production) can be parametrized in terms of gauge-invariant higher-dimensional operators involving only SM fields. In this case, one supplements the SM Lagrangian with operators of mass dimension greater than 4, L_eff = L_SM + (1/Λ) Σ_i C_i^(5) O_i^(5) + (1/Λ²) Σ_i C_i^(6) O_i^(6) + …, where the C_i are the Wilson coefficients and Λ is the NP scale. In the SM there is only one operator of dimension 5, the celebrated Weinberg operator, which gives Majorana masses to light neutrinos [3]. As this operator is irrelevant for our discussion of Higgs physics, we will not consider it here. On the other hand, the number of dimension-6 operators is much higher: even for one generation the count of the total number of operators grows to 59 [4]. Adding general flavour structure increases this number to a gigantic 2499 [6]. For phenomenological explorations of some of these operators and related studies, see [7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,6,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36]. In the following section we will choose one operator basis and introduce the set of operators considered in this work. The experimental data used in our analysis will be discussed in Sec. 3. We will present our results in Sec. 4 and outline some concluding remarks in Sec. 5.
Operator basis. Several operator bases have been used in the literature to describe the physics of gauge-invariant dimension-6 operators in the SM [37,38,4]. In this work we concentrate on electroweak and Higgs-boson observables only. While, depending on the set of observables chosen for a specific study, one of these operator bases can be more convenient than others, physics should be basis independent. Moreover, we aim to study also other observables (e.g., flavour and other low-energy ones) in the near future. Therefore the choice of one operator basis is as good as any other for our purpose. As we do not want to introduce another new basis in the literature, we choose to adopt the fairly general basis introduced in Ref. [4]. As mentioned earlier, the total number of independent operators was shown to be 59. (The original work by Buchmuller and Wyler [5] had 80 operators, out of which only 59 were shown to be independent by the authors of [4].) The basis of Ref. [4] consists of 15 bosonic operators, 19 single-fermionic-current operators and 25 four-fermion operators for each fermion generation. Since in this study we limit ourselves to electroweak and Higgs-boson signal-strength observables (extending the previous work by some of us [8,28,32]), we consider only a subset of operators. In particular, we only consider operators involving one or more Higgs fields. Operators which involve fermionic fields are assumed to be flavour-diagonal and family universal. Moreover, we restrict this study to Charge-Parity (CP) even operators only. As the Wilson coefficients are generated at the scale Λ, ideally one should also use the Renormalization Group Equations (RGE) to evolve them from the scale Λ to the energy scale relevant for the process of interest. In this work we neglect the effect of RGE. Below, we introduce our notation and list all the operators relevant for our study. • Bosonic operators: O_HW = (H†H) W^I_μν W^{Iμν}, O_HB = (H†H) B_μν B^μν, O_HWB = (H†τ^I H) W^I_μν B^μν, O_HD = (H†D_μ H)*(H†D^μ H) and O_HG = (H†H) G^A_μν G^{Aμν}, where τ^I are the three Pauli matrices. The Wilson coefficients for the operators O_HWB and O_HD (we denote them by C_HWB and C_HD respectively) are related to the well-known Peskin and Takeuchi parameters S and T [39] by S = (4 c_W s_W v² / α_em) C_HWB/Λ² and T = −(v² / 2α_em) C_HD/Λ², where c_W and s_W are the cosine and sine of the weak mixing angle θ_W respectively, v is the Vacuum Expectation Value (VEV) of the Higgs field and α_em is the electromagnetic fine-structure constant. In addition to the above operators, there are two more purely bosonic operators involving only the Higgs-boson field, namely O_H□ = (H†H)□(H†H) and O_H = (H†H)³. The operator O_H□ contributes to the wave-function renormalization of the Higgs field and O_H contributes to the Higgs potential, i.e., the VEV v and the SM Higgs-boson self-coupling λ. We will see later that this makes O_H□ poorly constrained, while O_H, which does not affect our observables at all, remains unconstrained by our analysis. A joint measurement of the Higgs mass m_H and the self-coupling λ would be required to constrain this operator. There are 8 more bosonic operators (6 CP-odd + 2 CP-even) among the total of 15 bosonic operators listed in [4], but they either do not involve any Higgs field or are CP-odd. Thus, we do not consider them in our analysis. • Single-fermionic-current operators: these are the operators O_Hl^(1), O_Hl^(3), O_He, O_Hq^(1), O_Hq^(3), O_Hu, O_Hd and O_Hud. As we consider flavour-diagonal couplings only, all the above operators except O_Hud are hermitian. Here, H̃ = iτ₂H* and the hermitian derivatives are defined as H† iD↔_μ H = iH†(D_μ H) − i(D_μ H)†H. There are also (non-hermitian) operators involving scalar fermionic currents, O_eH, O_uH and O_dH. Once the Higgs field gets a VEV, these operators modify the SM Yukawa couplings.
There are 8 more operator structures which involve tensor fermionic currents. We do not consider them in the analysis presented here. Experimental data. In order to constrain the Wilson coefficients of the dimension-6 operators induced by NP, we use the data on (1) ElectroWeak Precision Observables (EWPO) from SLD, LEP-I, LEP-II and the Tevatron and (2) Higgs signal strengths from ATLAS and CMS. The experimental values of the EWPO are summarized in Table 1. For the definitions and theoretical expressions of the EWPO and related issues, we refer the reader to [28] and the references therein. The quantities in the first five rows of Table 1 have been used as inputs of our fit. Currently, we have used only their central values while fitting the NP coefficients. In addition to the EWPO, we also use the data on Higgs signal strengths from the ATLAS and CMS experiments. The theory prediction for the signal strength μ of one specific analysis can be computed as μ = Σ_i w_i r_i, where the sum runs over all the channels which can contribute to the final state of the analysis, r_i is the individual channel signal strength, and w_i is the SM weight for that channel, proportional to the relative experimental efficiency times the SM rate, ε_i (σ·BR)_i^SM. In the presence of NP the relative experimental efficiencies, ε_i, will in general be different from their values in the SM. In particular, the appearance of new tensor structures in the vertices can modify the kinematic distributions of the final-state particles, thereby changing the efficiencies. In this work, we assume that this effect is negligible and use the SM weight factors throughout. This assumption is valid for small deviations from the SM couplings, so that kinematic distributions are not changed significantly. We have implemented our effective Lagrangian in FeynRules [41] and used Madgraph [42] to compute the NP contributions to the Higgs production cross sections numerically at tree level. In order to compute the branching ratios we have used the formulae given in [25] after translating them to our basis. We only consider NP effects which are linear (O(1/Λ²)) in the dimension-6 operator coefficients. In all cases, the SM K-factors have been used to estimate the effect of QCD corrections, even for the NP contributions. (We define the SM K-factor to be the ratio of the cross section from the LHC Higgs Cross Section Working Group [43] to the leading-order number obtained using Madgraph.) No theoretical uncertainties have been associated with the cross sections and branching ratios in our current analysis. Results. In our analysis we have used the Bayesian statistical approach, implemented using the public package Bayesian Analysis Toolkit (BAT) [52]. Flat priors have been chosen for the parameters to be fitted. We consider only one Wilson coefficient at a time and fit it first to the EWPO and Higgs-boson observables separately, and then to the combination of both. Our results are summarized in Table 2, where we show the 95% probability regions for the Wilson coefficients (from the EW-only, Higgs-only and combined EW + Higgs fits), assuming the NP scale to be 1 TeV. It can be observed that, except for O_HWB, the electroweak precision constraints are much stronger than the Higgs signal-strength data for all the operators which contribute to the EWPO. The strong constraint on C_HWB from the Higgs data is due to its contribution to the Higgs decay to two photons, which is loop-suppressed in the SM. More precisely, the direct NP contribution to the Hγγ vertex has to be compared with the SM vertex c_γ (α_em/8πv) F_μν F^μν H with c_γ ≈ −6.48.
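Before turning to the fit results, the signal-strength prescription above can be illustrated with a minimal Python sketch that combines per-channel strengths r_i with SM weight factors w_i built from efficiencies and SM rates. All numbers below are hypothetical placeholders, not values from this analysis.

    # Sketch: mu = sum_i w_i * r_i with SM weights w_i (hypothetical inputs).
    def sm_weights(efficiencies, sm_rates):
        """w_i proportional to eps_i * (sigma x BR)_i^SM, normalized to sum to 1."""
        raw = [e * r for e, r in zip(efficiencies, sm_rates)]
        total = sum(raw)
        return [x / total for x in raw]

    def signal_strength(r, weights):
        """Theory prediction mu for one analysis, linear in the channel strengths r_i."""
        return sum(w * ri for w, ri in zip(weights, r))

    # Two contributing channels (e.g., gluon fusion and VBF), placeholder inputs:
    eps = [0.60, 0.40]            # assumed relative experimental efficiencies
    sm = [19.3, 1.6]              # assumed SM (sigma x BR) per channel, arbitrary units
    r = [1.05, 0.90]              # assumed per-channel strengths r_i
    w = sm_weights(eps, sm)
    print(signal_strength(r, w))  # mu close to 1 for SM-like inputs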
Fig. 1 shows the posterior distribution of C_HWB obtained with only the EWPO and with only the Higgs signal-strength data. Eq. (27) also explains why the bounds on C_HW and C_HB are rather strong from the Higgs signal-strength data. The tight constraint on the operator O_HG can be understood in a similar way: it contributes to Higgs-boson production through gluon fusion, which should be compared to the SM contribution (α_s/12πv) G^A_μν G^{Aμν} H, where α_s is the strong-interaction fine-structure constant. The bounds on the dimension-6 operator coefficients in Table 2 can also be translated into bounds on the NP scale for fixed values of the coefficients. We show them in Table 3 for two values, C_i = 1 and C_i = −1. A close look at Table 3 reveals that, assuming C_i(Λ) = ±1, the lower bound on the NP scale for one of the operators that is constrained only by the Higgs data (C_H□) is less than 1 TeV. As this is close to the energy scale being probed at the LHC, the validity of such low bounds may be questionable. Conclusion. The discovery of a Higgs boson and the absence of any other direct signal of new physics motivate the adoption of effective field theories to study possible deviations of the Higgs-boson couplings from the SM. In this work we have taken the above route to study the effects of dimension-6 operators in Higgs physics. To this end, we have considered EWPO from LEP and the Tevatron, and Higgs signal-strength data from the LHC, to fit the coefficients of the NP operators. In general, in an Ultraviolet (UV) complete model several operators are generated with specific relations among their coefficients. However, given the state of our knowledge about UV physics, any theoretical bias is premature, and considering definite combinations of the operators in a fit is not strongly motivated. Here we have studied only one NP operator at a time. Barring accidental cancellations, our results should provide an estimate of the bounds even in relatively general scenarios. Updated results including more than one operator at a time will be presented in a future publication [53]. The summary of our results is presented in Tables 2 and 3. It is interesting that there is a strong hierarchy among the lower bounds on the NP scales of different operators: it spans from cases with ∼ 1 TeV (C_H□) to ∼ 15-20 TeV (e.g., C_HB). We observe that, except for the operator O_HWB, the Higgs-strength data is redundant for all the operators which contribute to the EWPO; the bound from Higgs data for the operator O_HWB is comparable to that obtained from the EWPO. However, there are also operators (e.g., O_HG, O_HW, O_HB) which are constrained only by the Higgs data. Moreover, as some of them contribute to processes that are loop-suppressed in the SM, the bounds on them are rather strong. To summarize, the preliminary results presented here indicate that the NP scale is beyond the reach of LHC energies for most of the operators if the Wilson coefficients are assumed to be ±1. However, these bounds can be weaker if the coefficients are smaller or if specific correlations among the NP operators are present. Therefore an NP scale of order ∼1 TeV is still allowed for perturbative values of the couplings.
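The one-parameter fits described above can be mimicked with a simple grid evaluation of the posterior. The sketch below assumes a flat prior and a single Gaussian pseudo-measurement whose linear dependence on the Wilson coefficient is a hypothetical placeholder; the actual analysis uses BAT with the full EWPO and Higgs likelihoods.

    import numpy as np

    # Sketch of a one-parameter Bayesian fit with a flat prior.
    # Pseudo-observable: mu_th(C) = 1 + a*C/Lambda^2, linear in C as in the text;
    # 'a', the measured value and its error are hypothetical placeholders.
    a, Lambda2 = 0.8, 1.0            # assumed linear response; Lambda = 1 TeV
    mu_obs, sigma = 1.02, 0.10       # hypothetical measurement

    C = np.linspace(-2.0, 2.0, 4001)             # flat prior over this range
    mu_th = 1.0 + a * C / Lambda2
    loglike = -0.5 * ((mu_th - mu_obs) / sigma) ** 2
    post = np.exp(loglike - loglike.max())
    post /= np.trapz(post, C)                    # normalized posterior density

    # 95% probability interval from the cumulative distribution.
    cdf = np.cumsum(post) * (C[1] - C[0])
    lo, hi = C[np.searchsorted(cdf, 0.025)], C[np.searchsorted(cdf, 0.975)]
    print(f"95% interval for C: [{lo:.2f}, {hi:.2f}]")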
3,267.6
2014-10-15T00:00:00.000
[ "Physics" ]
Web Services Composition Using Dynamic Classification and Simulated Annealing. Service Oriented Architecture (SOA) introduced web services as distributed computing components that can be independently deployed and invoked by other services or software to provide simple or complex tasks. In this paper we propose a novel approach to the problems of web service selection and service composition in business process execution engines operating in the synchronous mode of the Service Oriented Architecture (SOA). The paper provides a mechanism to improve web service selection and service composition, using dynamic classification of web services and service compositions together with Simulated Annealing (SA), in order to satisfy service requirements expressed as a Service Level Agreement (SLA). The results show that the proposed approach enhances service composition by increasing the availability and decreasing the response time of the composite service. Introduction. SOA introduced web services as distributed computing components that collaborate with other services in a loosely coupled manner to perform simple tasks over the internet. Web services are developed and deployed by different providers; the Web Service Definition Language (WSDL) was proposed as a standard to ensure interoperability and to expose web service functions, using the eXtensible Markup Language (XML) and the Simple Object Access Protocol (SOAP) to exchange data (Zhang et al., 2003; Juric et al., 2006; Zhang & Pan, 2008; Al Hadid, 2011). Web services can be classified into two types: elementary web services and composite web services. While an elementary web service does not rely on other web services to accomplish its task, composite web services aggregate multiple web services which interact with each other according to a predefined business process model to perform end-to-end complex functions using a business process execution engine (Zeng et al., 2003; Gao et al., 2009; Sheng et al., 2014). A business process execution engine supports asynchronous and synchronous processing modes. The synchronous mode is ideal when processes are executed in a relatively short time: the client is blocked until request processing is completed, and a response is then returned to the client immediately. On the other hand, the asynchronous mode, which is suited to long-running processes, does not block the client for the duration of the operation (Juric et al., 2006; Li et al., 2010). Composite services must deliver complex solutions that meet the customers' needs (Gao et al., 2009). Consequently, static approaches that select web services and compose business processes at design time are inappropriate (Zeng et al., 2003); instead, dynamic composition approaches and techniques are required, in which the QoS changes of the composed web services during runtime are taken into account. In order to improve the availability and reliability of web service selection and composition, some artificial intelligence (AI) algorithms have been adopted, such as Simulated Annealing (SA). SA is a widely used probabilistic search algorithm that simulates the behavior of metals under annealing: when the temperature is high, bad solutions have a better chance of being accepted; as the temperature goes down, SA becomes stricter and only good solutions are accepted (Chau et al., 2008; Maqableh & Karajeh, 2014; Varty, 2017). In this research we propose a new approach based on the classification of web services and composite services and on the SA algorithm.
This approach addresses the problems of service composition in business process execution engines that support the synchronous processing mode. The proposed approach improves business process composition by classifying web services and composite services using actions and weights. It also minimizes the service composition time, supports the design of new composite services, and supports the redesign of existing composite services. In addition, the proposed approach improves web service reusability and utilization, providing highly dynamic composite service execution that meets the SLA. As a result, composite services are kept up to date with web service QoS changes and with different customers' requirements. This paper is organized as follows: Section 2 reviews research related to web service selection and composition; Section 3 introduces the proposed approach, including the proposed execution engine architecture and the proposed execution algorithms; Section 4 discusses the simulation and results, including the simulation configuration, the experiments and the results of the proposed dynamic composition approach; finally, Section 5 gives the research conclusions. Related Work. The ongoing research on web service selection and composition is evidence of the importance of SOA and SLA aspects for improving web service performance and reliability. Many efforts have addressed identifying the SLA attributes that improve quality of service, using heuristic and non-heuristic approaches (Mirzayi & Rafe, 2015). Zhang et al. (2003) proposed a service selection mechanism used to configure and compose business processes: the selection mechanism is applied to narrow down the list of available services, and the optimization capabilities of the Genetic Algorithm are then utilized to construct the business processes that best satisfy the customers' requirements. Jung et al. (2009) proposed a methodology of business process clustering based on structural similarity metrics, using the cosine similarity measure to compare business process models and discover similar processes. The researchers use the similarity metrics to classify similar processes in the same cluster, which is then utilized to reengineer existing processes and support new process design. Li et al. (2010) proposed a distributed agent-based orchestration engine in which agents collaborate, each executing a portion of the original service composition. The implementation of the proposed architecture decreases process execution time and improves engine throughput compared to a non-distributed approach. Zhang & Pan (2008) built a web service classification system that uses different algorithms and classifiers to extract data from the corresponding WSDL file and parse it into keywords related to service groups. Corella & Castells (2006) proposed a heuristic approach for the semi-automatic classification of web services using a three-level matching procedure between services and classification categories; the approach assumes that a corpus of previously classified services is available. Cardellini et al. (2017) present MOSES, a software platform supporting QoS-driven adaptation of service-oriented systems; the researchers claim that the platform achieves greater flexibility in facing different operating environments and can handle possibly conflicting QoS requirements of several concurrent users.
Also, many researchers have suggested Artificial Intelligence (AI) algorithms to improve the efficiency and effectiveness of web service selection. Gao et al. (2009) proposed a quality-of-experience/quality-of-service-driven algorithm combining simulated annealing and the Genetic Algorithm to optimize web service selection. Wang and Hou (2008) proposed a web service selection algorithm based on a multi-objective genetic algorithm, derived by analyzing the constraints of the web service selection process. Lécué (2009) proposed an extensible optimization model for Web Service Composition, designed to balance semantic similarities and quality of service (QoS) using Genetic Algorithms. Lei et al. (2005) presented dynamic selection of composite web services using a neural network optimized by a genetic algorithm to express the composed service, instead of using traditional neural networks. Many researchers have stated that web services are affected by performance and functional problems which may not guarantee the required Quality of Service (QoS) and the expected behavioral properties and functionality, and that traditional QoS evaluation methods might exhaust or slow down the service, which may affect the business process and prevent the Service Level Agreement (SLA) from being achieved. Additionally, researchers have claimed that current web services based on business process execution languages do not adequately support new service compositions that meet the business requirements: new and existing composite services must be redeployed dynamically to satisfy the clients' requirements expressed as SLAs, in addition to the problem of updating existing composite services according to QoS changes in the related web services (Juric et al., 2006; Muthusamy et al., 2009). Architecture. The proposed architecture improves web service composition via dynamic classification, during business process execution engine initialization and at runtime, using SA to support the design and reengineering of new and existing processes and to satisfy the clients' requirements expressed as SLAs. Service composition QoS is a key factor in ensuring the satisfaction of clients, who may have different requirements regarding QoS: for example, one client may require maximizing availability and minimizing response time, while another may give more importance to the process itself than to the response time. Accordingly, we have grouped the clients into three classes according to their constraints and preferences; similarly, web services are grouped into classes using the same actions and weights. The classified web services will be used to compose the service compositions, which will also be classified using actions and weights. Figure 1 shows the web services pool, whose components are: • Web service ID: a unique web service identification number. • Web Service Weight: represents the QoS, described by the availability and the response time of the web service; a web service's weight is classified into three categories: Excellent, Good and Poor. • Web Service Action: represents the effect of the web service process. • Web Service Response Time: the time needed by a service to correctly respond to a request; it can be calculated as the elapsed time between request submission and response receipt (Emeakaroha et al., 2010).
• Web Service Availability: the probability that the web service is accessible; it can be calculated using the following equation (Emeakaroha et al., 2010): Availability = Up time / (Up time + Down time), where: • Down time represents the time it takes to bring the web service back online after a failure, known as the mean time to repair (MTTR). • Up time denotes the web service operational time between one failure and the next, known as the mean time between failures (MTBF). The service composition pool architecture components are: • Service Composition ID: a unique service composition identification number. • Web Services: the web services composing the service composition, which depend on the action and weight of the related web services. • Service Composition Weight: represents the QoS, described by the availability and the response time of the business process; a business process weight is classified into three categories: Excellent, Good and Poor. • Service Composition Action: service compositions with the same action have the same effect, achieved with different web service invocations, according to the SLA and the QoS, which is captured by the weight (the strength of the classifier); identical business processes have exactly the same action. • Service Composition Response Time: the total execution time of all the web services in the service composition; it can be calculated using the following equation (Gao et al., 2009): BusProc_time = Σ_{i=1..N} WS_time(i), where: • BusProc_time is the sum of all the web services' response times. • WS_time(i) is the time needed by web service (i) to correctly respond to the business process task request. • N is the number of web services in the service composition. Web service and service composition response times have been categorized as shown in Table 1. • Service Composition Availability: can be obtained as the product of the availabilities of all web services composing the business process, as shown in the following equation (Gao et al., 2009): BusProc_ava = Π_{i=1..N} WS_ava(i), where: • BusProc_ava is the product of the availabilities of all the web services in the business process. • WS_ava(i) is the probability that web service (i) is accessible. • N is the number of web services in the service composition. Web service and service composition availabilities have been categorized as shown in Table 2. The SLA is also represented as an SLA pool, which improves the procedure of selecting the precise business process that meets the client's SLA requirements. The SLA pool architecture is shown in Figure (2). • Client ID: a unique client identification number. • Client SLA Weight: represents the QoS, described by the availability and the response time of the client's requests; the SLA weight is classified into three categories: Excellent, Good and Poor. • Client SLA Action: represents the effect of the business process. • Client SLA Response Time: the maximum response time of the composite service. • Client SLA Availability: the minimum availability of the composite service. • Client SLA Maximum Capacity: the maximum number of client requests a business process can accept per second. The architecture of the proposed approach for service composition using dynamic classification and SA is shown in Figure 3.
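A minimal Python sketch of the two aggregation rules above: composite response time as the sum of member response times, and composite availability as the product of member availabilities, with MTBF/MTTR-based availability per service. The classification thresholds are placeholders, since Tables 1 and 2 are not reproduced here.

    # Sketch: QoS aggregation for a service composition (thresholds are assumed,
    # not taken from Tables 1 and 2 of the paper).
    def availability(mtbf_hours, mttr_hours):
        """Availability = up time / (up time + down time)."""
        return mtbf_hours / (mtbf_hours + mttr_hours)

    def composite_qos(services):
        """services: list of (response_time_ms, mtbf_hours, mttr_hours)."""
        total_time = sum(rt for rt, _, _ in services)        # BusProc_time
        total_avail = 1.0
        for _, mtbf, mttr in services:
            total_avail *= availability(mtbf, mttr)          # BusProc_ava
        return total_time, total_avail

    def weight_class(avail):
        """Assumed thresholds for the Excellent/Good/Poor categories."""
        if avail >= 0.99:
            return "Excellent"
        if avail >= 0.95:
            return "Good"
        return "Poor"

    ws = [(120, 900, 2), (80, 700, 5), (200, 1000, 1)]  # hypothetical members
    t, a = composite_qos(ws)
    print(t, round(a, 4), weight_class(a))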
Proposed Algorithm. The simulation of the proposed approach is modeled as a set of classified web services and composite services providing the execution environment and the SLA aspects. Each composite service is defined as a sequence of activities implemented by execution operations on web services that have the same weight and related actions, so as to fulfill the composite service's requirements. We apply the proposed architecture to the following execution modes, whose results are compared with the results obtained without the modifications of the suggested architecture. The execution modes are: 1) Normal Mode: clients' requests stay within the SLA maximum request capacity (Sheng et al., 2014). 2) Dropped Mode: clients' requests exceed the SLA maximum request capacity, which is treated as a Denial of Service (DoS) attack (Jung et al., 2002). 3) Priority Mode: clients' requests stay within the SLA maximum request capacity, but more than one request may be received at the same time (Manikutty & Cao, 1998); the proposed algorithm arranges the invocations according to the clients' weights and the number of requests.

Initialization:
1) Load clients' SLA data into SLA_Pool;
2) Load Web_Services_Pool data; web services are classified into clusters using the action and weight stored in the WSDL;
3) Load Business_Processes_Pool data by randomly selecting web services from the Web_Services_Pool that meet the service composition's action and weight requirements; the selected web services' availability and response time data are then used to calculate the service composition's availability and the total response time of the associated web services.

Run Execution Engine in Normal Mode (the number of client requests does not exceed the maximum request capacity identified in the SLA):
    Loop (Next Run)

Run Execution Engine in Priority Mode (the system arranges the invocations according to the clients' weights and the number of requests):
    Do Until Run = No_Of_Runs
        Do Until Priority_Client = No_of_Concurrent_Clients - 1
            Select a client randomly from the SLA_Pool;
            Generate the number of service composition requests the client will invoke,
                where No_Of_Service_Composition_Requests <= Client_Max_Capacity;
            Load Client_Number and No_Of_Service_Composition_Requests into the Priority_Pool;
            Do Until Client_Request = No_Of_Business_Process_Requests
                From the business process pool matching the client's SLA weight and action,
                    randomly select an unengaged service composition and record its
                    response time, availability and weight;
            Loop (Next Client_Request)
            Calculate the average availability and response time of the client's requests;
            Record Client, No_Of_Service_Composition_Requests, availability and response time
                in the results file;
        Loop (Next Priority_Client)
        Generate a random number (R) between 0 and 100;
        Generate a random number (X) between 0 and No_of_Concurrent_Clients - 1, and get the
            availability of the requests of the client located at position (X) in the
            Priority_Pool from the results file;
        If (R) > client (X) availability Then
            Run SA Gap: reclassify web services into different pools using the action and
            weight stored in the WSDL, recreate the service compositions according to the
            new web service classification, and update the web services and service
            composition pools;
        End If
    Loop (Next Run)

Simulation. In order to evaluate the proposed service selection and composition approach, we developed a special simulator for web service selection and composition
using VB.NET. The simulator supports web service and service composition classification, the SLA, the web services and business process pools, and SA, in addition to the SLA Gap, which simulates the case when the process availability or reliability does not meet the client's SLA needs. The SLA Gap is used to modify web service specifications by changing a web service's weight to an upper or lower level, modifying its availability and response time, and updating the web services pool. Accordingly, the composite service's weight is updated to the new level and its availability and response time values are adjusted according to the new classification. The simulator's testing methods are grouped into two groups, applied to all business process execution modes: normal, dropped and priority. The first testing group does not classify the web services or the service compositions, while the other testing group uses the proposed approach to dynamically classify the web services and composite services and applies SA. Results. This section discusses the results for web service and composite service QoS obtained from the simulator experiments, for the standard static approach and for the proposed dynamic service composition approach. We ran the experiments 50 times for the standard and the proposed improved approaches, with 1000 runs in each experiment, for each of the normal, dropped and priority modes. Figure (4) shows the average composite service availability for the Normal, Dropped and Priority modes, comparing the standard business process execution engine algorithm with the proposed improved algorithm. Figure (4) shows that the results of the proposed approach are better than those of the standard business process execution engine for all modes: the composite services' availability is higher than with the standard service composition approaches in all modes. Figure (5) shows that the average response time of the improved service composition using the proposed algorithm is better than with the standard service composition approaches: the response times of the proposed approach are lower than those of the standard service composition in all modes. According to Figures (4) and (5), the obtained QoS results show that the proposed algorithm enhances service composition availability and response time for all modes and for all the SLA client groups (Excellent, Good and Poor): the improved approach maximizes composite web service availability and minimizes composite web service response time, based on the classification processes and the SLA Gap, in all modes. Accordingly, the proposed approach using classification and SA is an appropriate choice for use in business process execution engines. Conclusion. This paper presents a dynamic approach to classify, select and compose web services under QoS constraints. The proposed approach implements the classification pools and SA to dynamically classify web services and composite services into the different pools based on updated QoS, in order to meet the clients' requirements and needs. We designed the proposed approach to update the web services' QoS properties during the business execution engine's runtime, which improves engine performance by recomposing the service compositions dynamically.
Our approach relies on the SA algorithm to upgrade composite services: they are updated randomly, based on the service composition's availability and the generated random number, which gives the engine the flexibility to update the web services and service composition pools. Specifically, SA is activated and run when the generated random number is greater than the composite service's availability. The service composition process might therefore be trapped in a local optimum, requiring a longer time to update the web services pool and the service composition pool, because the composite service's availability value can be high even if the selected service composition belongs to a poor SLA class.
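To make the acceptance behaviour described above concrete, the following Python sketch applies a standard Metropolis rule over candidate compositions. The energy function (here, a mix of response time and negative availability) and the cooling schedule are illustrative assumptions, not the paper's exact implementation.

    import math, random

    # Sketch: simulated-annealing acceptance for recomposing a service composition.
    # The energy function and cooling schedule are illustrative; in the paper, SA
    # is triggered when a random number exceeds the composite availability.
    def energy(composition):
        """Lower is better: minimize response time, maximize availability."""
        time_ms, avail = composition
        return time_ms / 1000.0 - avail

    def anneal(candidates, t_start=1.0, t_end=0.01, alpha=0.95):
        current = random.choice(candidates)
        temp = t_start
        while temp > t_end:
            proposal = random.choice(candidates)
            delta = energy(proposal) - energy(current)
            # At high temperature, worse proposals are accepted more often;
            # as the temperature drops, SA becomes stricter.
            if delta < 0 or random.random() < math.exp(-delta / temp):
                current = proposal
            temp *= alpha
        return current

    pool = [(400, 0.97), (350, 0.93), (500, 0.995)]  # hypothetical (time, avail)
    print(anneal(pool))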
4,382
2018-10-29T00:00:00.000
[ "Computer Science" ]
Real-time simulation of (2+1)-dimensional lattice gauge theory on qubits. We study the quantum simulation of Z2 lattice gauge theory in 2+1 dimensions. The real-time evolution of the system with two static charges, i.e., two Wilson lines, is considered. The dual variable formulation, the so-called Wegner duality, is utilized for reducing redundant gauge degrees of freedom. We show some results obtained by the simulator and by the real device of a quantum computer. Introduction. Over the past few decades, lattice gauge theory has revealed many equilibrium properties of quantum field theory. Now we are entering a new era of lattice gauge theory: the simulation device is changing from classical computers to quantum computers. Quantum simulation will provide us with novel results which cannot be obtained by classical simulation. One of the main issues is non-equilibrium, or real-time, dynamics of quantum field theory. Up to now, quantum simulation has mainly been applied to (1+1)-dimensional gauge theory [1-9]. The application to higher dimensions is essential for the future. As the simplest setup, let us consider the Z2 lattice gauge theory without matter fields in 2+1 dimensions. This is the simplest but an interesting theory, which shares many essential features with realistic gauge theories. Although gauge field dynamics in 1+1 dimensions can be uniquely determined by the Gauss law constraint, this is impossible in 2+1 dimensions; redundant degrees of freedom must be removed in nontrivial manners [10-15]. In the (2+1)-dimensional Z2 lattice gauge theory, there already exists a well-established formulation, namely, the Wegner duality [16]. The formulation can be utilized for quantum simulation, as demonstrated in this paper. In the classical simulations of pure lattice gauge theory, the most frequently computed observable is the Wilson line or the Wilson loop. The Wilson line is interpreted as the world line of a charge. At equilibrium, it is an order parameter for the phase of gauge theory. In real-time simulation, the expectation value of the Wilson line itself would not be so important, because it is no longer an order parameter. Rather, the response of the system to the Wilson lines would be interesting: it is interpreted as gauge field dynamics induced by charges. In this paper, we discuss how to perform the real-time simulation of pure lattice gauge theory with static charges on quantum computers. Z2 lattice gauge theory. Let us start with the basics of Z2 lattice gauge theory. We consider the two-dimensional square lattice in the x-y plane. The Z2 gauge fields are defined on links and their quantum operators are given by the Pauli matrices. The familiar form of the Hamiltonian [17] is H = −λ Σ_{x,j} σ¹(x, j) − Σ_x σ³(x, 1) σ³(x + e₁, 2) σ³(x + e₂, 1) σ³(x, 2), where e₁ and e₂ are the unit vectors in the x and y directions, respectively. The first term is the contribution of the electric field, which is defined on links, and the second term is the contribution of the magnetic field, which is defined on plaquettes. The Gauss law operator G(x) = σ¹(x, 1) σ¹(x − e₁, 1) σ¹(x, 2) σ¹(x − e₂, 2) satisfies the lattice version of the Gauss law, G(x)|Ψ⟩ = (−1)^{Q(x)}|Ψ⟩, for physical states.
The charge distribution Q(x) is set to 0 when a charge does not exist at x and to 1 when a charge exists at x. Although we could in principle consider time-dependent Q(x), i.e., moving charges, we only consider time-independent Q(x) in this paper. Because of the commutation relation [G(x), H] = 0, it is easy to put static charges on the system: what we need to do is just to prepare an initial state with a nonzero charge distribution. The charge distribution does not change under time evolution because of Eq. (4); thus, static charges would be realized in ideal simulation. In quantum computers, however, charge conservation is artificially violated by device noise. It is important to use a noise-robust formulation to keep the charge distribution fixed. When the lattice size is Lx × Ly, the number of links is ∼ 2LxLy. (The symbol "∼" means that the precise number depends on boundary conditions.) The dimension of the total Hilbert space is ∼ 2^{2LxLy}, but we do not need to treat the total Hilbert space. The total Hilbert space is divided into ∼ 2^{LxLy} subspaces with different distributions of Q(x). Since the subspaces are decoupled from each other, we only have to treat one subspace in one simulation. By removing redundant degrees of freedom, we can reduce the computational cost and suppress artificial processes. Dual variable formulation. In the (2+1)-dimensional Z2 lattice gauge theory, there is a famous formulation to remove redundant degrees of freedom [16]. The Z2 gauge fields on the original lattice are mapped to Z2 spin variables on the dual lattice. The relation between the original lattice and the dual lattice is depicted in Fig. 1. The center of a plaquette on the original lattice defines a site on the dual lattice. In the following equations, quantities on the dual lattice are written with an asterisk "*"; for example, x* denotes the position of a dual site. When no charge exists at all, i.e., Q(x) = 0 for all x, the duality transformation is given by two equations [17]: one for the dual spin-flip operator and one for the dual spin operator. When charges exist, the second equation must be modified to satisfy the Gauss law. Let us define the phase factor η(x, j) such that η(x, j) = −1 for the links connecting the charges and η(x, j) = 1 elsewhere (see Fig. 1). The phase factor satisfies the required equation by definition. (The choice of the path connecting the charges is not unique. The path ambiguity is equivalent to a redefinition of the field variables: even if the path is deformed, Eq. (7) holds and physical results are invariant.) The duality transformation (6) is generalized by attaching the string of phase factors, Π_n η(x − n e₂, 1) σ¹(x − n e₂, 1). The dual Hamiltonian (11) has three advantages compared with the original Hamiltonian (1). First, the Gauss law constraint is automatically satisfied: in the dual variables it becomes an identity, so charge conservation is exact and easy to probe. Second, the required memory size is smaller. While the number of gauge fields is ∼ 2LxLy in the original Hamiltonian, the number of dual spin variables is ∼ LxLy in the dual Hamiltonian. Third, the implementation on quantum gates is easier. The original Hamiltonian includes a product of four Pauli matrices, which would have to be implemented by some complicated combination of multi-qubit operations. The dual Hamiltonian only includes products of two Pauli matrices, which can be directly implemented by a two-qubit operation.
Real-time simulation. The time evolution of the total system is given by |Ψ(t)⟩ = e^{−iHt}|Ψ(0)⟩. The continuous time evolution is approximated by the n-fold matrix operation via the Suzuki-Trotter decomposition, e^{−iHt} ≃ (e^{−iH_E δt} e^{−iH_B δt})^n, with t = n δt. The time step δt must be small enough to justify the Suzuki-Trotter approximation. Each matrix operation can be easily implemented by quantum gates. The matrix e^{−iH_E δt} is realized by the controlled rotation gate C_Rz for two qubits and the standard rotation gate R_z for a single qubit. The matrix e^{−iH_B δt} is realized by the standard rotation gate R_x for a single qubit. We computed the time evolution with both the simulator and the real device of a quantum computer. The simulator is an algorithm on a classical computer designed to mimic a quantum computer. For the real device, we used "ibmq_16_melbourne", which has 15 qubits, in the IBM Quantum services [18]. The computation was performed with the dual Hamiltonian on the dual lattice, and the obtained results were then translated into the language of the original lattice. The geometry of the lattice is shown in Fig. 1; the two static charges are located at x = (x, y) = (1, 2) and (2, 2). Since the dual spin variable is given by the two quantum states |1⟩ and |−1⟩, it can be embedded into a digital qubit. The initial state is set as in Eq. (18). The parameter is fixed at λ = 1. The electric field energy ⟨Ψ|(H_Ex + H_Ey)|Ψ⟩ is defined on links. It can be measured by counting the probability of each state of |Ψ⟩, because the matrix σ³(x* − e_j)σ³(x*) is diagonal. The magnetic field energy ⟨Ψ|H_B|Ψ⟩ is defined on plaquettes. It can be measured by diagonalizing the matrix as σ¹ = h†σ³h with the Hadamard gate h. The distributions of these energies are shown in Fig. 2. At the initial time t = 0, the distributions can be analytically calculated from the initial state (18): the electric field energy is 1 on the link between the charges and −1 on all the other links, and the magnetic field energy is 0 on all the plaquettes. After time evolution, the electric field energy spreads out all over the lattice and the magnetic field energy is transferred to the electric field energy. Snapshots at t = 1 and t = 5 are shown in the figure. Two systematic errors are analyzed in Fig. 3, where the electric field energy on the link between the charges is plotted as one typical observable. In the left panel, we show the time-step dependence: the results obtained by the simulator with δt = 0.05, 0.10 and 0.20 are shown. The finer values δt = 0.05 and 0.10 show good agreement, although the coarse value δt = 0.20 deviates slightly. This indicates that the Suzuki-Trotter error is sufficiently small. In the right panel, we compare the result obtained by the simulator and the raw data obtained by the real device (without any error mitigation). The simulator result is expected to be the exact answer (up to other systematic errors). On the other hand, the real device suffers from many kinds of noise. Furthermore, in the real device, the geometry of the qubits is different from the geometry of the simulated lattice: each two-qubit operation is reconstructed as a sequence of operations, which leads to noise enhancement. The raw data cannot reproduce the simulator results. Although the artificial violation of charge conservation is absent, the deviation is still large. Error mitigation will be necessary for practical use. Summary and outlook. We have studied the real-time evolution of pure lattice gauge theory with static charges.
Because of the limitation of computational resources, we adopted the (2+1)-dimensional Z2 lattice gauge theory on a small lattice. Nowadays, both the devices and the algorithms of quantum computers are rapidly developing. We will soon be able to adopt more realistic setups, e.g., continuous gauge groups, 3+1 dimensions, and larger lattices. The Wegner duality was originally discovered in the Z2 gauge theory [16]. Later, the dual variable formulation was generalized to gauge theories with Z_N subgroups [19]. Although the formulation is not universal, it would be useful for specific theories; for example, it is applicable to SU(N) gauge theory. We can then analyze the real-time dynamics of gluons around color charges, e.g., the time evolution of a confining string.
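A minimal Qiskit sketch of one Suzuki-Trotter step for the dual Hamiltonian, which is of transverse-field Ising type (ZZ bonds from the electric term, single-qubit X from the magnetic term). The 2x2 dual-lattice geometry, the couplings, the η assignment, and the sign conventions are illustrative assumptions; the paper's actual circuit uses C_Rz and R_z gates on a different lattice.

    from qiskit import QuantumCircuit

    # Sketch: one Trotter step exp(-i*H_E*dt) exp(-i*H_B*dt) on 4 dual sites
    # (a 2x2 dual lattice with open boundaries -- an illustrative geometry).
    # Dual Hamiltonian assumed of transverse-field Ising form:
    #   H = -lam * sum_<ij> eta_ij Z_i Z_j  -  sum_i X_i
    lam, dt = 1.0, 0.05
    bonds = [(0, 1), (2, 3), (0, 2), (1, 3)]   # dual-lattice links
    eta = {b: 1.0 for b in bonds}
    eta[(0, 1)] = -1.0                          # a bond crossed by the charge string

    qc = QuantumCircuit(4)
    for (i, j) in bonds:
        # RZZ(theta) = exp(-i*theta/2 * Z_i Z_j); theta = -2*lam*eta*dt gives
        # exp(+i*lam*eta*dt * Z_i Z_j) = exp(-i*dt*H_E) for this bond.
        qc.rzz(-2.0 * lam * eta[(i, j)] * dt, i, j)
    for i in range(4):
        # RX(theta) = exp(-i*theta/2 * X); theta = -2*dt gives exp(+i*dt*X),
        # i.e. exp(-i*dt*H_B) for this site.
        qc.rx(-2.0 * dt, i)
    print(qc.draw())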
2,641.8
2020-08-26T00:00:00.000
[ "Physics" ]
Principles of formation and development of clustering systems of the national economy. Hetman Petro Sahaidachnyi National Army Academy, Heroes of Maidan str., 32, Lviv, 79012, Ukraine. Tel: +38-032-238-65-34. E-mail: <EMAIL_ADDRESS>. Maslak, O., Doroshkevych, K., & Salata, I. (2018). Principles of formation and development of clustering systems of the national economy. Scientific Messenger of Lviv National University of Veterinary Medicine and Biotechnologies, 20(86), 68-72. doi: 10.15421/nvlvet8613. Introduction. As is well known, the implementation of any process in the various types of economic activity should be guided by the rules, laws and peculiarities that are characteristic of it and that affect the effectiveness of the results. These should be studied in order to form a comprehensive understanding of economic processes, to gain experience in their implementation, and to ensure the prerequisites of effectiveness; to this end, all the features, rules and outcomes should be generalized in the form of principles. These principles should guide the formation and development of clustering systems of the national economy. As noted by Gerasimchuk Z.V. and Smolich D.V., the principles of cluster formation are the initial principles objectively inherent in the process of clusterization: the basic laws, rules and regularities of the formation of cluster formations (Gerasimchuk and Smolich, 2014). Given that clustering is, in its essence, the process of forming clusters and other network structures, the principles of cluster formation lie at the heart of the formation and development of clusterization systems. They also ensure the validity of clustering, the diversity of forms of cluster formations (cluster systems), and the effectiveness of the processes of formation and development of clusterization systems, which is achieved, in particular, by introducing a mechanism for regulating relations within a cluster (Gerasimchuk and Smolich, 2014). Let us consider them more carefully. In the economic literature, the following principles of clustering are often encountered: geographical localization (territorial placement, concentration); the tightness of economic ties (including community ties, unity of interests of cluster members, coherence of development, a shared industrial infrastructure, a common goal, community, integrity, specialization, and the combination of cooperation and competition); voluntary association of enterprises and organizations in clusters; resource provision (self-financing, fiduciary provision); systemicity (synergy, self-development, integrity); etc.
(Gorjaeva, 2008; Pjatinkin and Bykova, 2008; Semenov and Bileha, 2012; Plahin, 2014). Geographic localization indicates that cluster systems are formed by enterprises located in geographical proximity. The principle of the tightness of economic ties indicates that, in the process of clustering, the ties between the members of cluster systems are strengthened, allowing the implementation of joint projects that develop the cluster infrastructure. In addition, clusters are formed around key activities, around which all cluster members are united. The principle of a common goal (unity) means the subordination of the processes of formation and development of clustering systems to a single goal (strategy) and the availability of long-term strategic goals for their development. Voluntariness indicates that enterprises and organizations form cluster systems on a voluntary basis and can freely enter and exit the cluster system. This principle also encompasses the principles of corporateness and sustainability (pointing to a climate of trust and a culture of communication between cluster members, support of the cluster's business reputation, etc.); dynamism (flexibility of the cluster boundaries, which allows changing the structure of the cluster depending on factors of the internal and external environment); adherence to the critical mass of the cluster (to maintain a balance between the cluster participants, the cluster must consist of a certain number of participants, the excess of which leads to negative effects); independence of the participants of cluster systems (participants of cluster systems retain economic and legal independence, and members of the cluster are protected); equality of participants (the strategy of the cluster system is based on reconciling the interests of each of its members); selection of partners; and outsourcing. The principle of resource provision assumes that an enterprise entering the cluster provides resources that can meet the needs of the cluster; this ensures their concentration. Synergism, described above, manifests itself in the socio-economic results of the formation and development of clustering systems exceeding those of its elements functioning separately from each other. Synergy arises at each of the stages of achieving the strategic goals of the enterprise, which makes it multilevel. Systemicity indicates the openness of cluster systems and their ability to develop, change their structure and composition of participants, accumulate information, etc. Among other clustering principles, public-private partnership is distinguished in the economic literature. It is characterized by: the combination of tangible and intangible resources of the state and the private sector on a long-term and mutually beneficial basis to achieve the greatest efficiency of the cluster; legislative support for the activities of the cluster; the priority of interaction between universities, regional authorities and business representatives; and the correspondence of the cluster system to state policy (Saltykov, 2011; Lobanova, 2014; Lyfar, 2014).
Another group comprises the principles of state management of clustering processes. Among them, Pavlova A.V. highlights the principle of stimulating innovative processes (stimulation of clusterization activity creates a single corporate culture that can provide dynamism and flexibility in the formation of the innovation cluster and its competitiveness), a programmatic approach to the phased solution of the tasks of building a cluster system (integration processes initiated by the state are conducted in parallel with the joint development, by the participants of a public-private partnership program, of an effective policy for the development and production of prospective products and their market positioning within the cluster system), the most effective management (arising from the creation of a management cluster), and the construction of an economically stable cluster structure (through its relationship with the strategic objectives of the development of related industries) (Pavlova, 2011).

Pechatkin V.V. considers the principles of cluster policy in the regions to be: unity of the economic space; polycentrism; coherence of the strategic priorities of regional development and of the socio-economic development of the country as a whole; comprehensive quantitative accounting and valuation of all components of the economic potential of regions; and dynamism (Pechatkin, 2012).

Butko M.P. includes among the principles of clustering legitimacy, which consists in ensuring compliance with the requirements of domestic and international legislation in the management of the clusterization process, and priority, which involves the application of the cluster model in priority and promising types of economic activity (Butko, 2010).

Vazhinsky F.A. and Molnar A.S. note, among the clustering principles, the balancing of the interests of cluster participants by ensuring equal economic conditions of economic activity and equality of production benefits for all participants, and an effective democratic system of self-organization and self-governance, based on the analytical, regulatory and coordination functions of the relevant bodies (Semenov and Bileha, 2012).

Reshetova K.Yu. highlights the following principles of state support for innovative clusters:
- the target principle, which envisages the orientation of all measures of state regional economic policy towards the development of cluster systems in priority areas of capital investment or types of economic activity;
- the principle of complexity, aimed at the integrated support of clusters that implement innovative projects within the strategy of regional development or of the national economy;
- the principle of objectivity in decision-making by state authorities regarding the formation and development of a clustering system of the national economy;
- the principle of targeting, aimed at the adoption of management decisions directed at achieving specific goals, taking into account the peculiarities of the development of each particular cluster;
- the principle of legitimacy, which involves the recognition of clusters and their legitimacy by state authorities in order to develop the national economy;
- the principle of parity of financial and economic relations between state authorities and cluster management bodies in order to ensure the balanced development of the clustering system of the national economy (Reshetov, 2014).

Saltykov M.A.
highlights as principles of the clustering mechanism: the priority of policy incentives over administrative tools; support of small and medium enterprises; support of research and the introduction of innovations; coordination of the interests of the cluster participants; interaction between business, administrative structures and research organizations; and the involvement of related activities in the correction of the cluster development strategy (Saltykov, 2011).

Material and methods

The article uses the methods of scientific abstraction, analysis, synthesis, theoretical generalization, etc.

Results and discussion

On the basis of the conducted research, we consider it expedient to divide the clustering principles according to the levels of the structure of the national economy into local, regional and national ones. Local principles are characteristic of the formation and development of a separate cluster system (cluster). These include the above principles of geographical localization, tightness of economic ties, voluntary association of enterprises and organizations in clusters (including the principles of sustainability, dynamism, adherence to the critical mass of the cluster, independence of the participants of cluster systems, equality of participants, selection of partners and outsourcing), systemicity (synergy, self-development), resource provision, public-private partnership, etc.

Regional principles are characteristic of the formation and development of a clustering system in a region. We believe that regional principles point to the strategic directions of cluster development in the regions, which ensure the effectiveness of the processes of formation and development of the clusterization system. They are beyond the competence of the cluster system management bodies and can be applied by the relevant authorities within their competencies. These include the principle of the priority of stimulating innovation (support of research and innovation), support for small and medium enterprises, the programmatic approach to the phased solution of the tasks of building a cluster system, systemicity and the relationship with the strategic tasks of the development of related industries, autonomy and collegiality (self-management and self-organization: while forming and developing cluster systems, cluster management bodies are formed, but other members of the cluster systems are also involved in management), comprehensive quantitative accounting and cost estimation of all components of the economic potential of regions, and self-development on the basis of involving related economic activities in cluster systems.
Fig. 1. Principles of formation and development of clustering systems of the national economy. Note: summarized from (Gorjaeva, 2008; Pjatinkin and Bykova, 2008; Saltykov, 2011; Semenov and Bileha, 2012; Gerasimchuk and Smolich, 2014; Plahin, 2014; Lyfar, 2014; Lobanova, 2014).

National principles are in line with state policy in the field of clusterization; they are enshrined in legal norms or generalize the current legal rules of the state. National principles can also be derived from the practice of clustering the national economy. These include the principle of the unity of the economic space (ensured, as is known, by common economic legislation, the unity of the monetary system, the unity of the customs territory and the functioning of common infrastructure systems for the formation and development of the clusterization system), polycentrism (multipolarity of the formation and development of cluster systems in the national economy, indicating the presence of strong blocks of cluster systems that will determine the development of clusterization processes), coherence of the strategic priorities of regional development and of the socio-economic development of the country as a whole, legitimacy (lawfulness), priority (application of the cluster model in priority and promising types of economic activity), the principle of complexity (integrated support of clusters implementing innovative projects within the strategy of regional development or of the national economy), the principle of objectivity (the procedural and tactical nature of decision-making on the formation and development of clustering systems of the national economy, ensuring their fairness and impartiality), targeting (orientation of decisions on the formation and development of the clustering system towards those organizational and legal forms or social relations that correspond to the desired characteristics for the implementation of the national economy development strategy), and the principle of parity of financial and economic relations between state authorities and cluster management bodies, ensuring equal economic conditions of economic activity and equality of production benefits for all participants, etc. (Fig. 1).

Conclusions

Clustering, as a process of forming clusters and other network structures, requires compliance with certain principles. The principles of formation and development of clustering systems of the national economy can be divided into local, regional and national (Fig. 1). Local principles are characteristic of the formation and development of a separate cluster system (cluster); they include geographic localization, tightness of economic ties, systemicity, etc. Regional principles are characteristic of the formation and development of a clustering system in a region (priority of stimulating innovation activity, support of small and medium enterprises, etc.). National principles are in line with state policy in the field of clusterization; they are enshrined in legal norms or generalize the current legal rules of the state. Further research on the problem should carry out a modeling of the clusterization of the national economy.
Simple graph models of information spread in finite populations

We consider several classes of simple graphs as potential models for information diffusion in a structured population. These include biased cycles, dual circular flows, partial bipartite graphs and what we call 'single-link' graphs. In addition to fixation probabilities, we study structure parameters for these graphs, including eigenvalues of the Laplacian, conductances, communicability and expected hitting times. In several cases, values of these parameters are related, most strongly so for partial bipartite graphs. A measure of directional bias in cycles and circular flows arises from the non-zero eigenvalues of the antisymmetric part of the Laplacian, and another measure is found for cycles as the value of the transition probability for which hitting times going in either direction of the cycle are equal. A generalization of circular flow graphs is used to illustrate the possibility of tuning edge weights to match pre-specified values for graph parameters; in particular, we show that generalizations of circular flows can be tuned to have fixation probabilities equal to the Moran probability for a complete graph by tuning vertex temperature profiles. Finally, single-link graphs are introduced as an example of a graph involving a bottleneck in the connection between two components, and these are compared to the partial bipartite graphs.

A population is represented as a directed graph, each vertex corresponding to an individual population member, while edges are labelled with information describing the interaction between population members. In many cases, the edge labels are probabilities, so that the weight $w_{ij}$ of the edge that connects vertex i to vertex j describes the probability of an effect being propagated from vertex i to vertex j. Following this, an update procedure is defined describing the temporal dynamics. Here attention is restricted to discrete processes, but continuous models have also been explored [20]. Some of the useful update procedures are birth-death, death-birth, voter models and probabilistic voter models [21]. In this paper, birth-death updating is used. In cases of rumour spread, there are competing narratives, and birth-death processes provide a good model. The state space approach developed in Voorhees [22] and Voorhees & Murray [23] allows determination of all fixation probabilities for arbitrary initial conditions. The population is partitioned into two disjoint classes, depending on the presence or absence of a defining characteristic (e.g. a mutant gene, believing a rumour, being infected with a virus, etc.). Vertices are labelled 0 if this characteristic is not present and 1 if it is present. Thus, for a population of fixed size N, the state space is the set of $2^N$ binary vectors of length N, with the all-zero vector corresponding to extinction and the all-one vector to fixation. A state vector $v = (v_1, \ldots, v_N)$ is also a binary number. Taking v as the denary form of this number, $v = \sum_{k=1}^{N} v_k 2^{N-k}$, and denoting the probability that the state denoted v will go to fixation by $x_v$, a set of master equations, equations (2.1), was derived [22], yielding fixation probabilities for all initial states.
In these equations, W is the edge weight matrix of the population graph. This approach yields interesting results [22,23] but is limited in that equations (2.1) are a set of $2^N - 2$ linear equations in an equal number of unknowns, so that exact solution for populations having more than a few members is not possible unless strong symmetry constraints are imposed. A graph is fixation equivalent to a complete graph if its single vertex fixation probability is given by the Moran expression of equation (2.2). Lieberman et al. [4] proved that a graph with a probabilistic edge weight matrix is fixation equivalent to a complete graph if and only if the edge weight matrix is doubly stochastic. Broom & Rychtàr [25] derived the single vertex fixation probability for the n-star graph and also provided a means to compute this probability for a line graph. Zhang et al. [26] derived the k-vertex fixation probability for star graphs, and Voorhees & Murray [23] give the single vertex fixation probability for the complete bipartite graph $K_{s,n}$ together with summation formulae allowing computation of k-vertex fixation probabilities for these graphs. The star and bipartite results are reproduced in simple and compact form in Monk et al. [27] using a martingale formalism, while results reported in Adlam & Nowak [28] indicate that the Moran formula of equation (2.2) may be, at least asymptotically, universal in the family of random graphs.

In this paper, several classes of simple graphs are considered with respect to fixation probability and characteristic measures of graph structure. Because applied interest is in large populations, a good deal of research is focused on random graphs, complete graphs and small world graphs with vertex degrees specified in terms of global averages and treated statistically [29][30][31][32]. Our goal is more modest: to provide results in some very simple cases that may point to more general applications, and to examine the extent to which structural parameters of graphs may provide information related to fixation probabilities and other questions of interest. Graph parameters considered are eigenvalues of the graph Laplacian, subset conductances, graph communicability (the Estrada parameter) and measures of vertex centralities, and random walk hitting times. The choice of graphs for study has been both pragmatic and strategic: we have chosen graphs that are simple to analyse, yet exhibit behaviours suggestive of more general cases. The graphs considered are biased cycles, dual circular flows, partial bipartite graphs and single-link coupled graphs. While cycles are perhaps the simplest graphs to consider, biased cycles display interesting properties, e.g. providing a clear case of asymmetry in the graph Laplacian that shows an analogy to rotation/angular momentum. In addition, results obtained for cycles may be useful for the analysis of more complex graphs in which interaction cycles can be identified, the simplest case being dual circular flows, which can be thought of as composed of an ensemble of cycles. Dual circular flows, in turn, constitute a large family of graphs that includes star graphs [4,22,25], funnel graphs [4,22,23], cascades [23] and layered networks [33]. In addition, we use them to provide an example of how the edge weights of a graph can be tuned to match specified global properties such as a temperature profile.
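The two facts just cited are easy to check numerically. As a hedged illustration (our own sketch, not code from the paper), the Moran expression of equation (2.2) is the classical formula $\rho = (1 - 1/r)/(1 - 1/r^N)$, and the Lieberman et al. condition amounts to testing the row and column sums of W:

```python
import numpy as np

def moran_probability(r: float, N: int) -> float:
    """Moran single-mutant fixation probability for a well-mixed
    population of size N and mutant fitness r (the classical
    expression referred to as equation (2.2) in the text)."""
    if r == 1.0:
        return 1.0 / N          # neutral limit
    return (1 - 1 / r) / (1 - 1 / r ** N)

def is_doubly_stochastic(W: np.ndarray, tol: float = 1e-9) -> bool:
    """Isothermal/Lieberman et al. [4] check: a probabilistic edge
    weight matrix is fixation equivalent to a complete graph iff all
    row sums and all column sums equal one."""
    return (np.allclose(W.sum(axis=0), 1.0, atol=tol)
            and np.allclose(W.sum(axis=1), 1.0, atol=tol))

# Example: a balanced cycle is doubly stochastic, hence Moran-equivalent.
N = 5
W = np.zeros((N, N))
for i in range(N):
    W[i, (i + 1) % N] = 0.5
    W[i, (i - 1) % N] = 0.5
print(is_doubly_stochastic(W), moran_probability(2.0, N))
```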
Finally, the partial bipartite and single-link graphs form the two extremes of a class of graphs whose vertex set is the union of two components $V_s$ and $V_n$, with linkages between these components through subsets $S_k \subseteq V_s$ and $S_m \subseteq V_n$ having, respectively, k and m elements. In terms of application, this family of graphs can provide models for polarized populations in which communication between different subpopulations takes place through subsets of representatives. While illustrated here for only small numbers of vertices, many of our results hold for graphs of arbitrary size. In particular, theorems (4.1), (4.3), (5.1) and (6.1), giving Laplacian eigenvalues for cycles, dual circular flows and partial bipartite graphs; equation (7.1), giving the characteristic polynomial for the Laplacian of a single-link graph; conjectures (6.2) and (6.3), on hitting times and communicability for partial bipartite graphs; and the formulae derived for conductances are valid for any number of vertices.

Partial bipartite graphs are of particular interest. These graphs have only three non-zero Laplacian eigenvalues, and which of these is smallest or largest depends on the values of the connection parameters 1 − p and 1 − q between the two subsets of vertices. Equality of all three eigenvalues occurs at the values of p and q for which the conductances of the two graph components are equal and, although no proof is available, this appears to correspond to the condition for equality of single vertex fixation probabilities. Further, given conjecture (6.3), these parameter values give the minimum value of the communicability and are the condition for equal probability measures on paths going in either direction between graph components.

The notation used is that vertices are labelled with lower case indices ranging over the number of vertices. Thus, $v_k$ labels the kth vertex of a graph. The single vertex fixation probability for the kth vertex is labelled by $x_{2^{k-1}}$, with $2^{k-1}$ the base-10 form of the binary number associated with a single one at the kth vertex.

Measures of graph characteristics

This section briefly describes the structural measures for graphs that are considered in this paper.

Eigenvalues of the Laplacian. The graph Laplacian of a graph G with edge weight matrix W is defined in Barbosa et al. [34] as $\mathcal{L} = \Phi^{1/2}(I - W)\Phi^{-1/2}$, where $\Phi$ is a diagonal matrix with entries given by the solution of $\phi \cdot W = \phi$. The smallest eigenvalue of $\mathcal{L}$ is always 0 and the eigen-spectrum satisfies $\lambda_0 = 0 \le \lambda_1 \le \cdots \le \lambda_{N-1} \le 2$. The number of zero eigenvalues gives the number of disconnected components of the graph. Thus, if $\lambda_1$ is non-zero the graph is connected, and the value of $\lambda_1$ gives a measure of the difficulty involved in cutting the graph into disconnected parts.

Subset conductances. For a graph (G, W) with vertex set V, the conductance, also known as the Cheeger constant [35], is defined as the minimum over vertex subsets S of the subset conductance (3.1). For a subset of vertices S, with N = |V|, the conductance of S is

$$C(S) = \frac{N}{|S|\,|V \setminus S|} \sum_{i \in S,\; j \notin S} w_{ij}. \qquad (3.2)$$

The conductance of S is inversely related to the mixing time for a Markov process starting at a vertex in S.

Communicability. This concept was introduced in 2008 by Estrada & Hatano [36] in order to describe situations 'in which a perturbation on one node of [a] network is "felt" by the rest of the nodes with different intensities' [37, p. 92]. The communicability between vertices i and j of a graph is defined as a weighted sum of all paths starting at vertex i and ending at vertex j, with a weighting that favours shorter paths.
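To make the preceding definitions concrete, here is a minimal Python sketch (our own; the helper names are hypothetical) that builds the Barbosa et al. Laplacian from an edge weight matrix and evaluates the subset conductance of equation (3.2), checked against the N/k(N − k) value quoted below for a block of k vertices in a cycle:

```python
import numpy as np

def laplacian(W: np.ndarray) -> np.ndarray:
    """Graph Laplacian L = Phi^{1/2} (I - W) Phi^{-1/2} [34], where
    phi is the stationary row vector solving phi . W = phi."""
    N = W.shape[0]
    vals, vecs = np.linalg.eig(W.T)          # left eigenvector of W
    phi = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
    phi = phi / phi.sum()
    Ph = np.diag(np.sqrt(phi))
    Phinv = np.diag(1.0 / np.sqrt(phi))
    return Ph @ (np.eye(N) - W) @ Phinv

def subset_conductance(W: np.ndarray, S: set) -> float:
    """Subset conductance of equation (3.2):
    C(S) = N / (|S| |V \\ S|) * sum_{i in S, j not in S} w_ij."""
    N = W.shape[0]
    comp = set(range(N)) - S
    flow = sum(W[i, j] for i in S for j in comp)
    return N * flow / (len(S) * len(comp))

# Balanced 5-cycle: a block of k = 2 vertices gives N/(k(N-k)) = 5/6.
N = 5
W = np.zeros((N, N))
for i in range(N):
    W[i, (i + 1) % N] = W[i, (i - 1) % N] = 0.5
print(subset_conductance(W, {0, 1}))                 # 0.8333...
print(np.sort(np.linalg.eigvals(laplacian(W)).real).round(3))
```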
This allows the definition of a variety of communicability functions with the general form $P = \sum_{k=0}^{\infty} c_k A^k$, where A is the graph adjacency matrix and the coefficients $c_k$ are chosen such that the series converges, gives greater weight to shorter paths (smaller k values), and yields real, non-negative values for the matrix elements of P. As sums over loops of all lengths starting and ending at a vertex, the diagonal elements of P represent a measure of vertex centrality [37]. One choice for the coefficients is $c_k = 1/k!$, yielding $P = e^A$, and the communicability is defined as $\mathrm{Tr}(e^A/N)$, i.e. the average vertex centrality. Since the cases considered in this paper involve probabilistically weighted edges, the matrix A is replaced by the edge weight matrix W. Hence the matrix function used is $V = e^W$, and the i, j element of this matrix is the weighted sum of probabilities for paths of length k starting at i and ending at j, for $0 \le k \le \infty$. We still refer to $\mathrm{Tr}(e^W/N)$ as communicability, however. In most cases, $e^W$ cannot be computed directly and the sixth-order approximation

$$e^W \approx I + W + \tfrac{1}{2}W^2 + \tfrac{1}{6}W^3 + \tfrac{1}{24}W^4 + \tfrac{1}{120}W^5 + \tfrac{1}{720}W^6$$

is used.

Expected hitting times. Following Kemeny & Snell [38], the matrix M of expected hitting times for a graph (G, W) is computed using the fundamental matrix of the graph, defined by $Z = [I - (W - A)]^{-1}$, where W is the edge weight matrix and A is the matrix with rows equal to the stationary probability of a random walk on G. If J is the matrix of all ones and $Z^*$ is the diagonal matrix with $Z^*_{ii} = Z_{ii}$, then [38]

$$M = (I - Z + J Z^*)\,D,$$

where D is the diagonal matrix whose jth entry is the reciprocal of the jth stationary probability; the $h_{ij}$ element of M is the expected value of the random variable describing the number of iterations for a random walk that starts at vertex i to first reach vertex j.

Biased cycles

This section begins with consideration of biased cycles, that is, cycles in which there is a probabilistically preferred direction. Figure 1 shows the general form of a biased cycle. If the parameter p equals one-half it becomes a balanced cycle, while p = 1 yields a clockwise cycle and p = 0 yields a counter-clockwise cycle. The weight matrix W is doubly stochastic; hence, by the isothermal theorem, the single vertex fixation probability equals the Moran probability, although the time to fixation will differ [39]. From equation (3.2), it is easy to see that the conductance of any single connected block of k vertices in a length-N biased cycle equals $N/k(N-k)$. Also, the matrix $e^W$ is circulant: for N = 5, $e^W = \mathrm{circ}(V_{11}, V_{12}, V_{13}, V_{14}, V_{15})$. Since the diagonal elements of $W^k$ are zero if k is odd and symmetric in p and 1 − p if k is even, $\mathrm{Tr}(e^W)$ will be symmetric in p and 1 − p with maximum value at p = 1/2. Figure 2 shows the entries of $e^W/5$ for N = 5. This figure is symmetric about p = 1/2: $V_{ii} = V_{jj}$ for all i, j, hence $\mathrm{Tr}(e^W/5) = V_{11}$, and $V_{12} = V_{15}$, $V_{13} = V_{14}$ at p = 1/2. The diagonal elements $V_{11}$ are substantially greater than the off-diagonal elements, with the exceptions $V_{15} = V_{11}$ at p = 0 and $V_{12} = V_{11}$ at p = 1. Since the steady state for the matrix W is homogeneous, the Laplacian matrix for a biased cycle is $\mathcal{L} = I - W$. Surprisingly, the characteristic polynomial for $\mathcal{L}$ is rather complicated; theorem 4.1 gives it for the biased cycle on N vertices, with a form depending on the residue of N modulo 4 (for $N \equiv 2 \pmod 4$ it involves the term $[p^{N/2} + (1-p)^{N/2}]^2$; equation (4.2)), together with relations that the coefficients satisfy.

Corollary 4.2. If p = 0 or p = 1, equations (4.2) reduce to $(\lambda - 1)^N = 1$ (N even) and $(\lambda - 1)^N = -1$ (N odd), and the eigenvalues of $\mathcal{L}$ are given in terms of the Nth roots of unity: $\lambda = 1 + e^{2\pi k i/N}$ or $\lambda = 1 - e^{2\pi k i/N}$ as N is even or odd, respectively.
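The biased-cycle quantities just described can be reproduced in a few lines. A sketch under stated assumptions (our own code; we follow the standard Kemeny & Snell convention in which the limiting matrix has identical rows equal to the stationary distribution, and the off-diagonal hitting times are $h_{ij} = (z_{jj} - z_{ij})/a_j$):

```python
import numpy as np

def biased_cycle(N: int, p: float) -> np.ndarray:
    """Edge weight matrix of a biased N-cycle: clockwise step with
    probability p, counter-clockwise step with probability 1 - p."""
    W = np.zeros((N, N))
    for i in range(N):
        W[i, (i + 1) % N] = p
        W[i, (i - 1) % N] = 1.0 - p
    return W

def communicability(W: np.ndarray) -> float:
    """Tr(e^W / N) via the sixth-order series used in the text."""
    N = W.shape[0]
    term, expW = np.eye(N), np.eye(N)
    for k in range(1, 7):
        term = term @ W / k        # accumulates W^k / k!
        expW = expW + term
    return np.trace(expW) / N

def hitting_times(W: np.ndarray) -> np.ndarray:
    """Expected hitting times from the fundamental matrix
    Z = [I - (W - A)]^{-1} of Kemeny & Snell [38]."""
    N = W.shape[0]
    vals, vecs = np.linalg.eig(W.T)
    a = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
    a = a / a.sum()
    A = np.tile(a, (N, 1))                 # rows equal to stationary a
    Z = np.linalg.inv(np.eye(N) - (W - A))
    M = (np.diag(Z)[None, :] - Z) / a[None, :]
    np.fill_diagonal(M, 0.0)
    return M

W = biased_cycle(5, 0.7)
print(round(communicability(W), 4))
print(hitting_times(W).round(3))           # directional bias shows as asymmetry
```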
If 1 − 2p is negative there is a clockwise bias in the cycle, whereas if it is positive there is a counter-clockwise bias. This bias shows up clearly in the hitting times. Figure 3a–c shows hitting times for the biased cycle as functions of p for one-, two- and three-step transitions in the counter-clockwise direction. The shape of these curves is related to the possibility of a k-step transition occurring either as a counter-clockwise transition of k steps or a clockwise transition of N − k steps. Figure 3d shows the one-, two-, three- and four-step hitting times in the counter-clockwise direction for the N = 9 biased cycle. The value of p at which the peak hitting time occurs gives a measure of the bias of the cycle. For example, for N = 5 the single-step transition in the counter-clockwise direction is favoured for p < 0.68806, whereas for p > 0.68806 the four-step transition in the clockwise direction is favoured. Because of the symmetry between p and 1 − p, the peak value of p starting from the symmetrically placed vertex will be one minus the peak value starting from a given vertex. Referring to figure 1, if vertex $v_1$ is located at the top of a five-cycle, then the p value of 0.68806 characterizes vertex $v_2$, while vertex $v_5$ will be characterized by a peak p-value of 0.31194. Likewise, the two- and three-step graphs for N = 5 are mirror images reflected about p = 1/2.

Dual circular flows

In Voorhees [22] and Voorhees & Murray [23], a class of graphs called circular flows is defined, and in Voorhees & Murray [23] a number of results concerning circular flows are given, including entropy computations and a demonstration that this family of graphs contains many cases in which the single vertex fixation probability is enhanced with respect to the Moran probability only for limited values of the fitness parameter r ≥ 1. In these cases, graph edges were directed in a single direction and the graphs were viewed in terms of probability flow through a series of layers. Here this is generalized to dual circular flows, that is, circular flows in which directed edges exist in both directions. For simplicity, only the case of uniform right- and left-directed edge weights is considered, although this condition will be dropped when we consider tuning a graph. A length k + 1 dual circular flow is illustrated schematically in figure 4. The state space for this system is the set of vectors $\{(m_0, m_1, \ldots, m_k) \mid 0 \le m_i \le n_i\}$, where $m_i$ indicates the number of mutants in the ith subset of the graph, which contains $n_i$ vertices. Using the notation of Voorhees [22], $x_{\iota(m)}$ indicates the fixation probability of the state m, whereas $x_{(m, m_i \pm 1)}$ represents the fixation probability for the state arising from m in which the ith population of mutants has increased (+) or decreased (−) by 1. For a dual circular flow, equation (2.1) reduces accordingly. As an example, consider the (1, 2, 3) dual circular flow: the single vertex fixation probability for the level-0 vertex is $x_1$, for each of the two vertices at level 1 it is $x_2$, and for each of the three vertices at level 2 it is $x_8$. The overall single vertex fixation probability is $\rho = (x_1 + 2x_2 + 3x_8)/6$, and the Moran probability is denoted $\rho_m$. Figure 5a shows the transition between fixation probabilities for cascade and funnel flows as p varies from 0 to 1. Figure 5b–d shows the differences between single vertex fixation probabilities from different levels (0, 1, 2). Figure 5e shows cross-sections of figure 5a for specified values of r, whereas figure 5f–h shows plots of $x_1 - \rho_m$, $x_2 - \rho_m$ and $x_8 - \rho_m$, respectively, as functions of r for specified p-values.
Examination of these figures indicates that starting at the single level-zero vertex always suppresses selection and starting at one of the three level-two vertices always enhances selection, relative to the Moran value, while starting at one of the two level-one vertices will enhance or suppress selection depending on the values of p and r (for example, for p = 0.4 selection is enhanced for r > 1). At p = 0.25, $x_2 > x_8$ for r < 4.0471, while for r values greater than this $x_8 > x_2$. Likewise, at p = 0.9, $x_2 < x_1$ for r < 2.9803, and the inequality is reversed for r greater than this value. Thus, the two vertices at level 1 exhibit the least stability in terms of fixation behaviour. This example illustrates the complex behaviour that appears even in simple dual circular flow graphs. In Voorhees & Murray [23], a number of cases are considered showing that the single vertex fixation probability for such graphs can display both enhancement and suppression of selection, depending on the particular value of the fitness r. For example, in the (1, 2, 3)-funnel graph (p = 1), selection is only enhanced for 1 < r < 5.3695, whereas for the (1, 2, 3)-cascade (p = 0), it is only enhanced for 1 < r < 2.0304.

The following theorem (theorem 5.1) relates the characteristic polynomials for the Laplacians of all dual circular flow graphs to that of a biased cycle of the same length.

Outline of proof. Beginning with the flow (1, 1, 2), the claim is demonstrated for this case. This is extended by induction to the flows (1, 1, n) and (1, . . . , 1, n). With this established, a further induction demonstrates the result for flows (1, . . . , 1, n_{k−1}, n_k). Proceeding in this way, an eventual proof is obtained. One point to note is that all eigenvalues of the antisymmetric part of the Laplacian again involve a factor of 1 − 2p, providing a measure of directional bias in the circular flow. By equation (3.2) and inspection of figure 4, it is obvious that the conductance of level s in a dual circular flow equals $N/(N - n_s)$.

Computation of the communicability and the associated vertex centralities yields results like those indicated in figure 6, which shows these plotted as functions of p for the dual flows (1, 2, 3), (2, 3, 5) and (1, 2, 3, 4), with $V_i$ indicating the centrality of a vertex at level i. Inspection of this figure indicates that the centrality of a vertex in a given layer is inversely related to the number of vertices in that layer; multiplying vertex centrality by the number of vertices in a layer shows that the centrality of a layer as a whole is proportional to the number of its vertices, so layers with more vertices have greater centrality. This can be understood in terms of the topological structure of these graphs: there are no edges connecting vertices within a layer. This also implies that the hitting times between layers will be the same as for a biased cycle of the same length, although hitting times between specific vertices will be substantially longer.

Dual circular flows also provide an example of the way that edge coefficients in a graph can be tuned to match desired global features. To address this, the graph of figure 4 is generalized, allowing different weight coefficients between levels, as indicated in figure 7. The target parameter chosen for tuning is the vertex temperature [4] at each level of the graph, and the inter-level edge weights $p_i$ will be selected to produce specified temperature profiles. In particular, conditions will be determined for which the single vertex fixation probability matches that of a complete graph.
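Vertex temperature, in the sense of Lieberman et al. [4], is the sum of a vertex's incoming edge weights, and a uniform temperature profile on a row-stochastic weight matrix is exactly the doubly stochastic condition of the isothermal theorem. A minimal sketch of this bookkeeping (our own illustration; function names are hypothetical):

```python
import numpy as np

def temperatures(W: np.ndarray) -> np.ndarray:
    """Vertex temperatures in the sense of Lieberman et al. [4]:
    T_j = sum_i w_ij, the total weight of edges entering vertex j."""
    return W.sum(axis=0)

def matches_profile(W: np.ndarray, t: np.ndarray, tol=1e-9) -> bool:
    """Check whether the weight matrix realizes a target profile t."""
    return np.allclose(temperatures(W), t, atol=tol)

# A balanced 4-cycle is isothermal (uniform profile t_j = 1), hence
# doubly stochastic and fixation equivalent to a complete graph.
N = 4
W = np.zeros((N, N))
for i in range(N):
    W[i, (i + 1) % N] = W[i, (i - 1) % N] = 0.5
print(temperatures(W))                     # [1. 1. 1. 1.]
print(matches_profile(W, np.ones(N)))      # True

# A 3-star (hub at vertex 0) is row stochastic but far from isothermal.
S = np.zeros((4, 4))
S[0, 1:] = 1 / 3                           # hub spreads weight to leaves
S[1:, 0] = 1.0                             # leaves point to the hub
print(temperatures(S))                     # hot hub, cold leaves
```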
Referring to figure 7, the temperature of a vertex at level i is the sum of the weights of its incoming edges. Hence, if a temperature profile $t = (t_0, \ldots, t_k)$ is to be matched, a corresponding set of equations, (5.3), relating the $t_i$, the level sizes $n_i$ and the inter-level weights $p_i$ must be satisfied. Two examples will illustrate the factors involved in doing this.

In the first example, the goal is to adjust the connection coefficients $p_i$ so as to produce a graph with single vertex fixation probability equal to the Moran probability, i.e. the uniform profile $t_i = 1$. Setting $p_2 = p$ in equation (5.3) and solving for $p_0$ and $p_1$ yields expressions in p. Since $p_0$ and $p_1$ are probabilities they must lie in the range [0, 1], and this restricts the admissible values of p. If $(n_0, n_1, n_2) = (2, 3, 4)$, on the other hand, there is a larger range of possible edge weights: the conditions that must be satisfied are $1/2 \le p_2 \le 3/4$ with $p_0 = 2p_2 - 1/2$ and $p_1 = 2(2p_2 - 1)$.

If the number of levels in the graph is even (i.e. k is odd), an additional constraint appears: equations (5.3) separate into two sets of equations, with the left sides of each set summing separately to zero. This yields conditions relating the $n_i$ and $t_i$; for example, if k = 3 and $t_i = 1$ for all i, this requires $n_0 + n_2 = n_1 + n_3$. In addition, each set of equations is solved in terms of a single parameter, so there are two free parameters in the solution. For example, in the general k = 5 case, equations (5.8a, b) have solutions in terms of $p_4$ and $p_5$, respectively, subject to the constraints

$$n_0 + n_2 + n_4 = t_1 n_1 + t_3 n_3 + t_5 n_5, \qquad n_1 + n_3 + n_5 = t_0 n_0 + t_2 n_2 + t_4 n_4. \qquad (5.8c)$$

For a solution to be valid, all transition probabilities must lie in [0, 1], imposing additional constraints, while the values of $p_4$, $p_5$ are also constrained to lie in [0, 1]. If edge weights exist satisfying the temperature profile $t_i = 1$, $0 \le i \le k$, then the graph with these edge weights will have the Moran fixation probability. The next example, however, demonstrates that matching a general temperature profile does not always lead to matching fixation probabilities.

Example 5.3. Consider the circular flow $(n_0, n_1, n_2)$ with temperature profile (2, 3/2, 1/3). Any edge weights satisfying $p_0 = 3p_2 - 2$, $p_1 = (3p_2 - 1)/2$ will match this profile. If $p_2 = 1$ then $p_0 = p_1 = 1$ as well; this case is considered in Voorhees & Murray [23]. But all values of $p_2$ in the interval [2/3, 1] yield valid solutions matching the given profile. Figure 8 shows a plot of the single vertex fixation probability minus the Moran probability for this range of $p_2$, illustrating that very different fixation probabilities result.

Partial bipartite graphs

The vertex set for a complete bipartite graph $K_{s,n}$ consists of two subsets $V_s$ and $V_n$ containing s and n vertices, respectively: every vertex in $V_s$ has a link to every vertex in $V_n$ and vice versa, but no pairs of vertices in $V_s$ are linked, nor are any pairs of vertices in $V_n$. Furthermore, if edges are directed and weighted, then all weights for edges from $V_s$ to $V_n$ are 1/n and all weights for edges from $V_n$ to $V_s$ are 1/s. This family of graphs includes the star graphs (s = 1). In Voorhees & Murray [23], the birth-death single vertex fixation probability for the complete bipartite graph $K_{s,n}$ is given in closed form. Here we consider the more general case of partial bipartite graphs, as illustrated in figure 9. This figure shows two subsets $V_s$ and $V_n$ of vertices with $|V_s| = s$ and $|V_n| = n$. Within each subset, every vertex connects equally to every other vertex, with probabilities p/(s − 1) and q/(n − 1), respectively.
In addition, every vertex in $V_s$ connects to every vertex of $V_n$ with probability (1 − p)/n, and every vertex of $V_n$ connects to every vertex of $V_s$ with probability (1 − q)/s. The edge weight matrix for this graph has block form

$$W = \begin{pmatrix} \frac{p}{s-1}(J_{s,s} - I_s) & \frac{1-p}{n} J_{s,n} \\[4pt] \frac{1-q}{s} J_{n,s} & \frac{q}{n-1}(J_{n,n} - I_n) \end{pmatrix},$$

where $J_{i,j}$ is the i by j matrix of all ones and $I_j$ is the j by j identity matrix. The normalized steady state has s entries proportional to n(1 − q) and n entries proportional to s(1 − p). The Laplacian matrix and its antisymmetric part have corresponding block forms (figure 9 shows a schematic of a partial bipartite graph). Proof of the next theorem follows directly from the form of these matrices. The total probability flow from $V_s$ to $V_n$ equals s(1 − p), while the total flow from $V_n$ to $V_s$ is n(1 − q). In the bipartite case, the subsets of interest are $V_s$ and $V_n$. The right and left conductances of the graph are thus defined as

$$C(V_s) = \frac{s+n}{sn} \sum_{i \in V_s,\, j \in V_n} w_{ij} \quad \text{and} \quad C(V_n) = \frac{s+n}{sn} \sum_{i \in V_n,\, j \in V_s} w_{ij}.$$

The conductance of a set of vertices S is inversely related to the mixing time of a Markov process starting from a vertex in S. Thus, $C(V_s) \ge C(V_n)$ implies that the mixing time starting from a vertex in $V_s$ will be less than that starting from a vertex in $V_n$. The condition for $C(V_s) \ge C(V_n)$ is

$$s(1-p) \ge n(1-q). \qquad (6.7)$$

Figure 10 shows the single vertex fixation probability for the (2, 5) partial bipartite graph for fixed values of q with p and r as variables, and for r = 2 with p and q as variables. Although figures 10c and 10d appear quite similar, there is a significant difference: in figure 10d the p = 0 line is at 0, whereas in figure 10c this is not quite the case, as indicated in the detail of figure 10e. Figure 11 shows cross-sections of figure 10e. While the difference from the Moran probability is extremely small, these figures show an unusual effect in which selection is enhanced for fitness values less than one, suppressed for a limited range of values greater than one and then enhanced again. The graphs of figure 11 illustrate a transition between distinct regimes of behaviour. In the first, selection is suppressed for 0 < r < 1 and enhanced for r > 1 (pattern A, table 1). In the second, selection is enhanced for 0 < r < 1, and there is a value $r_{max}$, dependent on p, such that selection is suppressed for fitness in the range $[1, r_{max}]$ and enhanced for fitness greater than $r_{max}$ (pattern C, table 1). In the third, selection is enhanced for 0 < r < 1 and suppressed for r > 1 (pattern D, table 1). In the fourth, selection is suppressed for 0 < r < 1, enhanced for fitness in the range $[1, r_{max}]$ and suppressed for $r > r_{max}$ (pattern B, table 1). Table 1 shows comparative q- and p-values illustrating this effect, which will be discussed further in §7. The q = 0.6667, p = 0.118 case is an anomaly in that selection is enhanced for the two regions 0 < r < 1 and 42.2788 < r < 175.4007 and suppressed for larger r (the enhancement is of the order of $10^{-11}$ in the second region).

Setting the single vertex fixation probabilities from the two components equal leads to polynomials in products of p and q with coefficients of up to 20 digits. Nevertheless, a simple solution exists and is the same for all three values of r considered: for the (3, 4) case, q = (1 + 3p)/4. This is exactly the boundary of the condition based on conductances given in equation (6.7). Likewise, for the (2, 5) case the $x_1 - x_4 = 0$ line is q = (3 + 2p)/5, which is again the same condition as given by equation (6.7). This suggests that equation (6.7) provides a sharp differentiation between fixation probabilities for vertices starting in one or the other of the two graph components $V_s$ and $V_n$. If the condition of equation (6.7) is satisfied, the subset $V_s$ will be said to be evangelical relative to $V_n$, while if this condition is not satisfied $V_s$ is isolated relative to $V_n$.
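A hedged numerical sketch (our own; it assumes the block form of W given above) builds the (3, 4) partial bipartite graph at the parameter point $(p, q) = ((s-1)/(n+s-1),\, (n-1)/(n+s-1))$ and checks that the conductance condition of equation (6.7) holds with equality there and that the three non-zero Laplacian eigenvalues coincide:

```python
import numpy as np

def partial_bipartite(s: int, n: int, p: float, q: float) -> np.ndarray:
    """Edge weight matrix of an (s, n) partial bipartite graph:
    within-V_s weights p/(s-1), within-V_n weights q/(n-1),
    V_s -> V_n weights (1-p)/n, V_n -> V_s weights (1-q)/s."""
    W = np.zeros((s + n, s + n))
    W[:s, :s] = (p / (s - 1)) * (np.ones((s, s)) - np.eye(s))
    W[s:, s:] = (q / (n - 1)) * (np.ones((n, n)) - np.eye(n))
    W[:s, s:] = (1 - p) / n
    W[s:, :s] = (1 - q) / s
    return W

s, n = 3, 4
# Eigenvalue-equality point (p, q) = (s-1, n-1)/(n+s-1) = (1/3, 1/2).
p, q = (s - 1) / (n + s - 1), (n - 1) / (n + s - 1)
W = partial_bipartite(s, n, p, q)

# Conductance equality: s(1-p) = n(1-q), the boundary of (6.7).
print(s * (1 - p), n * (1 - q))            # both equal 2.0

# Steady state has s entries n(1-q) and n entries s(1-p); the three
# non-zero Laplacian eigenvalues should all equal (n+s)/(n+s-1) here.
pi = np.concatenate([np.full(s, n * (1 - q)), np.full(n, s * (1 - p))])
pi = pi / pi.sum()
Ph, Pinv = np.diag(np.sqrt(pi)), np.diag(1 / np.sqrt(pi))
L = Ph @ (np.eye(s + n) - W) @ Pinv
print(np.sort(np.linalg.eigvals(L).real.round(6)))   # 0 and 7/6 (x6)
```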
In the extreme case p = 1, we will say that $V_s$ is quarantined. Estimates of hitting times were computed for the (s, n) partial bipartite graph for a number of examples, and the form of these cases suggests the following conjecture.

Conjecture 6.2. Let G be an (s, n) partial bipartite graph, with h(s, s) and h(n, n) the respective hitting times between pairs of vertices in $V_s$ and $V_n$, h(s, n) the hitting time starting from a vertex in $V_s$ and ending at one in $V_n$, and h(n, s) the hitting time starting from a vertex in $V_n$ and ending at one in $V_s$. Then these hitting times are given by closed-form expressions in s, n, p and q (equations (6.8) and (6.9)); if (s, n) = (3, 4), equation (6.9) becomes $q \ge (1 + p)/(5 + p)$.

Note that the hitting times h(s, s) and h(n, n) both involve a factor of 2 − p − q, which is an eigenvalue of the Laplacian. If p and q are sufficiently small, 2 − p − q is the largest eigenvalue. From equation (6.5), 'sufficiently small' requires

$$\frac{p}{s-1} \le 1 - p - q \quad \text{and} \quad \frac{q}{n-1} \le 1 - p - q. \qquad (6.10)$$

In the limiting case of p = q = 1, the eigenvalue 2 − p − q becomes zero and the graph separates into two disconnected components. Care is required, however, to determine how h(s, s) and h(n, n) behave in this limit, since the factors involving 1 − p and 1 − q give results that depend on how the limit (p, q) → (1, 1) is approached. For example, suppose that $p = q^k$. Then $2 - p - q = 2 - q - q^k = (1 - q)(2 + q + q^2 + \cdots + q^{k-1})$, and at q = 1, h(s, s) = (1 + k)(s − 1) and likewise h(n, n) = (1 + k)(n − 1). These differ from the hitting times for size-s and size-n complete graphs, which are s − 1 and n − 1, respectively. The way to deal with this in computing h(s, s) is to set p = 1 before going to the limit (or, in computing h(n, n), setting q = 1). Otherwise there is always the possibility of transitions between $V_s$ and $V_n$ giving additional contributions to hitting times.

Conjecture 6.3. Let G be an (s, n) partial bipartite graph, $V_{ss}$ and $V_{nn}$ the diagonal elements of $e^W$ corresponding to vertices in $V_s$ and $V_n$, with $V_{sn}$ and $V_{ns}$ referring to off-block-diagonal elements. Then these elements are given by closed-form expressions (equation (6.11)). In addition,

$$\mathrm{Tr}(e^W) = e^1 + e^{p+q-2} + (n-1)\,e^{-(n-1+q)/(n-1)} + (s-1)\,e^{-(s-1+p)/(s-1)}. \qquad (6.12)$$

Setting the partial derivatives of $\mathrm{Tr}(e^W)$ with respect to p and q to zero and solving each for q yields two curves (equations (6.13)). Setting these equal indicates that the minimum value of $\mathrm{Tr}(e^W)$ occurs at

$$(p, q) = \frac{1}{n+s-1}\,(s-1,\; n-1), \qquad (6.14)$$

which is the condition for eigenvalue equality in equation (6.10). Figure 14 shows plots of $V_{ss}$ and $V_{nn}$, $V_{sn}$ and $V_{ns}$, and $\mathrm{Tr}(e^W)$ for (s, n) = (3, 4). In figure 14a, $V_{33}$ is lowest at p = 0, q = 1, and in figure 14b, $V_{sn}$ is zero at p = 1, whereas $V_{ns}$ is zero at q = 1. The minimum value in figure 14c occurs at (p, q) = (1/3, 1/2), which coincides with the point at which all non-zero eigenvalues of $\mathcal{L}$ are equal. The line of equality in figure 14a is difficult to express but can be approximated as the straight line q = 1.0923827p − 0.3800648; thus, for p > (q + 0.3800648)/1.0923827, $V_{ss}$ will be larger than $V_{nn}$. In figure 14b, the line of equality can be obtained similarly.

Single-link graphs

In this section, we consider graphs consisting of two components with linkage between components occurring only through a single vertex in each component, as illustrated in figure 15. These graphs and the partial bipartite graphs are the extreme cases for graphs having two components with linkages occurring only between subsets of each component. Here the component $V_s$ contains s vertices, one of which is linked to one of the n vertices in $V_n$.
Each non-linked vertex in $V_s$ connects to every other vertex in this component with probability 1/(s − 1), while the linking vertex connects to all other vertices in $V_s$ with probability p/(s − 1) and connects to the linking vertex in $V_n$ with probability 1 − p. Likewise, all non-linked vertices in $V_n$ connect to every other vertex in this component with probability 1/(n − 1), while the linking vertex connects to all other vertices in $V_n$ with probability q/(n − 1) and connects to the linking vertex in $V_s$ with probability 1 − q. Single vertex fixation probabilities are suppressed relative to the Moran probability for the cases considered, as shown in figure 16 for (s, n) = (3, 4) with r = 5/4 and r = 2. This figure also shows comparisons of fixation probabilities for the possible initial mutant vertices for r = 5/4 and 2. Here $x_1$ is the fixation probability for a non-linked vertex in $V_3$, $x_4$ is the fixation probability for the linking vertex in $V_3$, $x_8$ is the fixation probability for the linking vertex in $V_4$, and $x_{16}$ is the fixation probability for a non-linked vertex in $V_4$. In figure 16c,d, if p = 0 the $x_{16}$ probability is zero and the $x_1$ probability is the Moran probability for s − 1 vertices. Likewise, if q = 0 the $x_1$ probability is zero and the $x_{16}$ probability is the Moran probability for n − 1 vertices, because fixation can only occur if the initial mutant is introduced in the $\{v_1, v_2, v_3\}$ or the $\{v_4, v_5, v_6, v_7\}$ vertex sets, respectively. If p = 1 (q = 1), on the other hand, the $x_1$ probability is zero, and if q = 1 (p = 1), the $x_{16}$ probability is zero (these cases are not shown in figure 16, which only shows p and q between the values of 0 and 7/8). If p = q = 1 the graph has split into disconnected components, which shows up in the appearance of a second zero eigenvalue for the Laplacian, as seen in figure 17e below.

Taking x = λ − 1, the characteristic polynomial for the Laplacian of this graph factors so that there are s − 2 roots λ = s/(s − 1) and n − 2 roots λ = n/(n − 1); the last term in brackets factors into x + 1 (yielding the 0 eigenvalue) and a cubic. Figure 17 shows plots of the cubic factor for chosen values of p and q. In the lowest curve of figure 17a, p = q = 0, corresponding to an eigenvalue of 2 (since x = λ − 1), whereas in the upper curve of figure 17e, p = q = 1, giving a second eigenvalue of 0 (x = −1) as the graph is split into two disconnected components. With the linked vertices denoted $v_s$ and $v_n$, conductances can be computed for the sets $V_s \setminus v_s$, $V_n \setminus v_n$, $\{v_s\}$, $\{v_n\}$, $V_s$ and $V_n$; all but two of these are constant. Approximation of $e^W$ and $\mathrm{Tr}(e^W)$ for the (3, 4) case yields figure 18. The minimax values of $\mathrm{Tr}(e^W/7)$ are at (p, q) ∼ (0.793893, 0.394234). The line of equality in figure 17b is approximately given by q ∼ 2.021513p + 0.0074732; if q is greater than this value, vertex $v_1$ has the greater centrality. In figure 17e, the line of equality is q = p which, as in the partial bipartite case, corresponds to the condition $C(V_3) = C(V_4)$, and we conjecture that this holds for a general single-link graph. Table 3 shows the hitting times for the (3, 4) single-link graph. In contrast to hitting times for the partial bipartite graphs, most of the hitting times in table 3 show divergences for cases where p or q are 0 or 1. Consideration of the graph topology makes it apparent that the hitting time $h_{uv}$ will diverge whenever there is zero probability of a transition path from vertex u to vertex v.
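The fixed Laplacian eigenvalues just derived give a quick consistency check on any implementation of the single-link weight matrix. A minimal sketch (our own; it uses the fact that $\mathcal{L} = \Phi^{1/2}(I-W)\Phi^{-1/2}$ is similar to $I - W$, so the two matrices share eigenvalues):

```python
import numpy as np

def single_link(s: int, n: int, p: float, q: float) -> np.ndarray:
    """Edge weight matrix of an (s, n) single-link graph; vertices
    s-1 and s are the linking vertices of V_s and V_n."""
    N = s + n
    W = np.zeros((N, N))
    # two complete components
    W[:s, :s] = (np.ones((s, s)) - np.eye(s)) / (s - 1)
    W[s:, s:] = (np.ones((n, n)) - np.eye(n)) / (n - 1)
    # linking vertices: rescale internal weights, add the bridge edge
    W[s - 1, :s] *= p
    W[s - 1, s] = 1 - p
    W[s, s:] *= q
    W[s, s - 1] = 1 - q
    return W

# (3, 4) case: expect s - 2 = 1 eigenvalue at s/(s-1) = 1.5 and
# n - 2 = 2 eigenvalues at n/(n-1) = 4/3, for any p and q; the
# remaining eigenvalues are 0 and the three roots of the cubic factor.
W = single_link(3, 4, 0.5, 0.5)
lam = np.sort(np.linalg.eigvals(np.eye(7) - W).real)
print(lam.round(4))
```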
Discussion

Construction of graphical population models involves few conceptual problems; the procedure is well established: locate each population member, characterized by one or more variable conditions, at a vertex of a graph, and label the edges between vertices with some measure of interaction between linked individuals. Then define an updating process relating individual interactions to the dynamics of the characteristic variables. While this sort of model can become mathematically complex, the conceptual framework is simple. Equation (2.1) treats the easiest case, in which each population member exhibits a binary-valued characteristic and the edge weights are just probabilities of interaction, coupled with birth-death updating. Solutions to this equation provide birth-death fixation probabilities for any given initial state. The problem is the combinatorial explosion that arises. If the population has size N, each population member is characterized by a vector of characteristics $c = (c_1, \ldots, c_k)$, and each of these characteristics $c_i$ can exhibit one of $m_i$ possible values, then the state space contains almost $\left(\prod_{i=1}^{k} m_i\right)^N$ members. One way of dealing with this is through methods from statistical mechanics, in which graphs are constructed according to algorithms for developing random, small world or other connections between vertices. In this paper, a different direction is taken: we study properties of some very simple graphs in the hope of eventually discovering connections to the statistics of larger graphs. In analogy, a molecular gas can be studied in terms of statistical mechanics by assuming that all molecules are point particles and using concepts such as temperature, entropy, pressure, free energy and so on; but it is also possible to study the structure of the individual molecules with an eye towards the eventual discovery of connections between molecular structural properties and large-scale gas behaviour.

Four classes of graphs were considered: biased cycles, dual circular flows, partial bipartite graphs and single-link graphs. For biased cycles, theorem (4.1) gives the characteristic polynomial of the Laplacian matrix, and the corollary to this theorem expresses eigenvalues in terms of roots of unity for the special cases p = 0 and p = 1. Theorem (4.3) shows the value of the term 1 − 2p as a measure of cycle bias. Computation of hitting times for simple cases shows the characteristic form illustrated in figure 3, with maximum hitting times coming at p-values characterizing a 'tipping' point between clockwise and counter-clockwise motion for a random walk. Biased cycles provide the simplest example of the dual circular flow graphs, and theorem (5.1) links the characteristic polynomial of the Laplacian of a circular flow to that of the biased cycles, with the term 1 − 2p again playing the role of measuring directional bias. Further generalization allowed the possibility of 'tuning' a flow to match specified parameters, exemplified by the derivation of the conditions required to match any specified vertex temperature profile. Bipartite graphs are two-level dual circular flows and, as indicated in the Introduction, the generalization to partial bipartite graphs leads to results of particular interest, showing a tight connection between conductances, Laplacian eigenvalues, vertex centralities, expected hitting times and single vertex fixation probabilities.
These graphs also showed the possibility of counterintuitive behaviour in which fitness values less than 1 can result in enhanced selection, illustrated in figure 11 and table 1. While these tables show only the (2, 5) partial bipartite case, similar behaviour has been found for the (3, 4) case, although the ranges of p for which the behaviour appears, at least in preliminary investigation, are narrower. From table 1, for fixed q there appear to be two critical p-values, $p_-$ and $p_+$, depending on q, such that in most cases with q < 1/2 pattern A appears for $p < p_-$, pattern D appears for $p > p_+$ and pattern B for $p_- < p < p_+$; while for q ≥ 1/2 pattern C appears rather than pattern B when $p_- < p < p_+$. The value of $r_{max}$, the fitness value that distinguishes between patterns A and B or between patterns C and D, depends on both the q- and p-values. In table 1, pattern C shows up for cases in which q is relatively large while p is relatively small. This corresponds to a situation in which the five-vertex subset is strongly connected internally, with weak linkages to the two-vertex subset, while the latter subset is weakly connected internally with strong links to the five-vertex subset. Under these conditions, a mutant with r < 1 appearing in $V_2$ will have a low probability of extinction and, if chosen for reproduction, a high probability of reproducing in $V_5$, so that in a sense $V_2$ may act as a root. Much further work is required, however, to develop any detailed understanding of this behaviour, as well as to extend the analysis to dual circular flows in which vertices within each level can influence each other.

The final case consists of a two-component graph in which the linkage between components occurs only through a single vertex in each component. Laplacian eigenvalues, hitting times and communicability were determined, and fixation probabilities were found for the (3, 4) example, allowing comparison to the corresponding partial bipartite case. As indicated, partial bipartite and single-link graphs are the two extremes of a family of graphs that provide models of two populations communicating across smaller sets of representatives. Adjustment of connection parameters provides models for different levels of polarization in the two populations. Further work is required to analyse both the partial bipartite graphs and the single-link graphs, as well as to consider more general cases in which linkages between two populations are through subsets $S_k \subseteq V_s$ and $S_m \subseteq V_n$. Being able to characterize polarization within a population in this way could provide an important tool for studies of social interactions. Table 5 shows a general summary of the results reported in the present paper.
Joint Computation Offloading and Data Caching Based on Cooperation of Mobile-Edge-Computing-Enabled Base Stations

Mobile terminal applications with high computing complexity and high time delay sensitivity are developing quite fast today, which aggravates the load of mobile cloud computing and storage and further leads to network congestion and service quality decline. Mobile edge computing (MEC) is a way of breaking through the limits of the computing and storage resources of the mobile cloud and alleviating its load. Computing time costs and transmission time costs are considered to be the main issues for the mobile cloud when carrying out computing offloading and data caching. Therefore, an efficient resource management strategy, which minimizes the system delay, is proposed in this paper. The new scheme reasonably offloads computing tasks and caches the tasks' data from the mobile cloud to mobile edge computing-enabled base stations. An intelligent algorithm, the genetic algorithm, is used to solve the global optimization problem, which involves transmission delay and computing resource occupation, and to determine the computing offloading and data caching probabilities. Simulation of the system using MATLAB is conducted in 8 different scenarios with different parameters. The results show that our new scheme improves the system computing speed and optimizes the user experience in all scenarios, compared with the scheme without data caching and the scheme without computing offloading and data caching.

Introduction

With the rapid development of the mobile Internet and the Internet of Things in recent years, the functions of mobile terminals (MTs) have become much richer than ever before. The character of the mobile terminal has gradually evolved from a simple communication tool to a powerful station integrating communication, computing, entertainment and office work. Various applications, such as augmented reality, virtual reality and location-based services (LBS), are contained in one mobile terminal as required by consumers. These typical applications, with high computing complexity and high time delay sensitivity, not only aggravate the load of the mobile cloud (C) in computing and storage resources, but also lead to system network congestion and service quality decline. In order to alleviate the computing load of the mobile cloud, the concept of mobile edge computing (MEC) has been proposed [1], which provides an IT service environment and cloud computing capability at the mobile network edge [2]. By deploying the mobile edge computing-enabled base station (MEC-BS) of the MTs in a community and the neighbor mobile edge computing-enabled base stations of the MEC-BS (MEC-NBS) on the edge of the mobile network, computing can sink to the mobile edge nodes, which can effectively reduce the load of the mobile cloud and reduce the demand for data transmission bandwidth.
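To make the trade-off concrete, the following rough sketch (entirely our own illustration; the function, parameter names and numbers are hypothetical and not the paper's model) compares task completion time for the three placements described below, local MEC-BS, neighbor MEC-NBS and cloud C, with and without cached input data:

```python
from dataclasses import dataclass

@dataclass
class Task:
    cycles: float      # CPU cycles required (illustrative units)
    data_mb: float     # input data that may need fetching from the cloud

def completion_time(task: Task, where: str, cached: bool,
                    f_bs=10e9, f_cloud=50e9,
                    bw_bs_cloud_mbps=100.0, rtt_cloud_s=0.05,
                    rtt_nbs_s=0.01) -> float:
    """Rough task completion time under three placements. All
    parameter names and values are our own illustrative assumptions."""
    # data fetch from C is only needed at an edge station without a cache
    fetch = 0.0 if cached else task.data_mb * 8 / bw_bs_cloud_mbps + rtt_cloud_s
    if where == "MEC-BS":
        return fetch + task.cycles / f_bs
    if where == "MEC-NBS":                    # extra hop between stations
        return rtt_nbs_s + fetch + task.cycles / f_bs
    if where == "C":                          # fast CPU, long path, data local
        return rtt_cloud_s + task.cycles / f_cloud
    raise ValueError(where)

t = Task(cycles=2e9, data_mb=5.0)
for w in ("MEC-BS", "MEC-NBS", "C"):
    print(w, round(completion_time(t, w, cached=True), 4),
          round(completion_time(t, w, cached=False), 4))
```

Even in this toy model, caching flips which placement is fastest, which is the intuition behind deciding offloading and caching jointly rather than separately.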
In our research, we define an MEC station as consisting of five units: (1) a receiving unit, used to receive service requests from the MTs it covers and from the surrounding MEC stations; (2) a control unit, used to determine whether a received task is further offloaded and whether to cache the data required for the corresponding task from C, based on the Cooperative Resource Management Algorithm; (3) a caching unit, used to cache the data required for the corresponding task, based on the Cooperative Resource Management Algorithm, in order to reduce the time delay of data access to C; (4) a computing unit, used to execute the computing tasks offloaded from the MTs it covers; (5) a sending unit, used to send calculation results to the MTs, send offloading requests to further MEC stations and send data requests for the corresponding tasks to C. It is worth noting that MEC stations have limited computation and storage resources, so they cannot provide computing and caching services for all tasks in the way C can. Furthermore, an MEC-BS will still be overloaded if too many tasks are offloaded to it from the MTs. Existing algorithms try to ease the load of MEC stations by refusing, delaying or queuing the offloading requests of MTs; however, these algorithms lead to poor QoS of the system. Thus, a new model of resource management is urgently needed.

In this paper, our research is concerned with a local system under the mobile cloud, which includes a mobile edge computing-enabled base station (MEC-BS), the mobile terminals covered by this MEC-BS, and the neighbor mobile edge computing-enabled base stations within a certain distance from the MEC-BS. Our goal is to offload the mobile cloud's computing and storage pressure through the MEC stations. Because of the poor computing capability of individual MTs, we exclude the MT itself as an offloading target, to guarantee the quality of service. All tasks of an MT need to go through the MT's MEC-BS to be offloaded to the mobile cloud or to any neighbor mobile edge computing-enabled base station. Figure 1 shows the overall architecture of our model. As shown in Figure 1, mobile edge computing can enhance the performance of multiple computation-intensive and delay-sensitive applications, such as virtual reality, augmented reality, pattern recognition, automatic piloting, intelligent transportation, video acceleration, video surveillance, smart home, indoor positioning, remote medical surgery, unmanned aerial vehicle control, online live streaming and interaction, smart buildings, etc. Users can experience higher real-time performance of applications and run more complex applications on their resource-constrained MTs.

The steps for computing offloading and data caching under the mobile edge computing environment are as follows: (1) when an MT has a new computing task, it uploads an offloading request (OL-REQ) to its MEC-BS; (2) if the task is determined to be executed in the MEC-BS by the Cooperative Resource Management Algorithm and the MEC-BS has cached the data of the task, the task is executed directly in the MEC-BS, and the offloading response (OL-RSP) is returned to the MT; (3) if the task is determined to be executed in the MEC-BS and there is no cached data in it, the MEC-BS sends a data request (DA-REQ) for the task to C, and C returns a data response (DA-RSP) to the MEC-BS; the subsequent process of executing and delivering is the same as in (2); (4) if the task is decided to be executed in a MEC-NBS by the algorithm, the MEC-BS sends the offloading request to a MEC-NBS.
If the MEC-NBS has cached the computing data before, it executes the task and returns the offloading response to the MEC-BS, and the MEC-BS then returns the offloading response to the MT; (5) if the MEC-NBS is decided to execute the task and has not cached the computing data of the task in advance, it sends a data request to C, and C returns the data response to it; the subsequent process of executing and delivering is the same as in (4); (6) if the task is resolved to be offloaded to C by the algorithm, the MEC-BS sends the offloading request to C, C executes the task and returns the offloading response to the MEC-BS, and the offloading response is then returned to the MT.

Our research work can be described briefly by the block diagram shown in Figure 2: a joint computation offloading and data caching scheme based on the cooperation of multiple MEC-BSs is modeled; according to the computation offloading decision, a task is offloaded to its MEC-BS or a neighbor MEC-BS; an optimization problem is formulated to minimize the total time consumption of all MTs in the system; according to the data caching decision, the data of a task is or is not cached in its MEC-BS; a genetic-based algorithm is designed to obtain the solution to the optimization problem and give the optimal computation offloading and data caching decision for each task; and the superiority of our scheme is demonstrated by extensive simulations with different scenarios and comparisons with other algorithms. Our research work can efficiently enhance the performance of multiple computation-intensive and delay-sensitive applications. Users can experience higher real-time performance of applications and run more complex applications on their resource-constrained MTs.

Related Works

The research work on MEC mainly focuses on computing offloading systems. Conceptually, it can be divided into architecture design, protocol design, resource management and applications, and it involves many fields, such as computing, communication and security. The accumulated research results have laid a foundation for the engineering implementation and theoretical system of mobile edge computing [3][4][5][6]. Resource management technology is one of the core technologies of mobile edge computing: it uses system resources optimally in order to optimize system performance. At present, the existing research work on resource management technology in MEC has the following main problems: A. computing resources and storage resources are managed independently, and existing joint resource management only considers computing and wireless resources; B. base station resources are optimized separately, lacking a unified multi-base-station collaboration mode and the corresponding resource management technology; C. existing offloading/caching decision algorithms lack optimization of the algorithm's own cost, and their high complexity and poor real-time performance affect the real-time execution of decisions; D. there is a lack of resource management technology for cloud computing server load and network load optimization.

Existing Computing Resources and Storage Resources

In the existing research based on mobile communication network scenarios, most of the research work only takes into account the problem of computing offloading (such as References [7,8]) or data caching (such as References [9,10]). In joint resource management, computing offloading is only combined with radio resource optimization (such as References [11][12][13][14]).
For example, in Reference [11], the authors propose an effective radio and computing resource allocation scheme to minimize the total processing time of multiple tasks. Considering local processing, each mobile user divides a computing task into an offloaded part and a local part, which realizes the effective utilization of wireless power resources and computing resources. In Reference [12], the cache decisions and the radio and computing resource allocation for video services in mobile edge computing (MEC) are optimized to maximize the system revenue. To tolerate the uncertainty of network traffic and avoid a "hard constraint" based on a constant content request rate, the authors use robust optimization to obtain the optimal cache decision and then allocate radio and computing resources for video trans-coding. In Reference [13], an online joint wireless and computing resource management algorithm for multi-user mobile edge computing systems is proposed, which reduces terminal energy consumption by assigning the CPU frequency, transmit power, and bandwidth of the mobile terminal. In most mobile applications, the computing and storage requirements of the terminal are tightly coupled. That is, both the computing work and the data access work are included in one task, and the two are interrelated. Therefore, the joint optimization of computing and storage resources needs to be explored further in research on resource management technology. The Base Station Resources Are Optimized Separately In the existing research, most works study mobile edge computing resource management for a single base station, including single-user and multi-user problems in the single-base-station environment. Specifically, in Reference [13], the authors mainly consider the computation offloading problem of multiple terminal users in a single-base-station environment, not a multi-base-station environment. In Reference [8], the problem of single-user computation offloading is considered, in which MEC resources are not limited. In Reference [15], a MEC system composed of mobile devices and heterogeneous edge servers supporting various wireless access technologies is studied. In Reference [16], the problem of partial offload scheduling and resource allocation in a multi-task MEC system is studied, and the problem of partial offload scheduling and power allocation in a single-user MEC system is formulated. In References [17,18], although the authors take into account the computation offloading of mobile users in multiple cells, the main work is focused on the allocation of wireless and computing resources within one base station, without consideration of resource sharing and assistance between base stations. The authors of Reference [19] studied a single terminal user scheduling its computing tasks onto a number of smart devices in adjacent environments, where the execution of a task depends on the coordination of multiple devices. This work does not focus on the mobile communication network environment, the proposed cooperation concept is limited to end-user-centric collaboration, and there is no resource sharing or joint allocation among the intelligent devices. The architecture design of future mobile communication systems covers cooperative functions between base stations.
Through close collaboration, the base stations in the network can be virtualized as a resource whole and serve each end user flexibly with distributed computing and distributed storage. Therefore, the cooperation between base stations needs to be studied in depth. Such resource management technology improves the utilization of the whole network's resources and improves the service quality of users. Existing Offloading/Caching Decision Algorithms In Reference [20], the authors model the multi-user computation offloading problem in a MEC system as an evolutionary game and propose an evolutionary game algorithm based on reinforcement learning to achieve efficient computation offloading. However, IoT devices need to adjust their own strategies through continuous evolution and trial and error to maximize the fitness value, which makes the time complexity of the algorithm relatively high. Similar problems exist in the game-theory-based studies of References [21,22]. In addition, a large number of mobile edge computing resource management schemes are based on convex optimization (such as References [23,24]), non-convex optimization (such as References [25,26]), combinatorial optimization (such as References [27,28]), evolutionary algorithms (such as Reference [29]), etc. Although some of the authors also provide approximate and simplified algorithms besides the basic ones (such as Reference [24]), these algorithms still occupy a certain amount of computing time; especially in environments with large numbers of users and tasks, the algorithm cost remains the main factor affecting the real-time performance of decision-making. The resource management algorithm determines where computation offloading requests and data caching requests are executed, and the algorithm's own overhead directly affects the real-time performance of the decision, especially for delay-sensitive task processing. Lack of Resource Management Technology for Cloud A high cloud server load will seriously reduce the performance and user experience of a mobile edge computing system. Most research on cloud load focuses on the distributed cloud computing field. Cloud resource optimization mainly considers load balancing between multiple cloud servers and clusters, focusing on the virtualization of multiple clouds and resource sharing technology [30][31][32][33]. As for resource optimization in mobile edge computing aimed at reducing mobile cloud load, only Reference [34] studies load reduction under high cloud server load, proposing two load recovery strategies from an engineering perspective. However, the authors only consider the task offloading strategy when the server load is too high or the server is damaged, and do not study load balancing across multiple servers in computing, storage sharing, and network transmission. Overall, research in the mobile communication network environment is still relatively scarce. In the mobile communication network environment, the base station is the edge node of the cloud server.
The problems of cloud-oriented resource management that need to be solved are mainly: (1) how to use the computing resources of the base stations to divert cloud computing tasks to the base stations; (2) how to use the storage resources of the base stations to divert cloud data storage to the base station caches, so as to reduce the cloud computing and storage load, reduce the network data transmission load, and improve system performance. System Model We assume the numbers of MTs and MEC-NBSs are M and K, respectively. The ith (i ∈ M = {1, 2, . . . , M}) MT is defined as m_i, and the kth (k ∈ K = {1, 2, . . . , K}) MEC-NBS is defined as n_k. An MT can run multiple mobile applications at the same time, and each application may contain multiple tasks. The number of task types is assumed to be R, and the jth (j ∈ R = {1, 2, . . . , R}) type of task is defined as r_j. Moreover, the data storage of the MEC-BS and of n_k is defined as d and d_k (k ∈ K), respectively. We consider the requests from each MT to form a Poisson process, and λ_i represents the request rate of m_i, that is, λ_i tasks are generated by m_i per second. The request ratio of m_i for task r_j is p_i,j (p_i,j ∈ [0, 1]), and the proportion differs between tasks. Note that ∑_j∈R p_i,j = 1 because p_i,j actually represents the proportion of r_j among the tasks generated by m_i. We assume that the request message length and the response message length are fixed and roughly equal in the offloading signal transmission or data caching transmission among the MT, MEC-BS, MEC-NBS, and C. Each task r_j is profiled by an ordered vector <c_j, v_j, w_j, z_j, q_j>, which is characterized by: (1) c_j, the amount of computation needed to complete r_j; (2) v_j, the size of the offloading request (including the necessary description and parameters of r_j); (3) w_j, the size of the offloading response (including the result of r_j's execution); (4) z_j, the size of the data caching request (including the necessary description and parameters of the data required to execute r_j); (5) q_j, the size of the data caching response (including the data required to execute r_j). A computing task r_j can be completed in two ways: executed at C or offloaded to the MEC stations. If task r_j is determined to be executed at C, it increases the computing load of the mobile cloud and affects the performance of the cloud server; it also suffers the offloading-signal transmission delay from the MT to the MEC-BS and from the MEC-BS to C. If the MEC-BS chooses to offload r_j to the MEC stations, it greatly alleviates the cloud pressure, and since the MEC stations have a certain degree of computational power, the efficiency of the computing task can also be guaranteed; but, at the same time, it may suffer the time consumption caused by data request and response transmission between the MEC stations and C. If the MEC-BS chooses to cache the data at the MEC stations in advance, it avoids the time consumption of the offloading-signal and data-signal transmission; however, data caching takes time, and the MEC stations have limited storage space. When r_j is offloaded to the MEC stations, it can be executed at the MEC-BS or further offloaded to an MEC-NBS (the offloading probability is equal for all n_k). For task r_j, we define the offloading probability from m_i to the MEC-BS as α_i,j (i ∈ M, j ∈ R) and from m_i to n_k as β_i,j,k (i ∈ M, j ∈ R, k ∈ K), so that the offloading probability from m_i to C is 1 − α_i,j − ∑_k∈K β_i,j,k.
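To keep this bookkeeping concrete, the following minimal Python sketch (hypothetical names and toy values, not part of the original formulation) collects the per-task profile and the decision probabilities, and checks that the probabilities of one (m_i, r_j) pair are consistent:

```python
from dataclasses import dataclass

@dataclass
class TaskProfile:
    """Profile <c, v, w, z, q> of one task type r_j."""
    c: float  # amount of computation needed to complete r_j
    v: float  # size of the offloading request
    w: float  # size of the offloading response
    z: float  # size of the data caching request
    q: float  # size of the data caching response

# Toy decision probabilities for one (m_i, r_j) pair: alpha is the
# probability of offloading to the MEC-BS, beta[k] of offloading to
# neighbor station n_k; the remaining mass goes to the cloud C.
alpha = 0.5
beta = [0.1, 0.2]              # one entry per MEC-NBS n_k
p_cloud = 1.0 - alpha - sum(beta)

assert 0.0 <= p_cloud <= 1.0, "decision probabilities must sum to at most 1"
print(f"P(offload to C) = {p_cloud:.2f}")
```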
Moreover, we define the caching probability from C to the MEC-BS as σ_j (j ∈ R) and from C to an MEC-NBS as π_j,k (j ∈ R, k ∈ K). Since many symbols are used in our system model, we add notations to distinguish them: "O" means "Offloading", "D" means "Data caching", and "2" means "to". For example, "ONBS2BS" indicates a symbol related to "computation Offloading from an MEC-NBS to the MEC-BS", and "DC2NBS" indicates a symbol related to "Data caching from the Cloud to an MEC-NBS". Computation Model (1) Execution at C All the tasks executed at C share its computation resources. Defining the service rate of C as η, if r_j is selected to be executed at C, the time consumed by completing r_j is t^C_j = c_j / (η − ∑_i∈M ∑_j∈R ((1 − α_i,j − ∑_k∈K β_i,j,k) p_i,j λ_i c_j)), where the denominator is the stable processing speed [35] (amount of computation processed per second) of C, and the sum is the total amount of computation per second of the tasks determined to be offloaded to C. It can be observed that, as the number of tasks executed at C increases, the processing speed of C decreases. The denominator must remain positive, a hard constraint of the offloading queuing system, which means the tasks' arrival rate cannot exceed C's service rate. (2) Execution at MEC-BS All the tasks offloaded to the MEC-BS share its computation resources. Defining the service rate of the MEC-BS as φ, if r_j is selected to be executed at the MEC-BS, the time consumed by completing r_j is t^BS_j = c_j / (φ − ∑_i∈M ∑_j∈R (α_i,j p_i,j λ_i c_j)), where the denominator is the stable processing speed [35] of the MEC-BS and ∑_i∈M ∑_j∈R (α_i,j p_i,j λ_i c_j) is the total amount of computation per second of the tasks determined to be offloaded to the MEC-BS. As the number of tasks executed at the MEC-BS increases, its processing speed decreases; again the denominator must remain positive, i.e., the tasks' arrival rate cannot exceed the MEC-BS's service rate. (3) Execution at MEC-NBS All the tasks offloaded to an MEC-NBS share its computation resources. Defining the service rate of n_k as φ_k, if r_j is selected to be executed at n_k, the time consumed by completing r_j is t^NBS_j,k = c_j / (φ_k − ∑_i∈M ∑_j∈R (β_i,j,k p_i,j λ_i c_j)), where the denominator is the stable processing speed [35] of n_k and ∑_i∈M ∑_j∈R (β_i,j,k p_i,j λ_i c_j) is the total amount of computation per second of the tasks determined to be offloaded to n_k. As the number of tasks executed at the MEC-NBS increases, the processing speed of n_k decreases; the denominator must remain positive, i.e., the tasks' arrival rate cannot exceed n_k's service rate. Transmission Model (1) Communications between C and MEC-BS All the MEC stations, including the MEC-BS and the MEC-NBSs, under a C's coverage share its wireless resources. In this paper, the impacts of inter-station and intra-station interference caused by computation offloading are ignored because they are extremely small. There are two types of communications between C and the MEC-BS: offloading-signal transmission and data-signal transmission. We define the signal transmission rate from the MEC-BS to C as s^BS2C. Then, if r_j is selected to be offloaded to C, the time consumed by sending the offloading request of r_j from the MEC-BS to C is v_j / s^BS2C. The transmission rate from C to the MEC-BS is denoted by s^C2BS.
Then, the time consumed by receiving the offloading response of r_j from C at the MEC-BS is w_j / s^C2BS. As for data caching transmission, the MEC-BS uses the Cooperative Resource Management Algorithm to determine the most profitable data caching required by task r_j; the time consumed by sending the data caching request of r_j from the MEC-BS to C is z_j / s^BS2C. Similarly, the time consumed by receiving the data caching response of r_j from C at the MEC-BS is q_j / s^C2BS. (2) Communications between C and MEC-NBS An MEC-NBS can also use the Cooperative Resource Management Algorithm to determine the most profitable data caching required by task r_j. We define the data-signal transmission rate from n_k to C as s^NBS2C_k; the time consumed by sending the data caching request of r_j from n_k to C is z_j / s^NBS2C_k. The data transmission rate from C to n_k is denoted by s^C2NBS_k. Then, the time consumed by receiving the data caching response of r_j from C at n_k is q_j / s^C2NBS_k. (3) Communications between MEC-BS and MEC-NBS For the sake of data security, we assume that data signals cannot be transmitted between the MEC stations, including the MEC-BS and the MEC-NBSs. Accordingly, there is only one type of communication between the MEC-BS and n_k, which is offloading-signal transmission. We define the offloading-signal transmission rate from the MEC-BS to n_k through the wired connection between them as s^BS2NBS_k; conversely, s^NBS2BS_k represents the offloading-signal transmission rate from n_k to the MEC-BS through the same connection. The time consumed by transmitting the offloading request of r_j from the MEC-BS to n_k is v_j / s^BS2NBS_k; similarly, the time consumed by transmitting the offloading response of r_j from n_k to the MEC-BS is w_j / s^NBS2BS_k. (4) Communications between MEC-BS and MTs All the MTs under an MEC-BS's coverage share its wireless resources. In this paper, we ignore the computing power of the MTs, so there is only one type of communication between the MTs and the MEC-BS, which is offloading-signal transmission. We define the uplink transmission rate from m_i to the MEC-BS as s^MT2BS_i. Then, if r_j is selected to be offloaded, the time consumed by sending the offloading request of r_j from m_i to the MEC-BS is v_j / s^MT2BS_i. The downlink transmission rate from the MEC-BS to m_i is denoted by s^BS2MT_i; the time consumed by receiving the offloading response of r_j from the MEC-BS at m_i is w_j / s^BS2MT_i. Optimization Model The total time consumption for completing r_j includes: (1) the time consumed by executing in the mobile cloud, if r_j is selected to be executed at C; (2) the time consumed by computation offloading, if r_j is selected to be offloaded to the MEC-BS; (3) the time consumed by computation offloading, if r_j is selected to be further offloaded to n_k, ∀k ∈ K. In (1), the time consumption is generated by transmitting the offloading request of r_j from m_i to the MEC-BS, transmitting the offloading request from the MEC-BS to C, executing r_j at C, transmitting the offloading response from C to the MEC-BS, and transmitting the offloading response from the MEC-BS to m_i, that is, T^C_i,j = v_j/s^MT2BS_i + v_j/s^BS2C + t^C_j + w_j/s^C2BS + w_j/s^BS2MT_i. In (2), the time consumption needs to be divided into two situations: (1) the MEC-BS has the data required by task r_j cached; (2) the MEC-BS does not have the data cached and needs to send a data caching request to C to get it.
The two cases are discussed separately as follows: (1) the time consumption is generated by transmitting the offloading request of r_j from m_i to the MEC-BS, executing r_j at the MEC-BS, and transmitting the offloading response from the MEC-BS to m_i, that is, T^BS-hit_i,j = v_j/s^MT2BS_i + t^BS_j + w_j/s^BS2MT_i. (2) The time consumption is generated by transmitting the offloading request of r_j from m_i to the MEC-BS, transmitting the data caching request from the MEC-BS to C, transmitting the data caching response from C to the MEC-BS, executing r_j at the MEC-BS, and transmitting the offloading response from the MEC-BS to m_i, that is, T^BS-miss_i,j = v_j/s^MT2BS_i + z_j/s^BS2C + q_j/s^C2BS + t^BS_j + w_j/s^BS2MT_i. In (3), the time consumption needs to be divided into two situations: (1) n_k has the data required by task r_j cached; (2) n_k does not have the data cached and needs to send a data caching request to C to get it. The two cases are discussed separately as follows: (1) the time consumption is generated by transmitting the offloading request of r_j from m_i to the MEC-BS, transmitting the offloading request from the MEC-BS to n_k, executing r_j at n_k, transmitting the offloading response from n_k to the MEC-BS, and transmitting the offloading response from the MEC-BS to m_i, that is, T^NBS-hit_i,j,k = v_j/s^MT2BS_i + v_j/s^BS2NBS_k + t^NBS_j,k + w_j/s^NBS2BS_k + w_j/s^BS2MT_i. (2) The time consumption is generated by transmitting the offloading request of r_j from m_i to the MEC-BS, transmitting the offloading request from the MEC-BS to n_k, transmitting the data caching request from n_k to C, transmitting the data caching response from C to n_k, executing r_j at n_k, transmitting the offloading response from n_k to the MEC-BS, and transmitting the offloading response from the MEC-BS to m_i, that is, T^NBS-miss_i,j,k = v_j/s^MT2BS_i + v_j/s^BS2NBS_k + z_j/s^NBS2C_k + q_j/s^C2NBS_k + t^NBS_j,k + w_j/s^NBS2BS_k + w_j/s^BS2MT_i. In summary, the expected total time consumption for completing r_j requested by m_i is t_i,j = (1 − α_i,j − ∑_k∈K β_i,j,k) T^C_i,j + α_i,j [σ_j T^BS-hit_i,j + (1 − σ_j) T^BS-miss_i,j] + ∑_k∈K β_i,j,k [π_j,k T^NBS-hit_i,j,k + (1 − π_j,k) T^NBS-miss_i,j,k]. (16) Therefore, the total time consumption of all MTs covered by the MEC-BS can be formulated as t = ∑_i∈M ∑_j∈R (λ_i p_i,j t_i,j). Optimization Problem The aim of our algorithm is to minimize the total time consumption t of all MTs in M while ensuring that no constraint is violated. Thus, the corresponding optimization problem can be formulated as min t, subject to α_i,j ∈ [0, 1], ∀i ∈ M, ∀j ∈ R (17); β_i,j,k ∈ [0, 1], ∀i ∈ M, ∀j ∈ R, ∀k ∈ K (18); α_i,j + ∑_k∈K β_i,j,k ≤ 1, ∀i ∈ M, ∀j ∈ R (19); σ_j ∈ [0, 1], ∀j ∈ R (20); π_j,k ∈ [0, 1], ∀j ∈ R, ∀k ∈ K (21); where constraint (17) is the value range of each α_i,j, constraint (18) is the value range of each β_i,j,k, constraint (20) is the value range of each σ_j, and constraint (21) is the value range of each π_j,k. As aforementioned, constraint (19) is the value range of the total offloading probability of r_j. Constraints (22)-(24) are the hard constraints of the offloading queuing systems of C, the MEC-BS, and n_k, respectively (the denominators of the execution-time expressions must be positive), and constraints (25) and (26) are the hard constraints of the caching queuing systems of the MEC-BS and n_k, respectively. We define the total time consumption without computation offloading and data caching as t̄, obtained from t by setting all α_i,j, β_i,j,k, σ_j, and π_j,k to zero, so that t̄ = ∑_i∈M ∑_j∈R λ_i p_i,j (v_j/s^MT2BS_i + v_j/s^BS2C + c_j/(η − ∑_i∈M ∑_j∈R (p_i,j λ_i c_j)) + w_j/s^C2BS + w_j/s^BS2MT_i). A sufficient condition for t̄ to be finite is that η − ∑_i∈M ∑_j∈R (p_i,j λ_i c_j) > 0 must hold. By comparing this condition with constraint (22), we see that constraint (22) always holds. In combination with (16) and the constraints above, we examine the convexity and concavity of (16). By splitting (16) and judging the positive definiteness of each part of the Hessian matrix, we find that (16) is a non-convex and non-concave function due to the existence of several saddle points. Therefore, if a traditional numerical optimization algorithm (such as the interior point method) is used to solve (16), it easily falls into a local minimum.
This "dead cycle" phenomenon arises from such local-minimum traps, which stall the iteration so that only a local optimum is obtained instead of the global optimum. The genetic algorithm, a global optimization algorithm, overcomes this shortcoming well. Due to its evolutionary characteristics, the intrinsic properties of the optimization problem have little effect on the final result during the search. Moreover, the ergodicity of its evolutionary operators makes the genetic algorithm very effective for probabilistic global search, which matches the optimization object and objective of our paper well. In addition, compared with exact algorithms, the approximate algorithm consumes less time and space, so it can guarantee the overall system performance. In this paper, we choose the genetic algorithm to solve the optimization problem. We use H = 0 as the initial generation, and H must strictly adhere to the above constraints. The optimization procedure is given in Algorithm 1 (a minimal code sketch of this search is given below). Algorithm 1 Algorithm Solving the Optimization Problem. Parameters: Population Size = 150; GA Generations = 100; Crossover ratio = 1.2. Selection function: chooses parents by simulating a roulette wheel, in which the area of the section of the wheel corresponding to an individual is proportional to the individual's expectation; the algorithm uses a random number to select one of the sections with a probability equal to its area. Crossover function: returns a child that lies on the line containing the two parents, a small distance away from the parent with the better fitness value in the direction away from the parent with the worse fitness value; the parameter Ratio (default 1.2, available when Heuristic is selected) specifies how far the child is from the better parent. Mutate function: randomly generates directions that are adaptive with respect to the last successful or unsuccessful generation; the mutation chooses a direction and step length that satisfy the bounds and linear constraints. For each generation t: select parents in H(t); crossover in H(t); mutate in H(t); evaluate H(t); end. Cooperative Resource Management Algorithm The data cached from C and the computation offloading for each MT are managed by the MEC-BS using the Cooperative Resource Management Algorithm, which is responsible for monitoring and collecting the information (computation offloading requests and responses, data caching requests and responses, parameters of the MTs, MEC-NBSs, and C), running the optimization algorithm, and sending the optimization result to C and each MT. The MEC-BS sends the selection probabilities in the optimization result to the MTs and C to determine the place (C, MEC-BS, or MEC-NBS) selected to execute each task or cache the data. At system initialization, all MTs in M upload their required parameters, including, ∀i ∈ M, ∀j ∈ R: p_i,j, s^MT2BS_i, s^BS2MT_i, λ_i, and the information of all their tasks (<c_j, v_j, w_j, z_j, q_j>), to the MEC-BS. The MEC-BS also collects the required parameters of the MEC-NBSs, including, ∀k ∈ K: φ_k, s^BS2NBS_k, and the other station-side rates. The MEC-BS periodically monitors changes in all parameters and reruns Algorithm 1 in the next period if any parameter has changed. Periodic monitoring reduces the execution frequency of the algorithm while ensuring the timeliness of the optimization results.
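As promised above, here is a compact, hypothetical Python sketch of the genetic search of Algorithm 1, with the stated population size, generation count, and heuristic-crossover ratio; the fitness function is a placeholder standing in for the total time consumption of Equation (16), and all names are illustrative rather than the paper's implementation:

```python
import random

POP_SIZE, GENERATIONS, RATIO = 150, 100, 1.2  # parameters from Algorithm 1

def make_individual(n_vars):
    """A chromosome is a vector of decision probabilities in [0, 1]."""
    return [random.random() for _ in range(n_vars)]

def fitness(ind):
    # Placeholder objective: stands in for the total time consumption t of
    # Equation (16) evaluated under constraints (17)-(26). Lower is better.
    return sum((x - 0.3) ** 2 for x in ind)

def roulette_select(pop, scores):
    # Lower time is better, so invert scores into positive wheel areas.
    weights = [1.0 / (1e-9 + s) for s in scores]
    return random.choices(pop, weights=weights, k=2)

def heuristic_crossover(better, worse, ratio=RATIO):
    # Child lies on the line through the parents, slightly beyond the
    # better parent: child = worse + ratio * (better - worse).
    child = [w + ratio * (b - w) for b, w in zip(better, worse)]
    return [min(1.0, max(0.0, x)) for x in child]  # respect [0, 1] bounds

def mutate(ind, step=0.05):
    # Simplified bounded random-step mutation.
    return [min(1.0, max(0.0, x + random.uniform(-step, step))) for x in ind]

def run_ga(n_vars=10):
    pop = [make_individual(n_vars) for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        scores = [fitness(ind) for ind in pop]
        new_pop = []
        while len(new_pop) < POP_SIZE:
            p1, p2 = roulette_select(pop, scores)
            better, worse = (p1, p2) if fitness(p1) <= fitness(p2) else (p2, p1)
            new_pop.append(mutate(heuristic_crossover(better, worse)))
        pop = new_pop
    return min(pop, key=fitness)

if __name__ == "__main__":
    best = run_ga()
    print("best fitness:", fitness(best))
```

In the real scheme, the fitness evaluation would compute t from the system parameters collected by the MEC-BS and reject individuals violating the queuing and caching constraints.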
The flowchart of the Cooperative Resource Management Algorithm is shown in Figure 3. The details of the algorithm are described in Algorithm 2 for the MT side, Algorithm 3 for the C side, Algorithm 4 for the MEC-NBS side, and Algorithm 5 for the MEC-BS side. The four algorithms are composed of a set of loops (L1-L14); these loops are deployed into separate processes and executed in parallel during the running period of the system. For example, loop L10 in Algorithm 4 reads: if r_j is decided to be offloaded to n_k and n_k cached the data of r_j in advance, then n_k executes r_j and returns the offloading response of r_j to the MEC-BS; else, if r_j is decided to be offloaded to n_k and n_k did not cache the data of r_j in advance, then n_k sends the data request of r_j to C, C returns the data response of r_j to n_k, and n_k executes r_j. The main loops are described as follows. (L3): when m_i has a new r_j, for computation offloading, it first produces a random number h ∈ [0, 1] from the uniform distribution and compares h with the offloading probabilities in H. If h falls into the interval representing the MEC-BS, m_i offloads r_j to the MEC-BS by sending the offloading request and receiving the offloading response; similarly, if h falls into the interval representing n_k, m_i offloads r_j to n_k; otherwise, r_j is offloaded to C. (L4): when m_i has a new r_j, for data caching, it first produces a random number c ∈ [0, 1] from the uniform distribution and compares c with the data caching probabilities in H. If c falls into the interval representing caching at the MEC-BS, the data requested by r_j is cached in the MEC-BS by sending the data caching request and receiving the data caching response; similarly, if c falls into the interval representing caching at n_k, the data requested by r_j is cached in n_k; otherwise, the data requested by r_j is not cached. (L7): if C receives the data caching request of r_j from the MEC-BS, C returns the data caching response of r_j to the MEC-BS. (L8): if C receives the data caching request of r_j from n_k, C returns the data caching response of r_j to n_k. (L9): if any of φ_k, s^NBS2BS_k, and s^BS2NBS_k changes, n_k sends the new value of the parameter to the MEC-BS. (L10): if n_k receives the offloading request of r_j from the MEC-BS and n_k cached the data of r_j in advance, n_k executes r_j and returns the offloading response to the MEC-BS. If n_k receives the offloading request and did not cache the data of r_j in advance, n_k sends the data request of r_j to C, and C returns the data response of r_j to n_k; after that, n_k executes r_j and returns the offloading response to the MEC-BS. (L12): if the MEC-BS receives the offloading request of r_j offloaded from m_i and the MEC-BS cached the data of r_j before, the MEC-BS executes r_j and returns the offloading response to m_i. If the MEC-BS receives the offloading request and did not cache the data of r_j before, the MEC-BS sends the data request of r_j to C, and C returns the data response of r_j to the MEC-BS; after that, the MEC-BS executes r_j and returns the offloading response to m_i. (L13): if the MEC-BS receives the offloading response of r_j sent from C, the MEC-BS forwards the offloading response to m_i. (L14): if the MEC-BS receives the offloading response of r_j sent from n_k (∀k ∈ K), the MEC-BS forwards the offloading response to m_i. Simulations and Performance Evaluations We carried out extensive simulations of the system and adopted the strategy of averaging multiple sets of random data to eliminate measurement errors in the simulation process.
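In these simulations, each MT draws its per-task decision exactly as in loops (L3) and (L4) above; the following minimal sketch of this interval-based sampling uses hypothetical names and is not the paper's code:

```python
import random

def sample_offload_target(alpha, betas):
    """Pick the execution place for one task from its decision probabilities.

    alpha: probability of offloading to the MEC-BS.
    betas: list of probabilities of offloading to each neighbor n_k.
    The remaining probability mass corresponds to the cloud C.
    """
    h = random.random()  # uniform in [0, 1], as in loop (L3)
    if h < alpha:
        return "MEC-BS"
    cumulative = alpha
    for k, b in enumerate(betas):
        cumulative += b
        if h < cumulative:
            return f"MEC-NBS n_{k + 1}"
    return "C"

print(sample_offload_target(0.5, [0.1, 0.2]))
```

The caching decision of loop (L4) uses the same interval comparison with the caching probabilities in place of alpha and betas.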
We adopted two contrasting strategies: one without data caching, and one with neither data caching nor computation offloading. The three strategies are compared to analyze the effect of each on the response time delay of the system with respect to: (1) M, the number of MTs; (2) R, the number of task types of an m_i; (3) K, the number of MEC-NBSs; (4) φ_k, the service rate of n_k, k ∈ K; (5) s^C2NBS_k, ∀k ∈ K, the data transmission rate from C to the MEC-NBSs; (6) s^NBS2C_k, ∀k ∈ K, the data transmission rate from the MEC-NBSs to C; (7) s^BS2NBS_k and s^NBS2BS_k, ∀k ∈ K, the transmission rates between the MEC-BS and the MEC-NBSs. The result in each of the 8 scenarios is obtained from 50 repeated simulations using different random seeds. The simulation results verify the effectiveness of our strategy: it can well meet the requirements of minimizing the service response delay and improving the service quality of resource scheduling. Table 1 lists the settings of the parameters used in the simulations; parameter values are configured from the table unless stated otherwise. Table 2 lists the three schemes in the MATLAB simulation. We use the genetic algorithm to get the minimum system delay of OMEC and simulate the three strategies mentioned above; the results are shown in the figures. Different Numbers of MTs As shown in Figure 4, we measured the total time consumption of the system in completing all tasks under the three strategies with different numbers of MTs. In the simulations, we increase M from 30 to 100, while the other settings are listed in Table 1. When M increases, the total time for the system to complete the tasks increases correspondingly; all three strategies follow this trend. At the same time, the total time consumed by OMEC and OCMEC is always lower than NMEC, and OCMEC is always lower than OMEC; thus, computation offloading and data caching can reduce the system delay. With the increase of M, the total load of the system increases. When M ∈ {30, 40, 50}, because of the low task load, the gain of OCMEC in reducing the delay is relatively low; when M ∈ {50, 60, 70, 80}, the task load increases continuously, the total time gap between OCMEC and NMEC widens, and the delay-reduction gain gets better and better. When M ∈ {80, 90, 100}, the total load exceeds the capacity of the MEC-BS and MEC-NBSs, which cannot fully meet the computation requirements of the tasks, so the total time gap gradually narrows and the delay-reduction gain gradually decreases. Different Numbers of MTs' Tasks As shown in Figure 5, we measured the total time consumption of the system in completing all tasks under the three strategies with different numbers of MTs' tasks. In the simulations, we increase R from 1 to 8, while the other settings are listed in Table 1, except the fixed values M = 60, K = 7, φ_k = 2.4 × 10^6 MIPS, s^C2NBS_k, etc. When R increases, the total time for the system to complete the tasks increases correspondingly; all three strategies follow this trend. At the same time, the total time consumed by OMEC and OCMEC is always lower than NMEC, and OCMEC is always lower than OMEC; thus, computation offloading and data caching can reduce the system delay. With the increase of R, the total load of the system increases.
When R ∈ {1, 2, 3}, because of the low task load, the gain of OCMEC in reducing the delay is relatively low; when R ∈ {3, 4, 5, 6}, the task load increases continuously, the total time gap between OCMEC and NMEC widens, and the delay-reduction gain gets better and better. When R ∈ {6, 7, 8}, the total load exceeds the capacity of the MEC-BS and MEC-NBSs, which cannot fully meet the computation requirements of the tasks, so the total time gap gradually narrows and the delay-reduction gain gradually decreases. Therefore, the strategy proposed in this paper performs better when the number of users is larger, and the proposed base-station data caching and computation offloading strategies can effectively reduce the end-user service response latency. Different Numbers of MEC-NBS As shown in Figure 6, we measured the total time consumption of the system in completing all tasks under the three strategies with different numbers of MEC-NBSs. In the simulations, we increase K from 2 to 9, while the other settings are listed in Table 1. When K increases, the total time to complete tasks decreases correspondingly under the OMEC and OCMEC strategies. The curve of NMEC is flat since further offloading and data caching are disabled, so K has no effect on NMEC. At the same time, the total time consumed by OMEC and OCMEC is always lower than NMEC, and OCMEC is always lower than OMEC; thus, computation offloading and data caching can reduce the system delay. With the increase of K, the system can accept more tasks and provide better performance through computation offloading and data caching; the total time gap between OCMEC and NMEC widens, and the delay-reduction gain gets better and better. But we still observe that, when K increases to a certain threshold, the optimization effect on time consumption slows down gradually. Different Service Rates of MEC-NBS As shown in Figure 7, we measured the total time consumption of the system in completing all tasks under the three schemes with different service rates of the MEC-NBSs. In the simulations, we increase φ_k from 2.2 × 10^6 to 2.7 × 10^6 MIPS, while the other settings are listed in Table 1. φ_k represents the computation capability of the MEC-NBSs. Similar to Figure 6, when φ_k increases, the total time consumption of the system decreases under the OMEC and OCMEC strategies. The line of NMEC is flat since φ_k has no effect on it. The total time consumed by OMEC and OCMEC is always lower than NMEC, and OCMEC is always lower than OMEC; thus, computation offloading and data caching can reduce the system delay. With the increase of φ_k, the system can accept more tasks and provide better performance through computation offloading and data caching; the total time gap between OCMEC and NMEC widens, and the delay-reduction gain gets better and better. When φ_k increases to a certain threshold, the optimization effect on time consumption slows down gradually. It is worth noting that the gain of φ_k on total time optimization is far less than that of K. Therefore, MEC-NBSs can effectively reduce the system delay, and the combination of data caching and computation offloading brings more delay benefit than computation offloading alone.
Different Data Transmission Rates between C and MEC-NBS As shown in Figure 8, we measured the total time consumption of the system in completing all tasks under the three schemes with different data transmission rates between C and the MEC-NBSs. In the simulations, we increase s^C2NBS_k, ∀k ∈ K, from 50 to 90 MB/s, while the other settings are listed in Table 1, except the fixed values M = 60, R = 6, K = 6, φ_k = 2.4 × 10^6 MIPS, s^NBS2C_k, etc. The data transmission rates between C and the MEC-NBSs affect the time consumption of the transmission processes in further offloading and data caching. As the transmission rates increase, the time consumption decreases, so the time benefit gained by further offloading and data caching grows. As shown in Figure 8, the curves of our scheme (OCMEC) and OMEC show a downward trend as s^C2NBS_k, ∀k ∈ K, increases, whereas the curve of NMEC is unchanged since further offloading and data caching are disabled. Different Data Transmission Rates between MEC-NBS and C As shown in Figure 9, we measured the total time consumption of the system in completing all tasks under the three schemes with different data transmission rates between the MEC-NBSs and C. In the simulations, we increase s^NBS2C_k, ∀k ∈ K, from 10 to 40 MB/s, while the other settings are listed in Table 1. Similar to the data transmission rates between C and the MEC-NBSs, s^NBS2C_k, ∀k ∈ K, affects the time consumption of the transmission processes in further offloading and data caching. Although the curves of our scheme and OMEC are not very smooth, we can still notice that their overall trend is decreasing. Similarly, the curve of NMEC is unchanged because further offloading and data caching are disabled. Different Data Transmission Rates between MEC-BS and MEC-NBS As shown in Figure 10, we measured the total time consumption of the system in completing all tasks under the three schemes with different data transmission rates between the MEC-BS and the MEC-NBSs. In the simulations, we increase s^BS2NBS_k and s^NBS2BS_k, ∀k ∈ K, from 10 to 40 MB/s, while the other settings are listed in Table 1. Similar to the data transmission rates between the MEC-NBSs and C, s^BS2NBS_k and s^NBS2BS_k, ∀k ∈ K, affect the time consumption of the transmission processes in further offloading and data caching. The overall trends of the OCMEC and OMEC curves are decreasing; similarly, the curve of NMEC is unchanged because further offloading and data caching are disabled. Comparison with Other Optimization Algorithms As shown in Figure 11, we compare our algorithm with two others: the Particle Swarm Optimization (PSO) algorithm and the Simulated Annealing (SA) algorithm. The convergence speeds of the three algorithms in finding the solution of our optimization problem are measured with M = 70, R = 6, K = 5, φ_k = 2.4 × 10^6 MIPS, and the other settings as in Table 1. The simulation results show that although all three algorithms converge after multiple iterations, our genetic algorithm has the fastest convergence (it converges at iteration 252, the PSO algorithm at iteration 281, and the SA algorithm at iteration 297). This demonstrates that our algorithm is more suitable for our optimization problem.
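The seed-averaging procedure used throughout these simulations can be expressed in a few lines; this hypothetical harness reuses the run_ga and fitness sketches from the Algorithm 1 example above and is illustrative only:

```python
import random
from statistics import mean

def run_once(seed):
    random.seed(seed)
    best = run_ga()          # genetic search sketched earlier
    return fitness(best)     # stands in for the total time consumption t

# 50 repeated simulations with different random seeds, as in the paper.
results = [run_once(seed) for seed in range(50)]
print(f"mean objective over 50 seeds: {mean(results):.4f}")
```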
Conclusions As one of the key research directions in future mobile networks, mobile edge computing technology can effectively alleviate the computing pressure and data interaction pressure of the mobile cloud, reduce the response delay of mobile terminals, and improve the overall efficiency of the system. In this paper, the characteristics of computation offloading and data caching in mobile edge computing are studied in depth, and a new resource management scheme based on the cooperation of multiple MEC base stations is proposed. In our scheme, the MEC-BS and MEC-NBSs can perform computational tasks offloaded from the cloud and cache the data needed to compute tasks from the cloud, minimizing the time consumed to perform all tasks. In our research, the various computing tasks of mobile terminals are characterized by computing load and data load, and the resource management problem is transformed into an optimization problem with the objective of minimizing the system time consumption. The Cooperative Resource Management Algorithm proposed in this paper consists of several separate, parallel-running loops. The performance of our scheme is evaluated and compared with the following two schemes: (1) a scheme without the data caching function; and (2) a scheme without data caching and computation offloading. Compared with the other two schemes, our scheme shows obvious superiority in three different situations and greatly improves the performance of the system. Our scheme provides a new resource management method leveraging the cooperation of multiple MEC-BSs. It can efficiently enhance the performance of computation-intensive and delay-sensitive applications and improve the user experience and device support for them. In our future work, we plan to extend our research to the area of IoT applications, such as the Internet of Vehicles, the Industrial Internet of Things, etc. The joint computation offloading and data caching problem will be studied to efficiently exploit the joint utilization of the resources of all BSs and the cloud to enhance the performance of task processing in these applications.
Annealing-Modulated Surface Reconstruction for Self-Assembly of High-Density Uniform InAs/GaAs Quantum Dots on Large Wafer Substrates In this work, we developed pre-growth annealing to form β2 reconstruction sites among the β or α (2 × 4) reconstruction phases to promote nucleation for high-density, size/wafer-uniform, photoluminescence (PL)-optimal InAs quantum dot (QD) growth on a large GaAs wafer. Using this, the QD density reached 580 (860) μm−2 at a room-temperature (T) spectral FWHM of 34 (41) meV at the wafer center (and surrounding) (high-rate, low-T growth). The smallest FWHM reached 23.6 (24.9) meV at a density of 190 (260) μm−2 (low-rate, high-T growth). The intermediate rate formed uniform QDs in the traditional β phase, at a density of 320 (400) μm−2 and a spectral FWHM of 28 (34) meV, while size-diverse QDs formed in β2 at a spectral FWHM of 92 (68) meV and a density of 370 (440) μm−2. From atomic-force-microscope QD height distributions and T-dependent PL spectroscopy, it is found that, compared to the dense QDs grown in the β phase (intermediate rate, 320 μm−2) with the most large dots (240 μm−2), the dense QDs grown in the β2 phase (580 μm−2) show many small dots with inter-dot coupling, in favor of unsaturated filling and high injection to large dots for PL. The controllable annealing (T, duration) forms β2 or β2-mixed α or β phases, in favor of a wafer-uniform dot island, and the faster T change enables an optimal T for QD growth. Introduction Self-assembled InAs quantum dots (QDs) have been used in optoelectronic devices, such as laser diodes [1], super-luminescence diodes [2], quantum emitters [3] and photodetectors [4], with flexible wavelength tuning by metamorphic growth [5][6][7], compatibility with silicon for integration [1,8,9], and high working temperatures (T) [10] and speed [11]. Molecular beam epitaxy, with its simple nucleation mechanism, ultra-high vacuum, and precise shutter control, can fabricate defect-free QDs with well-formed excitons [12]. For mass production of these devices, a large substrate with a homogeneous island is desired. However, different Ts at the wafer center and surrounding ('Surr') produce diverse migration; besides, QD self-assembly has a continuous size distribution and a critical coverage for large dots [13]. The QD density increases monotonically as the In deposition rate rises (932 µm−2 at the highest) [14] or when Al or Sb atoms [15,16] are used as preferential nucleation sites (>1000 µm−2). The strain sites in bilayer QDs also form uniform QDs (110 µm−2) with a PL spectral full width at half maximum (FWHM) of ~17.5 meV [10]. Around the QD growth T, the GaAs (001) surface has several arsenic-rich (2 × 4) reconstruction phases: α, β, β2 and γ [17], with different influences on In adatom migration and nucleation. After GaAs growth, the γ phase, with arsenic dimers that favor cluster formation, must be reduced for the QD island, usually by a 'stay-by' at the growth T in lower arsenic pressure for minutes, to form β with ordered surface arsenic in favor of In migration (T-sensitive).
In this work, we use in situ annealing to form β2 nucleation sites for a uniform QD island at a spectral FWHM of ~23.6 meV (110~180 µm−2); an α phase with sufficient migration forms QDs with wafer-uniform, high PL at a FWHM of 24~25 meV; a β phase at a high rate forms a QD density of 580~820 µm−2 with wafer-uniform PL spectra (i.e., large dots) (FWHM: 34~41 meV, as in [14]); an intermediate rate forms uniform-size QDs in β (density: 320~400 µm−2, FWHM: 28~34 meV), while diverse QDs form in β/β2 (density: 370~440 µm−2, FWHM: 92~68 meV) with islanding in advance. Based on the PL behaviors, a physical picture of nucleation is obtained: the annealing forms a β2-mixed α or β phase on a 3-inch wafer for a uniform island. Intermediate annealing forms more β2 sites to limit migration, while high (low) annealing forms an α (β) phase to promote it. The α phase with Ga-Ga bonds favors In migration; arsenic bonding to Ga breaks the Ga-Ga bond and desorbs the Ga-As molecule to transfer α to β2 (i.e., the nucleation site) in the lower-T region, while forming a Ga-As-Ga bond (i.e., direct α->β transfer) in the higher-T region with In migration unaffected. The QD growth parameters were optimized and the QD height distribution was analyzed. PL spectra at a low T or under strong excitation reflect photocarrier populations in the discrete states. The ground state in the QDs shows a thermal activation energy of 0.24 eV (as the T-dependent PL spectra reflect), favorable for high-T operation. High-density QDs with little-filled excited states are desired for high injection and optimal emission. The annealing offers flexible tuning of the reconstruction phase, dependent on the annealing T, duration, and arsenic pressure. Experimental Section The QDs were grown on a 3-inch N+ GaAs(001) substrate in a solid-source molecular beam epitaxy system. The epi-ready substrate was directly fixed on the standard holder with a sapphire wafer at its back for uniform heating. The QD growth was carried out as follows: after deoxidizing at 690 °C, the substrate was cooled to 630 °C to grow a 300 nm GaAs buffer at a rate of 0.6 µm/h and an arsenic (As2) V/III ratio of ~15, and then to 525~540 °C to grow three layers of InAs QDs, each capped by a 4 nm InGaAs strain-reducing layer (SRL) and a 40 nm GaAs spacer, plus an uncapped QD layer for atomic force microscopy (AFM). The QD height was extracted from the AFM images for statistics of the size distribution [13]. The In deposition in each QD layer was divided into 4~8 cycles, each with a deposition step (2~4 s; rate: 0.05, 0.1 or 0.2 ML/s) and an interrupt step (10 s) for sufficient migration, at an As2 pressure of 1.3 × 10−6 Torr, with the critical coverage θ_c for the island monitored by reflection high-energy electron diffraction (RHEED). The SRL with a proper In content forms a dot-in-well structure to reduce In-Ga mixing and keep the QD height uniform and, also, to tune the QD wavelength. The in situ annealing was performed at the same As2 pressure by heating to >580 °C in ~6 min and then cooling back to the QD growth T (schematized in Figure 1a). The phase change is based on the fact that at a high T arsenic has a much higher vapor pressure than Ga [18], which reduces the residual surface arsenic, and the surface atom migration is fast for reconstruction. Annealing at different Ts forms different amounts of β2 (the nucleation site) among the β or α phases, offering a flexible phase preparation, as compared to the 'stay-by' used to form the β/γ phase.
Since the duration of the QD growth is <90 s, faster than the substrate T change (−10 °C/min) and the reconstruction phase change (tens of min), the annealing enables QD islanding at an optimal T and an optimal reconstruction phase: β2, or β2 sites mixed with α or β. The T mentioned here is the nominal T obtained from the electronic module. The PL performance was studied using a spectrograph equipped with a cooled InGaAs linear array detector: the room-T PL was measured with a multi-mode optical fiber beam splitter to introduce a laser (632.8 nm, 2 mW) and collect PL, using a fiber probe on a sample micro-region of 62 × 62 µm²; the cryogenic-T PL was measured with a confocal microscope (collection efficiency <5%) under high-power 532 nm laser excitation (25 mW, focused on 4 µm²), with the sample mounted on the cold finger (T = 4.5 K) of a closed-cycle helium cryostat. Their comparison reflects the carrier populations in the QD discrete states. The QDs show two size modes and the PL is mainly from large dots, with the spectral FWHM related to the size distribution. The PL intensity at different sample Ts follows the Arrhenius equation I(T) = I_0/[1 + Σ A_i exp(−E_i/kT)], where E_i is the QD thermal activation energy and i labels the different discrete levels. Figure 1 gives the phase diagram and structure models of the (2 × 4) reconstruction phases, with descriptions of their influences on In adatom migration and nucleation. Figure 1(b). Structure model of the arsenic-rich (2 × 4) reconstruction phases [17] and schematic of In migration and nucleation on them: γ at the lowest T with arsenic dimers tends to bond In adatoms to form clusters; β with ordered arsenic promotes In migration; β2 with an atomic stage favors In adatom bonding to form tall QDs; α at higher T with Ga-Ga bonds and less surface arsenic promotes In migration and enables easy transfer to β2 with arsenic (removing a Ga atom from the Ga-Ga bond) or to β at higher T (forming a Ga-As-Ga bond), for a wafer-uniform island. The deposition of sub-ML Ga will form a Ga-rich (4 × 6) reconstruction [19]. At T > 580 °C, there is a Ga-rich (2 × 1) reconstruction [20] where In desorption is great and islanding is hard. Results and Discussion The QD growth (Figure 2c) was at 525 °C with annealing. As the In deposition amount rises, the QD density first increases and then decreases (with clusters forming). From the PL spectra it was found that for the rate of 0.05 ML/s the optimal amount is 1.4 ML (θ_c: 1.2 ML), before bi-mode large QDs form at 1.6 ML (larger spectral FWHM). For the rate of 0.1 (0.2) ML/s with slow migration, the optimal amount for a sufficient island is 2.0 ML (θ_c: 1.6 ML). For the rate of 0.2 ML/s with limited migration (β2), as the In deposition rises from 2.0 to 2.2 ML, the QD density remains at 5.8 × 10^10 cm−2 in the center while rising from (7.4 ± 1.2) × 10^10 to (8.2 ± 0.9) × 10^10 cm−2 at the Surr (the variation is related to the T distribution), with the highest reaching 9.1 × 10^10 cm−2 as presented. The dilute QD growth shows three critical coverages (θ_c −0.1, −0.07 and −0.01 ML) for large-dot (height: 4~5, 7~11 and 12~18 nm) formation [13]; the QD ensemble here shows a continuous height distribution (2~9 nm) with more In coverage. The QDs showed two size modes and the room-T PL spectra are from large dots at the 2nd critical coverage. The spectral peak at 1.32 µm in samples 2a1.4 and 2b2.0 (shifting to 1.22~1.24 µm at T = 5 K, as Figure 2h,i shows) is from large dots at a height of 5~9 nm, >80% of all. The QD height decreases to ~7.4 nm after capping, as simulated in terms of their discrete levels [21]. For the high rate of 0.2 ML/s (i.e., the 2c series), the QDs show lower height and dense small dots (height: 2~4.5 nm).
The room-T PL spectral peak at 1.25 µm (shifting to 1.16 µm at 5 K, Figure 2j) is from large dots at a height of 4.5~7.0 nm; the higher amount of 2.2 ML greatly increases them (33% of all dots, 190 µm−2) for higher PL, with 7~8 nm-height QDs grouped into clusters. The coexisting clusters (more at the Surr) affect the PL performance of the dense QDs little, due to limited migration, while fast migration at a low rate (samples 2a and 2b) shows QD PL seriously affected by cluster formation. In the 2c series, the PL spectra show wafer uniformity since the pre-growth annealing at 580 °C formed β2 sites on the whole wafer for preferential nucleation; the dense QDs show a PL spectral FWHM of 34 (41) meV at the wafer center (Surr), largely uniform (a lower- or higher-T growth will increase the FWHM); the lower PL intensity than sample 2a comes from the random filling of photo-holes and electrons in the dense QDs. At low T with complete filling of the large dots, there is a comparable maximal intensity (see T-dependent spectra, Figure 3). Compared to samples 2a and 4b with low-T PL emission at 1.22~1.24 µm from large dots at a height of 5~8 nm (Figure 2h,j), the dilute QDs show PL emissions at 1.22~1.24 µm (T = 77 K) correlated to QDs of AFM height 15~17 nm [13], i.e., there is more strain inside than in the dense QDs of 2a and 4b. Accordingly, the shorter emission wavelength of the dense QDs in the 2c series is mainly from their smaller height than in 2a and 2b (see the QD height distribution), instead of strain accumulation. Large dots reach their highest density (240 µm−2, 80% of all) at the rate of 0.1 ML/s (2b2.0). However, as the low-T PL spectra show (see Figure 2h-j), the dense QDs in the 2c series show more ground-state population, i.e., more large dots formed, likely during the SRL capping. As Figure 3 shows, sample 4b (with the same low rate and low QD density as 2a) shows an obvious p-state population and reduced s-state emission at T < 160 K, related to the QD filling-induced 'environment' electric field or to the population in small dots, which must be considered for devices. The excited state in the dense QDs (2c) is little filled as T varies or as the excitation power rises, likely due to their inter-dot coupling, desired for high injection and optimal emission in the ground state. For the growth rate of 0.1 ML/s, sample 2b2.0 shows similar PL spectra and wafer distribution as the rate of 0.05 ML/s (2a1.4); its spectral FWHM is 28.3 (34.1) meV at the wafer center (Surr), higher than 2a1.4 (24.6 meV at the center); its greater number of large dots shows more s-state transitions in the low-T PL spectrum (Figure 2h). In Figure 3, the T-dependent PL spectra reflect comparable intensity for the dense QDs in samples 2c and 4b at low T, and QD thermal activation energies of ~0.24 eV for the QD s-state and ~0.17 eV for the p-state, reflecting the QD electron level offset to the continuum conduction band [21] (meanwhile, the QD hole level offset to the continuum valence band is ~0.19 eV, promising for a high T0 in a laser diode). As the sample T decreases, the PL peak blue-shifts; with respect to the QD s-state, a larger shift with T exists for the p-state, reflecting more influence from the continuum band. For the dense QDs (2c), there is a smaller energy shift in the QD p-state, possibly related to inter-dot coupling and mini-band formation. Figure 4 explores the influence of different annealing Ts (T_an) on the QD growth at a rate of 0.05 ML/s at the optimal amount (1.4 ML) and a growth T (T_gr) of 540 °C.
The PL peak is from large dots (height: 5~9 nm) with a spectral FWHM of ~24 meV. The smallest FWHM is 23.0 or 23.6 meV, obtained in 4d or 4c with β2 (T_an = 600 °C). Although the same nominal T was set for 4c and 4d, the real T was a little higher in 4d, with more β2 for a uniform island at both the center and Surr (see AFM image), forming QDs at lower density and smaller FWHM, with sharp tops and small bases, and with more strain accumulated, giving a low PL. In β2, as samples 4c-f (AFM images, PL spectra and QD height distributions) show, the QD island is sensitive to the growth T (T_gr), i.e., a 5 °C increase will improve In migration greatly. The QD height distributions on the left reflect the QD growth: in samples 4f and 4e with lower T_gr, the QD height is distributed evenly over 2~9 nm, with a PL spectrum of broad profile and a peak at 1.3 µm from the large dots (height: 7~8 nm). In 4b and 4d with higher T_gr, the QD height distribution changes and more large dots (height: 7~10 nm) form. In the β phase (4a, 4c and 2a), the QD growth shows a similar height distribution as in 4b and 4d but at a lower height of 3~9 nm. The 1 nm increase of the QD height in β2 comes from preferential nucleation at the β2 nucleation sites (atomic stage). In α (T_an = 620 °C, 4b) or β ('stay-by', 4a) with improved migration for a sufficient island, a high PL appears with a larger FWHM and the same density (1.9 (2.6) × 10^10 cm−2 at the center (Surr)) as sample 2a1.4. In the α phase (4b), the PL spectra are wafer-uniform: at the Surr, the high PL intensity, comparable to the center, is due to the β2-mixed α phase improving migration and nucleation for a uniform island of tall, large-base QDs (see AFM image) at a FWHM of ~24.9 meV. In contrast, the β phase shows diverse-size QDs at the Surr, where the T is lower for migration. The T-related migration leads to the same behavior in 4c and 4d. The wafer center shows sufficient migration for the island at T_gr ≥ 535 °C. Sample 4g (0.1 ML/s, fast migration; T_an = 580 °C, 4 min, a small amount of β2) shows diverse QDs (height 2~7 nm, see size distribution) in the center, at a density of 3.7 × 10^10 cm−2 for a deposition amount of 1.6 ML (the same level is achieved at an amount of 2.0 ML in the β phase, 2b) and a spectral FWHM of 117 nm (two size modes, 2~5 nm and 5~7 nm in height, with overlapping spectra) (a higher spectral FWHM and PL intensity, with QD height covering 3~10 nm, are expected when the In deposition amount increases to 2.0 ML); meanwhile, it shows uniform QDs (FWHM ~68 meV) at the Surr at a density of 4.4 × 10^10 cm−2 and a spectrum similar to 2c (i.e., lower T with slow migration for a sufficient island on a small amount of β2 sites). Without the annealing or β2 sites, uniform QDs form in the β phase at the same T_gr (2b in Figure 2, FWHM ~28.6 meV, large dots of height 5.5~8 nm, see height distribution). Sample 4f, grown at 0.05 ML/s with T_an = 600 °C (more β2 sites), also shows diverse-size QDs at both the wafer center and Surr (see AFM images), a uniform spectrum in the center (un-ripened), and a lower, broad spectrum at the Surr (more overlap of the QD bases). The diverse-size QDs in sample 4g (center) show little overlap of the QD bases. In all, the growth of diverse-size QDs requires β2 nucleation sites and a non-uniform In atom supply (i.e., more β2, lower rate, higher T_gr), unlike the growth of uniform dense QDs (i.e., high rate, proper T_gr and a proper β2 amount) with sufficient migration.
For a high rate, high-T_gr growth often promoted migration and formed QDs at a lower density than expected, as the AFM images in Figure 5 show. In the β phase with a smaller amount of β2 sites (T_an = 550 °C), sample 5a also shows lower-density QDs, 3.3 (4.9) × 10¹⁰ cm⁻² at the wafer center (Surr), and different PL intensities across the wafer. In all, to form a QD density >5 × 10¹⁰ cm⁻², it is better to use proper β2 nucleation sites at a proper T_gr to limit migration and enhance islanding. Unlike the precise calibration of T_gr before each run by RHEED observation of the β-to-γ transition point, the pre-growth annealing (T_an, duration) offers a universal modulation of the reconstruction phase over the whole wafer (β2-mixed β or α) with a large tolerance to wafer T variation. Figure 5 explores the annealing effect on QD growth at a high rate. QDs were grown at 540 °C at a sub-optimal amount, at a density of 2.5 (3.4) × 10¹⁰ cm⁻² at the wafer center (Surr), as shown in Figure 5d-f. These densities were lower than those grown at 525 °C with the same amount (3.3 (4.7) × 10¹⁰ cm⁻²; see Figure 2c, the plot of QD density vs. deposition amount). Different annealing temperatures (±20 °C) define different reconstruction phases, as indicated. In β2 (5d,e) or β2-mixed α (5f,g), wafer-uniform but PL-lacking QDs form, with a broad size distribution (height 3~9 nm). In the β phase (5b,c), with T-sensitive migration across the wafer, the optimal deposition amount is reduced at the high T_gr: sample 5c, at an amount of 1.8 ML, shows sufficient islanding and PL-optimal QDs, wafer-uniform owing to the mixed β2 (as in 2c and 5d,e), at a lower density of 2.1 (3.0) × 10¹⁰ cm⁻² at the wafer center (Surr), with a broad height distribution, i.e., non-uniform islanding. Sample 5b at 2.0 ML shows a red-shifted PL spectral peak in the center (QD height increased to 4.5~10 nm, see the distribution) and a degraded PL at the Surr (QD density increased to 3.6 × 10¹⁰ cm⁻², with the high rate and high T giving sufficient islanding). In the β2 and α phases, the PL-optimal deposition amount is retained, independent of the higher T_gr, as the PL spectra (similar to those grown at 525 °C with the same amount, 2c1.8) reflect. In the β2 phase (5d,e), with limited migration over the whole wafer, the PL is wafer-uniform; the higher-In SRL in 5e greatly improves it, since In-Ga intermixing is reduced and the QD height is retained. In the α phase (5f,g), increasing the arsenic pressure (2 × 10⁻⁶ Torr) during annealing (5g) produces a blue-shifted PL spectral profile at the Surr with an increased QD density, while in the center the PL peak persists with the QD density also increased, to 3.0 × 10¹⁰ cm⁻², within the same height distribution as shown. The PL blue-shift at the Surr is attributed to Ga-As bond formation and desorption in the form of Ga-As molecules in the arsenic atmosphere, needed to break the Ga-Ga bonds in the α phase, leaving β2 with limited migration for islanding. At the wafer center, at a higher T, the fast adatom migration forms Ga-As-Ga bonds (i.e., α transfers to β with migration unaffected), and a PL peak from the QDs is always present, independent of the arsenic pressure. In all, high-T growth in β promotes islanding in advance (5c, 4g) via the mixed β2 sites; high-T growth in α (5f, 4b) shows a transfer to β2 at the wafer Surr or to β at the center, with improved wafer-uniform QD islanding.
Figure 2 (caption, continued). For (f) and (g), at the wafer Surr, the images with higher density were selected. Bottom: excitation power-dependent T = 5 K PL spectra of QD samples (h) 2b2.0, (i) 2a1.4, and (j) 2c2.0. There are two spectral peaks; for the higher-energy peak, the slope S of the excitation power-dependent intensity I = I₀ × (P/P₀)^S is marked. As a reference, the slope for the large-dot ground-state (i.e., low-energy) peak is normalized to 1.
Figure 3. T-dependent PL spectra of QDs in samples 2c (a) and 4b (see Figure 4) (b); insets: PL intensity and peak energy (red: p-state, black: s-state) as a function of the sample T during the measurement, with Arrhenius fitting curves to extract Ea. The sharp line at 1064 nm comes from the 532 nm laser source.
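As a reproducibility aid for the two fits referenced in these captions, a minimal sketch follows. The temperature grid, noise level and starting values are placeholders rather than the measured data, and the single-channel quenching form I(T) = I₀/(1 + A·exp(−Ea/kB·T)) is a common choice for Arrhenius fits of PL intensity; the authors' exact fitting function is not reproduced in the text.

```python
import numpy as np
from scipy.optimize import curve_fit

# --- Slope S of I = I0 * (P/P0)**S: a straight-line fit in log-log space ---
def power_law_slope(P, I):
    """Return the exponent S from excitation powers P and PL intensities I."""
    slope, _ = np.polyfit(np.log(P), np.log(I), 1)
    return slope

# --- Arrhenius fit of thermal quenching (common single-channel form) ---
kB = 8.617e-5  # Boltzmann constant, eV/K

def quench(T, I0, A, Ea):
    """I(T) = I0 / (1 + A*exp(-Ea/(kB*T))); Ea in eV."""
    return I0 / (1.0 + A * np.exp(-Ea / (kB * T)))

# Placeholder data for illustration only (not the measured values):
T = np.linspace(10, 300, 30)                      # sample temperature, K
I = quench(T, 1.0, 2e4, 0.24) * (1 + 0.02 * np.random.randn(T.size))
(I0, A, Ea), _ = curve_fit(quench, T, I, p0=(1.0, 1e4, 0.2))
print(f"fitted Ea = {Ea:.3f} eV")                 # ~0.24 eV for the s-state
```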
The PL performance of QDs of different densities is estimated with a formula that considers the random filling of photo-electrons and holes in the QDs: I ~ η[n + (n − 1)C(N − n, 1) + (n − 2)C(N − n, 2) + (n − 3)C(N − n, 3) + ...]/C(N, n) for N > n, and I ~ ηN/n for N < n, where N denotes the QD number in a 200 × 200 nm² sub-region, n the number of laser-excited photo-electrons, η the QD quantum yield, C(m, k) the number of combinations of k electrons filling m dots, and I the PL intensity (i.e., the spectral peak area). In this estimate, the dense QDs of the 2c series maintain the highest η (PL performance). This is also reflected in the T = 5 K PL spectra with the excited state filled (Figures 2h-j and 3): the 2c series maintains the highest PL (i.e., a barely filled excited state); the samples grown at a rate of 0.05 ML/s with fewer large dots show an obvious excited-state population (e.g., 2a, Figure 2i); the sample at 0.1 ML/s with more large dots shows a smaller excited-state population (2b). As the excitation power increases, the excited state becomes more populated, as Figure 2h-j shows.
Conclusions
In summary, we use pre-growth annealing to form the less-arsenic β2 reconstruction and fabricate high-density, uniform InAs quantum dots (QDs) on 3-inch GaAs substrates with wafer-uniform PL. The maximum density reaches 5.8 (8.2 ± 0.9) × 10¹⁰ cm⁻², with a room-T PL spectral FWHM of 34 meV (41 meV) at the wafer center (Surr), grown at a high rate and a low T to limit the In migration. Low-rate growth at a high T forms uniform QDs at a density of 1.9 × 10¹⁰ cm⁻² and a spectral FWHM of ~23.6 meV. Both show wafer-uniform PL spectra, unlike the (T-sensitive) β phase. A medium rate at a low T forms uniform QDs in the β phase (3.2 × 10¹⁰ cm⁻², FWHM ~28.6 meV) at the center and diverse QDs in β2 (3.7 × 10¹⁰ cm⁻², spectral FWHM ~117 nm). The pre-growth annealing provides a controllable modulation of the surface reconstruction and enables an optimal T for forming dense, wafer-uniform QDs.
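To make the random-filling estimate above (Results) concrete, a small numerical sketch follows; it implements the quoted series literally, and the example values of N and n are hypothetical.

```python
from math import comb

def pl_intensity(N, n, eta=1.0):
    """Random-filling PL estimate quoted in the text:
    I ~ eta*[n + (n-1)C(N-n,1) + (n-2)C(N-n,2) + ...]/C(N,n)  for N > n,
    I ~ eta*N/n                                               for N < n,
    with N dots per 200x200 nm^2 sub-region and n photo-electrons."""
    if N <= n:
        return eta * N / n
    # k = 0 gives the leading term n; coefficients run down to 1 at k = n-1
    series = sum((n - k) * comb(N - n, k) for k in range(n))
    return eta * series / comb(N, n)

# Example (hypothetical numbers): how the estimate varies with QD density
for N in (4, 8, 16, 32):
    print(N, round(pl_intensity(N, n=4), 4))
```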
8,579.2
2023-06-28T00:00:00.000
[ "Materials Science", "Physics" ]
A Neural Transition-based Model for Nested Mention Recognition
It is common for entity mentions to contain other mentions recursively. This paper introduces a scalable transition-based method to model the nested structure of mentions. We first map a sentence with nested mentions to a designated forest where each mention corresponds to a constituent of the forest. Our shift-reduce based system then learns to construct the forest structure in a bottom-up manner through an action sequence whose maximal length is guaranteed to be three times the sentence length. Based on Stack-LSTM, which is employed to efficiently and effectively represent the states of the system in a continuous space, our system further incorporates a character-based component to capture letter-level patterns. Our model achieves state-of-the-art performance on the ACE datasets, showing its effectiveness in detecting nested mentions.
Introduction
There has been increasing interest in named entity recognition, or more generally in recognizing entity mentions² (Alex et al., 2007; Finkel and Manning, 2009; Lu and Roth, 2015; Muis and Lu, 2017), where the nested hierarchical structure of entity mentions should be taken into account to better facilitate downstream tasks like question answering (Abney et al., 2000), relation extraction (Mintz et al., 2009; Liu et al., 2017), event extraction (Riedel and McCallum, 2011; Li et al., 2013), and coreference resolution (Soon et al., 2001; Ng and Cardie, 2002; Chang et al., 2013). Practically, mentions with nested structures frequently occur in news (Doddington et al., 2004) and biomedical documents (Kim et al., 2003).
¹ We make our implementation available at https://github.com/berlino/nest-trans-em18.
² Mentions are defined as references to entities that could be named, nominal or pronominal (Florian et al., 2004).
Traditional sequence labeling models such as conditional random fields (CRF) (Lafferty et al., 2001) do not allow hierarchical structures between segments, making them incapable of handling such problems. Finkel and Manning (2009) presented a chart-based parsing approach where each sentence with nested mentions is mapped to a rooted constituent tree. The issue with a chart-based parser is its cubic time complexity in the number of words in the sentence. To achieve a scalable and effective solution for recognizing nested mentions, we design a transition-based system inspired by the recent success of transition-based methods for constituent parsing (Zhang and Clark, 2009) and named entity recognition (Lou et al., 2017), especially when paired with neural networks (Watanabe and Sumita, 2015). Generally, each sentence with nested mentions is mapped to a forest where each outermost mention forms a tree consisting of its inner mentions. Our transition-based system then learns to construct this forest through a sequence of shift-reduce actions; Figure 1 shows an example of such a forest. In contrast, the tree structure of Finkel and Manning (2009) additionally uses a root node to connect all tree elements. Our forest representation eliminates the root node, so the number of actions required to construct it can be reduced significantly. Following Dyer et al. (2015), we employ Stack-LSTM to incrementally represent the system's state, consisting of the states of the input, stack and action history, in a continuous space.
The (partially) processed nested mentions in the stack are encoded with recursive neural networks (Socher et al., 2013), where composition functions are used to capture dependencies between nested mentions. Based on the observation that letter-level patterns such as capitalization and prefixes can be beneficial in detecting mentions, we incorporate a character-level LSTM to capture such morphological information. Meanwhile, this character-level component can also help deal with the out-of-vocabulary problem of neural models. We conduct experiments on three standard datasets. Our system achieves state-of-the-art performance on the ACE datasets and comparable performance on the GENIA dataset.
Related Work
Entity mention recognition with nested structures was first explored with rule-based approaches (Zhou, 2006), where the authors first detected the innermost mentions and then relied on rule-based postprocessing methods to identify outer mentions. McDonald et al. (2005) proposed a structured multi-label model to represent overlapping segments in a sentence, but it came with a cubic time complexity in the number of words. Alex et al. (2007) proposed several ways to combine multiple conditional random fields (CRF) (Lafferty et al., 2001) for such tasks. Their best results were obtained by cascading several CRF models in a specific order, with each model responsible for detecting mentions of a particular type. However, such an approach cannot model nested mentions of the same type, which frequently appear. Lu and Roth (2015) and Muis and Lu (2017) proposed the new representations of mention hypergraph and mention separator to model overlapping mentions. However, the nested structure is not guaranteed in such approaches, since overlapping structures additionally include crossing structures³, which rarely exist in practice (Lu and Roth, 2015). Also, their representations do not model the dependencies between nested mentions explicitly, which may limit their performance. In contrast, the chart-based parsing method (Finkel and Manning, 2009) can capture the dependencies between nested mentions with composition rules that allow an outer entity to be influenced by its contained entities. However, its cubic time complexity makes it not scalable to large datasets. As neural network based approaches have proven effective in entity or mention recognition (Collobert et al., 2011; Lample et al., 2016; Huang et al., 2015; Chiu and Nichols, 2016; Ma and Hovy, 2016), recent efforts focus on incorporating neural components for recognizing nested mentions. Ju et al. (2018) dynamically stacked multiple LSTM-CRF layers (Lample et al., 2016), detecting mentions in an inside-out manner until no outer entities are extracted. Katiyar and Cardie (2018) used recurrent neural networks to extract features for a hypergraph which encodes all nested mentions based on the BILOU tagging scheme.
Model
Specifically, given a sequence of words {x_0, x_1, ..., x_n}, the goal of our system is to output a set of mentions in which nested structures are allowed. We use the forest structure to model the nested mentions scattered in a sentence, as shown in Figure 1. The mapping is straightforward: each outermost mention forms a tree in which the mention is the root and its contained mentions correspond to constituents of the tree.⁴
Shift-Reduce System
Our transition-based model is based on the shift-reduce parser for constituency parsing (Watanabe and Sumita, 2015), which adopts the strategies of Zhang and Clark (2009) and Sagae and Lavie (2005).
Generally, our system employs a stack to store (partially) processed nested elements. The system's state is defined as [S, i, A], which denotes the stack, the buffer front index and the action history, respectively. In each step, an action is applied to change the system's state. Our system consists of three types of transition actions, which are also summarized in Figure 2:
• SHIFT pushes the next word from the buffer onto the stack.
• REDUCE-X pops the top two items t_0 and t_1 from the stack and combines them as a new tree element {X → t_0 t_1}, which is then pushed onto the stack.
• UNARY-X pops the top item t_0 from the stack and constructs a new tree element {X → t_0}, which is pushed back onto the stack.
Since the shift-reduce system assumes unary and binary branching, we binarize the trees in each forest in a left-branching manner. For example, if three consecutive words A, B, C are annotated as Person, we convert them into a binary tree {Person → {Person* → A, B}, C}, where Person* is a temporary label for Person. Hence, the X in reduce-actions will also include such temporary labels. Note that since most words are not contained in any mention, they are only shifted to the stack and are not involved in any reduce- or unary-actions. An example sequence of transitions can be found in Figure 3. Our shift-reduce system differs from previous parsers in terms of the terminal state. 1) It does not require the terminal stack to be a rooted tree; instead, the final stack should be a forest consisting of multiple nested elements with tree structures. 2) To conveniently determine the end of the transition process, we add an auxiliary symbol $ to each sentence; once it is pushed to the stack, it implies that all deductions of actual words are finished. Since we do not allow unary rules between labels like X1 → X2, the length of the maximal action sequence is 3n.⁵
⁵ In this case, each word is shifted (n) and involved in a unary action (n). Then all elements are reduced to a single node (n − 1). The last action is to shift the symbol $.
Action Constraints
To make sure that each action sequence is valid, we need to place some hard constraints on the actions to take. For example, a reduce-action can only be conducted when there are at least two elements in the stack. Please see the Appendix for the full list of restrictions. Formally, we use V(S, i, A) to denote the valid actions given the parser state. Let us denote the feature vector for the parser state at time step k as p_k. The distribution of actions is then computed as
p(z | p_k) = exp(w_z^⊤ p_k + b_z) / Σ_{z′ ∈ V(S,i,A)} exp(w_{z′}^⊤ p_k + b_{z′}),   (1)
where w_z is a column weight vector for action z, and b_z is a bias term.
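To make the transition system concrete, here is a minimal, dependency-free Python sketch of the action semantics above; it contains no scoring model, and the gold action sequence and entity labels in the example are illustrative.

```python
from typing import List, Tuple, Union

Tree = Union[str, Tuple[str, tuple]]  # a raw word, or (label, children)

def run_transitions(words: List[str], actions: List[str]) -> List[Tree]:
    """Execute a SHIFT / UNARY-X / REDUCE-X sequence over a sentence.
    The auxiliary symbol $ is appended so the maximal sequence length is 3n."""
    buffer = list(words) + ["$"]
    stack: List[Tree] = []
    for act in actions:
        if act == "SHIFT":
            stack.append(buffer.pop(0))
        elif act.startswith("UNARY-"):
            label = act[len("UNARY-"):]
            stack.append((label, (stack.pop(),)))
        elif act.startswith("REDUCE-"):
            label = act[len("REDUCE-"):]
            t0, t1 = stack.pop(), stack.pop()  # t0 is the top item
            stack.append((label, (t1, t0)))    # left-branching binarization
        else:
            raise ValueError(f"unknown action: {act}")
    assert stack and stack[-1] == "$", "terminal state must end with $"
    return stack[:-1]  # a forest: trees for mentions, bare words otherwise

# 'Thai news agency' as an ORG containing the GPE 'Thai'; ORG* is the
# temporary label introduced by the left-branching binarization.
forest = run_transitions(
    ["Thai", "news", "agency"],
    ["SHIFT", "UNARY-GPE", "SHIFT", "REDUCE-ORG*", "SHIFT", "REDUCE-ORG", "SHIFT"],
)
print(forest)  # [('ORG', (('ORG*', (('GPE', ('Thai',)), 'news')), 'agency'))]
```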
Neural Transition-based Model
We use neural networks to learn the representation of the parser state, which is p_k in (1).
Representation of Words
Words are represented by concatenating three vectors:
x_i = [e_{w_i}; e_{p_i}; c_{w_i}],
where e_{w_i} and e_{p_i} denote the embeddings for the i-th word and its POS tag, respectively, and c_{w_i} denotes the representation learned by a character-level model using a bidirectional LSTM. Specifically, for the character sequence s_0, s_1, ..., s_n in the i-th word, we use the last hidden states of the forward and backward LSTMs as the character-based representation of this word.
Representation of Parser States
Generally, the buffer and the action history are encoded using two vanilla LSTMs (Graves and Schmidhuber, 2005). For the stack, which involves popping out top elements, we use the Stack-LSTM (Dyer et al., 2015) to encode it efficiently. Formally, if the unprocessed word sequence in the buffer is x_i, x_{i+1}, ..., x_n and the action history sequence is a_0, a_1, ..., a_{k−1}, then we can compute the buffer representation b_k and the action history representation a_k at time step k as
b_k = LSTM_b[e_{x_i}, ..., e_{x_n}],  a_k = LSTM_a[e_{a_0}, ..., e_{a_{k−1}}],
where each action is also mapped to a distributed representation e_a.⁶ For the state of the stack, we also use an LSTM to encode the sequence of tree elements. However, the top elements of the stack are updated frequently; the Stack-LSTM provides an efficient implementation that incorporates a stack pointer.⁷ Formally, the state of the stack s_k at time step k is computed as
s_k = Stack-LSTM[h_{t_m}, ..., h_{t_0}],
where h_{t_i} denotes the representation of the i-th tree element from the top, which can be computed recursively, similar to the Recursive Neural Network (Socher et al., 2013), as
h_{parent} = g(W_{u,l} h_{child}) for a unary branch, h_{parent} = g(W_{b,l} [h_{left}; h_{right}]) for a binary branch,
where g is an elementwise nonlinearity, and W_{u,l} and W_{b,l} denote the weight matrices for unary (u) and binary (b) composition with the parent node having label l. Note that the composition function is distinct for each label l. Recall that the leaf nodes of each tree element are raw words. Instead of representing them with their original embeddings introduced in Section 3.3, we found in our initial experiments that additionally concatenating the buffer state b_k is beneficial: formally, when a word x_i is shifted to the stack at time step k, its representation is computed from the concatenation [x_i; b_k].
⁶ Note that LSTM_b runs in a right-to-left order, such that its output can represent the contextual information of x_i.
⁷ Please refer to Dyer et al. (2015) for details.
Finally, the state of the system p_k is the concatenation of the states of the buffer, the stack and the action history:
p_k = [b_k; s_k; a_k].
Training
We employ a greedy strategy to maximize the log-likelihood of the local action classifier in (1). Specifically, let z_{ik} denote the k-th action for the i-th sentence; the loss function with an ℓ2 penalty is
L(θ) = − Σ_i Σ_k log p(z_{ik} | p_{ik}) + (λ/2) ||θ||²,
where λ is the ℓ2 coefficient.
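To make the state representation concrete, the following skeletal PyTorch sketch shows label-specific composition functions and the assembly of the parser state p_k. The hidden size, the tanh nonlinearity and the module layout are our assumptions rather than the paper's exact configuration, and the Stack-LSTM bookkeeping is abstracted away.

```python
import torch
import torch.nn as nn

class Composer(nn.Module):
    """Label-specific unary/binary composition for stack tree elements."""
    def __init__(self, labels, dim):
        super().__init__()
        self.unary = nn.ModuleDict({l: nn.Linear(dim, dim) for l in labels})
        self.binary = nn.ModuleDict({l: nn.Linear(2 * dim, dim) for l in labels})

    def compose_unary(self, label, child):
        return torch.tanh(self.unary[label](child))

    def compose_binary(self, label, left, right):
        return torch.tanh(self.binary[label](torch.cat([left, right], dim=-1)))

def parser_state(buffer_state, stack_state, action_state):
    """p_k = [b_k; s_k; a_k], fed to the softmax over valid actions."""
    return torch.cat([buffer_state, stack_state, action_state], dim=-1)

# Toy usage with hypothetical 8-dimensional states:
comp = Composer(["ORG", "ORG*", "GPE"], dim=8)
h = comp.compose_binary("ORG", torch.randn(8), torch.randn(8))
p_k = parser_state(torch.randn(8), h, torch.randn(8))
print(p_k.shape)  # torch.Size([24])
```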
Experiments
We mainly evaluate our model on the standard ACE-04, ACE-05 (Doddington et al., 2004) and GENIA (Kim et al., 2003) datasets, with the same splits used by previous research efforts (Lu and Roth, 2015; Muis and Lu, 2017). In the ACE datasets, more than 40% of the mentions form nested structures with some other mention; in GENIA, this number is 18%. Please see Lu and Roth (2015) for the full statistics.
Setup
Pre-trained GloVe embeddings (Pennington et al., 2014) of dimension 100 are used to initialize the word vectors for all three datasets.⁹ The embeddings of POS tags are initialized randomly with dimension 32. The model is trained using Adam (Kingma and Ba, 2014) with gradient clipping at 3.0. Early stopping is used based on the performance on the development sets. Dropout (Srivastava et al., 2014) is used after the input layer. The ℓ2 coefficient λ is also tuned during the development process.
⁹ We additionally tried using embeddings trained on PubMed for GENIA, but the performance was comparable.
Results
The main results are reported in Table 1. Our neural transition-based model achieves the best results on the ACE datasets and comparable results on the GENIA dataset in terms of F1 measure. We hypothesize that the performance gain of our model compared with other methods is largely due to improved performance on the portions of nested mentions in our datasets. To verify this, we design an experiment to evaluate how well a system can recognize nested mentions.
Handling Nested Mentions
The idea is to split the test data into two portions: sentences with and without nested mentions. The results on GENIA are listed in Table 2. We can observe that the margin of improvement is more significant on the portion of nested mentions, revealing our model's effectiveness in handling nested mentions. This observation helps explain why our model achieves greater improvement in ACE than in GENIA in Table 1, since the former has many more nested structures than the latter. Moreover, Ju et al. (2018) performs better when it comes to non-nested mentions, possibly due to the CRF they used, which globally normalizes each stacked layer.
Table 2. Precision, recall and F1 on the nested and non-nested portions of GENIA, compared with Lu and Roth (2015).
Decoding Speed
Note that Lu and Roth (2015) and Muis and Lu (2017) also feature linear-time complexity, but with a greater constant factor. To compare decoding speed, we re-implemented their model on the same platform (PyTorch) and ran it on the same machine (CPU: Intel i5 2.7 GHz). Our model turns out to be around 3-5 times faster than theirs, showing its scalability.
Ablation Study
To evaluate the contribution of neural components, including pre-trained embeddings, the character-level LSTM and dropout layers, we test the performance of ablated models. The results are listed in Table 1. From the performance gap, we can conclude that these components contribute significantly to the effectiveness of our model on all three datasets.
Conclusion and Future Work
In this paper, we present a transition-based model for nested mention recognition using a forest representation. Coupled with Stack-LSTM for representing the system's state, our neural model can capture dependencies between nested mentions efficiently. Moreover, the character-based component helps capture letter-level patterns in words. The system achieves state-of-the-art performance on the ACE datasets. One potential drawback of the system is the greedy training and decoding. We believe that alternatives like beam search and training with exploration (Goldberg and Nivre, 2012) could further boost the performance. Another direction that we plan to work on is to apply this model to recognizing overlapping entities and entities that involve discontinuous spans (Muis and Lu, 2016), which frequently exist in the biomedical domain.
3,694.6
2018-10-03T00:00:00.000
[ "Computer Science" ]
The Mathematical Model for the Secondary Breakup of Dropping Liquid
Investigating the characteristics of the secondary breakup of dropping liquid is a fundamental scientific and practical problem in multiphase flow. To solve it, it is necessary to consider the features of both the main hydrodynamic and the secondary processes during spray granulation and vibration separation of heterogeneous systems. A significant difficulty in modeling the secondary breakup process is that, in most technological processes, the breakup of droplets and bubbles occurs through the simultaneous action of several dispersion mechanisms. In this case, the existing mathematical models based on criterion equations do not allow establishing the change over time of the process's main characteristics. Therefore, the present article aims to solve an urgent scientific and practical problem: studying the nonstationary process of the secondary breakup of liquid droplets under the vibrational impact of oscillatory elements. Methods of mathematical modeling were used to achieve this goal. This modeling allows obtaining analytical expressions describing the breakup characteristics. As a result of the modeling, the critical value of the droplet size was evaluated depending on the oscillation frequency. Additionally, an analytical expression for the critical frequency was obtained. The proposed methodology was derived for a range of droplet diameters of 1.6-2.6 mm. The critical value of the diameter for unstable droplets was also determined, and the dependence for the breakup time was established. Notably, for a critical diameter in the range of 1.90-2.05 mm, the breakup time was about 0.017 s. The reliability of the proposed methodology was confirmed experimentally by the dependencies between the Ohnesorge and Reynolds numbers for different prilling process modes.
Introduction
In scientific research on the processes of vibrational prilling [1,2], granulation [3,4] and separation of gas-dispersed systems [5,6], an essential problem that has not been wholly studied is the mechanism of the breakup of dropping liquid. In this regard, a crucial hydrodynamic criterion that determines the behavior of droplets in a heterogeneous environment [7] is the Weber number (We) [8]. Its critical value [9] was evaluated experimentally in the articles [10,11], which aimed to define the single-droplet breakup and to simulate droplet deformations in airflow. The main areas of use of dispersed systems are as follows: production of granulated fertilizers; development of compact cooling towers for power plants and of monodisperse nozzles; granulation of nuclear fuel; granulation of vitamin preparations; development of express systems for the diagnosis of cells and bioactive substances; development of new composite-based granular materials; design of micro-dispensers for medical and biological products; dispensers for rare substances; droplet generators for studying combustion processes, as well as heat and mass transfer; drip space radiator heat exchangers; and contactless fueling systems. Consequently, dispersed systems have gained popularity, since they ensure resource conservation, environmental friendliness and the quality of the new products obtained in technological processes. When the critical Weber number is exceeded, the secondary breakup occurs; however, the proposed critical values differ significantly [12]. Notably, D. Pazhi and V. Galustov developed the fundamentals of spraying liquids in 1984 [13].
As a result, it was obtained experimentally that the critical Weber number lies in the range We_cr = 4-20. Moreover, under conditions close to critical, the mechanisms of droplet breakup differ significantly. In this case, there are two types of droplet breakup: the vibration mode and the consequent destruction of the droplet with the formation of a thin film. Additionally, it was established experimentally that the mechanism of breakup of the dropping liquid depends on the hydrodynamic characteristics of the nonstationary flow, which affect the duration of the action of the gas flow on the droplet. According to the various conditions of the process, which were previously studied by S. Ponikarov [14] from the Department of Machines and Apparatus for Chemical Production at Kazan National Research Technological University, L. Ivlev and Y. Dovgalyuk [15] from the Research Institute of Chemistry at Saint Petersburg University, as well as A. Cherdantsev [16] from the Nonlinear Wave Processes Laboratory at Novosibirsk State University, there are different mechanisms of droplet breakup. The first is the blowing up of the middle of a droplet with the subsequent breakup of the toroidal particle. The second is the disordered breakup of the droplet into several particles. The last is the tearing of small droplets from the surface of the droplet blown by the flow. Thus, the study of the characteristics of the secondary breakup of dropping liquid is an urgent scientific and practical problem. Its solution will allow considering the peculiarities of the operating processes of prilling [17,18], separation of heterogeneous systems [19,20], pneumatic classification [21,22], spraying of liquid mixtures [23,24] and other hydromechanical, heat and mass transfer processes. In major technological processes, the breakup of droplets and bubbles occurs under the simultaneous action of several dispersing mechanisms. However, there are significant difficulties in creating mathematical models of the breakup of liquid droplets and in finding accurate analytical solutions. Two types of instability of a typical wave nature, occurring in different parts of the particle surface, have been established. The first mechanism, the Kelvin-Helmholtz instability, occurs in the presence of a shift between the layers of a continuous environment, or when two contacting media have a significant difference in their velocities [25]. In this case, the boundary layer fails when the Weber number is relatively small. When the Weber number exceeds its critical value, microparticles are separated due to wave perturbations on the droplet's side surface. The Rayleigh instability [26] is the second mechanism, related to the spontaneous increase in pressure, density and velocity pulsations in an inhomogeneous environment that is in a gravitational field or moves with acceleration [27]. For example, on the frontal surface of a falling droplet, oscillations occur under the free-fall acceleration, and the side surface layer oscillates as a result of the maximum flow velocity. Simultaneously, the particle breakup mechanism under the influence of turbulent pulsations has a different character and acts on the particle surface [28]. Thus, the breakup of droplets is quite complicated and is determined by the ratio of the inertia forces, surface tension, viscosity and other factors.
Literature Review
In the article [9], it is noted that a complete model for the aerodynamic breakup of liquid droplets has not been developed.
All the existing ideas about the laws of breakup and its determining parameters are obtained mainly from experiments. The most studied case is droplet breakup in shock waves. Notably, the main process parameters are the dimensionless criteria of Weber (We), Laplace (La) or Ohnesorge (Oh), Bond (Bo) and Strouhal (Sh). The Weber number has the most significant influence on the breakup mode [29]. Moreover, according to the Weber number, there are different classifications of breakup modes, distinguishing six [30] or five [9] main mechanisms. Furthermore, according to previous experience, the classification of droplet breakup modes for low-viscosity liquids most consistent with up-to-date concepts should be considered. In 1988, M. Clark conceptually studied the droplet breakup model in a turbulent flow [31]. In 1998, W. Ye, W. Zhang and G. Chen [32] numerically investigated the effect of the Rayleigh-Plateau instability for a wide range of wavelengths. In 1999, L. Ivlev and Y. Dovgalyuk [15] studied the blowing up of the middle of a droplet preserving its toroidal shape with subsequent disintegration; they developed the methodology for researching the disordered destruction of the droplet. In 2020, G. Chiandussi, G. Bugeda and E. Onate [33] proposed variable shape definition with C⁰, C¹ and C² continuity functions in shape optimization problems. In 2019, A. Cherdantsev, in his D.Sc. thesis "The wave structure of a liquid film and the processes of dispersed phase exchange in a dispersed-annular gas-liquid flow" [16], investigated the process of tearing small particles from the surface of a droplet. The Hinze-Kolmogorov equation cannot be applied to describe the breakup of liquid in the presence of a velocity gradient, since the latter violates the hypothesis of isotropic turbulence; the subsequent research of V. Sklabinskyi and B. Kholin solved this problem [34] by considering the velocity gradient. The nature of the breakup differs significantly for different characteristic velocities of the relative motion of the continuous and dispersed phases. In 1984, S. Ponikarov, in his D.Sc. thesis "Droplet breakup in centrifugal equipment of chemical plants" [14], conducted a comparative analysis of theoretical and experimental studies of the droplet breakup process. It was established that there are several fundamental mechanisms of breakup, which correspond to different ranges of the Weber number. Viscous friction can crush liquid droplets and bubbles entering the shear flow of a continuous environment. Thus, G. Kelbaliev and Z. Ibragimov, in their research work [35], studied the droplet breakup process in a Couette flow. They found that droplet breakup in a turbulent gas flow occurs differently if the density of the environment is insignificant compared with the droplet density; in this case, inertial effects play an essential role in the mechanism of the droplet breakup. Experimental confirmation of a critical Weber number was presented in the research work [36]. It has also been established that there are at least two mechanisms of breakup [37]. At a specific ratio of the length of the drop to its diameter, the breakup occurs with the formation of two new particles of approximately equal size; if this ratio is not met, the droplet thins in several places at once. Another mechanism of breakup is that a smaller droplet is torn away from a larger one. This mechanism is observed when the vortex velocity becomes critical, making the droplets unstable.
Additionally, a developed mathematical model describes the gas-dynamic processes during mixture formation and the evaporation of liquid droplets in a nonstationary supersonic flow. This technique allows for designing air-jet engines, power plants, high-performance ejectors, heaters and various technological devices. An experimental technique for droplet breakup in a stream with a shock wave was also presented in [38]. Moreover, the research [39] is devoted to identifying zones of jet self-oscillations, such as the Hartmann effect, and to determining the corresponding oscillation frequency at which the breakup of a fluid jet occurs. In 2019, Z. Wu, B. Lv and T. Cao improved the Taylor analogy breakup (TAB) and Clark models for droplet deformations [40]. However, these models still operate with linear differential equations, in which the inertia, stiffness and damping coefficients are expressed through similarity criteria. Such approaches, notably, do not allow substantiating a critical value of the Weber number. A preliminary analysis of the research works on the breakup of liquid droplets over a wide range of Reynolds, Weber and Laplace numbers [41] indicates the existence of different types of gas-dynamic breakup, differing in the intensity and trajectories of the detached parts of the droplets [42]. Finally, in 2001, B. Gelfand et al. determined experimentally that values of the Weber number below its critical value do not lead to the breakup of droplets [43]. This hypothesis requires a theoretical justification, which is realized below. Moreover, there is still no reliable mathematical model of the breakup process that determines the critical value of the Weber number analytically. The large number of up-to-date studies also emphasizes the urgency of the considered problem. Particularly, D. Kim and P. Moin [44] developed the subgrid-scale capillary breakup model for liquid jet atomization. X. Li et al. [45] studied the breakup dynamics of the low-density gas-liquid interface during Taylor bubble formation. A. Salari et al. [46] investigated the breakup of bubbles and droplets in microfluidics. N. Speirs et al. [47] studied jet breakup in normal liquids. C. Tirell, M.-C. Renoult and C. Dumouchel [48] proposed a methodology for measuring extensional properties during jet breakup. A. Dziedzic et al. [49] studied substrate effects on the breakup of liquid filaments. J. Zhang et al. [50] investigated the process of in-fiber breakup. Finally, J.-P. Guo et al. [51] proposed the instability breakup model of fuel jets. A critical overview of the above studies allows stating the following research gaps. First, since the critical Weber number for the secondary breakup of droplets generally varies significantly in the range of 4-20, this range should be narrowed for a particular range of droplet diameters. Second, the proposed methodologies generally operate with similarity criteria and empirical constants; as a result, the droplet breakup time and the critical value of the Weber number have still not been substantiated analytically.
A Mathematical Model
At the foundation of the breakup of liquid droplets is a mechanism according to which the deformation of a droplet takes the form of an elongated ellipsoid, with the subsequent transformation of its shape and disintegration into two approximately equal particles. Therefore, the process of breakup of a droplet into two equal parts of spherical shape is considered below.
In the initial stage, this process can be described as a deviation from the equilibrium state, which is determined by the action of the inertia force m_1·a_C1, gravity G, the Archimedes force F_A and the surface tension force F_σ (Figure 1). For the description of the proposed mathematical model, the following changes in the geometry of a droplet, and in the forces acting on it, were considered. At the initial time (t = 0), the droplet is spherical; before its breakup, the forces of inertia, gravity and Archimedes act on it. Further, at the initial stage of the secondary breakup, the surface tension force changes due to the change in the inclination angle of the tangent to the forming secondary droplet; this angle changes as the droplet breaks up. Consequently, the surface tension force makes a time-varying contribution to the overall force action. In this case, the total mass of the main droplet and the satellite remains unchanged. Finally, the secondary breakup time is determined from the separation of a single droplet into a couple of droplets. Under the action of these forces, the mass centers of a couple of droplets of mass m_1 (kg) reach the acceleration a_C1 (m/s²). In this case, the fundamental equation describing the motion of a single droplet takes the form of Equation (1), where m_1 = ρ_p(V′ − V_s) is the mass of a droplet (kg); a_C1 = 0.5·d²∆z/dt² is the acceleration of the mass center (m/s²), determined through the second derivative of the relative distance ∆z (m) between the droplet parts with respect to time t (s); and ∆α is the wetting angle (rad). The acting forces mentioned above are determined by the dependencies of expression (2), where g is the gravitational acceleration (m/s²); m_p is the mass (kg) of a droplet with volume V (m³); ρ_p and ρ_m are the densities of the dropping liquid and the environment, respectively (kg/m³); σ is the surface tension coefficient (N/m); and L is the perimeter of the wetted contour (m).
The droplet volume V = 2(V′ − V_s) is defined as the doubled difference of its spherical part (V′) and segment (V_s), with the segment volume V_s = πh²(3r − h)/3, where r = (R − ∆r) is the current radius of the droplet (m), defined as the difference between its initial value R and the change in radius ∆r due to the breakup, and h = (r − 0.5∆z) is the height of the spherical segment, associated with the relative displacement ∆z of the droplet parts. The dependence between the change in radius ∆r of the droplet and its displacement ∆z can be determined from the mass conservation law, Equation (3), where V_0 = 4πR³/3 is the initial volume (m³). Considering Equation (3), identical transformations lead to a cubic equation in the dimensionless ratio ∆z/R. At the initial stage of the droplet breakup, when the relative displacement is insignificant in comparison with the initial radius (∆z << R), this equation reduces to a linear one, and the dependence between the parameters ∆z and ∆r takes a linear form. From the condition of the completion of the breakup process, it can be established that the reduction of the radius does not exceed its maximum value ∆r_max/R = (1 − 2^(−1/3)) ≈ 0.2. In this case, the last expression can be reduced to the simplified linearized form ∆r ≈ 0.25∆z (Figure 2).
Figure 2. Dependencies between the dimensionless reduction of the droplet radius (∆r/R) and the dimensionless relative displacement of its parts (∆z/R).
The wetting angle ∆α entering the fundamental equation of droplet motion is determined by the trigonometric expression of Formula (7), which is simplified at the initial stage of the droplet breakup (considering the relatively small values of the parameters ∆z and ∆r) by applying the first remarkable limit.
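Since the cubic relation itself is not reproduced above, the following numerical sketch (our own cross-check, with an assumed initial radius) solves the mass-conservation condition 2(V′ − V_s) = V_0 directly; it recovers the linear slope ∆r ≈ 0.25∆z at small displacements quoted above.

```python
import numpy as np
from scipy.optimize import brentq

R = 1.0e-3  # assumed initial droplet radius, m (order of the studied 1.6-2.6 mm range)
V0 = 4.0 / 3.0 * np.pi * R**3

def volume(dr, dz):
    """V = 2(V' - Vs) for two overlapping spheres of radius r = R - dr whose
    centers are dz apart; segment volume Vs = pi*h**2*(3r - h)/3, h = r - dz/2."""
    r = R - dr
    h = r - 0.5 * dz
    Vs = np.pi * h**2 * (3.0 * r - h) / 3.0
    return 2.0 * (4.0 / 3.0 * np.pi * r**3 - Vs)

def dr_of_dz(dz):
    """Radius reduction dr that keeps the total droplet volume equal to V0."""
    return brentq(lambda dr: volume(dr, dz) - V0, 0.0, 0.3 * R)

for dz in (0.01 * R, 0.05 * R, 0.10 * R):
    print(f"dz/R = {dz/R:.2f} -> dr/dz = {dr_of_dz(dz)/dz:.3f}")  # tends to ~0.25
```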
Additionally, in the first approximation, the perimeter of the wetted contour can be determined by the dependence of Formula (8). Substitution of expression (2) into Equation (1), considering Formulas (7) and (8), allows rewriting the equation of motion in the form of Equation (9). Notably, in the stationary mode (d²∆z/dt² = 0; ∆z = ∆z_0 = const), the droplet breakup occurs under a condition on the stationary displacement. In this regard, the limit value d_cr = 2R of the diameter of unstable droplets is determined from the condition of zero displacement ∆z_0; thus, in the case of relatively heavy drops (ρ_p >> ρ_m), it can be obtained in the form of Equation (11).
A Particular Solution of the Model
Applying the method of variations allows considering the nonstationary relative motion of a droplet near its equilibrium position. In this case, the displacement of the droplet ∆z = (∆z_0 + δz) is defined as the sum of the stationary component ∆z_0 and the variation δz relative to the equilibrium state, and the differential equation of motion is written in variations. Its general solution (13) contains an oscillatory parameter λ (s⁻¹). The integration constants C_1 and C_2 are determined from the initial conditions: for zero initial deviation δz(0) = 0 and initial velocity (dδz/dt)|_{t=0} = v_0, one obtains C_1 = v_0/λ and C_2 = 0, so that solution (13) takes the particular form (15).
A Critical Value of the Weber Number
The resulting dependence allows evaluating the time T_s (s) of droplet breakup. From the breakup condition, the breakup time can be obtained in the form of Equation (17), where We = ρ_p·v_0²·R/σ is the Weber number, i.e., the ratio of the specific inertia and surface tension forces. It should be noted that the critical value of the Weber number We_cr = 3π/4^(1/3) ≈ 5.94 obtained from this formula allows considering individual cases of solving Equation (17). Notably, in the case of significant velocities (We >> We_cr), the breakup time does not depend on the Weber number, due to the relative smallness of the surface tension forces compared with the inertia forces; in this case, the last formula is approximately equal to Equation (18). Introduction of the Ohnesorge and Laplace numbers [52] allows obtaining their critical values; the theoretical line of this dependence is presented graphically in Figure 3. On the other hand, for relatively slow droplets (We << We_cr), the corresponding approximation can be written.
Characteristics of the Secondary Breakup
In the case of deceleration of a droplet (d²∆z/dt² < 0), the vibration frequency (Hz) at which the resonance of the droplet liquid occurs, with its subsequent breakup, follows from the model. This dependence yields the expression for the critical diameter of droplets under an external vibrational impact, Equation (22). Thus, an increase in the vibration frequency leads to a decrease in the size of the droplets.
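The critical-Weber criterion introduced above is straightforward to evaluate numerically. The sketch below uses only the definitions quoted in the text (We = ρ_p·v_0²·R/σ and We_cr = 3π/4^(1/3) ≈ 5.94) with water-like properties; the example velocity and radius are arbitrary illustrations.

```python
import math

RHO_P = 1.0e3    # droplet density, kg/m^3 (water)
SIGMA = 7.3e-2   # surface tension coefficient, N/m

WE_CR = 3.0 * math.pi / 4.0 ** (1.0 / 3.0)  # critical Weber number, ~5.94

def weber(v0, R):
    """We = rho_p * v0**2 * R / sigma: specific inertia vs. surface tension."""
    return RHO_P * v0 ** 2 * R / SIGMA

# Example: a 2 mm droplet (R = 1 mm) at an assumed 0.7 m/s initial velocity
We = weber(0.7, 1.0e-3)
print(f"We = {We:.2f}, We_cr = {WE_CR:.2f}, secondary breakup: {We > WE_CR}")
```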
Consideration of the Oscillating Wall
It should be noted that the above equation of motion for liquid droplets does not consider the vibration force F_v (N), determined by the dependence previously obtained by L. Blekhman, where v_0 is the amplitude of the vibration velocity (m/s) and z_0 is the distance from the droplet to the source of vibrations (m). Notably, the vibrating force acts on the particle phase in a continuous medium near an impenetrable wall in the normal direction. The effect of vibrational weighing is caused by the pressure averaged over the oscillation period [54] due to the convective inertia forces. The introduction of this force allows representing the differential Equation (9) in a more generalized form, written in the variations δz relative to the stationary position z_0. Thus, the droplet breakup condition under the vibrational impact (F_v > F_σ) at the limit of the stability mode corresponds to a zero value of the coefficient before the variation δz. This fact allows determining the critical size of droplets under the impact of the vibrational force, Equation (26). This expression indicates that even highly dispersed particles can be involved in the secondary breakup process when the particles approach the oscillating wall.
Consideration of the Resistance Force
To consider the impact of the resistance force F_r (N) on the secondary breakup, the right part of the fundamental Equation (9) should be supplemented by the appropriate component. Notably, in the case of a conditionally linear dependence of the resistance force on the droplet velocity [55], the equation of motion in variations takes a form containing the dynamic viscosity of the environment µ_m (Pa·s) and the kinematic viscosity ν_m = µ_m/ρ_m (m²/s). This differential equation has the general solution (30), where p_{1,2} = −n ± √(λ² + n²) are the roots of the characteristic equation, containing the damping factor n = 4.5πν_m/R² (s⁻¹). For the above initial conditions, the integration constants are determined accordingly, and solution (30) takes its final form. Notably, the previously obtained solution (15) is a particular case (for n = 0) of the more general solution (30). With the resistance force considered, finding the droplet breakup time T_s reduces to finding the positive root of a transcendental equation which, for the case n << λ, can be simplified, with accuracy sufficient for practical needs, to a cubic equation. Its positive root can also be determined using the methods for parameter identification of hydromechanical processes [56], particularly by an iteration procedure in which T_s1 is the breakup time determined by Formula (18) and i is the number of the current iteration; it is convenient to choose the initial value of the breakup time at the initial iteration as T_s^(0) = T_s1.
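The transcendental equation for T_s and the iteration formula are not reproduced in the text; however, from the stated roots p_{1,2} = −n ± √(λ² + n²) and the initial conditions δz(0) = 0, (dδz/dt)|_{t=0} = v_0, the general solution (30) takes the form δz(t) = v_0·(e^(p_1·t) − e^(p_2·t))/(p_1 − p_2). The sketch below extracts a breakup time as the positive root of δz(T_s) = z_br for an assumed displacement threshold z_br; the threshold and all numerical values are illustrative assumptions, not the paper's condition.

```python
import numpy as np
from scipy.optimize import brentq

def breakup_time(lam, n, v0, z_br):
    """Positive root T_s of dz(T_s) = z_br for the damped perturbation
    dz(t) = v0*(exp(p1*t) - exp(p2*t))/(p1 - p2),
    with p1,2 = -n +/- sqrt(lam**2 + n**2) (p1 > 0 drives the growth)."""
    root = np.sqrt(lam**2 + n**2)
    p1, p2 = -n + root, -n - root
    dz = lambda t: v0 * (np.exp(p1 * t) - np.exp(p2 * t)) / (p1 - p2)
    t_hi = 1e-3
    while dz(t_hi) < z_br:      # expand the bracket until the root is enclosed
        t_hi *= 1.5
    return brentq(lambda t: dz(t) - z_br, 0.0, t_hi)

# Illustrative numbers only (lam, n in 1/s; v0 in m/s; z_br in m):
print(f"T_s ~ {breakup_time(lam=300.0, n=20.0, v0=0.05, z_br=1e-4):.2e} s")
```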
Experimental Research Data

Investigation of the hydrodynamic parameters of the secondary droplet breakup was carried out on the modernized vibrational priller of the Department of Chemical Engineering at Sumy State University with the following parameters: wall thickness 1 mm; hole diameter 1 mm; modeling liquid: water at a temperature of 20 °C; dispersion medium: air at a temperature of 25 °C. The design scheme of the experimental stand is given in Figure 4.

The stand functions as follows. Due to the vacuum created by pump 10, the liquid from the bottom of the buffer tank 9 is transferred into the pipeline. It is then fed into pipe 3 of the upper part of the vibrating priller. Fluid flow is monitored by an electromagnetic flowmeter 12 and regulated by a valve 11. Through pipe 3, the liquid is fed into the annular collector 4. After this, the operating fluid passes through the filter element 5, a perforated cylinder. As it passes through, air bubbles are released from the liquid volume. The filter grid is fixed on the perforated cylinder to prevent the clogging of holes in basket 1. The liquid then flows to the perforated bottom 1, gradually filling its volume. The filling level is monitored by a liquid level meter 20. Under the pressure created by the hydrostatic liquid level, the liquid flows out of the holes in the perforated bottom 1. Using a particular program, PC 13 generates a vibrational signal. Via the low-frequency amplifier 14, the magnetic actuator 6, through the rod 7, brings the resonator 8 into oscillating motion with a given frequency (Hz) and amplitude (V). The resonator disk is placed above the central part of the perforated bottom. The liquid gap between the disk and the bottom provides a hydrodynamic connection between all the elements of the studied hydromechanical system. When the vibrations are imposed, the resonator disk performs a reciprocating motion. Oscillatory waves propagate as elastic deformations in the liquid and are transferred to the perforated bottom. As a result, regular perturbations are superimposed on the fluid flowing from the holes. This effect causes the breakup of a liquid jet into droplets in places of narrowing.
The obtained images (Figure 5a) can be processed using both the "Matlab" [57] and the "Digimizer" software, particularly by the algorithm for the detection of circular elements [58]. This approach allows determining the droplets' sizes and the distances between them (Figure 5b). It should be noted that the camera was positioned perpendicular to the screen at a distance of 500 mm, the minimum focusing distance from the subject to the focal plane during photography. The photographs of the obtained drops were processed using the Digimizer program, which automatically detects an object and measures its geometrical characteristics. A marking grid is required to set the measurement scale (pixels/mm) for the program when processing photos. Measurement results were obtained using built-in algorithms to detect objects, particularly for finding round shapes and sizing them.
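A rough OpenCV-based stand-in for this circular-element detection step might look as follows (the file name, scale and Hough parameters are illustrative, not the authors' settings):

```python
# OpenCV stand-in for the circular-element detection workflow.
import cv2
import numpy as np

img = cv2.imread("droplets.png", cv2.IMREAD_GRAYSCALE)  # hypothetical photo
img = cv2.medianBlur(img, 5)                            # suppress noise first

pixels_per_mm = 20.0   # would be read off the marking grid in the photo

circles = cv2.HoughCircles(img, cv2.HOUGH_GRADIENT, dp=1.2, minDist=15,
                           param1=100, param2=30, minRadius=5, maxRadius=40)
if circles is not None:
    circles = circles[0]                        # rows of (x, y, r) in pixels
    diameters_mm = 2 * circles[:, 2] / pixels_per_mm
    ys = np.sort(circles[:, 1])                 # droplet centers along the jet
    spacing_mm = np.diff(ys) / pixels_per_mm    # distances between droplets
    print(diameters_mm.mean(), spacing_mm.mean())
```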
The experimental results for different liquid levels are summarized in Table 1 and graphically presented in Figure 6. The data in Table 1 present the mean values. Notably, the standard deviation does not exceed 0.02 mm.

An Example of Numerical Calculation

Based on the proposed methodology, an example of a numerical calculation can be realized. In particular, the following parameters were chosen as initial data: the modeling liquid was water at a temperature of 20 °C; density of particles ρ_p = 1×10³ kg/m³; surface tension coefficient σ = 7.3×10⁻² N/m; kinematic viscosity of the medium ν_m = 1.5×10⁻⁶ m²/s; gravitational acceleration g = 9.81 m/s². The oscillation frequency of the actuator was f = 260 Hz, at which the secondary breakup occurs with the formation of satellites of approximately the same size as the diameter of the main droplet; the critical value of the Weber number We_cr = 5.94; the experimentally obtained droplet diameters lie in the range 1.90-2.05 mm (Table 1). The values of the critical diameters determined by Formulas (11) and (26) exceed the holes' diameter and consequently were not considered. Thus, for this experimental case, expression (22) evaluates the breakup droplet diameter as 1.98 mm. This value falls within the range of observed droplet diameters with a relative error of about 4%.
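A short consistency check of this worked example, using only the numbers quoted above:

```python
# Consistency check of the worked example (numbers from the text).
d_pred = 1.98              # mm, breakup diameter from expression (22)
d_min, d_max = 1.90, 2.05  # mm, observed range at f = 260 Hz (Table 1)

within = d_min <= d_pred <= d_max
rel_err = max(d_pred - d_min, d_max - d_pred) / d_pred
print(within, round(100 * rel_err, 1))   # True, ~4.0 %
```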
Discussion

Given the analytical and experimental results described above, the following detailed discussion should emphasize the range of applicability of the proposed mathematical model and substantiate its advantages compared to well-known models. Remarkably, the Rayleigh instability model describes why and how a falling stream of fluid breaks up into droplets with less total surface area, which suits the substantiation of polydisperse modes. Additionally, according to the C⁰ continuity model [59], the time of droplet breakup is determined more simplistically, as in expression (18). However, the more general solution (17) refines this approach, because the mathematical model of droplet breakup is a nonlinear one. It should also be noted that numerical simulations commonly use two spray breakup models. The first is the TAB model [32]; however, the TAB model is recommended only for low-Weber-number injections and is well suited for low-speed sprays into a standard atmosphere. Second, for Weber numbers greater than 100, the wave model is more applicable [33]. Consequently, the proposed analytical model allows closing the gap between these two extreme cases. In addition, the proposed method does not operate only with similarity criteria; it operates directly on the initial equations reflecting the physical and geometrical essence of the secondary breakup. As a result, the main advantage of the proposed approach is the possibility of analytically substantiating the critical value of the Weber number within the range experimentally predicted by previous researchers. Moreover, it should be noted that, according to the TAB model, the breakup time is determined numerically using empirical constants that depend on the Ohnesorge, Taylor and Weber numbers. On the other hand, the proposed technique based on Equations (17) and (32) makes it possible to estimate the droplet breakup time analytically. It should also be noted that, in the case of commonly used prilling equipment, the atomization mode can be obtained above 800 Hz. This frequency depends on the height of the liquid layer, the physical properties of the medium and the geometry of the holes. In addition, the droplet breakup time can be determined using the particle image velocimetry (PIV) method [60]. Moreover, the proposed model allows determining the critical values of the Weber, Ohnesorge and Laplace numbers for different Reynolds numbers. In particular, expressions (21) describe the dependencies of the critical values of the Ohnesorge and Laplace numbers on the Reynolds number. It should be noted that the obtained range is additionally supported by the experimental data [61]. Finally, this article significantly supplements the existing mathematical models. It allows predicting the modes of liquid jet breakup and developing new equipment for granulating products with improved characteristics.

Conclusions

Thus, the article investigates the secondary breakup of dropping liquid of the dispersed phase in chemical technology processes. As a result, a mathematical model of the nonstationary decay of droplets was developed. This model considers the impact of volume and surface forces on the relative displacement of the decaying droplet. The simulation results allowed developing the dependence for a droplet's critical size and determining the required vibrational frequency. Additionally, the critical diameter of unstable droplets was obtained, and the dependence of the breakup time was calculated. These data allowed analytically determining the critical value of the Weber number. Additionally, the dependencies between the Ohnesorge, Laplace and Reynolds numbers were obtained analytically and proved experimentally. In particular, for droplet diameters in a range of 1.6-2.6 mm, the Reynolds and Ohnesorge numbers are in the ranges Re = (3.1-7.2)×10³ and Oh = (0.35-0.81)×10⁻³, correspondingly. The reliability of the achieved results is confirmed by the fact that the critical value of the Weber number We_cr = 5.9 lies within the range We_cr = 4-20 obtained experimentally by the previous researchers D. Pazhi and V. Galustov, as well as by the consistency between the determined critical size of decaying droplets, 1.98 mm, and the experimentally obtained diameter range of 1.90-2.05 mm for droplet breakup at an oscillation frequency of 240 Hz on the modernized vibrational priller of Sumy State University. In this case, the relative error is about 4%. The obtained results can help create appropriate techniques and methodologies for designing vibrational prillers to obtain monodispersed prills and granules. Moreover, the proposed mathematical model can also be extended to ensure the reliability of gas-dynamic equipment for the separation of gas-liquid systems.
Economic threat heightens conflict detection: sLORETA evidence

Abstract

Economic threat has far-reaching emotional and social consequences, yet the impact of economic threat on neurocognitive processes has received little empirical scrutiny. Here, we examined the causal relationship between economic threat and conflict detection, a critical process in cognitive control associated with the anterior cingulate cortex (ACC). Participants (N = 103) were first randomly assigned to read about a gloomy economic forecast (Economic Threat condition) or a stable economic forecast (No-Threat Control condition). Notably, these forecasts were based on real, publicly available economic predictions. Participants then completed a passive auditory oddball task composed of frequent standard tones and infrequent, aversive white-noise bursts, a task that elicits the N2, an event-related potential component linked to conflict detection. Results revealed that participants in the Economic Threat condition evidenced increased activation source localized to the ACC during the N2 to white-noise stimuli. Further, ACC activation to conflict mediated an effect of Economic Threat on increased justification for personal wealth. Economic threat thus has implications for basic neurocognitive function. Discussion centers on how effects on conflict detection could shed light on the broader emotional and social consequences of economic threat.

A large body of work links the dorsal ACC to the detection of conflict and to conflict resolution (MacDonald et al., 2000;Mansouri et al., 2009;Miller and Cohen, 2001;Yeung et al., 2004). For example, functional magnetic resonance imaging (fMRI) studies reveal that dorsal ACC activation to conflict predicts lateral PFC activation and behavioral adaptation (Badre and Wagner;Botvinick et al., 2004;Kerns et al., 2004). Similarly, electrophysiological studies reveal that a family of event-related potential (ERP) components elicited by different conflicts, namely the N2 (or N200), the error-related negativity (ERN or Ne) and the feedback-related negativity (FRN), have all been source localized to the dorsal ACC (Cavanagh and Shackman, 2015), and mean amplitude in these conflict-related components often predicts behavioral adjustments or better performance, i.e. conflict resolution (Gehring et al., 1993;Luu et al., 2003;Nieuwenhuis et al., 2003). Though foundational models of ACC functional organization subdivided the ACC into emotional-rostral and cognitive-dorsal regions (e.g. Bush et al., 2000), contemporary models recognize the dorsal ACC as a cortical hub involved in integrating cognitive and emotional processes (Critchley et al., 2013;Cromheeke and Mueller, 2014;Egner et al., 2008;Shackman et al., 2011;Inzlicht et al., 2015). For example, the adaptive control model posits that negative affect, pain and cognitive control engage a domain-general process to solve comparable problems; that is, these domains act as signals of conflict or uncertainty in current goal-pursuits that may require adaptive changes in attention and behavior (Shackman et al., 2011). A common neurobiological process provides a mechanistic explanation of why negative affect, pain and cognitive control can all modulate each other. Economic threat, as an anxiety-provoking and uncertain experience, should thus cause dorsal ACC activation and increase conflict detection sensitivity.
This reasoning is consistent with neuropsychological theory and research in which anxiety and uncertainty function to increase sensitivity to aversive stimuli and increase avoidance motivation (Gray and McNaughton, 2000;McNaughton and Corr, 2004;Hirsh et al., 2012). Indeed, anxiety and uncertainty are associated with larger amplitudes in electrophysiological markers of conflict. For example, higher trait sensitivity to punishment and negative arousal is associated with larger FRN amplitudes to monetary loss (De Pascalis et al., 2010). Negative emotionality is also associated with larger FRN amplitudes and increased source-localized ACC activation during the FRN timeframe (Santesso et al., 2012). Higher trait anxiety is associated with larger ERN amplitudes (Aarts and Pourtois, 2010;Moser et al., 2013). Higher levels of both state and trait anxiety are associated with larger N2 amplitudes (Righi et al., 2009;Sehlmeyer et al., 2010). To our knowledge, no research has experimentally examined the causal impact of economic threat on ACC functioning in conflict monitoring. Here, we first randomly assigned participants to either an economic threat or a control condition. After the manipulation, we used electroencephalogram (EEG) to index conflict detection processes to infrequent blasts of white noise (i.e. startle stimuli) in a passive auditory oddball task. This task elicits a characteristic ERP, namely an N2-P3 complex, in which the N2 component reflects conflict detection between frequent, expected and neutral auditory stimuli and infrequent, unexpected and aversive auditory stimuli (Patel and Azzam, 2005). To index ACC activation during conflict detection processes, we used standardized low-resolution brain electromagnetic tomography (sLORETA; Pascual-Marqui, 2002) on 64-channel EEG to source localize the effect of economic threat versus a control condition on whole brain activation during the N2 component. We hypothesized that economic threat would cause increased activation in the dorsal ACC during the N2 timeframe, indicating heightened sensitivity to conflict. Notably, conflict sensitivity is frequently cited as a key mechanism in political, moral and group-based ideologies. For example, according to the model of motivated social cognition, increased sensitivity to uncertainty and threat compels people to seek psychological sanctuary in 'black-and-white' beliefs and values that promote tradition, certainty, consensus and clarity (Jost et al., 2003). Indeed, a variety of discrete threats, including economic threat, shift people toward more fervent belief in nationalism, ingroup superiority and authoritarianism (Hogg, 2000;McGregor et al., 2001;Nail et al., 2009;Hogg et al., 2013). Additionally, these types of beliefs are reliably associated with dorsal ACC and lateral PFC structure and function. For example, conservative ideology is associated with reduced N2 mean amplitude (Weissflog et al., 2013), reduced ERN mean amplitude (Amodio et al., 2007) and reduced ACC cortical volume (Kanai et al., 2011). Similarly, religious belief is associated with reduced ERN mean amplitude (Inzlicht et al., 2009), and group-focused moral beliefs are associated with reduced cortical volume in both the dorsal ACC and lateral PFC (Nash et al., 2017). Following from this research, we conducted exploratory analyses to test whether increased dorsal ACC activation would mediate socio-economic attitude change elicited by real-world economic threat. 
To this end, we created a measure of wealth justification to index a relevant socio-economic belief. We reasoned that participants may seek to quell angst about money by exaggerating support and motivation for personal wealth. Alternatively, economic threat may inspire motivated social cognition toward the certainty and clarity provided by normative capitalistic beliefs (Jost et al., 2003). Critically, if support for personal wealth represents a kind of conflict resolution, then it should be driven by heightened ACC activation to conflict. This finding could potentially give insight into the emotional and social consequences of real-world economic threat.

Participants

Ethical approval for this study was provided by the University of Alberta Human Research Ethics Board (Protocol 00084513). Participants (N = 110; modal age = 19; age range = 17-26; females = 61) with normal or corrected-to-normal vision were recruited from a first-year psychology class and earned class credit. Based on pilot data indicating that the current manipulation had a medium to large effect size on anxious uncertainty (Cohen's d = 0.65), we aimed to include 50 individuals per condition and stopped collection at the end of the 2019 fall term (power analyses in G*Power: difference between two independent groups, 'expected' effect size d = 0.65, alpha = 0.05, power = 0.80, number of groups = 2, total sample size = 60). A total of 7 participants were excluded due to poor connectivity (as indicated by impedances > 10 kOhms, N = 5) or missing EEG data (N = 2), leaving 103 participants for analyses.
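For reference, the reported sample-size target can be approximately reproduced with statsmodels. Note that the stated total of 60 matches a one-tailed test, which is an assumption on our part, since the exact G*Power settings are not spelled out:

```python
# A quick reproduction of the sample-size estimate (assuming a one-tailed
# independent-samples t-test; the two-sided variant is shown for contrast).
from statsmodels.stats.power import TTestIndPower

power = TTestIndPower()
n_one_sided = power.solve_power(effect_size=0.65, alpha=0.05, power=0.80,
                                ratio=1.0, alternative='larger')
n_two_sided = power.solve_power(effect_size=0.65, alpha=0.05, power=0.80,
                                ratio=1.0, alternative='two-sided')
print(round(n_one_sided))   # ~30 per group -> ~60 in total, matching the text
print(round(n_two_sided))   # ~38 per group under a two-sided test
```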
Procedure

Participants first completed an electronic informed consent form, were then fitted with a 64-channel EEG headset (Brain Products) and were seated at a computer station in an electrically and sound-shielded room. All materials were completed on a computer in the following order. Participants first answered demographic questions and several personality questionnaires as part of a larger research project on individual differences in the neuroscience of self-regulation (all data available upon request). Participants were then randomly assigned to either the 'Economic Threat' condition or the 'No-Threat Control' condition. Participants then completed the passive auditory oddball task. As part of a separate line of research, participants then completed a color-naming Stroop task and a Balloon Analog Risk-Taking task (manuscripts in prep.). Afterwards, participants completed the wealth justification scale. Finally, participants rated the degree to which the economic threat manipulation made them feel a range of different positive and negative emotions, including, in order, Good, Happy, Smart, Successful, Likeable, Meaningful, Frustrated, Confused, Uncertain, Empty, Anxious, Ashamed, Insecure, Lonely, Stupid, Out of Control and Angry (1 = Strongly Disagree, 5 = Strongly Agree). They then completed a measure of conscientiousness. Participants were then debriefed, had the headset removed and their hair washed, and were thanked for their time.

Economic threat manipulation

Participants in the Economic Threat condition read an ostensibly real article from CBC.ca about an unsettling economic forecast in Canada that would specifically impact young adults. The forecast was putatively compiled by top Canadian researchers who concluded that a recession was imminent and that students would be hit hardest given the vulnerable position they were left in by the 2008 economic crisis. As such, this article was tailored to the participants in our sample, i.e. young students. Participants in the No-Threat Control condition read an ostensibly real article from CBC.ca about a more neutral economic forecast that emphasized stability and a continuation of the status quo. Notably, both forecasts were based on real, publicly available economic predictions from financial news outlets.

Passive auditory oddball paradigm

After the manipulation, participants listened to a series of standard tones (pure 1000 Hz tones for 50 ms) and white-noise blasts (a 0-20,000 Hz 'hissing' sound for 50 ms, both at volume setting 50 in Windows) presented at 75 and 80 dB SPL, respectively. The ratio of white noise to standard tones was 1:9. A stimulus was presented each second and the entire paradigm lasted three and a half minutes, for a total of 210 trials (approximately 21 white-noise and 189 standard-tone trials). Participants were informed that they would hear noises but were not instructed to do anything other than fixate on a small circle presented on the computer screen during the task. Importantly, the passive auditory oddball task allows more basic conflict detection processes to be probed without interference from task-based processing in the dorsal ACC.

Wealth justification scale

We constructed a 12-item wealth justification scale to measure the degree to which people support and justify personal wealth. This scale included items such as 'The amount of money I make is important to me,' 'Accumulating wealth is one of my main goals in life' and 'Getting rich usually means stepping on others to get there' (reverse coded). Reliability analysis revealed that two items did not correlate with the scale item total and were removed. This 10-item scale demonstrated acceptable reliability, Cronbach's alpha = 0.745. We thus created a composite wealth justification score from the final 10 items. Note that the mediation effects are the same with the full 12-item scale.

EEG recording and preprocessing

Continuous EEG was recorded using the 64 Ag-AgCl channel ActiCHamp EEG system (Brain Products), positioned according to the 10/10 system and digitized at a sampling rate of 512 Hz (24-bit precision; bandwidth: 0.1-100 Hz). During recording, signals were referenced to the TP9 electrode positioned over the left mastoid. Offline, the EEG was re-referenced to the average of the mastoids (TP9-TP10), down-sampled to 256 Hz, band-pass filtered between 0.1 and 30 Hz, and notch filtered at 60 Hz. Blinks were statistically removed using the automatic ocular correction developed by Gratton and Coles (1993). Artifacts were then automatically detected using the following parameters: −100 to +100 µV min/max threshold, 50 µV maximum voltage step, 0.5 µV lowest allowed voltage (maximum minus minimum) in 100-ms intervals. Data were segmented into 1000-ms epochs locked on either standard-tone or white-noise presentation, from 200 ms before to 800 ms after the stimulus. For each participant, all artifact-free epochs were baseline-corrected by subtracting the average voltage during the −200-0 ms period prior to the stimulus and averaged, creating average ERPs for standard tones (average per participant = 184.29) and white noise (average per participant = 20.58). The N2 was defined for both standard-tone and white-noise stimuli as the mean negative amplitude between 100 and 200 ms after the stimulus at the site where the startle component was maximal, the fronto-central electrode FCz (see grand average ERPs in Figure 1).
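A minimal sketch of this preprocessing pipeline in MNE-Python (not the authors' Brain Products toolchain; the file name and event label are hypothetical, and the regression-based ocular correction of Gratton and Coles is omitted here):

```python
# MNE-Python sketch of the offline preprocessing described above.
import mne

raw = mne.io.read_raw_brainvision("participant01.vhdr", preload=True)  # hypothetical file
raw.set_eeg_reference(["TP9", "TP10"])   # re-reference to the averaged mastoids
raw.resample(256)                        # down-sample to 256 Hz
raw.filter(l_freq=0.1, h_freq=30.0)      # band-pass 0.1-30 Hz
raw.notch_filter(60.0)                   # notch at 60 Hz line noise

events, event_id = mne.events_from_annotations(raw)
epochs = mne.Epochs(raw, events, event_id=event_id,
                    tmin=-0.2, tmax=0.8,         # -200 to +800 ms epochs
                    baseline=(-0.2, 0.0),        # pre-stimulus baseline correction
                    reject=dict(eeg=100e-6),     # drop epochs exceeding +/-100 uV
                    preload=True)

# Average ERP and the N2 as the mean amplitude at FCz in 100-200 ms
# ("white_noise" as an event label is an assumption).
evoked = epochs["white_noise"].average()
n2 = evoked.copy().pick("FCz").crop(0.1, 0.2).data.mean()
```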
Source localization of conflict detection processes

We used sLORETA to estimate the cortical sources of scalp-recorded activity during conflict detection across the economic threat conditions. As opposed to dipole modelling, sLORETA computes activity as current density (A/m²) without assuming a predefined number of active sources. The sLORETA solution space consists of 6239 voxels (voxel size: 5 × 5 × 5 mm) restricted to the cortex and hippocampi, as defined by the digitized Montreal Neurological Institute (MNI) probability atlas. sLORETA has been reliably validated by research comparing sLORETA localization of EEG activity to fMRI data (Mobascher et al., 2009;Olbrich et al., 2009), Positron Emission Tomography data (Laxton et al., 2010) and implanted electrodes in intracranial recordings (Zumsteg et al., 2006). For each participant, sLORETA images were computed from the scalp-recorded activity for both the white-noise and standard-tone average ERPs. These images were normalized to a total current density of one and log-transformed.

Statistical analyses

Preliminary analyses examined whether N2 mean amplitudes (100-200 ms) were impacted by the economic threat manipulation. We conducted an independent-samples t-test with the condition variable entered as the grouping variable and, as the dependent variable, a difference score between the white-noise N2 mean amplitude at FCz and the standard-tone N2 mean amplitude at FCz. One participant was excluded from these mean amplitude analyses for not demonstrating an N2 at FCz (inclusion or exclusion of this participant does not change our primary results). For our main analyses, to isolate conflict detection processes during the N2 component, all images were baseline corrected for pre-stimulus activation (−200-0 ms). In order to examine conflict detection processes more precisely, our analyses focused on the paired contrast between the white-noise and standard-tone sLORETA images. Thus, the standard-tone sLORETA images were subtracted from the white-noise sLORETA images during analyses to remove processes common to both stimuli, allowing a more isolated focus on conflict detection to white noise. Whole-brain voxel-by-voxel independent-groups t-tests of the sLORETA images were conducted on the timeframe of the N2 component, 100-200 ms after stimulus presentation as seen in the grand average N2 (Figure 1), thus comparing intracerebral sources of conflict detection in the Economic Threat and the No-Threat Control conditions. Correction for multiple testing across all 6239 voxels was implemented by means of a nonparametric randomization approach (Nichols and Holmes, 2002). This approach estimates empirical probability distributions and the corresponding critical probability thresholds (corrected for multiple comparisons). We expected that the economic threat manipulation would cause increased activation in the dorsal ACC during the N2 timeframe, indicating heightened sensitivity to conflict. However, our whole-brain approach allowed for examining other potential differences in intracerebral sources across conditions. Given that the dorsal ACC is thought to recruit the lateral PFC to regulate conflict or implement cognitive control (Mansouri et al., 2009;Miller and Cohen, 2001;Yeung et al., 2004), we also conducted exploratory analyses focused on subsequent components in the ERP to white noise.
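The nonparametric max-statistic randomization correction can be sketched as follows (a simplified stand-in for the SnPM-style procedure, using synthetic data in place of the sLORETA contrast images):

```python
# Max-statistic permutation thresholding (Nichols & Holmes, 2002), sketched
# on synthetic per-participant voxel data (participants x 6239 voxels).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
a = rng.normal(size=(52, 6239))   # Economic Threat group (synthetic)
b = rng.normal(size=(51, 6239))   # No-Threat Control group (synthetic)

def max_stat_threshold(a, b, n_perm=1000, alpha=0.05, seed=1):
    rng = np.random.default_rng(seed)
    data = np.vstack([a, b])
    n_a = a.shape[0]
    max_t = np.empty(n_perm)
    for i in range(n_perm):
        idx = rng.permutation(data.shape[0])           # shuffle group labels
        t, _ = stats.ttest_ind(data[idx[:n_a]], data[idx[n_a:]], axis=0)
        max_t[i] = np.abs(t).max()                     # max |t| over all voxels
    return np.quantile(max_t, 1 - alpha)               # FWE-corrected threshold

t_obs, _ = stats.ttest_ind(a, b, axis=0)
t_crit = max_stat_threshold(a, b)
print(t_crit, np.flatnonzero(np.abs(t_obs) > t_crit))  # surviving voxels
```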
To determine whether heightened dorsal ACC activation during the N2 timeframe caused by economic threat leads to increased lateral PFC activation, we used a more focused small-volume analysis corrected for only the voxels in lateral PFC regions (see Gianotti et al., 2018 for such an approach). We explored both the P3 and the late positive potential (LPP) timeframes, i.e. two components important in emotional arousal and emotion regulation (Hajcak et al., 2010). The P3 timeframe was determined as 200-400 ms after the stimulus at the site where the component demonstrated the maximal mean amplitude, Cz. The LPP timeframe was determined as 480-620 ms after the stimulus, maximal at Pz.

Results

As can be seen in Figure 1A and B, the N2 topography was largely similar across the Economic Threat and No-Threat Control conditions, with maximal amplitude at the FCz electrode and similar peak latencies after white-noise stimulus presentation (Economic Threat = 145 ms; No-Threat Control = 148 ms). An independent-samples t-test revealed that participants in the Economic Threat condition, compared to the No-Threat Control condition, demonstrated a non-significant trend towards larger (i.e. more negative) N2 mean amplitude difference scores (white noise minus standard tone) at FCz, t(100) = 1.621, P = 0.108 (equal variances not assumed, t(99.998) = 1.666, P = 0.099). However, our primary hypotheses were focused on source-localized activation during the N2 to target conflict detection processes more precisely. Source localization analyses revealed that a cluster of 11 voxels in the dorsal ACC/medial PFC demonstrated significantly higher activation in the Economic Threat condition than in the No-Threat Control condition (see Table 1 and Figure 2), t-value threshold corrected for multiple comparisons = 4.175. The peak voxel was located in Brodmann Area 32 of the dorsal ACC, MNI coordinates = −5, 45, 15, t(101) = 4.860. The cluster also included voxels spanning into Brodmann Areas 9 and 10. No other voxels exceeded the t-value threshold corrected for multiple comparisons. To further characterize this pattern of activation, we extracted individual estimates of current density across voxels in a 10-mm sphere around the peak voxel in the dorsal ACC during the same N2 timeframe (i.e. 100-200 ms after stimulus presentation) and correlated (Pearson correlation, two-tailed) these scores with an anxious-uncertainty composite of self-reported affect in response to the economic manipulations. First, a one-way ANOVA revealed that the Economic Threat condition caused increased anxious uncertainty (M = 3.719, SD = 0.968) compared to the No-Threat Control condition (M = 2.389, SD = 1.027), F(1, 101) = 44.981, P < 0.0001. We found that dorsal ACC activation during the N2 was positively correlated with the anxious-uncertainty composite, r = 0.226, P = 0.022 (Figure 3). This further supports the idea that economic threat, as an anxiety-provoking and uncertain experience, caused increased conflict detection sensitivity (see S1 in the Supplementary Materials for further analyses using positive and negative affect composites).
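For illustration, the follow-up tests reported above map onto standard SciPy calls (the arrays here are simulated stand-ins built from the reported means and SDs, not the study data):

```python
# Manipulation check (one-way ANOVA) and brain-behavior correlation.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
threat = rng.normal(3.719, 0.968, 52)    # anxious uncertainty, Threat group
control = rng.normal(2.389, 1.027, 51)   # anxious uncertainty, Control group
F, p = stats.f_oneway(threat, control)   # one-way ANOVA across conditions

dacc = rng.normal(size=103)              # dACC current density (stand-in)
composite = np.concatenate([threat, control])
r, p_r = stats.pearsonr(dacc, composite) # two-tailed Pearson correlation
print(F, p, r, p_r)
```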
We next conducted an exploratory mediation analysis to examine whether dorsal ACC activation during the N2 mediated an effect of economic threat on the wealth justification scale. Bootstrapped mediational analyses (PROCESS model 4, 5000 bootstrap samples for 95% confidence intervals; see Hayes, 2017) were conducted in which the predictor (X) was the Economic Threat manipulation variable, the mediator (M) was dorsal ACC activation during the N2 and the dependent variable (Y) was the composite wealth justification score. Results showed that the Economic Threat condition caused increased dorsal ACC activation during the N2 and this dorsal ACC activation was associated with increased wealth justification, indirect effect coefficient = 0.4367, SE = 0.2788, 95% CI [0.0012, 1.0949] (CIs do not contain zero). These results demonstrate that economic threat increases dorsal ACC activation during conflict detection, which in turn boosts wealth justification (see S2 in the Supplementary Materials file). Finally, exploratory analyses were conducted to focus on the lateral PFC. We restricted the voxel-by-voxel independent-samples t-tests of the sLORETA images to the Brodmann Areas classically associated with lateral PFC regions (Brodmann Areas 9, 10, 45, 46 and 47) during the following timeframes: 200-400 ms (the P3) and 480-620 ms (the LPP). Correction for multiple testing was thus calculated for 862 voxels using the same nonparametric randomization approach (Nichols and Holmes, 2002). Results showed that a cluster of 3 contiguous voxels in the left dorsolateral PFC (dlPFC) demonstrated significantly higher activation in the Economic Threat condition than in the No-Threat Control condition during the LPP timeframe, t-value threshold corrected for multiple comparisons = 3.114 (see Table 1 and Figure 4). The peak voxel was located in Brodmann Area 9, MNI coordinates = −40, 10, 40, t(101) = 3.240. As in the N2 analyses, we extracted individual estimates of current density across voxels in a 5-mm sphere around the peak voxel in the left dlPFC during the LPP timeframe. We conducted a mediation analysis (PROCESS model 4, 5000 bootstrap samples for 95% confidence intervals, covariate: baseline left dlPFC activation; see Hayes, 2017) to examine whether dorsal ACC activation during the N2 mediated the effect of economic threat on left dlPFC activation. Contrary to expectations, however, the mediation analysis revealed an opposing indirect effect, such that increased dorsal ACC activation during the N2 timeframe due to the economic threat manipulation mediated a decrease in left dlPFC activation during the LPP timeframe (indirect effect coefficient = −0.1753, SE = 0.1072, 95% CIs [−0.4217, −0.0058]). This suggests that the increased conflict detection sensitivity caused by Economic Threat appears to disrupt downstream processes instantiated in the lateral PFC. Note that no other voxels during the late positive potential and no voxels during the P3 timeframe exceeded this small-volume t-value threshold corrected for multiple comparisons.
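A simplified percentile-bootstrap mediation sketch follows (a bare-bones stand-in for PROCESS model 4, with hypothetical arrays; the covariate handling of the dlPFC analysis is omitted):

```python
# Percentile-bootstrap test of the indirect effect a*b (X -> M -> Y).
import numpy as np

def indirect_effect(x, m, y):
    a = np.polyfit(x, m, 1)[0]                        # X -> M slope
    X = np.column_stack([np.ones_like(x), m, x])
    b = np.linalg.lstsq(X, y, rcond=None)[0][1]       # M -> Y slope, X held constant
    return a * b

def bootstrap_ci(x, m, y, n_boot=5000, seed=3):
    rng = np.random.default_rng(seed)
    n, est = len(x), np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, n)                   # resample with replacement
        est[i] = indirect_effect(x[idx], m[idx], y[idx])
    return np.percentile(est, [2.5, 97.5])            # 95% percentile CI

rng = np.random.default_rng(4)
x = rng.integers(0, 2, 103).astype(float)             # condition (0/1), hypothetical
m = 0.4 * x + rng.normal(size=103)                    # mediator, hypothetical
y = 0.5 * m + rng.normal(size=103)                    # outcome, hypothetical
lo, hi = bootstrap_ci(x, m, y)
print(lo, hi)   # the indirect effect is supported if the CI excludes zero
```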
Discussion

Economic threat is a poignant experience with profound emotional and social effects (e.g. Roszkowski and Davey, 2010;Karanikolos et al., 2013;McInerney et al., 2013;Buffel et al., 2015;Kriesi and Pappas, 2015;Aslanidis, 2016;Bianchi, 2016;Margerison-Zilko et al., 2016;Ybarra et al., 2016). Widespread economic threat looms again today in the context of the COVID-19 pandemic. A better understanding of the underlying neurocognitive mechanisms may help shed light on the broader social and emotional effects of economic threat. To that end, we focused on conflict detection processes. We found that economic threat caused increased activation source localized to the ACC during the N2 component to white-noise stimuli. Increased activation in the ACC was also correlated with increased self-reported anxious uncertainty. Further, economic threat, mediated by ACC activation during the N2, increased personal wealth justification. The current findings extend prior research on threat and neurocognitive mechanisms in a key way. The economic threat was a poignant, generalizable experience with important psychological consequences, whereas prior research has typically manipulated discrete and temporary threats within the same task that elicits conflict and the N2, in a within-subject design. Manipulating anxiety or threat and conflict concurrently in the same task can sometimes reduce conflict detection sensitivity, reflecting divided attentional resources (Pessoa, 2009). To avoid processes such as divided attention and to focus directly on the causal, downstream effects of a real, pervasive and anxiety-provoking threat, we separated the threatening experience from the measure of conflict. Our findings also complement previous research that has commonly linked conflict detection and resolution to a dorsal ACC-lateral PFC network in which the dorsal ACC identifies conflict or uncertainty in a current goal or task and recruits the lateral PFC to resolve the conflict or implement cognitive control (Mansouri et al., 2009;Miller and Cohen, 2001;Yeung et al., 2004). Importantly, negative affect is processed in the dorsal ACC as a signal of conflict or of the need for increased cognitive control (Shackman et al., 2011). Alternatively, anxiety is associated with limited control capacity (Eysenck et al., 2007). This raised the possibility that increased ACC activation to conflict after economic threat had downstream consequences for lateral PFC function. A more focused correction procedure (corrected for only the voxels in lateral PFC regions) revealed increased left dlPFC activation during a subsequent component in the ERP, the LPP. Further, contrary to expectations, dorsal ACC activation during the N2 timeframe caused by the economic threat manipulation mediated a decrease in left dlPFC activation during the LPP timeframe. Because the LPP component is related to emotional arousal and emotion regulation (Hajcak et al., 2010), and the dlPFC is related to emotion regulation and cognitive control (among other processes), we may speculate that the increased conflict detection sensitivity disrupts downstream emotion regulation processes (e.g. Bishop et al., 2004). Consistent with this, prior research has demonstrated that increased working memory load, a process that activates the dlPFC, reduces LPP amplitude unless participants report higher levels of anxiety (MacNamara et al., 2011). That is, anxiety disrupted emotion regulation processes that involve the dlPFC. Relatedly, increased sensitivity to conflict instantiated in the ACC may be an underlying neural mechanism in the link between economic threat and increased stress, anxiety and depression on the one hand, and increased authoritarianism, prejudice and nationalism on the other. For example, a pervasive sense of economic threat may lead to chronically higher sensitivity to conflict, emotional dysregulation and chronically higher levels of anxious arousal.
Chronic anxiety and distress can lead to depression (Gray and McNaughton, 2000) and to negative consequences for an array of neurobiological processes (Rodrigues et al., 2009;Roozendaal et al., 2009;Ulrich-Lai and Herman, 2009). Additionally, we found that ACC activation during conflict mediated an increase in positive attitudes toward personal wealth accrual, possibly because participants were motivated to bolster support of the capitalistic system (Jost et al., 2003;Kay et al., 2009) in order to resolve the conflict aroused by the economic threat. Alternatively, this could also reflect a simpler kind of conflict resolution, in which angst about money is resolved by exaggerated support for personal wealth. We anticipate that this exploratory finding will encourage future research examining the causal link between real-world economic threat and conflict resolution processes. For example, our study was limited in that it did not assess the extent to which support for personal wealth served to quell negative affect caused by economic threat. Future work could probe the emotion regulation effects of these and other proposed conflict resolution processes. Researchers could further examine whether increased sensitivity to conflict and dorsal ACC activation mediates the broader and more varied changes in mental health outcomes and in ideology and belief associated with economic threat. Despite the widespread impact of economic threat, its effects on basic neurocognitive function have remained unclear. Our results provide novel experimental evidence that economic threat activates conflict detection processes source localized to the ACC. Further, this study shows that these conflict detection processes can have downstream influence on attitude change relevant to economic threat, such as support for personal wealth. Finally, this study provides an important initial contribution toward future research examining the broader social and emotional effects of economic threat.

Supplementary data

Supplementary data are available at SCAN online.
Parameter Settings for an Automated Gantry-Robot Painting System using a 3-Gun Atomization Spray Method for an Anti-Static Coating Process

Corresponding Author: Zunaidi Ibrahim, Faculty of Technology, University of Sunderland, St Peter's Campus, Sunderland, SR6 0DD, UK. Email: <EMAIL_ADDRESS>

Abstract: Dry film thickness and appearance are the main quality improvements that can be achieved by an automated painting system in the manufacturing process. This study presents the spray coating process for aircraft parts using a gantry robot with an automated spray-painting gun to control the spray path, thus achieving the desired coating layer thickness. The experimental results show that the 3-gun atomization spray method, with high-capacity airflow and outstanding atomization characteristics, poses the challenge of achieving an even thickness of the overlapping spray pattern from three separate guns, minimizing the paint material consumption and controlling the dry film thickness within the given specification and standard. The optimization was performed to control the spray path and material consumption with the 3-gun spray method and to achieve the optimum setting of the spray nozzle-to-workpiece height to obtain the target thickness. The results show that replacing the manual process with the automated painting process can increase the speed by up to 30-40%, reduce the setup time and increase the capacity of the painting booth. The development of this system can achieve the desired thickness specification, increase productivity and provide safer, more effective and ergonomic working conditions.

Introduction

Aviation coatings, known as aircraft coatings or paintings, are broadly used in both the commercial and military aviation industries, ranging from the military aircraft sector to the space sector and other flying vehicles. The aerospace sector is one of the most demanding sectors for coating processes. The coating process and its materials have to ensure protection from corrosion, abrasion, erosion and more within changing-temperature environments and extreme weather conditions (Coating.co.uk, n.d.). Smooth and uninterrupted airflow over a plane's wing is vitally important for the efficiency of flight. Leading-edge erosion is a problem also experienced by wind turbines, where the front edge of a wing or blade erodes due to constant impact with rain, dirt, bugs and other debris. By the time an aircraft wing arrives at the manufacturing assembly line, it, like all other aircraft wing panels, is covered in a green protective coating. When the protective coating is removed, the paint is applied and the aircraft wings are then coated with a weather-resistant coating system. The walkways on aircraft wings are also coated, this time with a non-slip coating for cargo floor panels, escape routes and payload floorboards, as suggested in AkzoNobel's Aviox paint and coating standard for aerospace coatings (Aerospace Coatings, n.d.;Mike, 2012). Aircraft materials are supposed to be light, highly versatile and flexible, resistant to corrosion and fluids, and to provide long-term endurance with gleam and gloss color retention over long service intervals and maintenance. Composites are seldom totally smooth, so the paint system needs to address the paint pinhole issue, and voids should be filled to prevent moisture penetration. Pinholes are gaps smaller than 0.5 mm in diameter that occur in the composite surface and cannot be filled using a spraying process.
The sprayed coating will bridge the pinhole and the bridge will break, either by gassing out from the pinhole, by shrinking back as the coating dries or by the sanding process (United States Department of Transportation and Federal Aviation Administration, 2012;Techniques, 2018). Pinhole filling is achieved by pushing the filler into the holes using a special tool such as a squeegee. A low-viscosity filler is preferred, as this can be wiped off the composite surface leaving filler in the pinhole, reducing process time and weight. The filler must be wiped in multiple directions and uniformly to ensure that angled pinholes are filled properly. Spray-painting processes have been widely used in painting and coating applications, where the paint or coating liquid is atomized and dispersed to deposit on the target surface of a workpiece. The workpiece is a panel to be painted. The spray coverage and the thickness of the coating layer are the major concerns in the process. Spray guns are available in gravity-feed, pressure and suction formats, and a wide range of conventional-technology air caps provide exceptional atomization for coating metal, ceramic, plastic, wood and composite substrates with nearly all types of solvent, such as waterborne, high-solids and 2K materials (Spray and Range, n.d.). In this study, the workpiece is a composite part used in the manufacture of airplane wings. Building on the manual spraying process, the gantry-type automation system is developed to counter inconsistency in painting thickness, increase productivity and reduce environmental and health issues for the operators. This Automated Spray-Painting System (ASPS) consists of three spray guns located on a three-axis gantry-type carrier that moves in the X, Y and Z directions, powered by three electrical motors. The ASPS properly controls the spraying path to achieve uniformity of the coating layer thickness (Sheng et al., 2005;Luangkularb et al., 2014). The spray gun is a Low Volume Medium Pressure (LVMP) type by DeVilbiss (Spray and Range, n.d.). This spray gun, with high-capacity airflow and impressive atomization characteristics, is integrated with a separate "balanced" air valve to supply an unlimited flow of compressed air through the gun body; this, coupled with a light, comfortable feel, produces high-quality spray coating conditions. Compared to conventional guns and High-Volume Low-Pressure (HVLP) guns, the LVMP gun achieves finer atomization and higher transfer efficiency with significantly less air consumption. The DeVilbiss Advance High Demand (HD) spray guns are ideal for small-scale working operations or high-volume spraying, with the advantage of immediately improved finishing productivity. Solution mixtures supplied to the spray gun are transferred from a pressurized tank to ensure a constant supply. A separate pressurized tank is filled with thinner for flushing out the remaining paint solution from the transfer tube. Similar spray path planning and distribution pattern models are presented in (Taejung and Sarma, 2003;Somboonwiwat and Prombanpong, 2017). Li et al. (2010) presented automatic path and trajectory planning for robotic spray painting using a CAD-based method. This ASPS system is designed for Aerospace Composites Malaysia (ACM) (ACM Malaysia, 2002), located in Kedah, Malaysia. ACM, a supplier of composite products and subassemblies to the global aerospace industry, is a strategic alliance between U.S.
joint venture partners Hexcel and Boeing (Hexcel HexAMTM, 2019). The company worked to improve the color and gloss durability of the topcoat paint and to fulfill new regulations that mandated the use of primers and topcoats with low Volatile Organic Compound (VOC) content (Rebecca Horner, n.d.). These regulations and standards also required Boeing to make use of new and less aggressive cleaning solvents before applying paint. Using new solvents, paints and application processes produced several challenges in preparing the surface, painting and achieving a finished appearance satisfactory to the operators. The development of the ASPS started in December 2017 in our laboratory, and the system was successfully installed and tested at Paint Booth No. 1 of ACM (ACM Malaysia, 2002) in August 2018. This paper is organized as follows: Section 2 covers materials and methods, reviewing the related research on manual and automated spray painting, presenting the design of the system and comparing manual and gantry-robot operation in the development of the 3-gun spray method. Section 3 describes the details of the experimental setup. Section 4 describes the results for the parameter settings and thickness requirements after implementing the proposed method. Section 5 presents the conclusions of the proposed developed system.

Literature Review

The coating systems used for aircraft exteriors include primers, intermediate coats and topcoats. These coatings are spray-applied in very thin layers, ensuring an even and perfectly cured application. Several paint processes, from primer to anti-static topcoat, are applied to aircraft parts such as wings, spoilers, flaps and ailerons, as shown in Fig. 1. Several studies have been undertaken in the analysis of spraying path, spray overlap, spray gun orientations and spray flow rate (Luangkularb et al., 2014;Taejung and Sarma, 2003;From et al., 2011). Regarding the reduction of downtime and the increase in productivity and efficiency, ASPS systems have been introduced. They allow the painting process speed to be increased by up to 30-40% by replacing the manual process, eliminate long setup times before painting and increase the number of painting carts inside the painting booth. These ASPS systems also reduce the amount of chemical stripper required, thus effecting a cost saving. Components of an aircraft (wings, spoilers, flaps and ailerons) are painted in a paint booth at ACM. The paint booth must have temperature and humidity control to be able to protect the components from the elements. The quality of the spray thickness and the evenness of the coating layer are the major issues in the coating process with the application of the ASPS. Previously, several studies were undertaken on the analysis of trajectory and spraying path control planning for unknown 3D surfaces for industrial painting robots, modelling for trajectory planning on automotive surfaces and optimization of robotic spray-painting process parameters, as reported by (Li et al., 2010;Meng, 2008;Conner et al., 2005;Chidhambara et al., 2018). Controlling the spraying path is an important parameter for achieving uniformity, or evenness, of the coating layer thickness (Meng, 2008;Chen and Zhao, 2013). Real-time simulation and CAD-guided tool planning contributed extensively to the uniformity of spray pattern distribution and thickness deposition (Conner et al., 2005), as did the analysis of a 6-Degree-Of-Freedom (DOF) robot spray coating manipulator (Madhuraghava et al., 2018).
In addition, Tang and Chen (2015) presented surface modeling of a workpiece and tool trajectory planning for spray painting. Our previous paper (Rudzuan et al., 2019) presented a method for the development of an automated spray-painting system for the coating process. The conventional coating is performed by experienced operators. The anti-static coating is sprayed using a manual gravity-feed spray gun inside a paint booth. Meanwhile, the space needed for operators to maneuver in the paint booth limits the area available for the paint carts. Therefore, the number of workpieces inside the paint booth cannot be maximized. With this size of paint booth, only six carts can be positioned inside the booth at one time (Fig. 2). Figure 2 shows the manual process flow as follows:

Step 1: The operator pushes 4 carts with workpieces into the painting booth (total time taken: 3 min)
Step 2: The operator manually sprays each workpiece on carts 1 to 4, individually (total time taken: 10 min)
Step 3: After completion of the 4 carts, the operator moves these 4 carts to the edge of the painting area for the flash-off period (total time taken: 2 min)
Step 4: The operator pushes another 2 carts with workpieces for painting (time taken: 2 min)
Step 5: Flash-off period (time taken: 15 min)
Step 6: After completion of the flash-off period for all 6 carts, the operator pushes all carts into the Warm Room for the next process (time taken: 3 min)

Process cycle time (time taken to complete the coating process for all 6 carts, including handling time): 35 min.

With the elimination of these manual processes, ACM will increase its productivity and safety and improve quality, focusing in particular on paint thickness and color unevenness during the process. Manual painting is performed using a single gravity-feed spray gun held by the operator, as shown in Fig. 3, and relies on the operator's individual experience and expertise to spray the workpiece. The paint booth operators are exposed to high risk from chemicals, as nearly every chemical used in the operation is flammable and noxious. Debris and dust can ruin the paint quality, but they can also harm the operator's lungs. Inside a paint booth, high temperatures combined with dust, debris and moisture can cause numerous lung diseases. Spraying hazards, including shock, can be created by clogged spray guns and filtration systems. Storage of empty cans inside a spray booth can create enough static electricity that, adjacent to high-pressure sprayers, an electrocution hazard is generated. Operators need to wear the Personal Protective Equipment (PPE) shown in Fig. 3 and masks during this painting process for their safety, as suggested in the guidelines published for the Aviation Standard (United States Department of Transportation and Federal Aviation Administration, 2012). Flash-off time is the necessary waiting time in the Warm Room for the heating process after a coat has been applied to a component. If the flash-off time is short, irregularities such as uneven thickness can be expected. With the implementation of the ASPS, productivity and effectiveness are expected to be improved as the cycle time is reduced from 35 min to 26 min 36 s per production cycle. Increased and consistent product quality is the most important advantage gained by automated systems of all types. For painting or coating, film thickness tolerance and visual appearance are the most prevalent points to tackle.
An industry-standard assumption is that a paint saving of 15% to 30% can be achieved by reducing overspray when manual painters are replaced by automated systems. Automated systems are flexible to change, expansion and adaptation for different workpiece sizes or paint thicknesses. Savings are achieved through film thickness tolerance control; trigger accuracy is also directly related to many other savings.

Design and Development of the ASPS with the 3-Gun Atomization Spray Method

In our previous research, we designed and developed intelligent systems related to autonomous robotic applications, such as the design and development of an intelligent prosthesis (Ibrahim et al., 2008), path planning based on geographical feature information for an autonomous mobile robot (Zunaidi et al., 2006), complex background subtraction for biometric identification (Khalifa et al., 2007), fuzzy multi-layer SVM classification (Hariraj et al., 2018) and fuzzy wheel controller design (Halin et al., 2018;Mustafa et al., 2018). In this research, we evaluate and optimize the best parameter settings for the ASPS and will use the above research findings to develop an intelligent autonomous height control system for the ASPS in the future. This research designs and develops the ASPS with a 3-gun atomization spray system using a gantry-robot system. Gantry-robot systems offer the advantage of large work areas and better positioning accuracy. Positioning accuracy is the ability of the robot to place a part correctly. Gantry robots are easier to program with respect to motion because they work with an X, Y, Z coordinate system, as shown in Fig. 4. Popular applications for this type of robot are Computer Numerical Control (CNC) machines and 3D printing. The simplest application is in milling and drawing machines, where a pen or router translates across an X-Y plane while a tool is raised and lowered onto a surface to create a precise design. Pick-and-place machines and plotters are also based on the principle of the Cartesian coordinate robot (Rudzuan et al., 2019;Engineering, 2015). Controlled by a Programmable Logic Controller (PLC), this ASPS provides all the data necessary for managing the process optimally, including nozzle selection, workpiece height control, pressure control, paint volume, XYZ spray direction and positioning, and an alarm buzzer, among others. Technically, the system consists of a gantry spray gun (see Fig. 5) moved by an explosion-proof servomotor, with an adjustable spraying distance from nozzle to workpiece. To fit within the booth dimensions, the gantry-type robot is designed with a length of 6950 mm, a width of 4800 mm and a height of 2380 mm and is built from mild steel tube, 3"×2" square. The guns can move freely along the X- and Y-axes, supported by a cable chain covered by flexible bellows and powered by a set of AC servomotors. For gun height adjustment, the gun can also move up and down, controlled by another servomotor. All servomotor movement is controlled by a PLC equipped with a touchscreen panel for automation mode and a teach pendant for manual control mode. Three AC servomotors are used in this development: HG-SR52, HG-SR102 and HG-KR23B, all from Mitsubishi Electric Co. (Manual, 2012). Each of the servomotors is installed to control the spray gun mechanism in the X-, Y- and Z-axis, respectively.
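As an illustration of the kind of X-Y trajectory such a PLC-driven gantry executes, the following sketch generates a simple serpentine spray path. This is not the authors' control program; the pitch and panel dimensions are example values, with the 200 mm pitch taken from the tested spray-pattern width described later.

```python
# Serpentine (boustrophedon) spray-path generator over a rectangular panel.
def serpentine_path(width_mm, length_mm, pitch_mm, z_mm):
    """Yield (x, y, z) waypoints covering the workpiece line by line."""
    y, direction = 0.0, 1
    while y <= length_mm:
        x_start, x_end = (0.0, width_mm) if direction > 0 else (width_mm, 0.0)
        yield (x_start, y, z_mm)
        yield (x_end, y, z_mm)
        y += pitch_mm           # step over by one stripe width
        direction = -direction  # reverse the X direction each pass

# Example: a 1.2 m x 0.8 m panel, 200 mm stripe pitch, gun held 200 mm
# above the workpiece (all values hypothetical).
for waypoint in serpentine_path(1200, 800, 200, 200):
    print(waypoint)
```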
The selection of the servomotor was explained in detail in a previous study on the development of the ASPS mechanical design properties, which included the servomotor selection and parameter settings such as load and gear type, among others, based on this actual application (Rudzuan et al., 2019). The overall proposed system shown in Fig. 5 comprises the following items:
- Main structure with servomotor and spray gun layout
- Automated spray gun (nozzle) c/w pressure tank, flow counter, tubing, etc.
- Explosion-proof motor for XY control c/w cable
- Detail section for servomotor mounting

Spraying Paint Booth

Dimensions of the paint booth are 7000 mm × 5500 mm, with an access door at the front, a blower on the roof and a vacuum mechanism on the floor, as shown in Fig. 6. There is an exit door to the Warm Room at the rear of the booth. Painted workpieces are transferred to the Warm Room for 45 min after 15 min of flash-off time inside the paint booth. The flash-off process is a partial evaporation or settle-down process in which the liquid changes its phase and becomes partly vapor and partly liquid. There are two pressure tanks required in the development of this system. One pressure tank is filled with the paint mixture and the other is used for solvent or thinners for flushing or cleaning purposes; use of the second pressure tank reduces the flushing or cleaning time. After the spraying process is complete, the pressure tank with the solvent is connected to the hose for the flushing process. With this versatile PT-10M pressure tank (Pressure and Temperature, n.d.), almost professional painting results can be achieved. Special materials such as enamel paint, chemical liquids and solvent-based coatings can be delivered as smoothly as required (DeSoto® PPG Aerospace, n.d.). Figure 6 above shows the structural design for the ASPS. The spray gun is held by a linear guide and controlled by a servomotor for height adjustment. Another servomotor is used to move and control the ASPS axis on the linear guide. The proposed ASPS has a main control panel unit with a teaching pendant, touchscreen unit, push-buttons and a tower light.

3-Gun Spray Method

The proposed ASPS utilizes an LVMP gun to achieve finer atomization and higher transfer efficiency. The LVMP gun with high transfer efficiency that provides superior cost performance is the DeVilbiss DA-300 type, shown in Fig. 7. It is used in this development because it is a compact, high-performance, general-purpose automatic spray gun with superior fine atomization and transfer efficiency. Featuring superior atomization, the gun is designed to be suitable for spraying metallic and pearl-type paints. The superior atomization is made possible by using the DeVilbiss air cap that is already highly regarded in the market. Compared to conventional guns and HVLP guns, the LVMP gun achieves finer atomization and higher transfer efficiency with significantly less air consumption. These main characteristics are highlighted in the manufacturer's manual on their website (Spray and Range, n.d.). Figure 8 shows the process flow for the proposed ASPS:
Step 1: The operator pushes 8 carts with workpieces into the painting booth (total time taken: 3 min)
Step 2: The operator closes the door and selects the JOB TYPE at the Master Control Panel (total time taken: 5 min 36 s). The selected JOB TYPE is double-confirmed with a height sensor. If OK, the auto-coating system starts (tower light turns "GREEN";
on completion, the tower light turns "YELLOW")
Step 3: Flash-off period for these 8 carts (total time taken: 15 min). On completion, the tower light turns "BLUE"
Step 4: After the flash-off period completes for all 8 carts, the operator pushes all carts into the Drying Booth for the next process. The operator pushes the COMPLETE button and the tower light turns "RED" (total time taken: 3 min)
Process cycle time: the time taken to complete the painting process for all 8 carts, including handling time, is 26 min 36 s.

Position Setting for Spray Gun

In this development, we used a three-spray-gun concept. We designed the spray gun positioning so that the spray patterns can overlap each other to achieve better spray quality and maintain the thickness specification. The spray gun position design is shown in Fig. 9. The oval pattern varies with a spray distance of 200 mm each. The guns are held by steel rods and positioned at a 45° angle as the initial setting at the development stage, shown in the side view, and can be manually adjusted based on the pattern test requirement. This testing is repeated with angle repositioning of the spray gun, decreasing by 5° each time, until the pattern requirement shown in Fig. 10 is achieved.

Spray Pattern Test

The guns are designed to be in such a position that, when the paint is sprayed from the designated height with the correct valve opening (gun knob setting), the desired spray patterns from each gun do not overlap each other, as shown in Fig. 10. The photo in Fig. 10 shows the actual test pattern, with a width of 200 mm and a length of 400 mm, which should be retained for all spray patterns. After the measurement and several tests, we found that the best pattern and angle setting is 90° for the two spray guns at the outside edges (left and right guns), with the middle gun set at 10°, as shown in Fig. 11. We used these spray gun settings, retaining this angle, for further experiments with other parameters and thickness requirements (Fig. 11).

Paint Material and Mixing Liquid

Two paint materials from DeSoto® PPG Aerospace (n.d.) are used in this spray-painting process: a curing solution and the DeSoto® 528×310 Conductive Coating Base Component. The curing solution and base component must be mixed in a ratio of 1:1 (Fig. 12).

Calculations

Painting area with existing carts (4 carts) = 6.6 m²
Cycle time to complete painting of 4 carts = 2,100 s
Painting area with new carts (8 carts) = 15.55 m²
Cycle time to complete painting of 8 carts = 1,596 s
Time to paint a 1 m² area falls from 318.2 s to 102.6 s, a 67.8% increase in productivity.

Complete System Flow Chart

The complete ASPS process flow control, with automatic air flushing and thinner flushing applied when changing to a new paint, is shown in Fig. 13. We use the Mitsubishi Q-Series PLC to integrate the automatic spray gun speed, the complete system flow control, the complex algorithm and intelligent motion sequence and the process control. The Mitsubishi Q series also includes a comprehensive range of I/O and full intelligent function modules to fulfill the requirements and the system algorithm that we designed for this ASPS application. The complete process of the proposed ASPS is easy to maintain, reduces the processing time for paint setup and reduces the handling time for changing to a new paint. We demonstrated that a 1 m² painting area can be completed within 102.6 s with this system, proving that the painting area can be increased by 57.6%.
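As a quick sanity check of the figures above, the following sketch recomputes both cycle times and the productivity gains from the numbers stated in the text:

```python
# Sanity check of the cycle times and productivity figures reported above.
manual_steps_min = [3, 10, 2, 2, 15, 3]                 # manual process, Steps 1-6
auto_steps_s = [3 * 60, 5 * 60 + 36, 15 * 60, 3 * 60]   # ASPS process, Steps 1-4

manual_cycle_s = sum(manual_steps_min) * 60             # 2100 s = 35 min
auto_cycle_s = sum(auto_steps_s)                        # 1596 s = 26 min 36 s

time_per_m2_old = manual_cycle_s / 6.6                  # 4 carts -> ~318.2 s/m2
time_per_m2_new = auto_cycle_s / 15.55                  # 8 carts -> ~102.6 s/m2
productivity_gain = 1 - time_per_m2_new / time_per_m2_old   # -> ~0.678 (67.8%)
area_gain = 1 - 6.6 / 15.55                                 # -> ~0.576 (57.6%)

print(manual_cycle_s, auto_cycle_s)
print(round(time_per_m2_new, 1), round(100 * productivity_gain, 1), round(100 * area_gain, 1))
```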
Discussion

With the implementation of this automated process, it is not necessary for the operator to be inside the painting booth. This condition is better for the safety and health of the worker. Furthermore, the space inside the booth required by the operator for the manual spraying process becomes unnecessary. Implementing the new cart design with a new size of length L = 2370 mm, width W = 820 mm and height H = 920 mm increases the number of workpieces that can be located on the new cart. This new cart design could increase the number of panels, based on the increase in the painting area, by up to 57.6%. Figure 14 shows the proposed ASPS cycle, actual path trajectory and travel distance during the painting process. The time taken to complete this task is 5 min 36 s. The total cycle time, including handling time, is 26 min 36 s. The ASPS showed that the robot travels from the home position at area A and paints the workpiece at area B before going to area C and finishing with area D. The total cart area covered in this process is 15.55 m² for eight carts. Based on the time taken for the above path, we can conclude that painting a 1 m² area is 67.8% more productive than with the existing manual painting system, as shown in the calculation.

Results

Several sets of studies were performed to achieve the best spray pattern shape, optimal overlap area and, most importantly, even thickness. The gun knob valve opening was tested from 0.5 turns to 2.5 turns. The turn is defined based on fully closed as 0 turns and one complete turn as 1.0 turn. Alternatively, the turn can be defined in degrees, with 0 turns as 0° and one complete turn as 360°. To find the optimum setting for the gun knob valve opening, we tested different combinations of gun knob valve opening and nozzle-to-workpiece height with the same fluid pressure setting of 0.8 bar, atomizing air pressure of 2.5 bar and servomotor speed of 0.4 m/s. Figure 15 shows samples of the spray coupons during the thickness measurement and marking process. All thicknesses were measured by an Elcometer (Honda and Matsui, 1991) and recorded. Non-destructive coating thickness measurements can be taken on either magnetic steel surfaces or non-magnetic metal surfaces such as stainless steel or aluminum. Digital coating thickness gauges are ideal for measuring the coating thickness on metallic substrates. Electromagnetic induction is used for non-magnetic coatings on ferrous substrates such as steel, while the eddy current principle is used for non-conductive coatings on non-ferrous metal substrates. A coating thickness gauge (also referred to as a paint meter) is used to measure the dry film thickness. We set up the experiment to find the optimum height H for our ASPS. The experimental conditions are shown in Fig. 16. We set five different coupon samples on a wooden board and the ASPS sprayed in one direction, as shown in the picture. We set all coupons within the 400 mm pattern width per stroke to achieve overlapping spray from Gun 1 with Gun 2 and Gun 2 with Gun 3 from a single spray. With the same experimental setup, we measured the parameter settings for our ASPS. The first experiment was to find the best parameter settings for the spray gun knob valve opening. In this case, we set up different combinations of atomizing air pressure and ASPS servomotor speed. The results are summarized in Table 1 (values in turns). Figure 17 shows actual photos of the coupons before and after spraying, from a top view and a side view.
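The knob-opening and thickness units above can be related with a couple of one-line conversions; a minimal sketch follows (the 0.4-0.8 mil specification it checks against is the one quoted in the next section):

```python
MIL_TO_UM = 25.4  # 1 mil = 0.001 inch = 25.4 um

def knob_turns_to_degrees(turns: float) -> float:
    """Gun knob valve opening: 0 turns = 0 degrees, one complete turn = 360 degrees."""
    return turns * 360.0

def within_spec(thickness_mils: float, lo: float = 0.4, hi: float = 0.8) -> bool:
    """Check a measured dry film thickness against the 0.4-0.8 mil (~10-20 um) spec."""
    return lo <= thickness_mils <= hi

for turns in (0.5, 1.0, 2.5):            # the tested range of knob openings
    print(turns, "turns =", knob_turns_to_degrees(turns), "degrees")
print(0.4 * MIL_TO_UM, "-", 0.8 * MIL_TO_UM, "um")  # ~10.2-20.3 um
```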
The next parameters to be decided were the atomizing pressure and servomotor speed. In this case, we performed several experiments to find the average thickness within the specification for different combinations of atomizing air pressure and ASPS servomotor speed values. The results listed in Table 2 give the optimum setting for the ASPS as an atomizing air pressure of 2.5 bar from the spray gun and a servomotor speed of 0.4 m/s. We used all these parameter settings to study the range of height required to obtain the average thickness within the given specification. The experimental results using the above parameter settings, shown in Fig. 18, give the thickness versus nozzle-to-workpiece height. In this study, the coating thickness specification given was 0.4~0.8 mils (10~20 µm). From our observation, the experimental results showed that the best thickness is obtained with a nozzle-to-workpiece height of 10 inches. The results show that the optimum height setting for the ASPS is between 8 and 10 inches to obtain a thickness within the specification and standard.

Conclusion

Dry film thickness is probably the most critical measurement in the coatings industry because of its impact on the coating process, quality and cost. This study described the overall process of parameter setting for an automated spray-painting gantry-robot system using a 3-gun atomization spray, and the use of those settings to find the best nozzle-to-workpiece height, with an optimum overlapped area between the three guns, to obtain the target thickness. The outcome demonstrated that the 3-gun atomization spray method, with its high-capacity airflow and outstanding atomization characteristics, can achieve an even thickness from the overlapping spray patterns dispersed from three separate guns, minimize paint consumption and control the dry film thickness within the given specification and standard. The optimization was performed by controlling the spraying path and the material consumption with the 3-gun spray method to achieve the optimum setting for the spray nozzle-to-workpiece height and obtain the target thickness. The results showed that, by replacing the manual process, the painting process speed can increase by up to 30%-40%, long setup times before painting can be eliminated and the painting cart capacity inside the painting booth can be increased. An industry-standard assumption is that a paint saving of 15% to 30% is achieved when painting operators are replaced by ASPS. Savings achieved through film thickness tolerance control and trigger accuracy are directly related to many other savings. They can reduce stack VOC at the exhaust system; reduce rework jobs, thus increasing customer satisfaction; reduce overspray, thus reducing filter and chemical usage; and reduce reliance on expert painters, thus reducing the human resource costs of hiring and training. The development of this system can achieve the thickness specification, increase productivity and create a safe working environment in order to provide a safer, more effective and ergonomic workplace and process. In future work, we will use the ASPS parameter settings, pattern test results and all height data to develop an intelligent autonomous height control system for ASPS, which will have the flexibility to change, expand and adapt to different workpiece sizes, types and paint thicknesses. This work will involve deeper analysis of the particular mechanisms for the ASPS to suit various types of workpiece with an autonomous height control system.
Many different adaptations, tests and experiments can potentially be developed in the future.
Implementation of a Computerized Decision Support System to Improve the Appropriateness of Antibiotic Therapy Using Local Microbiologic Data

A prospective quasi-experimental study was undertaken in 218 patients with suspicion of nosocomial infection hospitalized in a polyvalent ICU where a new electronic device (GERB) has been designed for antibiotic prescriptions. Two GERB-based applications were developed to provide local resistance maps (LRMs) and preliminary microbiological reports with therapeutic recommendation (PMRTRs). Both applications used the data in the Laboratory Information System of the Microbiology Department to report on the optimal empiric therapeutic option, based on the most likely susceptibility profile of the microorganisms potentially responsible for infection in patients and taking into account the local epidemiology of the hospital department/unit. LRMs were used for antibiotic prescription in 20.2% of the patients and PMRTRs in 78.2%, and active antibiotics against the finally identified bacteria were prescribed in 80.0% of the former group and 82.4% of the latter. When neither LRMs nor PMRTRs were considered for empiric treatment prescription, only around 40% of the antibiotics prescribed were active. Hence, the percentage appropriateness of the empiric antibiotic treatments was significantly higher when LRM or PMRTR guidelines were followed rather than other criteria. The LRM and PMRTR applications are dynamic, highly accessible, and readily interpreted instruments that contribute to the appropriateness of empiric antibiotic treatments.

Introduction

Microorganism resistance to antibiotics changes over time and varies according to geographic area, hospital, or even hospital department [1,2]. Various studies on severe infections in hospitalized patients, especially those in intensive care units (ICUs), have associated an inappropriate initial antibiotic treatment with increases in bacterial resistance, morbidity-mortality, hospital stay, and hospital costs [3][4][5][6][7][8][9][10][11][12]. Selection of the appropriate empiric antibiotic treatment requires knowledge of changes in the etiology of infectious processes and in antibiotic resistance patterns in each hospital area [13,14]. The selection of empiric antibiotic therapy is generally based on updated clinical practice guidelines or therapeutic recommendations developed by expert groups from scientific societies [15][16][17], which must be adapted to the epidemiologic characteristics of each country or healthcare area [13,18]. Once microbiological results are confirmed and the satisfactory clinical progression of patients is observed, it is recommended to deescalate antibiotic therapy when possible in accordance with the antibiograms of the identified bacteria [3,6,10,11,15,19]. Hospital antibiograms are commonly used to monitor local trends in antimicrobial resistance and to prepare antibiotic policies for guiding targeted empiric therapy [20]. Thus, besides the identification and study of microorganism susceptibility, the microbiology laboratory has a role in periodically updating results for designing empiric antibiotic treatment guidelines adapted to the local microbial epidemiology [21,22]. These guidelines should be based on the best available clinical evidence and on the resistance profiles in each healthcare setting [6,18].
They need to be constantly updated, taking account of the clinical usefulness of treatments, the ease of their management, and consensus agreements among professionals [21]. Guidelines are considered to be more useful when defined and implemented by a multidisciplinary team and adequately disseminated and promoted, followed by evaluation of their acceptance and implementation [23,24]. Ideally, these guidelines should be developed for each hospital department, and there is a particular need to avoid needless antibiotic administration for a suspected nosocomial infection in the ICU [25]. The objectives of our study were to design, develop, and implement a new computer application based on the local epidemiologic analysis of bacterial susceptibility to antibiotics and to assess the usefulness to physicians of the information that it offers for selecting the most appropriate antibiotic treatment in ICU patients with suspicion of nosocomial infection.

Patients and Methods

This study was conducted over a three-year period in a third-level 821-bed hospital, Complejo Hospitalario Torrecardenas (CHT), serving 350,000 inhabitants and eight primary care districts in the province of Almeria (southeast Spain).

Study Design. A prospective, quasi-experimental study was conducted in three stages between January 2008 and December 2010. During the first six months, a new computer-assisted program was developed for antibiotic selection. Between July and October 2008, four information sessions and two round table discussions were conducted with the specialized physicians and nursing staff of the ICU and the Departments of Microbiology, Preventive Medicine, and Pharmacy of the hospital to promote the new program and train participating physicians in its application. Finally, between October 2008 and December 2010, the new system was implemented for antibiotic prescription in the ICU, the patients in the study were followed up, and the results were analyzed.

A Computer-Assisted Program for Antibiotic Selection. We developed an application, based on the Microsoft .NET Framework with Visual C# and SQL, with Open DataBase Connectivity (ODBC) to the Laboratory Information System (LIS) of the Hospital Microbiology Department in order to provide real-time analysis and updating of data obtained from microbiological studies. The application, designated Guía Electrónica de Resistencias Bacterianas (GERB), was installed in a central server. The database is automatically updated and allows consultation of all antibiograms recorded in the LIS, extracting data according to different selection criteria (e.g., date or date interval, patients, samples, diagnoses, microorganisms isolated, antibiotics tested, and hospital departments), and creating graphics to facilitate interpretation and visualization on the computer screens in the network. Two GERB-based computer applications were developed: local resistance maps (LRMs) and preliminary microbiological reports with therapeutic recommendation (PMRTRs). Both guidelines consider the epidemiology, the evidence-based best treatment options for the most prevalent bacterial pathogens, and the local-specific antibacterial pathogen susceptibility.

Local Resistance Maps (LRMs). These maps graphically depict the accumulated susceptibility data in the LIS for all bacteria identified in samples from ICU patients during the previous year and from the patients who have undergone antibiograms.
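To make the data flow concrete, here is a hypothetical sketch of the kind of ODBC aggregation that could sit behind such a map: the percentage of susceptible isolates per antibiotic, for one sample type, over the previous year (applying the minimum-assay rule described in the next paragraph). All table and column names are invented for illustration; the paper does not describe the actual LIS schema.

```python
# Hypothetical sketch of an LRM-style aggregation over ODBC. The schema
# (table "antibiograms", its columns) is invented; the date functions are
# SQL Server style and would need adapting to the actual DBMS.
import pyodbc

conn = pyodbc.connect("DSN=LIS")  # ODBC data source name, assumed
sql = """
SELECT antibiotic,
       100.0 * SUM(CASE WHEN result = 'S' THEN 1 ELSE 0 END) / COUNT(*) AS pct_susceptible,
       COUNT(*) AS n_assays
FROM antibiograms
WHERE ward = 'ICU'
  AND sample_type = 'lower_respiratory'
  AND test_date >= DATEADD(year, -1, GETDATE())
GROUP BY antibiotic
HAVING COUNT(*) >= 30   -- minimum number of in vitro assays required for inclusion
ORDER BY pct_susceptible DESC
"""
for antibiotic, pct_susceptible, n_assays in conn.cursor().execute(sql):
    print(f"{antibiotic:20s} {pct_susceptible:5.1f}% susceptible (n={n_assays})")
```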
The GERB system is used to create three types of LRM: for the empiric treatment of lower respiratory tract infections, based on antibiograms for bacteria isolated from samples from the lower respiratory tract; for urinary tract infections, based on antibiograms for bacteria isolated from urine samples; and for bacteremias, based on antibiograms of bacteria isolated from blood cultures. The antibiotics included in the LRMs were selected by consensus among the ICU physicians; their inclusion required a minimum of 30 in vitro assays. The percentage of bacteria susceptible to each antibiotic was depicted using a color code: green for susceptible, yellow for intermediately susceptible, and red for resistant. Physicians had ready access to the maps via touch screens located in the ICUs that were connected to the hospital intranet and automatically updated every 24 h, incorporating any new records entered into the GERB system.

Preliminary Microbiological Reports with Therapeutic Recommendation (PMRTRs). A PMRTR was issued when the microbiology laboratory reported a sample to be positive according to the culture results but before a definitive identification and antibiogram. The requesting ICU physician was informed about the microorganism genus or isolated microorganism or, when not available, the result of Gram staining, and was given therapeutic recommendations. These named the antibiotics with the highest activity against the microorganisms presumably involved and against the specific infectious disease of the patient, according to the information in the GERB, and they included the most favorable pharmacokinetic and pharmacodynamic properties according to the infection focus. This signed report was sent directly to a dedicated remote printer in the ICU.

Patients. Inclusion criteria for the patients in the polyvalent 24-bed ICU were as follows: (i) suspicion of nosocomial infection, defined as developing ≥48 hrs after ICU admission, in the lower respiratory tract, urinary tract, or blood (bacteremia), based on clinical symptoms and the results of laboratory tests or radiologic exams. The focus was microbiologically defined as the lower respiratory tract or urinary tract when the corresponding microbiological cultures were positive, regardless of the presence of bloodstream infection; bacteremia was defined by the isolation of one or more high-grade pathogens in a blood culture specimen or the identification of a common skin contaminant or skin flora in at least two separate blood culture specimens from different sites in the same patient; and (ii) susceptibility to antibiotic treatment. Exclusion criteria were the presence of signs of infection or being in the incubation period at admission, referral from another hospital department or health center, and age under 14 yrs. The Acute Physiology and Chronic Health Evaluation (APACHE) II score [26] was determined for each patient, considering the worst reading in the first 24 h of ICU stay, in order to evaluate the severity of illness and calculate the predicted mortality rate.

Antibiotic Selection Criteria. Antibiotic prescription was structured in three levels, with the aim of treating patients in the shortest time possible with the most appropriate antibiotic according to the infection focus and clinical situation. The first level was the implementation of an empiric antibiotic treatment in patients with clinical suspicion of infection.
For this purpose, ICU physicians had access via the touch screen in the unit to the LRMs, which depicted the percentage activity of different antibiotics against the microorganisms usually detected in each infectious process, allowing them the possibility of prescribing the treatment in accordance with these data (guidelines). In the second action level, after the putative isolation or identification of one or more microorganisms in a sample from the patient, the physician received a PMRTR prepared by the microbiology specialists (see above). Finally, after identification of the bacteria, the microbiology laboratory issued a definitive report with the corresponding antibiogram. The study was conducted under the following conditions: (i) the information provided by these GERB applications was not binding in any case. Physicians were not obliged to use the LRM and/or PMRTR guidelines and could base their selection of antibiotic therapy on exclusively clinical criteria (in accordance with the guidelines of the Hospital Infections Committee); (ii) the selection of antibiotic treatment was always adapted to the clinical situation of patients and took account of any therapeutic limitations, including allergies, drug interactions, and toxicity, considering renal and hepatic function, administration routes, dose, dose intervals, and so forth; (iii) in the LRM and/or PMRTR guidelines, an antibiotic was recommended when active against ≥75% of all microorganisms isolated in the same infection focus during the previous 12 months; (iv) broad-spectrum antibiotics were used for severe infections (especially in low respiratory tract infections and bacteremias); (v) after receipt of the PMRTR, the physician was able to modify or maintain the initial empiric treatment; and (vi) after receipt of the definitive microbiological report, with bacterial identification and corresponding antibiogram, deescalation was conducted when indicated, selecting the most appropriate antibiotic(s) according to clinical, microbiological, and pharmacological criteria. No study was made of the reasons for the therapeutic decisions taken by the physicians in this study. Antibiotic treatment was considered appropriate when at least one of the prescribed antibiotics was active in vitro against the isolated microorganism(s) and the drug regimen was in accordance with current medical standards. The appropriateness of the therapeutic option and/or antibiotic prescription was assessed by comparing each of the antibiotics recommended and/or prescribed in a patient with the definitive antibiogram of the microorganism(s) finally identified as the causal agent, when available. An antibiotic with synergic activity, for example, aminoglycosides, was not considered appropriate when it was the only antibiotic active against the isolate in vitro. Data Collection. Data were gathered in all studied patients on admission date, sex, age, main diagnosis, personal history of interest (allergies, other diseases, previous medication, etc.), chronic organ failure (hepatic, renal, pulmonary, cardiovascular, and immunosuppression, as defined by APACHE II), clinical progress during hospital stay using a semiquantitative scale [27], analytical results (full blood count, biochemistry, cultures, etc.), and daily body temperature. 
Other variables recorded were the empiric antibiotic treatment selected (indicating whether LRM guidelines were followed or not), any treatment change (indicating whether the PMRTR recommendation was followed), the dose, dosing frequency, administration route, possible toxicity, total hospital stay, and date of discharge or death. Patients were followed up until their death or ICU discharge.

2.6. Statistical Analysis. SPSS 17.0 for Windows was used for the data analyses. Pearson's chi-square test (with continuity correction when required) was used to compare the appropriateness of prescribed antibiotic treatments according to the application of clinical criteria, LRM guidelines, or PMRTR recommendations and to compare patient mortality rates in each of these situations and when no empiric treatment was administered. Fisher's exact test in 2 × 2 tables was used when the sample size was too small and the conditions for application of Pearson's chi-square test were not met. Student's t-test was employed to compare the mean days of ICU stay as a function of the criteria used for empiric antibiotic treatment prescription (clinical, LRM, or PMRTR) and the receipt or not of empiric treatment. The Mann-Whitney U test was used when the distribution of a variable was nonnormal according to the results of a previously applied Shapiro-Wilk test. p < 0.05 was considered significant in all tests.

Results

Between October 2008 and December 2010, 218 patients in the ICU of our hospital met the study eligibility criteria: 139 males (63.8%) and 79 females (36.2%). The mean APACHE II score of the study cohort was 16.9 ± 7.5 (range, 2-40). Microbiological documentation of infection was obtained in 137 patients (62.8%) (Table 1), with the identification of 262 different microorganisms from 185 respiratory samples, 26 urine samples, and 51 blood cultures (without considering duplicates in the same sample type). Gram-negative bacteria (63.7%) were the most frequent, followed by Gram-positive bacteria (30.2%), fungi (5.0%), and anaerobes (1.1%). A single microorganism was isolated in 68 (49.6%) of the patients (43 microorganisms in respiratory samples, 2 in urine samples, and 23 in blood cultures), while multiple microorganisms were isolated from the same or different samples in the remaining 69 patients. Microorganisms were isolated from respiratory samples alone in 74 patients, from blood cultures alone in 24 patients, and from urine samples alone in 2 patients; in the remaining 37 patients, microorganisms were isolated from two or more samples from different infection foci. No microorganisms were isolated in culture in 81 (37.2%) of the patients, whose clinical suspicion of infection was not microbiologically confirmed.

Assessment of Appropriateness of Antibiotic Prescriptions that Follow LRM Guidelines. Empiric antibiotic treatment was implemented for suspicion of nosocomial infection in 173 of the 218 study patients (79.4%), but LRM guidelines were only followed in 44 of these (25.4%) (Table 2). When clinical criteria alone were adopted, the most frequently prescribed antibiotics were amoxicillin-clavulanic acid, vancomycin, levofloxacin, carbapenems (meropenem or imipenem), and ceftriaxone. When LRM guidelines were followed, the most frequently prescribed antibiotics were carbapenems, vancomycin, piperacillin-tazobactam, amikacin, and linezolid.
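As an illustration of the 2 × 2 comparison described in Section 2.6, the sketch below reproduces the LRM-versus-clinical-criteria test reported in the next paragraph. The counts 12/15 and 28/77 are assumptions inferred from the reported percentages (80.0% of 15 and 36.4% of 77); they are not tabulated in the paper.

```python
# Sketch of the 2 x 2 comparison described in Section 2.6: Pearson's chi-square
# with continuity correction. The per-patient counts below are inferred from
# the percentages reported in the Results, not taken from a published table.
from scipy.stats import chi2_contingency

table = [[12, 3],    # LRM guidelines followed: active / not active
         [28, 49]]   # clinical criteria only:  active / not active
chi2, p, dof, expected = chi2_contingency(table, correction=True)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")  # p lands near the reported 0.005
```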
Monotherapy was prescribed in 51.8% of the patients treated according to clinical criteria (amoxicillin-clavulanic acid in 64.2% of cases) versus 20.5% of those treated according to LRM guidelines. The appropriateness of the empiric antibiotic treatments was evaluated by analyzing the antibiotics prescribed in the 92 patients for whom an antibiogram of the isolated microorganism was available. In the 77 patients treated according to clinical criteria, 36.4% of the antibiotics prescribed proved to be active against the isolated bacteria, in comparison to 80.0% in the 15 patients treated according to LRM guidelines. Hence, the percentage appropriateness of the empiric antibiotic treatment was significantly higher (p = 0.005) when LRM guidelines were followed.

Assessment of Appropriateness of PMRTR Recommendations. The microbiology laboratory issued 139 PMRTRs for 96 (44.0%) of the 218 patients in the study, with a total of 362 recommendations for antibiotic therapy (Table 2). When Gram-negative bacilli were isolated in culture, the most frequently recommended antibiotics were imipenem, amikacin, and piperacillin-tazobactam; when Gram-positive cocci in clusters were isolated, they were linezolid and vancomycin; and when Gram-positive cocci in chains were isolated, they were vancomycin and levofloxacin. The appropriateness of the PMRTR therapeutic recommendations was evaluated by comparing each of the 362 recommended antibiotics with the definitive antibiogram of the microorganism(s) eventually identified in each patient, which showed that 90.3% of the recommended antibiotics were active against the identified bacterium/bacteria.

Assessment of Appropriateness of Antibiotic Prescription after Receipt of PMRTR Recommendations. Antibiotic treatment prescription recommendations were followed in 68 (70.8%) of the 96 patients for whom a PMRTR was issued, leading to the modification of the initial empiric treatment in 36 patients (52.9%), its maintenance in 4 patients (5.9%), or the commencement of treatment in 28 previously untreated patients (41.2%). Clinical criteria rather than the received PMRTR were followed in 19 patients (19.8%). In the remaining 9 patients (9.4%), PMRTRs were issued after ICU discharge or death. Table 2 shows that when the criteria were exclusively clinical, the most frequently prescribed antibiotics were ceftriaxone, tobramycin, carbapenems, and vancomycin, in this order. However, when PMRTR recommendations were followed, the most frequent were carbapenems, amikacin, vancomycin, piperacillin-tazobactam, and linezolid. Combined therapy with two antibiotics was predominant both in the prescriptions following clinical criteria (mainly ceftriaxone plus tobramycin) and in those following PMRTR (mainly a carbapenem plus amikacin). The appropriateness of antibiotic prescriptions after PMRTR receipt was assessed by comparing the antibiotic prescribed to each patient with the definitive antibiogram of the microorganism(s) finally identified in each sample. According to the definitive antibiogram, 42.1% of the antibiotics prescribed following clinical criteria were active against the isolated bacteria, whereas 82.4% of those prescribed in accordance with PMRTR guidelines were active. The percentage appropriateness of antibiotic treatment prescription was therefore significantly higher (p = 0.001) when PMRTR was followed.

GERB Use and Level of Associated Appropriateness.
As noted above, LRMs were followed for the prescription of empiric antibiotic therapy in 44 (20.2%) of the 218 patients in the study and were not followed in 174 (79.8%) patients. Active antibiotics against the isolated bacteria were prescribed in 80.0% of the former group but in only 36.4% of the latter. Out of the 87 patients with an available PMRTR, the recommendations were followed in 68 (78.2%) but not in 21 (21.8%) patients. Active antibiotics against the isolated bacteria were prescribed in 82.4% of the former cases but in only 42.1% of the latter. LRM and PMRTR were both followed in only 8 (9.2%) of the 87 patients for whom they were both available (LRM for empiric treatment prescription, then PMRTR for modification of the initial treatment). Solely clinical criteria were adopted in 32 (36.8%) of the patients, while either LRM or PMRTR was followed in the remaining 47 (54.0%).

Mortality and Hospital Stay. The influence of GERB (LRM and PMRTR) on the two main clinical variables, mortality and days of ICU stay, was evaluated. This analysis only included the 137 patients with diagnostic certainty of infection, that is, when a clinically significant microorganism was isolated. The mean ICU stay of patients who received empiric treatment following LRM guidelines was 13.8 days, with a mortality rate of 20.0%; their mean ICU admission APACHE II score was 17.7. The mean ICU stay of patients who received empiric treatment following clinical criteria was 19.5 days, with a mortality rate of 27.3%; their mean ICU admission APACHE II score was 17.6. There were no significant differences between patients receiving empiric treatment according to LRM guidelines or clinical criteria in mortality (p = 0.751) or days of stay (p = 0.156), even when nonsurvivors were included in the analysis (p = 0.519). The mean ICU stay of patients treated according to PMRTR recommendations was 19.7 days, with a mortality of 29.4%; their mean ICU admission APACHE II score was 18.0. The mean ICU stay of patients treated according to clinical criteria was 20.1 days, with a mortality of 36.8%; their mean ICU admission APACHE II score was 19.0. There were no significant differences between patients treated according to PMRTR recommendations or clinical criteria in mortality (p = 0.735) or days of stay (p = 0.943), even when nonsurvivors were included (p = 0.219).

Discussion

Physicians should prescribe an appropriate empiric antibiotic treatment in patients with clinical suspicion of infection. The criteria adopted are usually based on their own experience or on guidelines that are often developed in another setting, even in another country. Hence, therapeutic treatment is frequently not adapted to the microbial epidemiology of the specific healthcare area, which may favor therapeutic failure. Numerous publications by scientific societies and healthcare institutions have emphasized the need to consider local epidemiology in the development of therapeutic guidelines for the prescription of empiric antibiotics [15-17, 21, 22]. Computerized decision support systems (CDSS) are clinical consultation systems that assist physicians in diagnostic and therapeutic decision making by analyzing patient and population data. They have proven effective in improving medical care, reducing prescription errors, and enhancing compliance with recommendations [28,29]. These programs do not replace clinical judgment but rather increase the information available for physicians to be able to make correct decisions [22].
A systematic review associated successful CDSS implementation with the integration of the system in the clinical process and with the availability of recommendations at the time and place of decision making [30]. Evans et al. [28] evaluated the effects of a computerized anti-infective management program for real-time, patient-specific recommendations on the type of antimicrobial, dose, administration route, and treatment duration, finding it to be useful in surgical prophylaxis and in targeted and empiric treatments; use of this program also significantly reduced the number of days that patients received antimicrobial treatment in the ICU. Thursky et al. [31] associated the utilization of a real-time microbiology browser and CDSS for antibiotic prescription with a reduction in total antibiotic prescriptions, especially in the most widely prescribed broad-spectrum antibiotics. In the present study, GERB-derived LRMs permitted the rates of bacterial resistance to antibiotics to be monitored, based on the information in the LIS of the microbiology laboratory. In general, this application permits (i) structuring of epidemiologic data by hospital area and by infectious disease, (ii) daily and automatic updating with new laboratory results, (iii) presentation of the information in a web environment, and (iv) presentation in readily interpreted graphics of data on bacterial resistance to the antibiotics habitually used in the treatment of a given infectious disease. After positive cultures are obtained, PMRTRs provide a preliminary report on the putative identification of the isolated microorganism(s), issuing therapeutic recommendations based on their most likely susceptibility profile according to the local epidemiology of the hospital unit and the specific infectious disease in question. According to the present results, the utilization of LRMs and PMRTRs contributed to the adaptation of antibiotic treatments, favoring the administration of the most active antibiotics in each clinical situation. The lower percentage appropriateness of empiric antibiotic treatments that followed clinical criteria was related to the prescription of antibiotic monotherapy, especially amoxicillin-clavulanic acid, or to the use of combinations of narrow-spectrum antibiotics with high microorganism resistance rates. The considerable increase in percentage appropriateness with treatments following LRM and/or PMRTR guidelines was associated with the prescription of antibiotics with very low resistance rates. A major challenge in evaluating expert systems that support therapeutic decision making concerns the adherence of physicians to their use, which was relatively low in the present study, especially in relation to LRM guidelines. Physicians may be reluctant to abandon their own criteria or well-established antimicrobial therapy guidelines with recognized prestige, especially in the prescription of empiric treatments [32]. The much higher adherence to PMRTR may be attributable to its provision of an explicit recommendation in a printed report with the signature of a microbiology specialist. Adherence to the GERB applications was stronger when the clinical situation of the patient was more severe, with a mean APACHE II score of 21 in the eight patients for whom both LRM and PMRTR were followed, or when the recommendation was to continue with the same antibiotic therapy.
The results of this study did not support the hypothesis that these GERB applications would significantly reduce the mortality rate and length of ICU stay. Previous studies also found no significant reduction in mortality after the development and implementation of local treatment protocols, although these were associated with an improvement in empiric therapy adaptation and a reduction in antibiotic treatment duration [13,25]. In common with other investigations of measures designed to improve antibiotic use, it was not possible to conduct a randomized controlled trial, and the design of our prospective study was therefore quasi-experimental. Patients were not managed with a specific protocol, and it was therefore not possible to control for all relevant clinical variables. We cannot rule out the influence of unmeasured variables, and we did not evaluate the response to antibiotic therapy according to predefined clinical variables. A further limitation was the difficulty in assessing the clinical impact of the GERB applications, because no microorganism was isolated in a large percentage (37.2%) of patients; therefore, although there was suspicion of infection, there was no microbiological confirmation. It is likely that a large number of the empiric treatments, following either clinical criteria or LRM, were not for a true bacterial infection, although the early onset of antibiotic treatment may possibly have prevented growth of the microorganism in culture. The selection of one antibiotic or another would not have determined the final outcome in the first situation but may have done so in the second. In fact, it is possible that the lower number of patients in whom a given microorganism was isolated when empiric treatment was based on LRM guidelines (34.1%) is related to the high appropriateness rates for antibiotic prescriptions in line with these guidelines. Finally, our assessment of the appropriateness of antibiotic treatments did not consider the isolation of other microorganisms against which these treatments are not active. This is the case for fungi, such as Candida spp., which only represented 5% of the microorganisms identified. In conclusion, these new GERB applications offer dynamic, highly accessible, and easily interpreted instruments to assist physicians in the selection of antibiotic treatment. Their implementation increases the percentage of patients administered an appropriate initial empiric therapy. It would be of interest to perform a similar study in different hospital departments over the same time period in order to examine variations among them.

Conflict of Interests

Roche Diagnostics, S.L. acquired the rights for the commercial use of the Guía Electrónica de Resistencias Bacterianas (GERB) by a license agreement with the Servicio Andaluz de Salud, University of Granada, and University of Almeria.
HOW TO PAN-SHARPEN IMAGES USING THE GRAM-SCHMIDT PAN-SHARPEN METHOD – A RECIPE

Since its publication in 1998 (Laben and Brower, 2000), the Gram-Schmidt pan-sharpen method has become one of the most popular algorithms to pan-sharpen multispectral (MS) imagery. It outperforms most other pan-sharpen methods in both maximizing image sharpness and minimizing color distortion. It is, on the other hand, also more complex and computationally expensive than most other methods, as it requires forward and backward transforming the entire image. Another complication is the lack of a clear recipe for how to compute the sensor dependent MS to Pan weights that are needed to compute the simulated low resolution Pan band. Estimating them from the sensor's spectral sensitivity curves (in different ways), or using linear regression or least square methods, are typical candidates, which can include other degrees of freedom such as adding a constant offset or not. As a result, most companies and data providers do it somewhat differently. Here we present a solution to both problems. The transform coefficients can be computed directly and in advance from the MS covariance matrix and the MS to Pan weights. Once the MS covariance matrix is computed and stored with the image statistics, any small section of the image can be pan-sharpened on the fly, without having to compute anything else over the entire image. Similarly, optimal MS to Pan weights can be computed directly from the full MS-Pan covariance matrix, guaranteeing optimal image quality and consistency.

INTRODUCTION

1.1 Multispectral pan-sharpening

One of the common problems in remote sensing and high resolution image processing is the need for somehow fusing lower resolution, multispectral (MS) bands such as Red, Green, Infrared, etc., with the single, higher resolution Pan band. Ideally, the MS bands can be up-sampled to the full Pan resolution without altering their spectral properties. In practice, however, this is hard to achieve. Many pan-sharpening algorithms exist, differing in the degree to which they maximize the sharpness and at the same time minimize the color or spectral distortion of the pan-sharpened output image. Older and simpler algorithms, such as the Intensity-Hue-Saturation (IHS) transformation or the Brovey method, only work for up to three MS bands, while more modern algorithms work for four or more bands. For a recent survey of pan-sharpening methods see for instance (Amro, 2011). Since its publication in 1998 (and patented by Kodak in 2000), the Gram-Schmidt pan-sharpen method has become one of the most widely used high quality methods. Many variations and enhancements have been studied and published, e.g. (Aiazzi, 2007). The Gram-Schmidt method is also offered by companies such as Esri, ENVI, and others, in their software packages.
The Gram-Schmidt pan-sharpen method

The Gram-Schmidt pan-sharpen method in a nutshell:

1) Compute a simulated low resolution Pan band as a linear combination of the n MS bands:

$$B_0 = \mathrm{Pan}_{\mathrm{sim}} = \sum_{k=1}^{n} w_k\, B_k \qquad (1)$$

2) Treating every band as a high dimensional vector and starting with the simulated Pan band as the first vector, make all bands orthogonal using the Gram-Schmidt vector orthogonalization or, more precisely, a modified version of it. For the Gram-Schmidt pan-sharpening, both the incoming bands and the arguments of the scalar products are made dc free first (get their means subtracted). This turns the original Gram-Schmidt scalar products into covariances. The iterative procedure stays the same: compute the angle between the Red band and the Pan band, rotate the Red band to make it orthogonal to the Pan band. In the next step, compute the angles between the Green band and the Pan band and the rotated Red band, then rotate the Green band and make it orthogonal to both the Pan band and the rotated Red band. And so forth. This Gram-Schmidt forward transform de-correlates the bands.

3) Replace the low resolution simulated Pan band by the gain and bias adjusted high resolution Pan band. Upsample all MS bands accordingly.

4) Reverse the forward Gram-Schmidt transform using the same transform coefficients, but on the high resolution bands. The result of this backward Gram-Schmidt transform is the pan-sharpened image in high resolution.

There are two practical problems here: First, how do we compute the MS to Pan weights needed for step 1? Especially if we don't know the sensor's spectral sensitivity curves or we even don't know the sensor. Second, how can we pan-sharpen a small section of the (potentially huge) image without having to forward and backward transform the entire image? We will address the second question first.

Compute the Gram-Schmidt transform coefficients from the MS covariance matrix and the MS to Pan weights

Here we show that the Gram-Schmidt transform coefficients or matrix (let's call it the GS matrix), which is usually computed iteratively together with the forward transformed (or "rotated") MS bands, can be computed directly from the MS covariance matrix and the MS to Pan weights. We start by rewriting the original equations (10)-(12) in (Laben and Brower, 2000). First, for simplicity, we drop the subtracted mean terms or, in other words, assume the incoming bands are already dc free. We will show further below that this has no effect on the mathematical result. Then, omitting the pixel indexes for brevity, and explicitly taking the first band (0) as the simulated Pan band and the remaining bands as MS bands 1 to n, we get

$$B'_0 = B_0, \qquad B'_i = B_i - \sum_{j=0}^{i-1} g_{ij}\, B'_j, \qquad g_{ij} = \frac{\langle B_i \mid B'_j \rangle}{\langle B'_j \mid B'_j \rangle} \qquad (2)$$

The apostrophe denotes the transformed or rotated band, and $\langle a \mid b \rangle$ means the covariance of bands a and b. Note that the coefficients above depend on the transformed bands and can only be computed after the bands with lower indexes have been transformed. Therefore our next step is to substitute equation (1) into the above Gram-Schmidt transform equations. We compute the coefficients one column at a time, from left to right, in increasing difficulty. All transform coefficients $g_{ij}$ can be written in the form

$$g_{ij} = \frac{(C\, v_j)_i}{v_j^T\, C\, v_j} \qquad (3)$$

with $C$ the MS covariance matrix and with their weight vectors $v_j$ computed also iteratively, only taking turns with the transform coefficients, in vector notation, as

$$v_0 = w, \qquad v_i = e_i - \sum_{j=0}^{i-1} g_{ij}\, v_j \qquad (4)$$

where $e_i$ is the unit vector selecting MS band i. Here $v_0 = w$ is the MS to Pan weight vector, $v_1$ the MS to rotated Red band weight vector, $v_2$
the MS to rotated Green band weight vector, etc. In other words, from the initial MS to Pan weight vector $v_0 = w$ we calculate the first Gram-Schmidt transform coefficient $g_{10}$, then Gram-Schmidt transform this initial weight vector to compute the next weight vector $v_1$, which allows us to compute the Gram-Schmidt transform coefficients for the next row, and so on.

The result is a function that takes as input the MS covariance matrix and the MS to Pan weights, and outputs the Gram-Schmidt transform coefficients. The MS covariance matrix can be computed once for the MS raster and stored as part of the raster statistics. The MS to Pan weights can be assumed to be mainly sensor dependent and to have been computed in advance.

The direct computation of the GS matrix allows us to pan-sharpen any small portion of an image without having to compute any other property over the entire image first. The resulting pan-sharpened image is exactly the same as if globally computed. Another practical use case is that the MS to Pan weights can be varied and only an area of interest pan-sharpened on the fly, again without having to compute anything globally. Last but not least, the direct computation of the GS matrix allows for analyzing the propagation of errors from either the MS to Pan weights or the MS covariance to the GS matrix.

In the beginning of this derivation we had simply dropped the band mean offsets, in contrast to Laben and Brower. This does not make a difference for the end result. The MS covariance does not depend on the band means. The covariance of any two bands a, b, with one of the two bands constant, is 0, which means that adding or subtracting constant mean terms cancels out in the Gram-Schmidt transform coefficients. With the GS matrix being independent of the band means, adding or subtracting these means in the above equations only results in constant, position independent offsets in the rotated bands.

When doing the backward transform, the final (forward and backward transformed) MS bands will have the same band means as the original MS bands, as long as the substituted high resolution Pan band has the same mean as the low resolution simulated one. This is the case, thanks to the gain bias transform of the high resolution Pan band before it can replace the simulated one (equations (1)-(3), (Laben and Brower, 2000)). For the same reason any constant offset in the simulated Pan band would have no effect on the result.
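A minimal sketch of this direct computation, following equations (3) and (4) above (our own notation, not the authors' implementation):

```python
import numpy as np

def gs_matrix(C: np.ndarray, w: np.ndarray) -> np.ndarray:
    """Gram-Schmidt transform coefficients g[i, j], computed directly from the
    n x n MS covariance matrix C and the MS to Pan weight vector w, following
    equations (3) and (4). Rotated band j is represented only by its weight
    vector v[j] over the original MS bands; no pixels are touched."""
    n = C.shape[0]
    v = np.zeros((n + 1, n))
    v[0] = w                                  # band 0 = simulated Pan
    g = np.zeros((n + 1, n + 1))
    for i in range(1, n + 1):
        e = np.zeros(n)
        e[i - 1] = 1.0                        # original MS band i
        for j in range(i):
            # g[i, j] = <B_i | B'_j> / <B'_j | B'_j>, both evaluated via C
            g[i, j] = (e @ C @ v[j]) / (v[j] @ C @ v[j])
        v[i] = e - sum(g[i, j] * v[j] for j in range(i))
    return g
```

With the GS matrix in hand, any tile can be forward transformed with equation (2) and back transformed after substituting the gain and bias adjusted high resolution Pan band.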
Compute optimal MS to Pan weights

Now we come to the second topic of interest, the computation of optimal MS to Pan weights. Optimal here means optimal for Gram-Schmidt pan-sharpening. As it turns out, we get this as a by-product of the above calculations. Part of the first column Gram-Schmidt transform coefficients was

$$\langle B_i \mid \mathrm{Pan}_{\mathrm{sim}} \rangle = (C\, w)_i \qquad (5)$$

We had used this formula to compute the left side from the (known) MS to Pan weights and the MS covariance matrix. All we do now is replace the simulated Pan band on the left by the low pass filtered and down sampled real high resolution Pan band, compute the left side, and solve for the weights w. The MS covariance matrix is a real symmetric square matrix with its size equal to the number of MS bands and usually directly invertible. So we get, with index ds for down sampled and C for the covariance matrix,

$$w = C^{-1}\, p_{\mathrm{ds}}, \qquad (p_{\mathrm{ds}})_i = \langle B_i \mid \mathrm{Pan}_{\mathrm{ds}} \rangle \qquad (6)$$

In case C should turn out to be singular (not directly invertible), this equation can also be solved in a least square fashion by Singular Value Decomposition (SVD). Decompose C into its SVD components, compute its pseudo inverse, and then solve (6).

This formula has three advantages over any other approach. First, because it only contains covariances, it is inherently independent of any kind of offset or bias from either Pan or MS bands. Second, it is fully consistent with the Gram-Schmidt framework. Third, it is computed solely from image statistics, with the MS covariance matrix usually already pre-computed and stored with the MS image.

Compute the MS to Pan weights per sensor or per scene?

There are still two major options left to choose from. The above formula (6) allows computing MS to Pan weights for each scene or image individually. Or, we can pick a representative scene for each sensor, compute weights for it, and use those weights for all other images coming from the same sensor. Both workflows have their advantages and disadvantages. For example, in scenes with only brown desert or blue ocean, in icy areas, or areas with full cloud cover, computing the weights again and again not only seems like a waste of CPU time, but also may lead to problems if the image has a constant band (e.g., all white). Working with representative scenes allows us to hand-select them, choose relevant content (such as urban areas), and make sure they don't have any alignment problem. Note that the Gram-Schmidt method is more robust to spatial misalignment of the bands than most other pan-sharpening methods because all transform coefficients are computed in the low MS resolution. Good practice is to have good sets of weights pre-computed and use them as a starting point, and only re-compute MS to Pan weights as a refinement for certain scenes as needed.

MS to Pan weights can be normalized to 1

The MS to Pan weights are relative weights. As we can see from the Gram-Schmidt transform equations (2) above, any scale factor applied to all weights cancels out. The resulting forward transformed MS bands are independent of such a scale factor. Finally, when we back transform them, as long as the substituted high resolution Pan band got gain adjusted to the simulated Pan band, such a scale factor cancels out there as well. Again, same as with the argument about the band means above, this is why the gain bias adjustment of the high resolution Pan band is important, before it can replace the simulated one.
All this gives us the liberty of normalizing the weights to 1, without changing the results.

How about negative weights?

Another comment to be made is about dealing with negative weights. Although only rarely observed so far, it can happen that a weight comes out slightly negative. For instance, we got -0.01 for the IR weight for the UltraCam sensor, which basically has no IR in the Pan band. Another case was -0.11 for the Blue weight for Ikonos, where the insufficient coverage of the Pan band's IR region by just one NIR band on the MS side caused some tilting of the weight vector. At this point the less sensitive reader may not bother about negative weights and just use them. However, negative weights are not possible when they are read off the sensor's spectral sensitivity curves as suggested by Laben and Brower (Laben and Brower, 2000). So they are hard to justify. For this reason we set negative weights back to 0 and renormalize the weights. This has worked fine for us so far. However, for the sake of completeness, we will now outline a more rigorous treatment. Let's rewrite our MS to Pan weight formula as a minimization problem with additional inequality constraints, using matrix vector notation (calling the MS-Pan covariance vector p), as

$$\min_{w} \; \lVert C\, w - p \rVert^2 \quad \text{subject to} \quad w \ge 0$$

Making use of the symmetry of C, we get

$$\min_{w} \; w^T C\, C\, w - 2\, p^T C\, w \quad \text{subject to} \quad w \ge 0$$

With the obvious substitutions $Q = 2\, C\, C$ and $c = -2\, C\, p$ this can be turned into

$$\min_{w} \; \tfrac{1}{2}\, w^T Q\, w + c^T w \quad \text{subject to} \quad w \ge 0$$

which is a classical quadratic programming problem. On how to solve it, see for instance (Wikipedia, 2013).

Pan-sharpen on the fly without computing statistics

Consider an emergency response scenario, for example, a wild fire or an earthquake. Satellite images need to be processed and analyzed as quickly as possible to find out where the most damage is and where the rescue personnel should be sent first. Besides RGB, other three band combinations involving an infrared or near infrared channel and two of the visible bands can be used, too. Such pseudo color images can highlight other features, which are not so easy to see in the regular RGB image. In an emergency or disaster situation, no one wants to wait until all the statistics including the covariance matrix have been computed, for all images involved. Instead, we use the MS to Pan weights pre-computed for the current sensor. In the absence of image statistics, we pan-sharpen only the area needed to satisfy the current request (e.g., the screen), but guaranteeing a minimal pixel area. That is, we enlarge the processed area as needed to avoid strange results that we might get with too few pixels. This way pan-sharpened imagery is available instantaneously, suitable for panning and zooming. On the downside, this way of processing is not globally consistent: exporting the entire image this way would lead to small tiling artefacts, as the Gram-Schmidt transformation matrix will somewhat vary from tile to tile.
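A minimal sketch of the weight computation of equation (6), including the SVD fallback and the simple clip-and-renormalize treatment of negative weights described above:

```python
import numpy as np

def optimal_ms_to_pan_weights(C: np.ndarray, p_ds: np.ndarray) -> np.ndarray:
    """Solve equation (6), C w = p_ds, where p_ds[i] is the covariance of MS
    band i with the low pass filtered, down sampled Pan band. Falls back to
    the SVD pseudo inverse if C is singular; negative weights are set back to
    0 and the result renormalized to sum to 1, as described above."""
    try:
        w = np.linalg.solve(C, p_ds)
    except np.linalg.LinAlgError:
        w = np.linalg.pinv(C) @ p_ds      # least squares solution via SVD
    w = np.clip(w, 0.0, None)             # zero out any slightly negative weights
    return w / w.sum()                    # weights are relative; normalize to 1
```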
EXAMPLE

Let us look at one example. We have picked a WorldView-2 satellite scene of London and look at the London tower bridge, see Fig. 1-4 below. Here we show: 1. The original MS image around the tower bridge. 2. The original Pan image. 3. The MS image up-sampled to full Pan resolution. 4. The result of the Gram-Schmidt pan-sharpening. Original imagery courtesy of Digital Globe. The images have been processed using Esri's ArcGIS for Desktop 10.2 and then cropped to fit in here. Compare the sharpness of the pan-sharpened result image to the sharpness of the original Pan image, and the colors of the pan-sharpened result image to the colors of the up-sampled MS image.

Figure 1: WorldView-2 image of the London tower bridge. MS image at source resolution. Courtesy of Digital Globe.
Figure 2: Pan image at source resolution. Courtesy of Digital Globe.
Figure 3: MS image up-sampled to Pan resolution using bilinear resampling.

SUMMARY AND CONCLUSION

In this paper, we have shown the following for the Gram-Schmidt pan-sharpen method:

1. If the MS covariance matrix and the MS to Pan weights are known, the Gram-Schmidt transform coefficients can be directly computed. This turns the Gram-Schmidt pan-sharpen method from a global into a local method, meaning that any small region of the image can be pan-sharpened on the fly without having to compute anything globally over the entire image first.

2. If the MS covariance matrix and the MS to Pan covariance vector are known, optimal MS to Pan weights can be directly computed. Then the Gram-Schmidt transform coefficients can be computed. So the entire Gram-Schmidt transform is fully determined by the MS and MS to Pan covariances.

3. Once the MS to Pan weights have been computed on a representative scene of a given sensor, other images from the same sensor can be pan-sharpened even without image statistics, but then only approximately, by treating every requested tile or area as a full image. Such a local or dynamic mode can be useful for instantaneously pan-sharpening freshly incoming images without having to compute full image statistics first.

We hope that this description or recipe makes it easier for others to go with the Gram-Schmidt pan-sharpen method and make optimal use of its potential.
Developing Optimal Reservoir Operation for Multiple and Multipurpose Reservoirs Using Mathematical Programming

Over the last decades, the increasing water demand has caused a number of problems, to which reservoir operation optimization has been suggested as one of the best solutions. In this research, a model based on the mixed integer linear programming (MILP) technique is developed for the systematic operation of the multiple reservoirs that cater for the different needs of the Tehran-Karaj plain. These reservoirs include the Laar, Latian, and Karaj dams. The system configuration was accomplished through the nodes and arcs of the network flow model approach and the implementation of system components, including sources, consumption, junctions, and the physical and hydraulic relationships between them. The following were performed via comprehensively developed software: system configuration, objective function and constraints formulation, linearization, determination of penalty values, and setting of priorities for each node and arc in the system. A comparison of the developed MILP model's results against the historical data shows 21.7% less overflow, 11.6% more outflow, and 15.9% more reservoir storage. The outcome of the MILP-based modeling indicates superior performance to the historical period.

Introduction

Water scarcity has become a serious concern in the process of urbanization and socioeconomic development, specifically in metropolises in which water resources are limited [1]. Therefore, in order to control and optimize surface water resource utilization, several large dam and water transfer projects have been created or are being designed and implemented in most countries worldwide, including Iran. Thus, it is crucial to pay particular attention to the projects' effective operation to obtain the utmost benefits and the satisfaction of all the goals set earlier.

Uneven periodic and spatial distribution of fresh water and an increasing population and social welfare have led to a rising need for water in Iran, principally in Tehran. The limited resources of surface water supplied from the reservoirs of Laar, Latian, and Karaj are not sufficient to cover the water needs of this metropolitan city. Therefore, in order to meet the demands, underground aquifers are used significantly to supplement the water needs. However, during arid years underground water usage increases drastically, which may have devastating effects on the quality and quantity of the aquifers. Thus, it is important to ensure that the ground water is not overextracted, and this can probably be done by optimizing the use of surface water. Proper management of surface water may indirectly influence the use of scarce ground water. Therefore, the aim of this study is to develop a model that can lead to an optimal operation of the current reservoirs for better water resource management.

Researchers have been continuously developing various methods of optimizing reservoir operation [2]. Optimization processes depend on system features, data availability, objective function and constraint types, the number of constraints and variables, and the physical relationships governing the system [3]. Consequently, one of the requirements in water resource management is to optimize the use of surface water resources.
Mixed integer linear programming (MILP) is an integer programming model that allows combining real, integer, and binary variables. It is considered an appropriate approach to formulate and solve problems [4]. Since the majority of integer programming models have a structure similar to linear programming, all the available tools can be applied. Robustness, ease of expansion, and a multiobjective nature are additional key features of this method [5]. In contrast, several other integer programming models are nonlinear (MINLP), and recently some researchers have applied that method for problem solving [6][7][8][9].

Since the Laar, Latian, and Karaj reservoir systems are large scale and contain many variables and nonlinear constraints over a long historical period, MILP is considered an adequate approach for optimizing the reservoir system in this specific area and for overcoming the problems that linear programming has with the reservoir characteristics.

In the present research, a classical MILP model was developed to operate the current reservoirs in the study area. The aim is to examine MILP's technical competence in deriving a public policy of reservoir operation.

Literature Review

Researchers have suggested optimization models to balance the conflicts among water users regarding the allocation of water resources [10]. Various methods of optimization and their applications have been comprehensively assessed by numerous researchers, such as Simonovic [11] and Labadie [12]; the issues addressed are water resource engineering and particularly reservoir operation. All techniques have advantages and disadvantages. Regarding the optimization of reservoir system modeling, linear programming (LP) is among the leading techniques, along with alternative methods. Significant advantages of the LP method are its ability to solve large-scale problems in the best possible way, its convergence towards the global optimum, the fact that an initial solution is not necessary, its easy sensitivity analysis, and its uncomplicated problem solving [13].

Linear programming guarantees reaching optimum answers, but all equations (including the objective function and constraints) must be linear [14]. In reality, in plenty of relevant cases in water resource management most functions are nonlinear. Moreover, formulating linear programming for reservoir operation is problematic in itself.

One of the drawbacks of an LP formulation in reservoir operation and management is that the reservoir storage continuity equation cannot explicitly control the spillway, so an optimized solution may assign some flow to the spillway even when the reservoir is not full [15]. This issue was also reported by Shih and ReVelle [16]. It is also possible that in some reservoirs, alongside the spillway outlet, the amount of seepage from the reservoir is considerable, and there is a nonlinear relationship between seepage and storage.

Among the extensions of LP we can point to binary linear programming, integer linear programming, and mixed integer linear programming, which are very useful for describing nonlinear and nonconvex statements in the objective function and the constraints [17]. Many researchers, like Needham et al. [18], Srinivasan et al. [19], Barros et al. [20], and Eslami and Qaderi [21], have used the MILP technique in water resource engineering.
Optimization models are usually used for hydroelectric power reservoir operations with various time scales [22]. The mixed integer programming method has been widely implemented for planning and midterm scheduling [23], short-term scheduling [24], and real-time scheduling [25]. In some research works, this technique is used to formulate and estimate water resource management planning under hydrological uncertainties [26,27].

Different integer programming techniques, like the cutting plane and branch and bound methods, can be applied to solve a range of problems in the field of water resources [28]. But to practically solve difficult and large-scale optimization problems, software packages are used due to their high speed and accuracy [29]. ILOG CPLEX is one of the software packages with the capacity to solve integer programming as well as many linear or nonlinear programming problems. It also has the ability to interact with other models, such as GAMS, AMPL, and Microsoft Visual Studio [30]. Researchers like Conejo et al. [31], García-González et al. [32], and Baslis and Bakirtzis [23] have utilized these packages to solve MILP and mixed integer nonlinear programming (MINLP) problems.

Moraga et al. employed a MILP approximation to solve a nonlinear model for planning midterm water transfer between two reservoirs [33]. Tu et al. [34] developed a mixed integer nonlinear programming (MINLP) model to optimize hedging rules for reservoir operations. In other research works, El Mouatasim [35] used Boolean integer nonlinear programming (BINLP) for optimum pump performance in reservoir operation. Noory et al. [36] also successfully used the MILP model for optimum irrigation allocation and for solving some multicrop planning issues.

The system related to reservoirs can be portrayed as a network of nodes and arcs. Nodes represent water storage points, diversion places, and intersections, while arcs represent the release of reservoirs, channel pipelines, and evaporation or other losses. This kind of optimization is based on the network flow model, as used by various researchers in the CalSim, Modsim, and OASIS software to model large systems [12,37,38].
Case Study

The study area covers the plain and city of Tehran, which are located on the southern slopes of the Alborz Mountains, and includes the basins and dam reservoirs of Karaj, Laar, and Latian. Except for the Laar dam basin, the entire study area lies on the southern slopes of the Alborz Mountains. Several dams have been constructed on the rivers around Tehran to control the surface water flow and to supply a part of the urban drinking water and agricultural requirements. The three most important are the Karaj, Laar, and Latian dams. The Taleghan and Mamlu dams were built recently, but due to the lack of historical data they are not discussed here. Figure 1 shows the relationship between water resources (surface and ground water) and different uses in the Tehran plain. In this study, only the surface water resources in operation are taken into account, namely the Laar, Karaj, and Latian dam reservoirs. Table 1 lists the reservoirs' characteristics.

Tehran's drinking water supplies are provided by ground water and surface water sources. Ample resources are allocated yearly to meet drinking water, agriculture, industry, and green space needs. The extraction of ground water resources increases greatly throughout arid years, which has harmful effects on the quality and quantity of the ground water resources. Since 1928, several plans have been implemented to supply the water needs of Tehran. Unfortunately, over time, with the increasing population and the higher levels of various consumer needs, the implemented plans were unable to meet consumer demand. Several projects are currently under study and will be implemented to provide greater amounts of water to the city of Tehran from other areas.

Mathematical Formulation of the Problem

The mathematical formulation developed in this study is based on the MILP approach. We have attempted to introduce a set of constraints involving the correct variables such that the spillway can be controlled and the nonlinear equations can be presented as sets of linear equations. Mixed integer linear programming (MILP) is applied to perform this task.
Mixed Integer Linear Programming. The complexity of computing an integer programming problem depends on two factors: the number of integer variables and the problem structure. Presently, the most popular algorithm for solving integer programming problems is the branch and bound technique, whose key feature is the implicit enumeration of candidate solutions. Since the number of candidate solutions of an integer programming problem with integer bounds is countable, it seems natural to apply an enumeration method to obtain the optimal solution.

Suppose that the relationship between the reservoir's release and storage is simulated as the fitted curve of Figure 2. This curve must be linearized by the piecewise method. In Figure 2, a_0 is the initial storage, a_1 is the beginning of the second part, a_2 is the beginning of the third part, and a_3 is the endpoint of the third part; m_1, m_2, and m_3 are the slopes of the lines in sections 1, 2, and 3, respectively, and h_1, h_2, and h_3 are the corresponding intercepts. Indeed, the release-storage equation is a curve that is divided into several lines after the piecewise approximation, and each line has a separate equation:

R = m_i S + h_i,

where R is the release of the reservoir, S is the reservoir storage, m_i is the slope of the line, and h_i is the intercept. The release-storage equation comprises several such lines. The relationship must be formulated so that the storage can be calculated at any moment according to the release. At any time in the release-storage relation, the storage of the reservoir must lie in exactly one section. Thus, by using integer variables δ_1, δ_2, δ_3 and per-section storage variables s_1, s_2, s_3, the above equation becomes

R = Σ_{i=1..3} (m_i s_i + h_i δ_i),

where h_1, h_2, and h_3 represent the intercepts in parts 1, 2, and 3, and δ_1, δ_2, and δ_3 are independent integer variables that can only take the values zero or one. The total reservoir storage is equal to the sum of the storage values in each part of the reservoir:

S = s_1 + s_2 + s_3.

Also, the reservoir storage should be located between the upper and lower bounds in every part:

a_{i-1} δ_i <= s_i <= a_i δ_i,   i = 1, 2, 3.

The integer variables can only take the values zero and one, and their sum must always be equal to one:

δ_1 + δ_2 + δ_3 = 1.

According to the storage, one of the integer variables has the value one and the others zero. So, the final constraint set consists of the segment equations, the per-part bounds, and the sum conditions above (a code sketch follows at the end of this section). The number of lines used to approximate the curve is a constant chosen by the modeler; the more finely the curve is approximated (the more lines), the more integer variables are needed and the more complicated the problem becomes.

System Configuration. As previously mentioned, the network flow optimization model is used to implement the multiobjective optimization of the reservoirs in the Tehran plain. The system topology was executed based on this model, after which the whole system was solved based on MILP.

In this method, each system is shown as a combination of nodes and arcs. Nodes represent sources, reservoirs, junctions, demands, and ground water, and arcs represent the links between nodes, potentially including channels (Figure 3). Some of the main justifications for using the network flow model for system configuration are the ease of expressing river basin systems as schematic figures, the capability to use linear programming (and MILP), the applicability to large-scale problems, and the effortless ability to alter the system. In the network flow model, system components are implemented using nodes and arcs. Most analyses of reservoir systems are conducted via a type of network flow formulation called "Minimum Network Cost."
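Returning to the piecewise linearization above, here is a minimal sketch in Python using the PuLP modeling library. The breakpoints, slopes, and intercepts are made-up illustrative values forming a continuous three-piece curve, not the calibrated curves of the Tehran reservoirs.

```python
from pulp import LpProblem, LpVariable, LpMinimize, lpSum

a = [0.0, 100.0, 250.0, 400.0]     # storage breakpoints a0..a3 (MCM, made up)
m = [0.20, 0.35, 0.50]             # segment slopes m1..m3
h = [5.0, -10.0, -47.5]            # segment intercepts h1..h3

prob = LpProblem("piecewise_release", LpMinimize)
d = [LpVariable(f"d{i+1}", cat="Binary") for i in range(3)]   # delta_i
s = [LpVariable(f"s{i+1}", lowBound=0) for i in range(3)]     # per-part storage
S = LpVariable("S", lowBound=0)    # total storage
R = LpVariable("R", lowBound=0)    # release

prob += lpSum(d) == 1                          # exactly one active segment
for i in range(3):
    prob += s[i] >= a[i] * d[i]                # a_{i-1} * delta_i <= s_i
    prob += s[i] <= a[i + 1] * d[i]            # s_i <= a_i * delta_i
prob += S == lpSum(s)                          # S = s1 + s2 + s3
prob += R == lpSum(m[i] * s[i] + h[i] * d[i] for i in range(3))
# (No objective is needed for this constraint-only illustration.)
```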
Each node and arc has a minimum and maximum boundary and a value. In the network flow model, all nodes and channels enter into the objective function with a coefficient that reflects the priorities.

A sample water supply system consisting of nodes and arcs is presented in Figure 3, which gives an overview of the dams, refineries, tunnels, and channels of Tehran. Karaj dam is a multipurpose reservoir on the Karaj river. The Bilaghan intake is located 23 km from Karaj dam and receives part of the water requirements from Karaj dam via the river and from Taleghan dam via pipes. Part of this water is allocated to supply Tehran by pipeline, and the other part is allocated to about 50 thousand hectares of agricultural land.

Laar dam is another important source of drinking water for Tehran. Laar dam also supplies water for agricultural irrigation in Mazandaran Province. The Laar tunnel transfers the outflow of Laar dam, approximately 140 million cubic meters, to the Latian reservoir.

Latian dam is located on the Jajrood river of Tehran. After primary physical treatment, the Latian release is transferred by two steel and concrete pipes to the first and second refineries. Thus, it supplies about 30% of the drinking water of Tehran. The Mamlu and Karaj channels join at the second refinery, and together they supply drinking water to Tehran.

Constraints. The constraints are defined as continuity at each node, continuity at each reservoir, elevation-storage relations, and the amount of shortage in drinking water, industry, and agriculture. The constraint on the reservoir can be expressed as follows:

S_{t+1} = S_t + I_t - R_t.

This equation represents the reservoir storage balance; S_{t+1} is the reservoir storage at the end of period t, S_t is the reservoir storage at the beginning of period t, I_t is the total input in period t, and R_t is the total release in period t.

The second constraint regards the upper and lower boundaries of the reservoir storage, which are defined to achieve suitable storage for flood control, dead storage, and so forth:

S_{t,min} <= S_t <= S_{t,max}   for t = 1, ..., T,

where S_t is the reservoir storage in period t, and S_{t,max} and S_{t,min} are the maximum and minimum reservoir storage in the same period. The third constraint is the supply of a minimum downstream flow to maintain water quality, wildlife, and the environment:

R_{t,min} <= R_t <= R_{t,max},

where R_t is the release of the reservoir in period t, and R_{t,max} and R_{t,min} are the maximum and minimum release of the reservoir in the same period. In reservoirs that produce energy, the area-storage and elevation-storage relationships can be used to obtain the effective head.

Water resource management depends to a great extent on supplying the water needs. The equation for each demand node, for drinking water, industry, agriculture, and so forth, is

X_t + Sh_t = D_t,

where X_t is a variable related to the input of a demand node and Sh_t is a variable related to the rate of shortage in the same node, whose range must be defined according to the type of user needs; D_t is the amount of water needed. In addition to supplying the needs, the user must determine a penalty rate for the shortage at each demand node. The penalty amount depends on the type and quantity of the requirements. Penalties set for a shortage of drinking water can be higher than those set for a water shortage for agricultural or industrial purposes. The penalty may vary according to the importance of each consumer.
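As a small illustration of how these per-period constraints look in an MILP modeling language, here is a self-contained toy in Python/PuLP; the horizon, inflows, demands, and bounds are made-up numbers, not the study's data.

```python
from pulp import LpProblem, LpVariable, LpMinimize, lpSum

T = 3                                  # a 3-period toy horizon
inflow = [80.0, 50.0, 30.0]            # I_t, total input per period (made up)
demand = [60.0, 60.0, 60.0]            # D_t at the downstream demand node
Smin, Smax = 20.0, 200.0               # dead-storage / flood-control bounds
Rmin, Rmax = 10.0, 120.0               # environmental / outlet-capacity bounds

prob = LpProblem("reservoir_toy", LpMinimize)
S = [LpVariable(f"S{t}", lowBound=Smin, upBound=Smax) for t in range(T + 1)]
R = [LpVariable(f"R{t}", lowBound=Rmin, upBound=Rmax) for t in range(T)]
sh = [LpVariable(f"sh{t}", lowBound=0) for t in range(T)]   # shortage Sh_t

prob += S[0] == 100.0                               # initial storage
for t in range(T):
    prob += S[t + 1] == S[t] + inflow[t] - R[t]     # S_{t+1} = S_t + I_t - R_t
    prob += R[t] + sh[t] >= demand[t]               # supply + shortage covers D_t
```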
The connections between nodes are made by arcs. An arc represents the flow between two nodes. Different types of arcs are designed in this model, to be applied according to the usage and type of connection. The constraint considered for each arc is

0 <= Q_{l,t} <= Q_{l,max},

where Q_{l,max} is the maximum flow through the arc (for example, through the spillway), and Q_{l,t} is the actual amount of flow passing through it in period t.

Objective Function. In this developed model the objective function is defined as

Minimize Σ_{l in L} c_l x_l,

in which L is the set of all nodes and arcs that have a defined priority or penalty factor, c_l is the priority or penalty coefficient, and x_l stands for the flow or shortage in each node and arc. The main objective in such a problem is to minimize a series of designed terms, so penalties should be fitted to the type of consumer and the amount of consumption with a positive factor, while priorities should be defined with a negative factor. The greater the penalty on a constraint, the less the model violates that constraint. Also, the greater the value of a priority, the harder the model tries to satisfy the given constraint. Therefore, the user should appropriately define the numeric values of these priorities and penalties according to the conditions and the type and amount of consumption. The objective function is given as

Minimize OBJ = Σ_{t=1..T} Σ_{r=1..N} ( c_1 CH_{r,t} + c_2 HO_{r,t} + c_3 PL_{r,t} + c_4 ST_{r,t} + c_5 US_{r,t} ),

where N is the number of reservoirs, T is the whole optimization period, and the c_i are the penalty coefficients or priorities. Clearly, the objective is to minimize the sum of a series of expressions covering the controlled channels (CH), the shortages of agricultural needs (HO), the minimization of overflow (PL), the maximization of reservoir storage (ST), and the minimization of urban demand shortage (US). c_1 is the priority for the use of controlled channels, c_2 is the priority for agricultural purposes, c_3 is the penalty factor for overflow, c_4 is the priority for maintaining reservoir storage, and c_5 is the penalty coefficient for urban water supply shortages. The coefficients of these terms are determined by the designer. Penalties are greater where the deficiencies matter more. The optimal release of each reservoir should be allocated to the various uses; among these, drinking water is the most important consumer in the Tehran system. The penalties and priorities have been set so that Tehran's drinking water need has the first priority. Therefore, the primary purpose of the operation is to minimize the water shortage in the study area as a way of increasing the availability of water. The next priority of the project is to keep a high percentage of the water supply available at any time, so the overflow rate is very important in assessing operating performance. Priorities are intended to minimize the amount of overflow, which increases hydropower production and, as a result, maintains the reservoir storage.

The MILP model presented in this paper is easily extensible and very flexible, because the penalty coefficients and objective priorities can be modified by the user. For example, we can set higher penalties on overflow than on the other objectives in order to make minimizing overflow the primary objective. The inputs and outputs of the model are presented in spreadsheet form, which is easily transferable to other spreadsheets.
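Continuing the PuLP toy from the constraints section above (same prob, S, R, sh, and T), the penalty-weighted objective can be attached and solved as below; the coefficients are illustrative stand-ins for the designer-chosen penalties and priorities.

```python
# Positive factor = penalty on demand shortage; negative factor = priority
# on keeping reservoir storage high, as described in the text.
c_short = 50.0
c_store = -1.0
prob += lpSum(c_short * sh[t] for t in range(T)) \
      + lpSum(c_store * S[t] for t in range(1, T + 1))
prob.solve()
print("releases:", [R[t].varValue for t in range(T)])
print("shortages:", [sh[t].varValue for t in range(T)])
```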
Presently, a mixed integer linear programming model has been developed which can be used for water resource system optimization over a desired range. This model is prepared based on the formulas given in (8) to (13) and on the linearized counterparts of the nonlinear equations (1) to (7) of this problem. The period used to solve this problem is 20 years; the number of integer variables is 1440; the number of variables is on average 60960, and the number of constraints is 47040.

Results

In this study, we used the optimization model to simulate the operation of a multiple, multiobjective reservoir system without specifying operating rules for the reservoirs. This was done by determining penalty weighting factors and priorities in the objective function. Therefore, we did not use the objective function to maximize economic efficiency; we used it to obtain a better simulation of the reservoirs' performance compared to the historical data. In this study, the objective function is the governing equation in the simulation of the multiple, multiobjective reservoir operation. This equation consists of several decision variables, including the amount of overflow, the reservoir storage, and the amount of water needed for drinking, agriculture, and industry in each period for each reservoir. For the water resource system mode, the assumptions and criteria considered are a drinking water shortage of 5%, an agricultural water shortage of between 10% and 30%, and return flows from drinking water consumption and agricultural water consumption of 60% and 25%, respectively. To maintain the reservoir water balance, the water surface elevations at the beginning and end of the optimization period were considered equal to the historical values. The optimization is done monthly, and a historical period of 20 years, from 1983 to 2003, has been used for the model. The calculated optimal release from the MILP model was compared against observations of the Laar, Latian, and Karaj reservoirs. The optimal releases from Latian dam, Laar dam, and Karaj dam are provided in Figures 4, 5, and 6, respectively.

There is fairly good agreement between the MILP optimal release and the observed amounts for the Karaj and Latian dams. The greatest differences for Karaj dam occur at the maximum peaks. Also, for Latian dam, a lag between the optimal and actual values can be seen after 1995, for which periodic droughts may be the reason. A large change in Mazandaran's agricultural needs is also one of the main reasons for the difference between the historical and optimal values for Laar dam.

To supply drinking water for Tehran, an annual average of 336 million cubic meters (MCM) (38.89%) is taken from Karaj dam, and 288 MCM (33.33%) is taken from the Latian and Laar dams. The remaining needed water, that is, 240 MCM (27.78%), is provided by ground water resources (wells). The underground source is substantial in meeting the demand for Tehran's drinking water. These values are shown in Table 2. In comparison, Table 3 gives the calculated amounts of water supplied by Karaj, Latian and Laar, and the underground source.
The driest year in the record was 2001, when on average a total of about 216 MCM of water was drawn from Karaj dam and 224 MCM from the Laar and Latian dams, while 390 MCM was sourced from the wells to supply the drinking water of Tehran, as shown in Table 3. In comparison, according to the statistics provided by the Tehran Regional Water Company, while the amounts withdrawn from Karaj and from Laar and Latian were 214 MCM and 224 MCM, respectively, the amount withdrawn from the wells was over 440 MCM. This amount differs significantly from the calculated one, which indicates over-consumption of the ground water resources.

Figure 7 shows the comparison between the historical and calculated amounts of water abstracted from the Karaj dam, the Laar and Latian dams, and the underground source. It can be seen from this figure that the observed amount of water abstracted from the wells is always higher than the calculated amount, whereas this is not the case for the Karaj dam and the Laar and Latian dams. Figure 8 shows the amount of water abstracted from the underground source by the historical and calculated (MILP) methods. Tables 4-6 show the comparison of the results obtained for the reservoirs' performance, reservoir storage, and overflow rates.

Table 4 shows that the MILP model yields more reservoir storage in all three reservoirs compared to the historical period. This increase in storage may lead to growth in hydropower generation. Table 5 demonstrates that the MILP model increases the outflow for each dam, with the maximum increase of 24.64% at Karaj dam. The long-term average changes in overflow can be seen in Table 6; the results indicate that for Laar dam there is no overflow in either the historical period or the MILP method. The negative sign in the results means a reduction in overflow; in total, a reduction of around 21.7% over all three dams is calculated.

Discussion and Conclusions

The historical data have shown that the Karaj, Latian, and Laar dams cannot supply the total water need of Tehran; thus the underground water sources are being used to cover the remaining demand. However, overusing the underground water has many negative quantitative and qualitative effects. In order to supply the water needs and minimize the underground water consumption, a model based on the MILP technique has been developed to achieve the optimal operation of the mentioned reservoirs. In other words, by optimal operation of the surface water we can achieve the minimum underground water utilization. The technique was developed based on the specifications of the existing reservoirs in the case study, data availability, and the compatibility of the method with the type of operation problem. The reservoir system's operating rules were prepared using weighting factors for penalties and priorities in the objective function. In this study, the maximum emphasis in the weighting factors is on the shortage of drinking water, the overflow of the reservoirs, and the maintenance of the reservoir storage level. The results obtained from the MILP model are more appropriate than the historical operation results, and they show that the amount of water taken from the aquifer is less in the MILP calculation. The results of the developed MILP model weighed against the historical data show 21.7% less overflow, 11.6% more outflow, and 15.9% more reservoir storage. Having more reservoir storage leads to an improvement in hydropower efficiency and an increase in the water stored in the reservoirs.

Figure 1: Schematic drawing of the relationship between resources and expenditures for the Tehran plain.
Figure 2: The relationship between the release and storage of a dam's reservoir.
Figure 3: A schematic of the prepared water system (MILP model).
Figure 4: The optimal release and observations of Latian dam.
Figure 5: The optimal release and observations of Laar dam.
Figure 6: The optimal release and observations of Karaj dam.
Figure 7: The amount of water supplied from the dams of Karaj, Latian, and Laar and the Tehran aquifer (in MCM).
Figure 8: Comparison between the discharge taken from the aquifer by the observation method and by the calculation method (in MCM).
Table 1: Reservoir characteristics in the study area.
Table 2: The observed average annual amount of water abstracted from the dams of Karaj, Latian, and Laar and from wells (in MCM).
Table 3: The calculated average annual amount of water abstracted from the dams of Karaj, Latian, and Laar and from wells (in MCM).
Table 4: Comparison between the average reservoir storage from historical data and from the MILP method for the Karaj, Latian, and Laar dams (in MCM).
Table 5: Comparison between the average reservoir outflow from historical data and from the MILP method for the Karaj, Latian, and Laar dams (in cms).
Table 6: Comparison between the average reservoir overflow from historical data and from the MILP method for the Karaj, Latian, and Laar dams (in MCM).
Inference of Gene Regulatory Network Based on Local Bayesian Networks

The inference of gene regulatory networks (GRNs) from expression data can mine the direct regulations among genes and gain deep insights into biological processes at a network level. During past decades, numerous computational approaches have been introduced for inferring the GRNs. However, many of them still suffer from various problems; e.g., Bayesian network (BN) methods cannot handle large-scale networks due to their high computational complexity, while information theory-based methods cannot identify the directions of regulatory interactions and also suffer from false positive/negative problems. To overcome these limitations, in this work we present a novel algorithm, namely local Bayesian network (LBN), to infer GRNs from gene expression data by using a network decomposition strategy and a false-positive edge elimination scheme. Specifically, the LBN algorithm first uses conditional mutual information (CMI) to construct an initial network or GRN, which is decomposed into a number of local networks or GRNs. Then, the BN method is employed to generate a series of local BNs by selecting the k-nearest neighbors of each gene as its candidate regulatory genes, which significantly reduces the exponential search space of all possible GRN structures. Integrating these local BNs and performing CMI forms a tentative network or GRN, which reduces redundant regulations in the GRN and thus alleviates the false positive problem. The final network or GRN can be obtained by iteratively performing CMI and local BN on the tentative network. In the iterative process, the false or redundant regulations are gradually removed. When tested on the benchmark GRN datasets from the DREAM challenge as well as the SOS DNA repair network in E. coli, our results suggest that LBN outperforms other state-of-the-art methods (ARACNE, GENIE3 and NARROMI) significantly, with more accurate and robust performance. In particular, the decomposition strategy with local Bayesian networks not only effectively reduces the computational cost of BN, due to the much smaller sizes of the local GRNs, but also identifies the directions of the regulations.

Introduction

Gene regulatory networks (GRNs), which explicitly characterize regulatory processes in cells, are typically modeled by graphs, in which the nodes represent the genes and the edges reflect the regulatory or interaction relationships between genes [1]. Accurately inferring GRNs is of great importance and an essential task for understanding biological activity from signal transduction to metabolic dynamics, prioritizing potential drug targets for various diseases, devising effective therapeutics, and discovering novel pathways [2][3][4]. Identifying GRNs with experimental methods is usually time-consuming, tedious and expensive, and sometimes lacks reproducibility. In addition, recent high-throughput sequencing technologies have yielded a mass of gene expression data [5], which provides an opportunity for understanding the underlying regulatory mechanisms based on the data. Therefore, numerous computational approaches have been developed to infer the GRNs [3]. Such computational methods can be roughly categorized into co-expression based approaches [6], supervised learning-based approaches [7][8][9][10][11][12][13], model-based approaches [3,[14][15][16][17][18][19][20][21][22][23][24][25][26][27][28][29][30], and information theory-based approaches [31][32][33][34][35][36][37][38][39][40].
The co-expression based methods have low computational complexity, but they cannot infer direct associations or model system dynamics. The supervised learning-based methods make use of known regulations to infer GRNs from genome-wide data, such as SEREND [8], GENIES [9] and SIRENE [11], but require additional information on the regulatory interactions to train a model. With the inference engine guided by the prior information of the known regulations, this kind of method can achieve higher precision and outperform many other methods [46]. However, the insufficient information in the labeled or known gene datasets limits the application of this kind of approach [47,48]. On the other hand, model-based methods can be further classified into ordinary differential equations [14,15], multiple linear regression [18,19], linear programming [20,21], Boolean networks [17,22], and probabilistic graphical models, including the Bayesian network (BN) [3,16,23,49] and the graphical Gaussian model [24,25]. Overall, these model-based methods can provide us with a deeper understanding of the system's behavior at a network level and can also infer the directions of the regulations in the network. However, these methods are parameter-dependent and time-consuming, which makes it difficult for them to deal with large-scale networks. For example, inferring GRNs based on probabilistic graphical models requires searching for the optimal graph among all possible graphs over all genes in the network. Due to the NP-hard nature [50] of learning a static Bayesian network structure, two common alternative techniques, i.e., heuristic-based search [26] and the maximum-number-of-parents (maxP) technique [27,28], were developed to search approximately for sub-optimal graphs. Yet, the heuristic search approaches still have high computational complexity and do not guarantee a global optimum. Although the maxP technique can partly reduce the computational complexity by limiting the maximum number of parents of each gene to q, it still needs to search over all genes to infer the parents of one gene. Thus, maxP techniques have a polynomial complexity of O(n^q) for an n-node GRN [28], which is still unsuitable for large-scale GRNs. To reconstruct dynamic Bayesian networks (DBNs), two structure learning algorithms, BNFinder [29] and globalMIT [30], have been proposed to infer GRNs, but these algorithms are currently suitable only for small networks, since they also require searching all combinations of regulators for a gene. Recently, information theory-based methods have been widely used for reconstructing GRNs, such as mutual information (MI) [33,[34][35][36][42][43][44] and conditional mutual information (CMI) [31,38,45]. These approaches are assumption-free methods, measuring unknown, non-linear and complex associations rather than just linear correlations between genes [38,40], and avoiding the problem of intense computation of parameters. Thus, they can be used to infer large-scale GRNs. However, MI-based methods overestimate the regulation relationships to some extent and fail to distinguish indirect regulators from direct ones, thereby leading to possible false positives [38,51,52]. Although CMI-based methods are able to separate the direct regulations from the indirect ones, they cannot derive the directions of the regulations in the network and also tend to underestimate the regulation strength in some cases [32,37,45].
To overcome these limitations of BN, MI and CMI, in this paper we propose a novel local Bayesian network (LBN) algorithm to reconstruct GRNs from gene expression data by making use of their respective advantages, i.e., inferring a directed network with fewer false-positive edges and with high computational efficiency. The LBN algorithm mainly consists of five distinct elements, shown in Fig 1: i) CMI is first employed to construct an initial network, i.e., G_MI, which is then decomposed into a series of smaller sub-networks, i.e., local networks or GRNs, according to the nearest-neighbor relationships among genes, using the k-nearest neighbor (kNN) method. ii) For these local networks or GRNs, the BN method is used to identify their regulatory relationships with directions, generating a series of local BNs, which are integrated into a candidate GRN G_B. iii) CMI is applied to remove the false positive edges in G_B, forming a tentative GRN G_C. iv) According to the kNN relationships among genes in the network, the tentative GRN G_C is further decomposed into a series of smaller sub-networks or local networks, in which the BN method is implemented to delete false regulatory relationships. v) The final network or GRN G_F is inferred by iteratively performing BN and CMI with kNN decomposition until the topological structure of the tentative network G_C no longer changes. On the benchmark GRN datasets from the DREAM challenge [53,54] and the widely used SOS DNA repair network in Escherichia coli [55,56], the simulation results confirmed the effectiveness of our LBN algorithm, which is superior to three other state-of-the-art approaches, i.e., ARACNE [36], GENIE3 [13] and NARROMI [20].

Datasets and evaluation metrics

Benchmark network datasets play an important role in assessing the effectiveness of algorithms for reconstructing GRNs. Many researchers have used the simulated datasets derived from the DREAM challenge [53] to evaluate their algorithms. The DREAM challenge provides a series of gene expression datasets with noise and gold benchmark networks, which were selected from source networks of real species. In this work, we utilized three simulation datasets as well as two real gene expression datasets to validate our method. The three synthetic datasets of sizes 10, 50 and 100 (marked as dataset10, dataset50 and dataset100, respectively), obtained from the DREAM3 challenge, contain 10, 50 and 100 genes with 10, 77 and 125 edges, and come from 10, 50 and 100 samples, respectively. The real gene expression dataset is the well-known SOS DNA repair network, with experimental data in E. coli [55,56], which includes 9 genes with 24 edges. Another large-scale gene expression dataset, from the E. coli data bank [57], corresponds to an experimentally verified network [58] which includes 1418 genes with 2675 edges. In order to validate our algorithm, the true positive rate (TPR), false positive rate (FPR), false discovery rate (FDR), positive predictive value (PPV), overall accuracy (ACC), F-score measure (F) and Matthews correlation coefficient (MCC) are used to evaluate the performance of our LBN and the other algorithms.
(Fig 1 caption, for reference: (1) process the data, (2) construct the initial network (a large-scale network) by CMI or MI, (3) decompose the network into local networks (a number of small-scale networks) by kNN with k = 1, (4) perform BN to obtain local BNs (a number of small-scale networks), (5) integrate the local BNs into a candidate network (a large-scale network), (6) perform CMI to obtain the tentative network (a large-scale network). Iteratively performing BN and CMI with kNN (k = 2) until the G_C topological structure becomes stable, the final network or GRN can be inferred. The solid lines denote the true regulations and the dashed lines denote redundant correlations between two genes.)

These metrics are defined as follows:

TPR = TP / (TP + FN),
FPR = FP / (FP + TN),
PPV = TP / (TP + FP),
FDR = FP / (TP + FP),
ACC = (TP + TN) / (TP + TN + FP + FN),
F = 2 · PPV · TPR / (PPV + TPR),
MCC = (TP · TN - FP · FN) / sqrt((TP + FP)(TP + FN)(TN + FP)(TN + FN)),

where TP is the number of edges that are correctly identified, TN is the number of non-link edges correctly identified, FP is the number of edges that are incorrectly identified, and FN is the number of non-link edges incorrectly identified. By setting different CMI threshold values varying from large to small on a fixed scale, we obtained a series of TP, FP, TN and FN values, from which we calculated the corresponding TPR and FPR values used to plot the receiver operating characteristic (ROC) curves. The area under the ROC curve (AUC) is calculated as another metric for comparing different algorithms.

Evaluating simulation datasets

Three synthetic datasets (dataset10, dataset50 and dataset100) from the DREAM3 challenge were used to assess the LBN algorithm, and three state-of-the-art methods, GENIE3 [13], ARACNE [36], and NARROMI [20], were chosen to evaluate the performance of LBN against those methods. GENIE3 [13] decomposes the problem of inferring a regulatory network of p genes into p different feature selection problems by using the Random Forest and Extra-Trees algorithms. ARACNE [36] utilizes the data processing inequality to eliminate the majority of the indirect interactions inferred by co-expression methods; it cannot recover all transcriptional interactions in a GRN but rather recovers some transcriptional interactions with high confidence. NARROMI [20] combines the information theory-based CMI and the path-consistency algorithm (PCA) to improve the accuracy of GRN inference. In NARROMI, MI is first used to remove the noisy regulations with low pairwise correlations, and then CMI is utilized to exclude the redundant regulations from indirect regulators iteratively by PCA, from a lower order to a higher order. For all the methods in comparison, the parameters were set to their default values. We used the Z-statistic test [59,60,38] at the significance level of P-value = 0.05 to select suitable thresholds for the parameters α and β, giving approximately α = 0.03, 0.1 and 0.1 as the CMI threshold values for constructing the gene correlation network G_MI for dataset10, dataset50 and dataset100, respectively. In the same way, we also selected β = 0.03, 0.1 and 0.1 as the CMI threshold values for removing the false positive edges for dataset10, dataset50 and dataset100, respectively. The results in Table 1 show that our LBN method has the highest PPV, ACC, MCC, F and AUC scores overall, except that the AUC of ARACNE on dataset100 is a little higher than that of our LBN method. The results on the three datasets with different network sizes, selected from real and experimentally verified networks in yeast genomes, also demonstrate the effectiveness of our LBN in terms of higher and more robust performance in inferring GRNs.
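For concreteness, these counts and metrics can be computed directly from the gold and predicted adjacency matrices. A small sketch in Python (our own helper, not from the paper's code):

```python
import numpy as np

def edge_metrics(gold, pred):
    """TPR, FPR, PPV, FDR, ACC, F and MCC from two square adjacency
    matrices (1 = edge, 0 = no edge); the diagonal is ignored."""
    off = ~np.eye(gold.shape[0], dtype=bool)         # off-diagonal mask
    g, p = gold[off].astype(bool), pred[off].astype(bool)
    TP = np.sum(g & p);  TN = np.sum(~g & ~p)
    FP = np.sum(~g & p); FN = np.sum(g & ~p)
    TPR = TP / (TP + FN); FPR = FP / (FP + TN)
    PPV = TP / (TP + FP); FDR = FP / (TP + FP)
    ACC = (TP + TN) / (TP + TN + FP + FN)
    F = 2 * PPV * TPR / (PPV + TPR)
    MCC = (TP * TN - FP * FN) / np.sqrt(
        float((TP + FP) * (TP + FN) * (TN + FP) * (TN + FN)))
    return dict(TPR=TPR, FPR=FPR, PPV=PPV, FDR=FDR, ACC=ACC, F=F, MCC=MCC)
```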
In addition, there are a number of methods for inferring GRNs based on the Markov blanket, such as Grow-Shrink [61], IAMB [62] and Fast-IAMB [63]. Both the Grow-Shrink and IAMB methods first identify the Markov blanket of each variable (or node) by iteratively executing a series of conditional independence and dependence tests, and then connect the nodes in a consistent way to infer the Bayesian network. However, in the process of discovering the Markov blanket of a target variable T, the Grow-Shrink and IAMB methods need to search almost all other variables, which increases the algorithms' time complexity. Although the computational complexity of these two methods, O(n^2), is on the same scale as that of our method and is lower than that of the BN method, O(2^n), numerical computations show that our method performs better than they do on the simulation dataset and on real datasets or large-scale GRNs. Specifically, in order to assess the effectiveness of our LBN method, we compared LBN with the Grow-Shrink and IAMB methods on dataset10. The comparative results of the three methods are shown in Table 2, from which we can see that the computational time of our LBN method is considerably lower than that of either the Grow-Shrink method or the IAMB method. In addition, as shown in Table 2, the accuracy of our GRN inference is also high.

Inferring the SOS network and gene regulatory interactions in E. coli

In order to further evaluate the performance of our LBN algorithm, we also applied our LBN method and five other methods, i.e., GENIE3, ARACNE, NARROMI, Grow-Shrink and IAMB, to the well-known SOS DNA repair network, an experimentally verified network in E. coli, with real gene expression data [55,56]. The SOS network (Fig 2A) includes two mediators of the SOS response (lexA and recA), four other regulatory genes (ssb, recF, dinI and umuDC) involved in the SOS response, and three sigma factor genes (rpoD, rpoH and rpoS) whose regulations play important roles in the SOS response. Choosing the thresholds α = β = 0.01, the comparison results of LBN with GENIE3, ARACNE, NARROMI, Grow-Shrink and IAMB are shown in Table 3, in which we can see that the performance of our LBN method is also superior to that of GENIE3, ARACNE, NARROMI, Grow-Shrink and IAMB. For example, the ACC of LBN is 73.6%, which is 4.2%, 25%, 15.3%, 9.7% and 2.8% higher than that of GENIE3, ARACNE, NARROMI, Grow-Shrink and IAMB, respectively, and the AUC of LBN reaches 0.816, which is 0.132, 0.077, 0.025, 0.058 and 0.007 higher than that of GENIE3, ARACNE, NARROMI, Grow-Shrink and IAMB, respectively. Fig 2B gives the SOS network inferred with LBN, which shows that the LBN method infers 15 true regulatory relationships and 10 false regulatory links. These results also indicate that our LBN method can infer most of the true regulatory relationships between genes, and they verify the effectiveness and efficiency of the LBN method on real gene expression data.

LBN was also applied to construct a large-scale GRN from real gene expression data. We used the experimentally verified reference network in E. coli [58] to evaluate the performance of LBN, and downloaded the gene expression data from the well-known E. coli data bank [57]. The experimentally verified network includes 2675 edges between 160 regulators and 1258 targets that can be found in the gene expression dataset [20]. The comparison results of LBN with GENIE3, ARACNE, NARROMI, Grow-Shrink and IAMB on the large-scale gene regulatory network in E. coli
are listed in Table 4, from which we can see that the proposed LBN method performs better than the other methods, with the highest average AUC scores and the highest number and proportion of recovered regulators and target genes. These results indicate that our LBN method is also suitable for inferring large-scale GRNs.

Effects of the strategies of network decomposition and false-positive edge deletion

In order to evaluate the effectiveness of the network decomposition and false-positive edge deletion strategies introduced in our LBN algorithm, we tested the performance of different combinations (i.e., MI+BN, MI+BN+CMI, MI+BN+CMI+kNN+BN) on dataset10, which includes 10 genes and 10 regulatory edges. MI+BN denotes that the MI method was first used to construct the initial GRN, and then the network decomposition strategy and the BN method were adopted to generate the GRN; MI+BN+CMI denotes that MI, the network decomposition strategy and the BN method were used to infer the GRN, and then CMI was applied to remove the false positive edges; MI+BN+CMI+kNN+BN denotes that MI, the network decomposition strategy, CMI and BN methods were used to generate the GRN, and then the kNN and BN methods were respectively applied to decompose the GRN, reconstruct it, and further delete the false positive edges. On the same PC (i5-2400 CPU, 4 GB RAM), the results of the different combinations are listed in Table 5. From Table 5 and Fig 3, we can see that the running time of MI+BN was 0.7852 s lower than that of BN, while it wrongly predicted 7 regulatory edges, which means that the MI+BN strategy effectively reduces the computational time but results in more false positive edges; CMI really does remove the false positive edges, and kNN indeed helps the Bayesian network to learn accurately and to reduce the false positive edges. These results indicate that our network decomposition strategy can significantly reduce the high computational cost of the BN method for large-scale GRNs, whereas the strategy of deleting the false-positive edges with CMI and kNN can remarkably enhance the accuracy of the network inference.

Effects of the threshold parameters

There are two parameters, α and β, in our LBN algorithm, which determine whether or not there is a link or an edge between two genes in the reconstructed GRN. In order to evaluate the impact of the α and β parameters on the LBN algorithm, we performed simulations on dataset10 by calculating the ACC for different values of α and β while fixing the other parameter; the simulated results are shown in Fig 4. From Fig 4A, we found that the ACC value increases gradually in the range 0 ≤ α < 0.025, reaches its highest value (ACC = 0.944) in the range 0.025 ≤ α ≤ 0.03, and decreases gradually in the range 0.03 < α < 0.045, after which the ACC remains basically unchanged (ACC ≈ 0.9). From Fig 4B, we found that the ACC value increases gradually in the range 0 ≤ β < 0.024, reaches its highest value (ACC = 0.944) in the range 0.024 ≤ β ≤ 0.03, and decreases gradually in the range 0.03 < β < 0.09. Although the parameters α and β have some influence on the results of the inferred GRNs, the effect is minor within those threshold ranges. Thus, we can select α and β in these ranges (e.g., 0.025 ≤ α ≤ 0.03 and 0.024 ≤ β ≤ 0.03) to obtain the best GRN for dataset10. We also performed simulations calculating the ACC for different α and β values on dataset50, dataset100 and the SOS DNA dataset, respectively. The experimental results show that suitable parameters (α and β) should be selected for each dataset to obtain the best GRNs.
Analysis of LBN computational complexity

The computational complexity of the LBN method involves five phases or parts. In the phase of inferring an initial network, LBN needs to compute the MI or CMI value of each gene pair at zeroth order, so the maximum complexity is of order O(n^2), where n is the total number of genes. For the phase of constructing the directed network, LBN needs to select the regulatory genes for each target gene, so the maximum complexity is of order O(n · 2^m), where m is the number of regulatory genes and m << n. In the phase of filtering false positive edges by CMI, the time complexity is O(n^2). For the phase of further removing the redundant edges with kNN, LBN needs to find n sub-networks, and hence the time complexity is O(n). The last phase iteratively performs CMI and BN with the kNN method until the topological structure of the tentative or candidate network no longer changes. If this is iterated l times, the total complexity of LBN is O(2l·n^2 + l·n + l·n·2^m). When n is very large and m << n, the computational complexity of LBN is O(n^2), which is lower than that of the BN method, O(2^n).

Conclusions

In this work, we presented a novel method, namely LBN, to improve the accuracy of GRN inference from gene expression data by adopting two strategies, i.e., network decomposition and false-positive edge deletion, which allow a directed network to be inferred accurately and with high computational efficiency. Specifically, the network decomposition can effectively reduce the high computational cost of the BN method for inferring large-scale GRNs, whereas CMI with kNN can delete the redundant regulations and thus reduce the false positives. By iteratively performing CMI and BN with the kNN method, the LBN algorithm can infer the optimal GRN structure with regulation directions. The results on the benchmark gene regulatory networks from the DREAM3 challenge and a real SOS DNA repair network in E. coli show that our LBN method significantly outperforms three other state-of-the-art methods, ARACNE, GENIE3 and NARROMI. Clearly, our LBN enables the Bayesian network to accurately learn the network structure and to reduce the false positives by searching the k-nearest neighbors of every gene; thus, LBN is effective and robust for inferring directed GRNs. On the other hand, a network inference method based on the probabilistic graphical model, called the module network method [64], was also developed. Compared with Segal's module network method [64], which infers the network among modules, our LBN algorithm adopts an iterative algorithm alternating between CMI and the probabilistic graphical model (i.e., BN) to infer the network among genes.

Despite the above advantages of LBN, it can be improved in the following two aspects. Firstly, it is still a challenging task to select the parent genes of a gene X from the set of variables, which affects the computational cost and accuracy of inferring GRNs. Secondly, the inferred network is a static network, and thus a future direction is to extend LBN to consider the dynamical features of the network, e.g., with Dynamic Bayesian Networks (DBNs) or Dynamical Network Markers (DNMs) [65], by using time-course or stage-course data, which can find wider applications [66][67][68] in biomedical fields.
MI and CMI

Recently, both mutual information (MI) and conditional mutual information (CMI) have been widely applied to inferring GRNs [20,31,38,40,55,56,69], due to their capability of characterizing nonlinear dependency, which provides a natural generalization of association between genes. MI can be used to measure the degree of independence between two genes X_i and X_j, but it tends to overestimate the regulation strengths between genes (i.e., the false positive problem). On the other hand, CMI measures the conditional dependency between two genes X_i and X_j given another gene X_k, which can quantify the undirected regulation. With the widely adopted hypothesis of a Gaussian distribution for gene expression data, the entropy can be estimated by the following Gaussian probability density function [38,42]:

p(X) = (2π)^(-n/2) |C|^(-1/2) exp( -(1/2) (X - μ)^T C^(-1) (X - μ) ),

where C is the covariance matrix of the variable X, |C| is the determinant of the matrix, N is the number of samples, and n is the number of variables (genes) in C. Generally, if the sample number is almost equal to the gene number, the empirical covariance matrix is often used to estimate the covariance matrix of the distribution of the gene expression profile, which can be considered a good approximation of the true covariance matrix. However, when the number of samples is smaller than the number of genes, a regularized covariance matrix [72,73] is used to estimate the covariance matrix of the distribution of the gene expression profile. The number of replicate samples affects the performance of the method, and increasing the number of replicate samples can enhance the power of the GRN inference algorithm. Thus, the entropy of the variable X can be denoted as

H(X) = (1/2) log( (2πe)^n |C| ).

According to Eqs 2 and 5, the MI between two variables (genes) X and Y can be easily calculated by the following equivalent formula [31,38,70]:

MI(X, Y) = (1/2) log( |C(X)| · |C(Y)| / |C(X, Y)| ).

A high MI value indicates that there may be a close relationship between the variables (genes) X and Y, while a low MI value implies their independence. If the variables (genes) X and Y are independent of each other, clearly MI(X, Y) = 0. Similarly, under the assumption of Gaussian distributions for gene expression data, the CMI of two variables (genes) X and Y given a variable (gene) Z can be easily calculated by the following equivalent formula [31,38]:

CMI(X, Y | Z) = (1/2) log( |C(X, Z)| · |C(Y, Z)| / ( |C(Z)| · |C(X, Y, Z)| ) ).

Obviously, when X and Y are conditionally independent given Z, CMI(X, Y|Z) = 0. In addition, this equivalent expression is an efficient way to calculate the CMI between two variables X and Y given one or more variables Z; e.g., if the conditional variable Z = (Z_1, Z_2) is composed of two variables Z_1 and Z_2, we obtain the second-order CMI.

Bayesian networks

A Bayesian network (BN) is a graphical model of the probabilistic relationships among a set of random variables X = {X_1, X_2, ..., X_i, ..., X_n}, organized as a directed acyclic graph G. In a Bayesian network, the vertices (nodes) are the random variables (genes), and the edges represent the probabilistic dependencies among the corresponding random variables (genes). Under the Markov assumption that, given its parents, each variable is independent of its non-descendants, the relationships between the variables (genes) are described by a joint probability distribution P(X_1, X_2, ..., X_n), which can be decomposed into a product of conditional probabilities based on the graphical structure:

P(X_1, X_2, ..., X_n) = Π_{i=1..n} P(X_i | Pa(X_i)),

where Pa(X_i) is the set of parents of node X_i in the graph G.
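The determinant-based MI and CMI formulas above translate directly into code. A small sketch in Python/NumPy (our own helpers; variables are passed as row arrays of samples):

```python
import numpy as np

def _logdet_cov(data):
    """Log-determinant of the sample covariance; rows are variables."""
    C = np.atleast_2d(np.cov(data))
    return np.linalg.slogdet(C)[1]

def mi(x, y):
    # MI(X,Y) = 1/2 * log( |C(X)| * |C(Y)| / |C(X,Y)| )
    return 0.5 * (_logdet_cov(x) + _logdet_cov(y)
                  - _logdet_cov(np.vstack([x, y])))

def cmi(x, y, z):
    # CMI(X,Y|Z) = 1/2 * log( |C(X,Z)| * |C(Y,Z)| / (|C(Z)| * |C(X,Y,Z)|) )
    return 0.5 * (_logdet_cov(np.vstack([x, z]))
                  + _logdet_cov(np.vstack([y, z]))
                  - _logdet_cov(z)
                  - _logdet_cov(np.vstack([x, y, z])))
```

Passing z as a stacked array of two conditioning genes yields the second-order CMI mentioned in the text.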
Bayesian networks

A Bayesian network (BN) is a graphical model of the probabilistic relationships among a set of random variables X = {X_1, X_2, ..., X_n}, represented as a directed acyclic graph G. In a Bayesian network, the vertices (nodes) are the random variables (genes), and the edges represent the probabilistic dependencies among the corresponding random variables (genes). Under the Markov assumption that, given its parents, each variable is independent of its non-descendants, the relationships between the variables (genes) are described by a joint probability distribution P(X_1, X_2, ..., X_n), which can be decomposed into a product of conditional probabilities based on the graphical structure:

P(X_1, X_2, ..., X_n) = ∏_{i=1}^{n} P(X_i | Pa(X_i)),

where Pa(X_i) is the set of parents of node X_i in the graph G.

In the process of BN structure learning, the most likely graph G for a given dataset D can be inferred by searching for the optimal graph under a Bayesian scoring metric. That is, by trying out all possible graphs G (i.e., all possible combinations of interactions among genes), the graph G with the maximum Bayesian score (joint probability) is chosen as the most likely gene regulatory network. In general, the number of possible graphs G grows exponentially with the number of nodes (genes), and the problem of identifying the optimal graph is NP-hard [50]. For a larger dataset D containing more variables, it is not computationally feasible to calculate the Bayesian score for all possible graphs G. Therefore, heuristic search methods, such as the greedy hill-climbing approach, the Markov chain Monte Carlo method and simulated annealing, are often used to infer the Bayesian network structure [28,74]. Here, the optimal graph G can be decomposed into a series of optimal sub-graphs, each of which is centered on one node (gene). However, because the parent set of a node X_i may consist of any other nodes in G, the computational complexity of identifying the optimal sub-graphs is still considerably high; that is, it remains computationally infeasible to calculate the maximum Bayesian score over all possible sub-graphs of every node in a large-scale network. Generally, the neighbor genes of a gene X_i are the ones most likely to regulate it. Thus, we limit the size of the parent set of each node X_i so that the maximum Bayesian score of every node can be calculated approximately. In this paper, as shown in Fig 1, we first construct the undirected network with the CMI method and decompose it into a series of sub-networks in which the central node is linked only with its k nearest neighbors (nodes). Because every sub-network contains only a few nodes, we can identify the parent set of every central node by calculating the Bayesian scores of all possible sub-network structures and choosing the optimal Bayesian sub-network with the maximum joint probability distribution score. Then, by integrating all of the sub-networks, we obtain the candidate global Bayesian network (GRN). Note that BN can be extended to a dynamic Bayesian network by using time-course expression data.

k-nearest neighbor

In a graph G(V, E), V represents the set of nodes and E the set of edges between nodes. The k closest neighbors of each node are selected according to their shortest-path distance in the graph structure; that is, the k-nearest neighborhood (kNN) of a node V_i consists of the set of nodes whose shortest-path distance to V_i is at most k. In this paper, we use the k-nearest neighbors of each node to decompose a large-scale network into a series of local Bayesian networks, and for each local Bayesian network the BN inference method is used to remove false-positive edges. For a large-scale network, we show that high accuracy can actually be achieved even with only the first- and second-nearest neighbors of each node. In fact, the k-nearest neighborhood of a gene (node) with k = 2 contains the Markov blanket of that node, since it includes all of the first-nearest neighbors and part of the second-nearest neighbors of the node. The Markov blanket of a node in a Bayesian network is composed of all the variables that shield the node from the rest of the network, which implies that the Markov blanket of a node is the only knowledge needed to predict the behavior of that node. Thus, we choose k = 2 in this paper.
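A minimal Python sketch of this decomposition step (illustrative code of ours, not the paper's): breadth-first search collects, for every gene, the nodes within shortest-path distance k, yielding one local network per gene.

from collections import deque

def knn_subnetwork(adj, center, k=2):
    # Nodes whose shortest-path distance to `center` is at most k;
    # with k = 2 this neighborhood contains the node's Markov blanket.
    dist = {center: 0}
    queue = deque([center])
    while queue:
        u = queue.popleft()
        if dist[u] == k:
            continue          # do not expand beyond distance k
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return set(dist)

def decompose(adj, k=2):
    # One local sub-network per gene, each centered on that gene.
    return {g: knn_subnetwork(adj, g, k) for g in adj}

# Toy chain graph g1 - g2 - g3 - g4
adj = {"g1": {"g2"}, "g2": {"g1", "g3"}, "g3": {"g2", "g4"}, "g4": {"g3"}}
print(decompose(adj)["g1"])   # {'g1', 'g2', 'g3'}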
LBN algorithm

Given an expression dataset with n genes and N samples, a novel algorithm (called LBN) was developed to infer the underlying GRN. As shown in Fig 1, LBN is composed of four main parts: i) construct an initial network (GRN) with the MI or CMI method; ii) decompose the large-scale initial network into a series of sub-networks (local networks or GRNs) by the kNN method; iii) identify the regulatory relationships among genes with the BN method for each sub-network; and iv) integrate all local BNs into a candidate network and then remove false regulatory relationships by CMI, i.e., construct the tentative network. The final network (GRN) is then obtained by iteratively performing CMI and BN with kNN. Numerical computations show that our LBN method infers the final GRN after iterating 10-20 times. Fig 1 is the schematic diagram of our LBN method, which is described in detail as follows:

Step 1: Construct the initial network by CMI. In general, gene pairs with high MI or CMI values are co-expressed genes, of which one is the target gene and the other is the regulatory gene (regulator). For an expression dataset with n genes, we first compute the MI or CMI values between all gene pairs using the Gaussian MI formula given above, delete the edges whose MI values are smaller than a pre-defined threshold α, and then construct an initial GRN, which is an undirected network G_MI.

Step 2: Decompose G_MI into n sub-networks (local networks) by kNN. For a large network G_MI containing many genes, it is an NP-hard problem to try out all possible structures in search of the most likely gene regulatory network with the BN method. We therefore bypass this problem by decomposing the network G_MI into a series of sub-networks, each of which contains only a few genes. Suppose every gene g_i in the network G_MI is a potential target gene and its nearest neighbor genes in G_MI are its potential regulators; then gene g_i and its nearest neighbor genes form a local network. Under this assumption, the network G_MI can be decomposed into n sub-networks, where n is the total number of genes in the network, and every sub-network is composed of the gene g_i and its nearest neighbor genes.

Step 3: Construct local BNs by estimating the gene regulations, and integrate the local BNs into a candidate network. For every sub-network, we calculate the joint probability distribution value of all its possible structures and select the structure with the maximum joint probability distribution value as the optimal Bayesian sub-network, from which we identify the candidate regulators of the target gene g_i. The n optimal Bayesian sub-networks (local BNs) are then integrated into a directed network G_B as a candidate network (GRN), from which the regulatory relationships between genes can be read off. In the process of constructing a Bayesian sub-network, we can not only identify the edge directions between interacting genes but also eliminate redundant regulation edges.

Step 4: Construct the tentative network by eliminating redundant regulations with CMI. The MI method tends to overestimate the regulation strengths between genes, because it does not consider the joint regulation of a target gene by two or more other genes, and thus produces more false-positive edges. In this step, we use CMI to remove false-positive edges by computing the first-order CMI(i, j|k) and second-order CMI(i, j|k, l) with the Gaussian CMI formula given above.
If CMI(i, j|k) (or CMI(i, j|k, l)) is smaller than a pre-defined threshold β, the edge linking genes i and j is deleted from network G_B. In this way, we generate a tentative network (GRN) G_C.

Step 5: Decompose G_C into n smaller (local) networks. In Steps 2 and 3, the sub-networks decomposed from G_MI are the smallest local networks, whose shortest-path radius is 1 (i.e., k = 1). Using these sub-networks to construct local GRNs with the BN method may introduce some false regulatory edges. To further filter out false regulatory edges, we should enlarge the parent set of each gene; however, selecting more neighbors of a gene as its candidate regulators increases the computational complexity. In this work, we select k = 2 to enlarge the parent set of each gene. Thus, we apply the second-nearest neighborhood of each node to decompose G_C, forming n sub-networks whose shortest-path radius is 2 (i.e., k = 2), and then use the BN method to reconstruct a local GRN for every sub-network. The candidate GRN G_C is recalculated by iteratively performing Steps 3-5 until its topological structure no longer changes. In the end, we obtain the final network (GRN) G_F.
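The control flow of Steps 1-5 can be summarized in a short Python sketch. This is only a skeleton under simplifying assumptions of ours: absolute Pearson correlation stands in for the Gaussian MI/CMI scores, the Bayesian-scoring step that orients edges inside each local network is reduced to a comment, and alpha and beta play the roles of the thresholds α and β above.

import numpy as np
from itertools import combinations

def lbn_skeleton(expr, alpha=0.3, beta=0.05, max_iter=20):
    # expr: genes x samples matrix; returns an undirected edge set.
    n = expr.shape[0]
    score = np.abs(np.corrcoef(expr))   # stand-in for MI/CMI scores
    # Step 1: initial undirected network from thresholded pairwise scores.
    edges = {frozenset((i, j)) for i, j in combinations(range(n), 2)
             if score[i, j] > alpha}
    for _ in range(max_iter):
        # Steps 2-3 (placeholder): each gene plus its kNN neighbors forms
        # a local network; a BN learner would score all local structures
        # here to orient the surviving edges.
        # Step 4: prune edge (i, j) when a common neighbor k "explains" it,
        # a crude proxy for CMI(i, j | k) falling below beta.
        pruned = set()
        for e in edges:
            i, j = tuple(e)
            for k in range(n):
                if k in (i, j):
                    continue
                if (frozenset((i, k)) in edges
                        and frozenset((j, k)) in edges
                        and score[i, j] - score[i, k] * score[j, k] < beta):
                    pruned.add(e)
                    break
        # Step 5: iterate Steps 3-5 until the topology no longer changes.
        if not pruned:
            break
        edges -= pruned
    return edges

rng = np.random.default_rng(1)
print(len(lbn_skeleton(rng.normal(size=(8, 40)))))  # 8 genes, 40 samples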
Hypo-osmotic Stress Activates Plc1p-dependent Phosphatidylinositol 4,5-Bisphosphate Hydrolysis and Inositol Hexakisphosphate Accumulation in Yeast*

Polyphosphoinositide-specific phospholipases C (PICs) of the δ-subfamily are ubiquitous in eukaryotes, but an inability to control these enzymes physiologically has been a major obstacle to understanding their cellular function(s). Plc1p is similar to metazoan δ-PICs and is the only PIC in Saccharomyces cerevisiae. Genetic studies have implicated Plc1p in several cell functions, both nuclear and cytoplasmic. Here we show that a brief hypo-osmotic episode provokes rapid Plc1p-catalyzed hydrolysis of PtdIns(4,5)P2 in intact yeast by a mechanism independent of extracellular Ca2+. Much of this PtdIns(4,5)P2 hydrolysis occurs at the plasma membrane. The hydrolyzed PtdIns(4,5)P2 is mainly derived from PtdIns4P made by the PtdIns 4-kinase Stt4p. PtdIns(4,5)P2 hydrolysis occurs normally in mutants lacking Arg82p or Ipk1p, but they accumulate no InsP6, showing that these enzymes normally convert the liberated Ins(1,4,5)P3 rapidly and quantitatively to InsP6. We conclude that hypo-osmotic stress activates Plc1p-catalyzed PtdIns(4,5)P2 hydrolysis at the yeast plasma membrane and that the liberated Ins(1,4,5)P3 is speedily converted to InsP6. This ability routinely to activate Plc1p-catalyzed PtdIns(4,5)P2 hydrolysis in vivo opens up new opportunities for molecular and genetic scrutiny of the regulation and functions of phosphoinositidases C of the δ-subfamily.

Phosphoinositide-based regulatory systems are ubiquitous in eukaryotes and contribute to many processes, including signaling from cell surface receptors, assembly/disassembly of the actin cytoskeleton, and vesicle trafficking. Receptor signaling through activation of PtdIns(4,5)P2 hydrolysis by phosphoinositide-specific phospholipases C (phosphoinositidases C; PICs) is the prototype of such regulatory systems. Of the five known PIC families, PICδs are ubiquitous in eukaryotes, receptor-controlled PICs of the β, γ, and ε subfamilies (1-3) are found only in metazoans, and PICζ has been detected only in sperm (4). A prokaryotic PIC that integrated into an emerging proto-eukaryote was probably the common ancestor of all eukaryote PICs (5), and it seems likely that this was more similar to modern PICδs than to the later-evolved signaling PICs (1-3). PICδs might even retain some of the original functions of this ancestral PIC, so it is unfortunate that we understand so little about their regulation and functions. Improved understanding of PICδs is likely to come most readily from organisms that express only one, PICδ-like, PIC and that lack the PtdIns(4,5)P2-consuming Type I phosphoinositide 3-kinases. One such organism is Saccharomyces cerevisiae, with Plc1p (encoded by PLC1) its sole PIC. The activity of Plc1p thus influences many cell activities. Some, such as chromatin maintenance, transcription, and mRNA export, are nuclear, whereas others, including vacuole homeostasis and proteasome activity, reside in the cytosolic compartment. It therefore seems that the pleiotropic phenotypes of Δplc1 cells are consequences of multiple flaws in several fundamental processes, in at least two cell compartments. Despite this substantial body of genetic evidence, there is scant information on what controls Plc1p activity in vivo. It was suggested that glucose re-admission to glucose-deprived yeast might activate Plc1p (23), but this response was later attributed mainly to polyphosphoinositide deacylation (24).
It was suggested that glucose re-admission might provoke phosphoinositide turnover and activate a plasma membrane H+ pump, with Plc1p needed for both responses (25), but again deacylation may have caused much of the observed phosphoinositide loss. Nitrogen re-addition to nitrogen-starved yeast provokes rapid Ins(1,4,5)P3 formation (26), but this seems not to need Plc1p (27). Hypo-osmotic shock evokes a [Ca2+]i rise in yeast (28) and in some animal and plant cells, and hypo-osmotic shock may sometimes activate PIC (29,30). However, the underlying phosphoinositide changes in these responses, and how these are linked to other cellular events, remain uncertain. In this study, we present evidence that hypo-osmotic stress speedily activates Plc1p-catalyzed PtdIns(4,5)P2 hydrolysis at the plasma membrane in S. cerevisiae, and we define the source of the hydrolyzed PtdIns(4,5)P2 and the metabolic fate of the liberated Ins(1,4,5)P3. Table I lists the yeast strains used.

Growth of [3H]Inositol-labeled Yeast and Procedure for Hypo-osmotic Shock: Cells were grown exponentially with [3H]inositol (5 µCi per ml) and maintained in the presence of this label throughout all manipulations until they were killed. They were acclimatized to hypertonic saline medium during 2-h incubations, in media supplemented first with 0.5 M NaCl and then with 0.9 M NaCl. The acclimatized cells (3 × 10^6 ml^-1, 5 ml) were sedimented (3000 × g, 5 min, 23°C) and resuspended in 0.5 ml of medium supplemented with 0.9 M NaCl at 23°C. 15 min later they were diluted 4-fold with medium containing 0.9 M NaCl (no osmotic shock) or lacking added NaCl (hypo-osmotic stress). Δstt4 cells, which are osmotically fragile, were grown throughout in medium supplemented with 0.9 M NaCl. The upper phase plus interface, containing water-soluble inositol polyphosphates, was incubated at 30°C for 30 min and centrifuged (10^5 × g, 30 min, 30°C), and the interfacial pellet was discarded. The volume was made up to 2 ml with H2O, the acid was neutralized with an appropriate volume of 1 M NaOH containing 2.5 mM EDTA, 2.5 mM EGTA, and 37.5 mM HEPES, and samples were stored at -20°C. They were analyzed by HPLC on a 250 × 4.6 mm Partisphere 5-SAX column, eluted at 1 ml/min with a complex gradient: Solution A was H2O, and Solution B was 1.25 M (NH4)2HPO4, pH 3.8. The gradient was: 0-5 min, 0% B; 5-10 min, ramp to 7% B; 10-40 min, ramp to 9% B; 40-110 min, ramp to 100% B; then isocratic to 120 min. Radioactivity was measured with a flow detector (see above). A different gradient, using the same column and Solutions A and B, was used to confirm that the major inositol polyphosphate that accumulated during hypo-osmotic shock was InsP6 (Fig. 1): 0-5 min, 0% B; 5-10 min, ramp to 7% B; 10-25 min, ramp to 70% B; 25-95 min, ramp to 78% B; 95-110 min, ramp to 100% B; then isocratic to 120 min. Fractions were collected for static scintillation counting.

Construction of a Δstt4 Strain in the BY4742 Background: Genomic DNA retrieved from diploid stt4::kanMX4/STT4 cells by standard techniques was transformed into haploid wild-type BY4742 cells, with 1.0 M sorbitol present during transformation and selection. Kanamycin-resistant colonies were selected on geneticin-agar plates, and replacement of STT4 with the kan^r marker was confirmed when PCR amplification of genomic DNA from the kanamycin-resistant colonies yielded an ~3.0-kb fragment (not shown).
Inositol Polyphosphate Kinase Assays: The kinase activities of GST-Arg82p and GST-Ipk1p, singly and in combination, were assayed by incubating 5 µg of each protein for 0-10 min at 28°C with 25,000 cpm of radiolabeled substrate.

Measurements of GFP Fluorescence: The dimeric GFP-PLCδ1-PH domain construct pTL336 was a gift from T. Levine (36). Changes in plasma membrane and cytosolic GFP intensity in hypo-osmotically shocked cells were visualized on a Nikon Eclipse E600 microscope with an XF100-3 filter cube (Omega Optical). Images were acquired with an ORCA digital camera (Hamamatsu, Japan) and changes were measured using an Intensity Threshold program in Simple PCI (Compix Imaging Systems).

RESULTS

Our experiments used S. cerevisiae that were in exponential growth until shortly before environmental perturbations were imposed. They were cultured for 5-6 generations in medium containing [2-3H]inositol, to label all inositol-containing cell constituents close to isotopic equilibrium with the intracellular precursor mixture of [3H]inositol and inositol made de novo by the resident inositol 3-phosphate synthase. Under these conditions, rapid changes in the labeling of cellular inositol phospholipids and phosphates will reflect approximately equivalent changes in the relative quantities of the molecules present.

Hypo-osmotic Shock Stimulates PtdIns(4,5)P2 Hydrolysis: We compared the effects of hypo-osmotic stress on the PtdIns(4,5)P2 complements of wild-type and Δplc1 yeast. Suspensions of equilibrium-labeled cells were adapted to high osmolarity and abruptly diluted 4-fold, and after various periods lipids were extracted and deacylated, and the resulting glycerophosphoesters were separated by anion-exchange HPLC (see "Materials and Methods"). Fig. 1A shows typical chromatograms from wild-type and Δplc1 cells that were hypo-osmotically stressed for 2 min, and Fig. 1B records the amounts of phosphoinositides in the stressed and unstressed cells. The PtdIns(4,5)P2 complement of hypo-osmotically stressed wild-type cells started to decrease after ~20 s (Fig. 1, C and D), declined by about a half within ~2 min (Fig. 1, A-D) and started to return toward the starting value over the following 5-10 min (Fig. 1, C and D). This response was relatively strain-invariant (Fig. 1D). Expression of Plc1p from a multicopy plasmid in a Δplc1 strain restored [3H]PtdIns(4,5)P2 breakdown in response to hypo-osmotic stress (not shown). When multicopy overexpression of Plc1p was achieved in wild-type cells, it caused no substantial modification of either basal or stimulated [3H]PtdIns(4,5)P2 metabolism (see, for example, Fig 2C). Fig. 1 employed an extraction procedure that achieves close to quantitative [3H]PtdIns(4,5)P2 recovery (35). In most subsequent experiments, we also needed to recover water-soluble phosphates in a state suitable for anion-exchange HPLC analysis. For this, we used a method employing a less aggressive acid chloroform-methanol extraction followed by phase partition (see "Materials and Methods"): this method retrieved inositol polyphosphates efficiently but failed to extract about one-third of the [3H]PtdIns(4,5)P2 (not shown). We have not corrected for this under-extraction, so from Fig. 2 onwards the total [3H]PtdIns(4,5)P2 complements of the extracted cells would have been about 50% greater than the figures reported.
InsP6 Accumulation Accompanies Stress-induced PtdIns(4,5)P2 Depletion: When aqueous phases from unstressed [3H]inositol-labeled yeast were analyzed, there were substantial labeled peaks coincident with InsP6 (the largest peak, Fig. 2A) and with multiple InsP and InsP2 isomers (not shown), and a small peak of PP-InsP5 eluted after InsP6 (at ~113 min: Fig. 2A). InsP and InsP2 changed little during stress and were not studied in detail. Very little radioactivity eluted in the parts of the chromatogram where the various isomers of InsP3, InsP4, and InsP5 emerge (Fig. 2A). This region of the chromatogram was devoid of clear-cut peaks in most experiments, but small but distinct peaks were seen in others, whether because of better HPLC or slight variations in metabolism we do not know. To confirm the identity of the putative InsP6, an aqueous extract from [14C]inositol-labeled and stressed cells was co-chromatographed with authentic [3H]InsP6 in a second HPLC system (see "Materials and Methods"). The constancy of the 14C:3H ratio across the InsP6 peak confirmed its identity (Fig. 2B). When wild-type cells were hypo-osmotically stressed for 2 min or more, the most striking change in the inositol polyphosphate complement was an approximate doubling of the already substantial InsP6 concentration (Fig. 2, A and C). Δplc1 cells, unstressed or following hypo-osmotic stress, behaved differently: they were devoid of inositol polyphosphates with three or more phosphate groups (Fig. 2A). The information in Fig. 2, D and E, reinforces these deductions: the concurrent time courses of PtdIns(4,5)P2 depletion and InsP6 accumulation (Fig. 2D) suggest a direct relationship between PtdIns(4,5)P2 loss and InsP6 synthesis.

FIG. 2. InsP6 accumulates rapidly in hypo-osmotically shocked S. cerevisiae. A, the inositol polyphosphate complements of unstressed [3H]inositol-labeled wild-type (BY4742) and Δplc1 (BY4742 plc1::kanMX4) cells are compared with cells that were hypo-osmotically shocked for 2 min. B, authentic 3H-labeled InsP6 (solid line) precisely co-migrated during anion-exchange HPLC with the 14C-labeled InsP6 formed in [14C]inositol-labeled yeast during 2 min of hypo-osmotic shock (for details, see "Materials and Methods"). C, inositol polyphosphates other than InsP6 changed little during a 2-min hypo-osmotic shock. D, reciprocal decrease in PtdIns(4,5)P2 and increase in InsP6 when a hypo-osmotic shock was applied to cells that overexpressed Plc1p (BY4742 plc1::kanMX4 + pUG36-PLC1): wild-type cells behaved similarly. E, the phosphoinositidase C inhibitor U73122 inhibited both PtdIns(4,5)P2 depletion and InsP6 accumulation during hypo-osmotic stress. The bar at the left represents unstressed cells and that at the right cells that were hypo-osmotically stressed (2 min) in the absence of U73122. For each panel, the results are representative of 2-4 experiments that yielded similar results.

TABLE II. Comparison of the inositol polyphosphate complements of control and hypo-osmotically stressed wild-type yeast with the inositol phosphate complements of similarly treated cells lacking the various inositol polyphosphate kinases. For each strain, the information is representative of that gathered from 2-4 experiments. Where no figure is recorded, the quantity of 3H detected in the relevant compound was near or below the detection limit for the experiment.

We determined which inositol polyphosphates accumulate during hypo-osmotic stress in mutants lacking Arg82p, Ipk1p or Kcs1p, with the results shown in Fig. 3 and Table II. Hypo-osmotic stress provoked an essentially normal loss of PtdIns(4,5)P2 in all of these mutants (Table II). Δarg82 cells contained a small peak of Ins(1,4,5)P3 (Fig. 3B, inset; note the expanded scale), but the main feature of the inositol phosphate complement in these cells was the presence
of greatly increased amounts of multiple isomers of InsP2, notably Ins(1,4)P2, which is likely to be a direct dephosphorylation product of Ins(1,4,5)P3 (44,45) (Fig. 3B). The Δarg82 cells were devoid of other inositol phosphates with 3 or more phosphate groups (Fig. 3B and Table II). The Ins(1,4,5)P3 peak became larger during hypo-osmotic stress, but none of the Plc1p-generated Ins(1,4,5)P3 was converted to more highly phosphorylated products. Instead, much of the PtdIns(4,5)P2-derived radioactivity accumulated as the Ins(1,4,5)P3 metabolite Ins(1,4)P2 (Table II and Fig. 3B). Δipk1 cells also accumulated no InsP6, but they did show major accumulations of InsP5 and its pyrophosphorylated derivative PP-InsP4, together with small amounts of InsP4 isomers. All of these were more abundant after hypo-osmotic shock (Fig. 3C and Table II). In most respects, Δkcs1 cells behaved like wild-type cells (Table II). However, they had higher basal and stimulated InsP6 complements and, as expected, they made no PP-InsP5. They accumulated more InsP5 and PP-InsP4 than wild-type cells, both with and without hypo-osmotic stress (Table II).

Stimulated PtdIns(4,5)P2 Hydrolysis Occurs at the Plasma Membrane: To determine where PtdIns(4,5)P2 hydrolysis occurs in hypo-osmotically stressed cells, we expressed in the cells a dimeric GFP-PH domain construct based on the PtdIns(4,5)P2-selective PH domain of PICδ1 (36). Much of this construct localized around the cell periphery, and there were no obvious concentrations of fluorescence on any intracellular organelles (Fig. 6, inset). This suggests that much of the PtdIns(4,5)P2 in S. cerevisiae is at the inner face of the plasma membrane. We then analyzed, in parallel, the time courses of three hypo-osmotically induced events in the cells expressing GFP-PICδ1-PH: changes in plasma membrane fluorescence, PtdIns(4,5)P2 depletion and InsP6 accumulation (Fig. 4). [PtdIns(4,5)P2] declined to a nadir of ~60% of the starting value at 2 min, before gradually rising again. As before, the relationship between PtdIns(4,5)P2 depletion and InsP6 accumulation was approximately reciprocal (Fig. 4). In the experiment shown, [PtdIns(4,5)P2] rose briefly after imposition of the hypo-osmotic stress, maybe as a secondary effect of the early increase in its precursor PtdIns4P (see above). Plasma membrane GFP fluorescence tracked the PtdIns(4,5)P2 changes remarkably closely (Fig. 4), indicating that much of the Plc1p-accessible PtdIns(4,5)P2 was at the plasma membrane. There was no detectable labeling with GFP-PICδ1-PH of other cellular organelles whose membranes will contain an unknown proportion of cell PtdIns(4,5)P2 (e.g. nucleus, Golgi), so we have no indication whether Plc1p-catalyzed PtdIns(4,5)P2 hydrolysis occurred at any of those sites.

Ca2+ Entry Is Not the Immediate Activator of Plc1p: There is evidence from animal cells that hypo-osmotic shock sometimes triggers Ca2+ entry, and that in such situations the consequent rise in [Ca2+]i may sometimes activate PICδ (for references, see the Introduction).
We therefore determined whether changing the availability of extracellular Ca2+ would influence the S. cerevisiae response to hypo-osmotic stress. Salt-acclimatized cells were held briefly in a medium that had been depleted of Ca2+ and to which EGTA and BAPTA were added (see "Materials and Methods"). Extracellular Ca2+ was then reintroduced, or not, and simultaneously the cells were hypo-osmotically stressed. Hypo-osmotic shock caused rapid PtdIns(4,5)P2 hydrolysis and InsP6 synthesis both in cells with a normal extracellular Ca2+ supply and in Ca2+-deprived cells (Fig. 5), indicating that the underlying Plc1p activation cannot require any substantial rise in cytosolic [Ca2+] arising from Ca2+ influx. All phosphoinositidases C rely on submicromolar concentrations of Ca2+ for activity (1,3), so it must be assumed that yeast maintain intracellular [Ca2+] at a level sufficient to support this stimulated activity even when they are Ca2+-deprived. This experiment also gave an intriguing, and unexplained, result. Readmission of Ca2+ to Ca2+-deprived cells provoked a larger accumulation of an Ins(1,4,5)P3-like molecule than we saw under any other condition, and hypo-osmotic stress appeared to drive the conversion of this Ins(1,4,5)P3 to InsP6 (Fig. 5).

pik1-63 cells at raised temperature and Δlsb6 cells both maintained substantial concentrations of PtdIns(4,5)P2 (Table III). By contrast, the PtdIns(4,5)P2 complement of unstressed Δstt4 cells was one-fifth to one-tenth that of wild-type cells. Moreover, there was no discernible fluorescence at the plasma membrane when the GFP-PLCδ1-PH construct was used to localize PtdIns(4,5)P2 in living Δstt4 cells (not shown). Hypo-osmotic stress provoked normal PtdIns(4,5)P2 hydrolysis and InsP6 accumulation in the Δlsb6 cells and in the temperature-sensitive pik1-63 cells at their non-permissive temperature (Table III). However, it provoked no change in the already low PtdIns(4,5)P2 complement of Δstt4 cells, and no additional InsP6 accumulated in those cells during hypo-osmotic stress (Table III). These results suggest that Stt4p makes most of the PtdIns(4,5)P2 in S. cerevisiae, and that knocking out STT4 eliminates the synthesis of PtdIns(4,5)P2 at the plasma membrane. It also seems likely that Stt4p makes all of the PtdIns(4,5)P2 that is susceptible to hydrolysis by hypo-osmotically activated Plc1p.

DISCUSSION

There is still little understanding of the biological function(s) of PICδs or of how they are controlled in vivo (2,3). However, mouse sperm need PICδ4 to initiate the acrosome reaction and achieve efficient fertilization (56), and spore germination is aberrant in Dictyostelium that lack this organism's only PICδ (57). Moreover, PLCδ1 accumulates abnormally in the neurofibrillary tangles of Alzheimer's disease (58,59) and in brain subjected to hyperoxic stress (60) or aluminum toxicity (61), and Alzheimer's patients and spontaneously hypertensive rats harbor unusual PLCδ1 alleles (62-64). There has been a common hope that PICδs might transduce receptor signals, and studies of mammalian PICδs have suggested some possible activating G-proteins (2). Two G proteins regulate Dictyostelium PICδ (65). In particular, a receptor for Conditioned Medium Factor liberates Gβγ from association with Gα1 and stimulates PIC (66). However, an alternative model has implicated elevated intracellular [Ca2+] as a possible PICδ activator, maybe as a result of Ca2+ entry through capacitative channels (67).
Plc1p, yeast's only PIC, is PICδ-like and has been genetically implicated in numerous cell functions (see the Introduction). The traditional view of PICs is that their primary function is to make the second messengers sn-1,2-diacylglycerol and Ins(1,4,5)P3. However, recent genetic and biochemical studies have suggested that Ins(1,4,5)P3 formed by Plc1p-catalyzed PtdIns(4,5)P2 hydrolysis in yeast is converted to InsP6 and related inositol polyphosphates, and some of these have previously unsuspected functions in the nucleus (Refs. 17-20, 22, 68, and see the Introduction). Our results confirm the earlier observation that S. cerevisiae must contain Plc1p if they are to make InsP6 (68). Previous work has not provided clear information on how Plc1p is supplied with PtdIns(4,5)P2 or how its activity is regulated. The experiments reported in this paper newly establish or reinforce several important features of the Plc1p pathway. First, Stt4p and Mss4p convert PtdIns to the Plc1p-sensitive PtdIns(4,5)P2. Second, hypo-osmotic stress rapidly activates Plc1p-catalyzed PtdIns(4,5)P2 hydrolysis. Third, much of the stress-induced PtdIns(4,5)P2 hydrolysis occurs at the plasma membrane. Fourth, the burst of Ins(1,4,5)P3 that is liberated following Plc1p activation is immediately converted to InsP6 by Arg82p and Ipk1p. Finally, Kcs1p pyrophosphorylates some of this InsP6 to PP-InsP5. Not only do these results confirm the previously reported roles of Arg82p, Ipk1p, and Kcs1p in converting Plc1p-derived Ins(1,4,5)P3 to InsP6 and its pyrophosphorylated derivatives, but they also demonstrate that the normal cell complement of these enzymes is capable of keeping pace with the explosive production of Ins(1,4,5)P3 that is triggered by hypo-osmotic challenge. S. cerevisiae has three PtdIns 4-kinases: Pik1p (49,51), Stt4p (50,52,69-71), and Lsb6p (54,72). Our results suggest that Stt4p makes most of a yeast cell's PtdIns4P and, in particular, that this includes all of the PtdIns4P that is the precursor of the plasma membrane PtdIns(4,5)P2 hydrolyzed by stress-activated Plc1p (Table III). This tallies with a recent demonstration that much of cellular Stt4p is at the plasma membrane (71). Earlier genetic studies ascribed Stt4p a function upstream of Mss4p (50) and offered evidence that Stt4p, Mss4p, and Plc1p all lie on a single pathway (73). Our study vindicates these genetic deductions. Our conclusion that Stt4p makes the bulk of yeast PtdIns4P is in apparent conflict with a previous study that assigned approximately equal roles in PtdIns4P (and PtdIns(4,5)P2) synthesis to Stt4p and Pik1p (52). How are these studies to be reconciled? Readily, since the previous study used a brief period of pulse-chase [3H]inositol labeling to label yeast phospholipids. Although this is a convenient technique for labeling cells, it does not label lipid pools close to equilibrium with the added inositol, which means that it cannot validly be used to determine the relative rates of synthesis of one product by multiple enzymes in vivo. It has long been apparent that much of the PtdIns(4,5)P2 in animal cells is at the plasma membrane (74): the best available estimate puts that proportion at 60-70% (75). There is no equivalent information for yeast, but our evidence that most of its Plc1p-sensitive PtdIns(4,5)P2 is at the plasma membrane tallies with indications from other work. The first pointer was that Mss4p-generated PtdIns(4,5)P2 is needed for the integrity of the subplasmalemmal cytoskeleton (46).
While this manuscript was under consideration, it also became apparent that Mss4p only makes PtdIns(4,5)P2 efficiently when it is at the plasma membrane (76): Mss4p in the nucleus makes little PtdIns(4,5)P2. Growing and unstressed cells contain about half as much InsP6 as cells that have been acutely hypo-osmotically stressed, so cells that are receiving no overt stimulus must tonically support a slow but continuous rate of PIC-catalyzed PtdIns(4,5)P2 hydrolysis. How this slow and sustained Plc1p activity is regulated, and where in the cell this basal PtdIns(4,5)P2 hydrolysis occurs, remain to be determined. The GFP-PICδ1-PH construct reported that the plasma membrane PtdIns(4,5)P2 complement never decreased by more than about one-half during hypo-osmotic stress, even when the stressed cells were overexpressing Plc1p. This suggests that cells maintain close control of this pathway even when they contain Plc1p in abundance. It also suggested that some form of feedback control must restrain further stress-stimulated PtdIns(4,5)P2 hydrolysis after a couple of minutes, at a time when the PtdIns(4,5)P2 level reaches its nadir and [InsP6] stabilizes at a new, and roughly doubled, plateau concentration. How does hypo-osmotic stress activate Plc1p sufficiently for about half of the PtdIns(4,5)P2 in a cell to be hydrolyzed within a couple of minutes? One view is that PICδ activation is a simple response to elevation of cytosolic [Ca2+] (67). MDCK cells seem to provide the only precedent for translocation of PICδ to the plasma membrane and activation in response to hypo-osmotic shock, but this apparently occurs without a need for Ca2+ entry (29), and our results suggest that a rise in cytosolic [Ca2+] does not trigger hypo-osmotic Plc1p activation in yeast (see "Results"). How Plc1p is activated remains to be determined. One possibility is that the primary sensor is a still-to-be-identified membrane stretch receptor protein, in which case the key question would be how its activation signal is transmitted onwards to Plc1p. Intriguingly, Plc1p activation seems not to be reversed immediately if the stress is removed. When isotonicity was quickly restored midway through the most rapid phase of hypo-osmotically driven PtdIns(4,5)P2 hydrolysis, ongoing PtdIns(4,5)P2 hydrolysis continued normally for at least the next minute or so. Given the remarkable speed of PtdIns(4,5)P2 hydrolysis and InsP6 synthesis in the stressed yeast, without any substantial accumulation of intermediates, we wondered where in the cells the responsible enzymes and the inositol polyphosphate products were located. Similar events seem to occur in S. pombe, though in this case in response to hyper-osmotic challenge (41). Direct information on how inositol phosphates are distributed within eukaryotic cells is scant. The clearest data, from HL60 promyeloid cells, place most of the inositol polyphosphates, including InsP6 and Ins(1,3,4,5,6)P5, either in the cytosol or in a pool that is in free and rapid exchange with that compartment (77). Moreover, the PP-InsP5 that is made from InsP6 seems to influence vacuolar morphology in the yeast cytoplasm (21). By contrast, much of the recent work on InsP6 and its close metabolic relatives in yeast has pointed to important actions in the nucleus (17-20,68).
We attempted to explore this further by comparing the intracellular distributions of biologically functional GFP-Plc1p, GFP-Arg82p, and GFP-Ipk1p constructs with the distribution of an over-expressed nuclear-targeted construct (the nuclear localization signal of SV40 large T antigen coupled to DsRed (78)). The DsRed construct was only in the nucleus, but the over-expressed GFP-Plc1p, GFP-Arg82p, and GFP-Ipk1p were present in both cytoplasm and nucleus, in each case at a higher concentration in the latter (not shown). This leaves the situation unresolved, and an important question for the future will be to determine whether Plc1p and the various inositol phosphate kinases really do carry out multiple functions in more than one cell compartment. Here we have only discussed a relatively straightforward series of events that have at their center Plc1p activation in cells subjected to osmotic perturbation. We have not addressed how a lack of Plc1p and its products, the inositol polyphosphates discussed here and sn-1,2-diacylglycerol, causes dysregulation of multiple cell functions and thus the many Δplc1 phenotypes. Our observations make a method for physiologically controlling the catalytic activity of Plc1p available for the first time, and this should facilitate detailed examination of these other questions. We also have evidence, to be reported elsewhere, that activation of Plc1p-catalyzed PtdIns(4,5)P2 hydrolysis participates in the responses of S. cerevisiae to high temperature, glucose readmission and nitrogen readmission, in surprisingly complex ways.
EPPSA: Efficient Privacy-Preserving Statistical Aggregation Scheme for Edge Computing-Enhanced Wireless Sensor Networks

In edge computing-enhanced wireless sensor networks (WSNs), multidimensional data aggregation can optimize the utilization of computation resources for data collection. How to improve the efficiency of data aggregation has gained considerable attention in both academic and industrial fields. This article proposes a new efficient privacy-preserving statistical aggregation scheme (EPPSA) for WSNs, in which statistical data can be calculated without exposing the total number of sensor devices to the control center. The EPPSA scheme supports multiple statistical aggregation functions, including the arithmetic mean, quadratic mean, weighted mean, and variance. Furthermore, the EPPSA scheme adopts modified Montgomery exponentiation algorithms to improve the aggregation efficiency in the edge aggregator. The performance evaluation shows that the EPPSA scheme achieves higher aggregation efficiency and lower communication load than the existing statistical aggregation schemes.

Introduction

In recent years, wireless sensor networks (WSNs) have achieved an accelerated increase in deployment. WSNs are widely utilized in scenarios such as smart homes [1], vehicular ad hoc networks [2-4], the industrial Internet of Things [5], and environment monitoring [6-8]. The sensor devices in WSNs are responsible for sensing real-time data and transmitting the sensed data to the control center for data analysis and intelligent control. In a variety of WSN applications, some computations are too time-consuming for sensor devices. Edge computation is an effective solution that allows resource-limited sensor devices to gain the assistance of edge devices, for tasks such as data aggregation and neural network models [9]. With edge computation devices deployed near the target area, the computing load on WSN sensor devices can be distributed to the edge devices. With the help of edge computation devices, cloud data centers provide various services for numerous applications [10-13]. To reduce data redundancy and communication delay, data aggregation has become one of the most practical techniques, and it can be used in edge computing-enhanced WSNs. Usually, a gateway is an ideal edge device to perform data aggregation operations because of its high computational capability, and mobile edge computing (MEC) also provides an emergent paradigm that brings computation close to mobile sensors [14]. It is worth noting that data aggregation at edge gateways may suffer from some potential security risks [15]. Firstly, the data may be captured or falsified during the delivery process, considering that WSNs are usually deployed in unattended environments. Secondly, adversaries can invade the edge gateway to steal users' private data. Traditional security approaches cannot be directly applied to data aggregation in edge computing-enhanced WSNs, since they may conflict with the aggregation function [16]. Furthermore, due to the dynamic and heterogeneous characteristics of WSN devices, it is difficult for the sensed data to be collected, encrypted, used, and stored in accordance with users' preferences [17,18]. To solve the above problems, homomorphic encryption algorithms have been considered for constructing privacy-preserving single-dimensional aggregation schemes [19-21].
Furthermore, researchers have proposed several multidimensional privacy-preserving data aggregation schemes, the core idea of which is to construct a conversion mechanism between multidimensional data and large integers [19,20,22-33]. These studies center on how to reduce the computation cost and communication load while collecting and transmitting the data. Lu et al. [26] proposed an efficient privacy-preserving data aggregation (EPPA) scheme for smart grids. By merging multidimensional data with a super-increasing sequence of large primes, Lu et al.'s scheme is more efficient than the one-dimensional data aggregation schemes. Using a polynomial method, Shen et al. [27] constructed a user-level polynomial that stores multidimensional values in a single data space based on Horner's rule. Fault tolerance can be used to enhance the security and robustness of a data aggregation scheme. In [32], Mohammadali et al. presented a homomorphic privacy-preserving data aggregation scheme with the fault-tolerance property, so it can keep data secure even if the aggregator is malicious or curious. Most secure data aggregation schemes consider only summation-based aggregation, since the underlying additive homomorphic encryption supports only modular addition operations. In practice, various types of statistics (e.g., mean, variance and standard deviation) often need to be supported for data applications [34]. Therefore, it is necessary to design multifunctional secure data aggregation schemes supporting various data statistics. Zhang et al. [35] proposed a multifunctional secure data aggregation scheme (MODA). This scheme offers the building blocks for multifunctional aggregation by encoding raw data into well-defined vectors. Peng et al. [36] introduced a multifunctional aggregation scheme supporting diversified aggregation functions, including linear, polynomial, and continuous functions. Both of the above schemes have the statistical functions computed by the control center. For example, in [36], the ciphertext sum is generated in the edge device and the mean is calculated from the decrypted sum by the control center. Thus, the total number of sensor devices must be transmitted to the control center so that the mean can be calculated as sum/total number. In many WSN application scenarios (e.g., industrial monitoring), the total number of sensor devices represents the industrial scale, which should be kept secret. Smart factories use WSNs and edge computation to create new production forms with better efficiency and flexibility, and the total number of sensor devices usually represents the industrial production scale of a smart factory. Usually, the control center is a third-party service from the cloud or a regulatory agency from the government side. Trade secrets can be learned and used by rivals if the scale of a factory's production is disclosed. Therefore, it is necessary to compute statistical aggregation functions without exposing the total number of WSN sensor devices. In such a scenario, the control center can use the statistical data for scientific analysis and intelligent decision-making, but does not obtain any data about the industrial production scale of the smart factory. In this article, we propose the first privacy-preserving statistical aggregation scheme for edge computing-enhanced WSNs that does not reveal the total number of sensor devices to the control center.
The contributions of this article can be summarized as follows:

(i) We construct an efficient privacy-preserving statistical aggregation scheme, called EPPSA, based on the Paillier additive homomorphic encryption scheme and the ECDSA digital signature scheme. The EPPSA scheme supports multiple statistical aggregation functions, including the arithmetic mean, quadratic mean, weighted mean, and variance.

(ii) In the EPPSA scheme, the mean values are calculated by the edge device and the control center cooperatively, while the control center does not learn the total number of sensor devices. Firstly, the edge device computes the mean value in ciphertext, since it has already calculated the sum of the data in ciphertext. Secondly, after receiving the mean in ciphertext, the control center recovers the correct mean by applying the modified extended Euclidean algorithm to the decrypted mean. The EPPSA scheme thus avoids calculating sum/total number, and the total number of WSN sensor devices can be kept secret from the control center.

(iii) In the EPPSA scheme, we propose three modified Montgomery exponentiation algorithms to improve the aggregation efficiency in the edge device. Our idea is to avoid converting the data between the Montgomery domain and the residue domain frequently during the whole process. The ciphertext data in the Montgomery domain can be aggregated by Montgomery multiplications, which are more efficient than ordinary modular multiplications.

(iv) We implement the EPPSA scheme and compare it with the existing schemes. Compared with [28], the EPPSA scheme achieves a 62.5% aggregation performance improvement for a 1024-bit modulus. Compared with [36], the EPPSA scheme achieves 50% and 33% decreases in communication load for the arithmetic mean and variance statistics, respectively.

The rest of this article is organized as follows: In Section 2, the problem formulation is presented. In Section 3, the related preliminaries are reviewed. In Section 4, the proposed EPPSA data aggregation scheme is given. In Section 5, the security analysis is given. In Section 6, the performance evaluation and comparison are presented. Finally, Section 7 concludes this article.

Problem Formulation

In this section, the formalized system model, the security requirements, and the design goals are presented.

System Model. In the EPPSA scheme, a WSN system consists of four parts, namely the trusted authority (TA), control center (CC), edge aggregator (EA), and sensor devices (SDs). The system has a three-level topological structure, as shown in Figure 1.

(i) TA is a trusted third party, which is responsible for generating and distributing the secret keys to all the system participants. In the system initialization phase, TA sets the ECDSA key pairs into the sensor devices, edge devices, and control center. TA distributes the Paillier public key to the sensor devices and edge devices, and the Paillier private key to the control center separately, by sending digital envelopes over the Internet.

(ii) CC is a powerful service controller of a WSN sensing system. According to special application requirements, CC is responsible for analyzing the data statistics, for example, by data mining. CC is assumed to be honest-but-curious, meaning that CC attempts to mine valuable information while performing its specified tasks.

(iii) EA is a wireless receiving equipment deployed at the edge of the WSN. EA is responsible for the collection, aggregation, and transmission of sensor data.
EA collects encrypted data from the sensor devices, aggregates the data, and transmits the aggregated data to CC. EA is a high-performance computing device, so it can perform computationally expensive processes.

(iv) SD is deployed in the intended area and is responsible for sensing and communication. SDs automatically sense and encrypt the particular data before sending them to EA. For example, ambient temperature sensors record the real-time temperature in an intelligent agricultural system and report the encrypted data to CC via EA.

Security Requirements. In our system model, EA and CC are curious about the SDs' private data, but they cannot collude with each other. Moreover, an adversary α is assumed to have the capability to eavesdrop on data during their transit. To protect data against internal and external attacks, the following security requirements should be fulfilled:

(i) Data confidentiality. Even if data from SDs or EA are eavesdropped on by α during their transit, they cannot be identified. EA cannot infer the private information of SDs while aggregating statistical data. When CC receives the statistical data, for example, the mean and variance, it cannot identify the individual data or the number of SDs.

(ii) Authentication. It should be guaranteed that the data are generated by legitimate SD entities. Otherwise, malicious operations by α, for example, a replay attack, may undermine the accuracy of the statistics. Similarly, the aggregate data should be guaranteed to be generated by a legitimate EA.

(iii) Data integrity. The accuracy and completeness of data in transmission should be guaranteed. When an adversary α forges or modifies the data, the malicious operations should be detected by the receiver.

Design Goal. Our design goal is an efficient privacy-preserving statistical aggregation scheme that achieves the following:

(i) Security. The proposed scheme should satisfy the security requirements mentioned above. The security goal is to prevent individual data and statistical data from being stolen by the adversary. To achieve this security goal, both internal and external misbehavior should be detected.

(ii) Efficiency. The proposed scheme should consider the computation cost and communication load. On one hand, it is necessary to use lightweight encryption and signing primitives. On the other hand, methods should be adopted to reduce the cost of the aggregate computation.

The Paillier Cryptosystem. The Paillier cryptosystem is a widely used public key encryption scheme with the additive homomorphic property [37] and was standardized by the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) in 2019 [38]. The Paillier cryptosystem consists of three parts, namely key generation, encryption, and decryption, which are described in Scheme 1. The security of the Paillier encryption algorithm is based on the integer factoring problem. When choosing the parameter g, it is necessary to check that the order of g is a multiple of n; this can be efficiently checked by testing whether gcd(L(g^λ mod n^2), n) = 1, where gcd(.) is the greatest common divisor function. The Paillier cryptosystem has several interesting homomorphic properties, which are associated with the statistics given below:

(1) Enc(m_1) · Enc(m_2) mod n^2 = Enc(m_1 + m_2 mod n);
(2) Enc(m)^k mod n^2 = Enc(k · m mod n).
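To illustrate properties (1) and (2), here is a toy Python implementation of Paillier key generation, encryption and decryption (our own sketch, with deliberately tiny primes that would be insecure in practice), checking that a product of ciphertexts decrypts to the sum of the plaintexts and that raising a ciphertext to the power k decrypts to k times the plaintext.

import math, random

def keygen(p, q):
    # Toy Paillier key generation; g = n + 1 is the standard simple choice.
    n = p * q
    lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)   # lcm(p-1, q-1)
    return (n, n + 1), (n, lam)

def enc(pk, m):
    n, g = pk
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return pow(g, m, n * n) * pow(r, n, n * n) % (n * n)

def dec(sk, c):
    n, lam = sk
    L = lambda u: (u - 1) // n
    mu = pow(L(pow(n + 1, lam, n * n)), -1, n)
    return L(pow(c, lam, n * n)) * mu % n

pk, sk = keygen(1117, 1123)        # tiny primes, demonstration only
n = pk[0]
data = [21, 35, 17]
agg = 1
for d in data:                     # property (1): product encrypts the sum
    agg = agg * enc(pk, d) % (n * n)
assert dec(sk, agg) == sum(data)
assert dec(sk, pow(enc(pk, 5), 7, n * n)) == 35   # property (2)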
(1) Mean Value Computation on the Ciphertext of the Paillier Cryptosystem. Shah et al. [39] proposed a solution for non-integer mean value computation in the homomorphic encrypted domain. This method can be adopted by statistical aggregation schemes in WSNs. Let (d_1, d_2, ..., d_m) be a set of numbers. The mean value, denoted by d_mean, is the sum of the values divided by the total number of elements, d_mean = (Σ_{i=1}^{m} d_i)/m. In practice, the mean d_mean may be an integer or a non-integer value. Using the homomorphic property of the Paillier cryptosystem given in (2), the mean can be calculated in the encrypted domain. If the plain-domain mean d_mean is an integer, the encrypted-domain mean Enc(d_mean) calculated by (2) yields the correct mean d_mean after decryption. However, if the plain-domain mean d_mean is a fraction, the encrypted-domain mean Enc(d_mean) calculated by (2) yields a large integer after decryption. For example, if d_mean = α/β, where α is not divisible by β, then after decryption Enc(d_mean) yields αβ^(-1) mod n, which is a large integer. Reducing this large integer to the correct mean value is a two-dimensional lattice reduction problem and can be solved by the Lagrange-Gauss lattice reduction algorithm. Shah et al. proposed an efficient method, the modified extended Euclidean algorithm (MME), to reduce the large integer. The method is shown in Algorithm 1. The modulus n of the Paillier cryptosystem and the large integer value w can be considered independent points in a two-dimensional lattice space L. The two basis vectors, (0, n) and (1, w), can be reduced to optimal values. Algorithm 1 computes the reduced value of w using the adapted extended Euclidean algorithm, which yields the correct mean value.

Montgomery Multiplication. Montgomery multiplication (MM) is an efficient technique for computing modular multiplications [40]. Assume an odd modulus n is a t-bit number, and let r = 2^t. For integers 0 ≤ a, b < n, the Montgomery multiplication is MM(a, b) = a·b·2^(-t) mod n. By taking r as a power of 2, the division becomes simple shifting. The process of MM is presented in Algorithm 2. Utilizing the MM algorithm, the Montgomery exponentiation is presented in Algorithm 3. For a number α, the corresponding number in the Montgomery domain is denoted by ᾱ.

The Proposed EPPSA Scheme

In this section, we propose the first privacy-preserving statistical aggregation scheme that does not reveal the total number of sensor devices to the control center. To achieve the security goals, the edge device and the control center calculate the statistics cooperatively, while the control center does not know the total number of sensor devices. The Paillier cryptosystem is used as the encryption scheme and the ECDSA algorithm [41] is used as the signature scheme. The EPPSA scheme consists of four phases: system initialization, data encryption, secure statistical aggregation, and secure statistics reading. In the system initialization phase, TA initializes the WSN system by generating and distributing the secret keys of the Paillier and ECDSA algorithms. In the data encryption phase, a sensor device SD_i collects raw data and encrypts these data to generate a data report; the sensor device then sends the encrypted data report to EA via the wireless network. In the secure statistical aggregation phase, EA calculates the sum and mean values in the encrypted domain and sends the statistical report to CC; in this phase, EA does not reveal the total number of sensor devices to CC. In the secure statistics reading phase, CC decrypts the statistical report and calculates the quadratic mean and variance of each dimension. Finally, CC obtains the arithmetic mean, quadratic mean, weighted mean, and variance without knowing the total number of sensor devices.
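The reduction of Algorithm 1 can be sketched in a few lines of Python (our own rendering of the idea, not the paper's exact pseudocode): the extended Euclidean recurrence on (n, w) is stopped as soon as the remainder drops below √(n/2), and the remainder/coefficient pair gives the numerator and denominator of the mean.

import math

def mme(n, w):
    # Recover a small rational alpha/beta from w = alpha * beta^(-1) mod n,
    # i.e. reduce the lattice basis (0, n), (1, w) by the extended
    # Euclidean recurrence until the remainder is below sqrt(n/2).
    bound = math.isqrt(n // 2)
    r0, r1 = n, w % n
    t0, t1 = 0, 1
    while r1 > bound:
        q = r0 // r1
        r0, r1 = r1, r0 - q * r1
        t0, t1 = t1, t0 - q * t1
    if t1 < 0:                     # normalize the sign of the denominator
        r1, t1 = -r1, -t1
    return r1, t1                  # (alpha, beta): the mean is alpha/beta

# Toy example: the mean of (3, 4, 6) is 13/3.  CC decrypts the
# ciphertext-domain mean and obtains the large integer w below.
n = 1_000_003                      # stands in for the Paillier modulus
w = 13 * pow(3, -1, n) % n         # = 666673, meaningless at first sight
print(mme(n, w))                   # (13, 3): the correct mean is 13/3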
Furthermore, to achieve the improvement in aggregation performance, we present three modified Montgomery exponentiation algorithms. Using these algorithms, EPPSA avoids frequent conversions of exponentiation results between the Montgomery domain and the residue domain.

Modified Montgomery Exponentiation Algorithms. We modified Algorithm 3 to improve the aggregation performance. The three modified algorithms below keep the result of a modular exponentiation in the Montgomery domain. Compared with Algorithm 3, Algorithm 4 removes the step θ = MM(θ, 1), which converts the result back into the residue domain Z*_n; this means that the exponentiation result is still in the Montgomery domain. The result of the exponentiation is denoted by θ̄ to distinguish it from the one in Algorithm 3. Compared with Algorithm 3, Algorithm 5 removes both the step that converts the base into the Montgomery domain and the step θ = MM(θ, 1); the base and the result of Algorithm 5 are both in the Montgomery domain and are denoted by c̄ and θ̄, respectively, to distinguish them from the ones in Algorithm 3. Compared with Algorithm 3, Algorithm 6 removes only the step that converts the base into the Montgomery domain; the base of Algorithm 6 is in the Montgomery domain and is denoted by c̄ to distinguish it from the one in Algorithm 3. Using these algorithms, encrypted data are converted to the Montgomery domain once, at the beginning of the whole process, and are then kept in the Montgomery domain for further computation. At the end, the results are converted back to the residue (non-Montgomery) domain. By reducing the conversions between the Montgomery domain and the residue domain, the aggregation operation can be accelerated.
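The idea behind Algorithms 4-6 can be illustrated with the following Python sketch (our own simplified rendering with hypothetical function names): an exponentiation that leaves its result in the Montgomery domain, so that EA aggregates many such results with cheap Montgomery multiplications and performs a single conversion back at the end.

import math

def mont_setup(n):
    # Precompute the Montgomery constants for an odd t-bit modulus n.
    t = n.bit_length()
    r = 1 << t
    n_prime = -pow(n, -1, r) % r        # n * n_prime = -1 (mod r)
    return t, r, n_prime

def mont_mul(a, b, n, t, r, n_prime):
    # REDC: returns a * b * 2^(-t) mod n (the MM of Algorithm 2).
    u = a * b
    m = (u * n_prime) & (r - 1)         # reduction mod r is a mask
    v = (u + m * n) >> t                # division by r is a shift
    return v - n if v >= n else v

def mont_exp_keep(c, e, n, t, r, n_prime):
    # Square-and-multiply exponentiation that, like Algorithm 4 (MME1),
    # omits the final MM(theta, 1) and returns c^e still in the
    # Montgomery domain.
    c_bar = (c * r) % n                 # map the base into the domain
    theta = r % n                       # Montgomery representation of 1
    for bit in bin(e)[2:]:
        theta = mont_mul(theta, theta, n, t, r, n_prime)
        if bit == '1':
            theta = mont_mul(theta, c_bar, n, t, r, n_prime)
    return theta

n = (1 << 61) - 1                       # any odd modulus works here
t, r, n_prime = mont_setup(n)
ciphertexts = [123456789, 987654321, 192837465]
acc = r % n                             # aggregate in the Montgomery domain
for c in ciphertexts:
    acc = mont_mul(acc, mont_exp_keep(c, 65537, n, t, r, n_prime),
                   n, t, r, n_prime)
acc = mont_mul(acc, 1, n, t, r, n_prime)   # single conversion at the end
assert acc == math.prod(pow(c, 65537, n) for c in ciphertexts) % n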
The signature keys of EA, CC, and TA are generated in the same way and are denoted by (pk_DS,EA, sk_DS,EA), (pk_DS,CC, sk_DS,CC), and (pk_DS,TA, sk_DS,TA), respectively. The signature algorithm makes use of a hash function H: {0, 1}* → Z_{q_1}. Step 3: Via a secure channel, TA sends the encryption public key pk_AHE and the signature private key sk_DS,i to SD_i (1 ≤ i ≤ m). It sends the encryption public key pk_AHE, the signature public keys pk_DS,i, and the signature private key sk_DS,EA to EA. It sends the decryption private key sk_AHE and the signature public key pk_DS,EA to CC. After key generation, TA distributes the encryption keys and signing keys. The key distribution procedure is as follows: Step 1: TA writes the signature key pair (pk_DS,i, sk_DS,i) into the sensor device SD_i (1 ≤ i ≤ m) before deploying the sensor device. TA writes the signature public keys pk_DS,i and the signature key pair (pk_DS,EA, sk_DS,EA) into EA before deploying the edge device. TA sends the signature public keys pk_DS,TA and pk_DS,EA to CC through the Internet and gives the signature key pair (pk_DS,CC, sk_DS,CC) to CC by a USB key device. Step 2: Using the private key sk_DS,TA, TA computes a digital signature on sk_AHE, denoted by σ_AHE. Using CC's public key pk_DS,CC, TA generates a digital envelope on the Paillier private key sk_AHE and the signature σ_AHE, denoted by σ_DE. TA sends σ_DE to CC through the Internet. Step 3: After receiving the digital envelope σ_DE, CC decrypts it and gets the Paillier private key sk_AHE and the signature σ_AHE. Using the public key pk_DS,TA, CC verifies the signature. If the verification passes, the Paillier private key is accepted. Data Report Generation. Each sensor device SD_i (1 ≤ i ≤ m) performs the following steps to generate a data report: (i) Generate: SD_i first generates the raw data vector d_i = (d_i,1, d_i,2, . . . , d_i,j, . . . , d_i,l). Then SD_i calculates the corresponding quadratic data vector. [Algorithm 4: The modified Montgomery exponentiation 1 (MME1). Input: c, e, n, with n′ from the extended Euclidean algorithm; returns θ̄.] [Algorithm 5: The modified Montgomery exponentiation 2 (MME2). Input: c̄, e, n; returns θ̄.] Statistical Aggregation. After receiving the data reports (c_i, c²_i, c_i,wei, σ_i, TS, ID_i) from the m sensor devices, EA performs the following steps to generate the statistical aggregation report. The signature is computed as σ_EA = (sig_EA mod q_1, r_x,EA mod q_1). (iv) Send: EA sends the data report (c_EA, σ_EA, TS, ID_EA) to CC. Statistical Report Decryption. After receiving the data report (c_EA, σ_EA, TS, ID_EA) from EA, CC performs the following steps to decrypt the statistical aggregation report: Security Analysis In this section, we analyze the security properties of the proposed EPPSA scheme, following the security requirements and design goals given in Section 2. Proof 2. In the EPPSA scheme, the data are encrypted by the Paillier cryptosystem. According to Lemma 1, the result of encryption c_i,j = g^{d_i,j} r^n mod n² in the Montgomery domain is a valid format of the ciphertext. Meanwhile, the private key sk_AHE is transmitted to CC in a digital envelope. The sk_AHE is encrypted under CC's public key pk_DS,CC so that the adversary α cannot get it.
Since the Paillier cryptosystem is provably secure against chosen-plaintext attack under the decisional composite residuosity assumption, α cannot guess the plaintext with non-negligible probability without the private key sk_AHE. Similarly, α cannot obtain statistics by eavesdropping on the transmission between EA and CC. In a word, the data and statistics in transmission are semantically secure. Resistance to Replay Attack. Theorem 2. If a replayed data report is transmitted to EA, or a replayed statistical report to CC, it can be detected. Proof 3. If an adversary α replays the data report (c_re, σ_new, TS_new) to the aggregator EA, it needs to forge a new timestamp, denoted by TS_new. Since the timestamp is new, α has to forge a new signature σ_new over the replayed ciphertext c_re. The security of the ECDSA system is based on the computational intractability of the discrete logarithm problem (DLP). The signature key pair (pk_DS,i, sk_DS,i) is written to SD_i directly at system initialization. Thus, α cannot guess the correct signature of the replayed report with non-negligible probability without the private key sk_DS,i. Similarly, a replay attack on the statistical report to CC can be detected for the same reason. In a word, the EPPSA scheme is resistant to replay attacks. For data integrity, the receiver checks the signature σ_i by verifying the equation r′_x,i mod q_1 = r_x,i mod q_1. If c_i is manipulated, the hash value will be incorrect and the signature will not validate. Performance Evaluation and Comparison In this section, our scheme is evaluated in terms of computation costs and communication costs. The performance results are compared with the schemes proposed in references [28, 31, 33, 36]. Computation Cost. Assume that there are m sensor devices SD_i in the system and each of them reports an l-dimensional data vector, for both our EPPSA scheme and the schemes in [28, 31, 33]. For fairness of comparison, these schemes are assumed to use moduli with the same bit length. In our EPPSA scheme, the modified Montgomery exponentiations (Algorithms 4, 5, and 6) are used to keep the result of exponentiation in the Montgomery domain. That means the aggregation in EA only needs Montgomery multiplications. Let T_MM and T_OMM be the time costs of a Montgomery multiplication operation and an ordinary modular multiplication operation, respectively, and let the time cost of a Montgomery exponentiation be denoted by T_ME. In our proposed EPPSA scheme, benefiting from the modified Montgomery exponentiations, (m − 1) · T_MM is needed. In [28], the aggregation of each dimension is calculated in (m − 1) · T_OMM. In [31], the aggregation of each dimension is calculated in (m − 1) · T_OMM. In [33], the cost of aggregation is (m − 1) · T_MM + T_ME. A comparative summary of the computation cost for aggregating m SDs is listed in Table 1. To evaluate the performance, we execute the experiments on a laptop with Windows 10, an Intel Core i5-700U 2.50 GHz CPU and 16 GB RAM, and we utilize the OpenSSL library (OpenSSL 1.1.1h) to provide basic cryptographic primitives. For the evaluation of the EPPSA scheme, we set n to 512 and 1024 bits in the Paillier cryptosystem, so that n² is 1024 and 2048 bits. As the number of dimensions changes, we get the comparison of aggregation computation costs for 1024 bits in Figure 2(a) and for 2048 bits in Figure 2(b). In summary, Figure 2 clearly shows that, compared with the schemes in [28, 31, 33], EPPSA has the smallest computation cost.
For example, compared with [28], the EPPSA scheme achieves a 62.5% aggregation performance improvement at 1024 bits. Table 1: Computation cost for aggregating m SDs. EPPSA: (m − 1) · T_MM; [28]: (m − 1) · T_OMM; [31]: (m − 1) · T_OMM; [33]: (m − 1) · T_MM + T_ME. Communication Cost. Among the previous edge-aided aggregation schemes, the scheme in [36] is the only one that offers statistical functions. Therefore, we compare the communication costs of the EPPSA scheme with the scheme in [36]. We consider the communication costs of the arithmetic mean and variance for fairness. For ease of presentation, we denote the bit length of the modulus by L. In [36], EA needs to transmit the aggregated ciphertexts of the summation and the counter to CC for the arithmetic mean, so the communication cost is 2L. In our EPPSA scheme, EA needs to send the aggregated ciphertext of the arithmetic mean to CC, so the communication cost is L. In [36], EA needs to transmit the aggregated ciphertexts of the summation, the quadratic summation, and the counter to CC for the variance, so the communication cost is 3L. In our EPPSA scheme, EA needs to send the aggregated ciphertexts of the arithmetic mean and the quadratic mean to CC, so the communication cost is 2L. Figure 3 shows the communication cost comparison of EPPSA and [36] for different bit lengths. The communication cost of the EPPSA scheme decreases by 50% for the arithmetic mean and 33% for the variance. Conclusion In this article, we present an efficient privacy-preserving statistical aggregation scheme for edge computing-enhanced WSNs. The EPPSA scheme adopts the Paillier encryption scheme and the ECDSA signature algorithm to guarantee data confidentiality, authentication, and data integrity. Compared with the existing multidimensional and multifunctional data aggregation schemes, the EPPSA scheme improves the efficiency of aggregation and decreases the communication load. Furthermore, the EPPSA scheme improves privacy protection by hiding the total number of devices in the data report. The EPPSA scheme can be applied in various WSN scenarios, such as smart factories, health care, and environmental monitoring. Data Availability The data used in the experiments will be available upon request.
6,861.4
2022-05-02T00:00:00.000
[ "Computer Science" ]
Automatic Acquisition of Corpus for Multimedia Applications Evaluations of tools (information retrieval systems, machine learning, speech recognition, machine translation, automatic acquisition of data, etc.) are organized annually through evaluation campaigns (TREC, ELRA, ESTER, IWSLT, etc.). Building an ad hoc evaluation corpus in the context of these evaluation campaigns is a complex task; today it is done manually and at a high cost. Indeed, such a corpus is very dedicated, answering an application need in a precise context, but automating its construction is a challenge whose solution would significantly help the organization of these campaigns. As a contribution to this challenge, we propose, in a context of multimedia information retrieval, an approach of multilevel extension of a small applicative corpus to a larger and more voluminous corpus, based on the detection of intersections between the two corpora in terms of lemmas having the same grammatical label. The aim is to obtain a list of appropriate terminology, for which we use several tools (internal and external to our laboratory) that we try to evaluate in order to keep consistency and coherence with the original corpus. Situation Evaluations of tools (information retrieval, machine learning, speech recognition, machine translation, automatic data acquisition systems, etc.) are organized annually through evaluation campaigns (TREC, ELRA, ESTER, IWSLT, etc.). These campaigns first provide participants with a configuration corpus allowing them to "optimize" the performance of each candidate system for the evaluation. The optimization of each tool is made according to the functionality requested by the evaluation campaign. For example, in TREC, initially whole documents to be retrieved were evaluated, then the most relevant portions of documents, or question-answering (Q/A) systems, etc. In a later stage of the campaign, a second, larger corpus is provided to the participants to allow them to make a final configuration and to make their system operational. Then a test set is provided, and participants return the results of their systems. To the test set (queries in the case of document retrieval, or questions in the case of Q/A systems) corresponds a set of deliverables. The test set and the set of deliverables constitute the evaluation corpus. This corpus is built manually, which means at a high cost. As an illustration, the INEX corpus of semi-structured XML documents stems from national cooperative projects running from 2002 until today. The most advanced studies, whose objective is to minimize the cost of building evaluation corpora, use semi-supervised methods and the RankBoost algorithm to exploit directly the results of the competing systems and propose the "best" results. Evaluation of tools requires heavy human resources and is currently performed on the basis of arbitrary corpora, which do not necessarily reflect the needs under operational conditions. In addition, for systems based on machine learning, operational uses require the constitution of ad hoc corpora, which are not yet available in the text domain as they are in the speech recognition domain.
In the context of the multimedia applications developed in our laboratory, one of the problems to solve in order to improve retrieval is taking the terminology (phrases, named entities, etc.) into account. Our objective is to enhance and maintain an existing base of terminology, initially built manually. Our technical choice involves the automatic acquisition of terminology using learning techniques, to solve problems of quantity (completeness) and quality. Interest and Objectives The goal is to evaluate systems for automatic acquisition of terminology, but in conditions close to operational ones. The target application is the multimedia search engine VSE ("Video Search Engine"), but the methods and algorithms developed in this work should be generic enough to be reused in other research themes. For cost reasons, the building of the corpus must be automated as much as possible. For this we have a corpus of queries and text data from the VSE application. These data are woefully insufficient from a statistical point of view and cannot serve as a training corpus. The problem to solve is to extend these small but available corpora into a voluminous corpus built from Web data (collection methods, cleaning of noisy corpora, errors, usage errors, mixed languages, sublanguages, SMS, forums, "binary" categorization of the collected corpus, etc.). The result of this work is to provide a common training basis for all the tools to be evaluated. Possible Approach To solve this last problem, which consists of building an evaluation corpus (which can be simplified into a list of terminology) and defining objective evaluation criteria (recall, precision, etc.), we can organize the solution as follows: Expression of the Application Need: it consists of analysing the initial corpus (to be extended) so that it is characterized by computable criteria. For example, in VSE and for the TV subtitles, it evaluates their quality and statistical relevance. For the purposes of enlargement to a textual press corpus, it determines the adequacy of the VSE/press themes. If necessary, it defines the profile of the press media to crawl. The result of this work is to build a (dynamic) "uniform" corpus allowing the various tools to be tested on the same data. Extension of Data: it consists firstly of proposing one or more methods for extending the application corpus into a larger and more voluminous corpus while ensuring the adequacy between the two corpora. Then it consists of preparing a software platform for the acquisition of terminology, establishing a list of tools (those existing in our laboratory and/or other free tools available on the Web). We deploy the available tools on the established corpus and provide updated terminology data based on the evolution of the corpora. The result of this work is to build collections of data from the different tools in order to make evaluations. Final Evaluation: it consists of a comparative evaluation of the terminologies obtained by the different tools used and by different measures (recall, precision, etc.). Then, it would be interesting to establish reviews and recommendations about the conditions for using this type of tool with this type of training data and this type of application. We detail in what follows the first two steps. Expression of the Application Need We have, in the context of the VSE project, documents indexed with the query logs and other types of news corpora. We want to build from each of these resources a learning corpus adapted to a need.
For this, we must analyse our need qualitatively and quantitatively and characterize the application content as precisely as possible. In the following, we work on the example of the 2424actu application corpus. We begin with a detailed description of this corpus. 2424actu is a news search engine offering multimedia content (video, audio, image, etc.), grouping and merging multiple news sources (TV, radio, press, etc.). 2424actu offers a broad panorama of the news. Through a simple interface, it provides broadcasts, news stories and articles, which are automatically grouped and classified by theme (international, politics, society, economy, sports, culture). 2424actu is publicly accessible at the URL http://www.2424actu.fr/. It presents considerable interest because it represents what users actually search for. In the context of this project, we have a certain number of accesses to the news provided by several producers under some form of cooperation or exchange of services. Today, the number of producers is 48. Fig. 1 shows the most important ones in terms of production (Fig. 1: Principal Producers of News). Note that about 75% of this information is in French. The rest (25%) is in English, because some news items are provided by English and American producers. We are initially interested in processing news in French. We have an XML file containing news from 20/06/2009 until today. This news is accompanied by descriptive metadata (identifier, date, producer, etc.). In what follows, we focus on analysing the content of this XML file. In particular, we describe the process of its building, its size and its content. News items evolve every day and are updated regularly. There are two ways to recover news: we receive information in the NewsML form, and we collect RSS news from some producers. In both cases, if an item is new, it is registered under a new identifier in the database. If it is simply an update of old news, detected by an identifier already existing in the database (news_id), it is registered under the same identifier and the modification date is updated. The total corpus size is 87 megabytes; the size of the French text, composed of all the summaries of the items and located in the <summary> tags, is 16 megabytes. The average size of the content tags is 27.5 words. Naturally, some <summary> tags are empty because the news is in video, image or audio form. The size of the longest summary is 5025 bytes. The size of the French text found in the <news_title> tags is 3.2 megabytes. All text is generally clean and well written and does not contain errors or misspellings. The most frequent words are regular words or connecting words (de, la, en, à, etc.). The content of the 2424actu corpus evolves by recovering the rest of the text through the web address found in the <URL> tag. Two types of evolution can be distinguished: a static evolution and a dynamic evolution. Static Evolution: the static evolution consists of fetching the rest of the text that accompanies the provided information, for example the text that completes the first <summary> tag. Dynamic Evolution: the dynamic evolution is linked to the content of the <modification_date> tag. News may evolve with follow-ups or updates, such as the maritime disaster in the USA. If the content of this tag changes, we can save the update of the news. Unfortunately, at present there is no incremental backup of the news.
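As a minimal illustration of the kind of descriptive statistics quoted above (average summary length, longest summary, most frequent words), the following Python sketch processes such an XML feed; the <summary> tag name follows the text, but the real schema and encoding details are assumptions.

```python
import xml.etree.ElementTree as ET
from collections import Counter

def summary_stats(xml_path):
    """Rough corpus statistics for a 2424actu-style feed: average summary
    length in words, longest summary in bytes, and the most frequent tokens."""
    root = ET.parse(xml_path).getroot()
    summaries = [s.text for s in root.iter("summary") if s.text]
    words = Counter(w.lower() for s in summaries for w in s.split())
    avg_len = sum(len(s.split()) for s in summaries) / max(len(summaries), 1)
    longest = max((len(s.encode("utf-8")) for s in summaries), default=0)
    return avg_len, longest, words.most_common(10)
```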
The formalisation of the need consists of normalising it and finding characterisable criteria. For example, if in a corpus of queries we observe that multi-word sequences exist, then these terms can be the formalisation of a computable criterion. We formalise our need by an informational measure that gives an idea of the lexical complexity, the syntactic complexity, and the richness of the vocabulary of the application corpus. More generically, we want to obtain a larger and more voluminous corpus under a number of consistency and coherence constraints. We therefore detail the concept of corpus and its characteristics, noting the principal notions linked to the corpus concept. Nelson, F. W. (1982) defines a corpus as: "A collection of texts assumed to be representative of a given language, dialect, or other subset of a language to be used for linguistic analysis". A corpus is considered as a set of documents (texts, images, videos, etc.) gathered with a precise purpose. Corpora are used in several domains: literary studies, linguistics, science, etc. In literature, a corpus is a collection of texts with a common purpose. In science, corpora are essential and valuable tools in natural language processing. They allow extracting a set of information useful for statistical treatment. From an informative viewpoint, they allow building frequency tables of n-grams. From a methodological point of view, they provide the objectivity necessary for scientific validation in natural language processing: information is not merely empirical; it is verified on the corpus. It is therefore possible to use corpora to generate and verify scientific hypotheses. Several characteristics are important to create a well-formed corpus, such as: The Size: a corpus must obviously reach a critical size to allow reliable statistical treatment. It is impossible to extract reliable information from a corpus that is too small. The Language of the Corpus: a well-formed corpus must necessarily cover a single language, and a single variation of that language. For example, there are subtle differences between the French of France and the French spoken in Belgium. It is therefore not possible to derive reliable conclusions from a Franco-Belgian corpus, either for the French of France or for the French of Belgium. The Evolution of the Texts over Time: time plays an important role in the evolution of language: the French spoken today is not the French spoken 200 years ago or, more subtly, the French spoken 10 years ago, especially because of neologisms. This phenomenon must be taken into account for all languages. A corpus should not contain texts written at too widely separated times. The Register: different registers must not be mixed; a corpus built from scientific texts cannot be used to extract information about popularised texts, and a corpus mixing scientific and popularised texts will not allow any conclusion on either of these two registers. In this work, we try to build a larger and more voluminous corpus by extending a smaller corpus while respecting the previous characteristics. Extension of Corpus We did not find previous work on the extension of an application corpus with the objective of extraction and enrichment of terminology. There is other work in different contexts, such as the JRC team of the European Commission, which works on the calculation of similarity between multilingual documents using EUROVOC as a pivot, as in the research by Steinberger, R.,
Pouliquen, et al. (2002). In this context, a hybrid approach based on a combination of TTR, likelihood, Okapi, and distance calculation methods has shown its effectiveness. More details about Okapi can be found in Robertson, S. E. et al. (1994). Lafourcade, M. et al. (2009) made a web-based game to collect terms by building a lexical network. Their approach consists in having people take part in a collective project by offering them a playful application accessible on the web. Starting from an already existing base of terms, the players themselves build the lexical network by supplying associations, which are validated only by an agreeing pair of users. These typed relations are weighted according to the number of pairs of users who provide them. This game now has about 180,000 relations. Here, we are interested in finding a solution for extending an existing corpus to a larger and more voluminous corpus while preserving adequacy. We face several problems. The First Problem consists in the matching between two structured or unstructured documents d and D, where d is a document from the application corpus and D is a larger document from the automatically acquired corpus: a problem of different logical structures for the pair (d, D), a problem of likelihood of their logical structures, a problem of likelihood of their content, etc. The Second Problem is algorithmic; it consists in knowing how to cross the n (thousands of) documents of the application corpus with the m (millions of) documents D of the extensive corpus. The Third Problem is knowing how to clean the extended corpus effectively in order to optimise the adequacy function with the application corpus. Process of Extension Two cases arise for expanding the existing application corpus: an extension from the same corpus preserving some correspondence (alignment at the logical structure level), and an extension from another, larger corpus without any correspondence information. The first case consists of enriching the application corpus with the results of queries formed from the corpus itself; with these queries, we search for an equivalent but much larger corpus (from a structural point of view). For example, we identify for each part of the application corpus the most frequent terms (compound words, multi-words, etc.) and pass them as queries. In the case of our 2424actu corpus, we can get the rest of the data through the URL provided with the news. In the case of an extension of the application corpus to a larger and more voluminous corpus with an equivalent structure, we propose an approach which measures the variation of vocabulary, based on a modification of the TTR (Type-Token Ratio) measure. We call this method LTTR (Local Type-Token Ratio); it is calculated locally for each text portion of the document d. In the case of the 2424actu corpus, we calculate the LTTR for the content of each <summary> tag. In the case of an extension of the application corpus from a foreign corpus without an equivalent structure, we propose a multilevel extension approach. In practice, we have a corpus from the Web (2 GB) and we try to find an approach to get adequate text from the big corpus and add it to the original corpus. We now present a generic approach which can be used in the two cases (with and without correspondence). Note that here, in our case, the two corpora consist respectively of one document each, d and D. Extension Level 1: as shown in Fig.
2 (Extension Level 1), we start with an operation of lemmatisation and grammatical tagging, after which the two documents lose their logical structure, while keeping the history and traces of the origin of each term. Then, we detect all the lemmas, noted types(d, D), which form the intersection of d and D (intersection on lemma and grammatical tag). For each lemma_i of the set types(d, D), we search for all the terms, noted Terms1(D), among the terms T1, T2, ..., Tm of document D that converge to lemma_i. We add the new terms introduced from document D to the document d, and we define the level-1 extension coefficient as the ratio of the number of terms of D (Terms1(D)) that converge to the intersection lemmas to the number of terms of d that converge to the same intersection lemmas. Extension Level 2: as shown in Fig. 3 (Extension Level 2), we apply the same steps as in extension level 1 and we pass to an extension level 2 by fetching the texts that correspond to each term T1, T2, ..., Tm of document D. Theoretically, from all the texts found we produce a set of terms Terms2(D), larger and more voluminous than Terms1(D), which satisfies the level-1 extension constraint. Similarly, we define the level-2 extension coefficient as the ratio of the number of terms Terms2(D) to the number of terms of d that converge towards the same intersection lemmas. Extension Level 3: we can get a larger extension of the initial corpus by making two types of semantic rapprochement, as shown in Fig. 4 (Extension Level 3). Direct rapprochement: it consists of regrouping some terms of D that are not in the intersection set types(d, D) with certain terms of d. Indirect rapprochement: it consists of regrouping some terms of D that are not in the intersection set types(d, D) with certain terms of D which are in types(d, D). Then we continue the same process as in extension level 2. This gives us a larger and more voluminous set of terms, noted Terms3(D), for which we define in the same way a level-3 extension coefficient. Note that the rapprochement can be at the semantic level, for example regrouping the term "grippe A" with the term "grippe Z", or "réchauffement du climat" with "réchauffement climatique". This multilevel extension approach can also be applied to the first case, the extension with correspondence. Evaluation and Matching Approach We evaluate the result of the acquisition of terminologies by classical measures such as recall and precision, which can be based on a measurement of terminology distance (Nazarenko et al., 2009). We define formula (1), which allows us to calculate the terminology distance Dtermino from the number of adequate terms k in the candidate corpus; formula (2) allows calculating Dtermino as a function of k. In this work, we use formula (1). That means we start by counting the adequate terms, those having the same lemma and the same grammatical label; failing that, we only require the same lemmatisation. We then calculate an F-measure that takes into account the lengths of the two corpora (reference and candidate). The following are two examples of adequate terms: internationaux and morte are respectively adequate to internationales and mort. Table 1: F-measures and % of extension. We can read Table 1 as follows: for a reference corpus composed of 1000 terms (equivalent to 747 terms "T" after cleaning, 368 different terms "T ≠" and 322 different lemmas "≠ L") and a candidate corpus of 1000 terms (equivalent to 766 terms "T" after cleaning, 446 different terms "T ≠" and 415 different lemmas "≠ L"), we obtain an F-measure equal to 0.53 and a percentage of discovery of new terms equal to 3% for level 1 and 68% for level 2.
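To make the level-1 extension concrete, here is a minimal Python sketch; it assumes the tokens of d and D are already lemmatised and tagged (the (term, lemma, POS) triples and toy tags below are hypothetical, not from the paper's data).

```python
def extension_level_1(d_tokens, D_tokens):
    """Level-1 extension sketch: d_tokens and D_tokens are lists of
    (term, lemma, pos) triples produced by any lemmatiser/tagger.
    Returns the new terms drawn from D and the level-1 extension coefficient."""
    # types(d, D): (lemma, tag) pairs common to both corpora
    shared = ({(lem, pos) for _, lem, pos in d_tokens}
              & {(lem, pos) for _, lem, pos in D_tokens})

    # terms of each corpus converging to a shared lemma
    terms_d = {t for t, lem, pos in d_tokens if (lem, pos) in shared}
    terms_D = {t for t, lem, pos in D_tokens if (lem, pos) in shared}

    new_terms = terms_D - terms_d                  # terms added to d
    coeff = len(terms_D) / max(len(terms_d), 1)    # level-1 extension coefficient
    return new_terms, coeff

# toy example mirroring the adequate-term pairs cited in the text
d = [("internationales", "international", "ADJ"), ("mort", "mort", "NOUN")]
D = [("internationaux", "international", "ADJ"), ("morte", "mort", "NOUN"),
     ("climat", "climat", "NOUN")]
new_terms, coeff = extension_level_1(d, D)
# new_terms == {"internationaux", "morte"}; "climat" is excluded because its
# lemma does not appear in d; coeff == 1.0
```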
The experiments consist of finding a maximum F-measure, and therefore a maximum percentage of extension, for each pair of corpora, trying several splitting parameters (100, 1000, 10000, etc.) depending on the size of the corpus. Fig. 6 shows the different F-measure values of Table 1 for sizes equal to 1000, 10 000 and 100 000 terms. As expected, we obtain the same experimental F-measure values for both levels of extension; thus, the two curves overlap. Fig. 7 shows an example of the number of new terms added to the reference corpus: for example, we extended the 368 different terms of the reference corpus to 379 different terms using extension level 1 and to 618 different terms using extension level 2. Conclusion We have presented the problem of extending an application corpus to a larger and more voluminous corpus, which is the first step towards acquiring a list of appropriate terminology. We have analysed this problem by showing that there are two different cases of extension: an extension with or without logical structure correspondence. We have proposed a generic method that can be applied in both cases. It consists of a multilevel extension of a small application corpus from a larger corpus, based on calculating the intersection of the two corpora on terms having the same lemmatisation and grammatical tagging. Hence, having good lemmatisation and grammatical tagging results is very important. We experimented with the two levels of extension and obtained good extension results, which will allow us in the future to experiment with larger data. The advantage of this approach is that it is multilevel and multilingual: indeed, it can be applied to languages other than French. It provides a way to configure the quality and/or quantity of the new data by adjusting the parameters used to split the initial corpus (into small or large blocks).
5,091.8
2011-02-05T00:00:00.000
[ "Computer Science" ]
Ballistic impact on concrete slabs: An experimental and numerical study The ballistic perforation resistance of 50 mm thick concrete slabs impacted by 20 mm diameter ogive-nose steel projectiles is investigated experimentally and numerically. Three commercially produced concretes with nominal unconfined compressive strengths of 35, 75 and 110 MPa were used to cast material test specimens and slabs. After curing, ballistic impact tests were carried out to determine the ballistic limit curve and velocity for each slab quality. Material tests instrumented with digital image correlation (DIC) were conducted along the ballistic impact tests. DIC measurements were used to establish engineering stress-strain curves for calibration of a modified version of the Holmquist-Johnson-Cook concrete model. Finite element simulations of the impact tests gave good conservative predictions. Introduction Concrete is the most widely used construction material in the world and is often used in protective structures exposed to extreme loads such as explosions or ballistic impact. Studies on ballistic impact typically concern either deep penetration or perforation. For the former, where a high level of confinement is present, the compressive strength of the concrete has been shown to be the primary variable [1]. For the latter, the process is more complex and the concrete experiences multiple damage and failure mechanisms, including spalling, tunnelling, and scabbing [2]. The various damage and failure mechanisms are discussed in [3]. Experiments on reinforced concrete slabs with unconfined compressive strengths fcyl of 48 and 140 MPa have for instance shown that tripling fcyl increased the ballistic limit by less than 20 % [4]. Slabs with strengths ranging from fcyl = 25 MPa to 200 MPa produced similar results [5] -a roughly linear increase in ballistic limit of 50% when increasing fcyl by a factor of 8. While fcyl is important in many respects, the tensile strength seems to be a more adequate material parameter for the perforation resistance of thin concrete slabs [6]. Analytical and/or empirical approaches have been used extensively for modelling problems of this kind, but with the rapid increase in computational power, more and more studies are now numerical (see e.g. Ref. [7]). Several concrete models are available [8], each with their strengths and weaknesses. A model may be easy to calibrate but lack the sophistication to capture all relevant phenomena accurately (or vice versa). Polanco-Loria et al. [3] proposed some modifications to the Holmquist-Johnson-Cook (HJC) model [9] as an engineering compromise. The modified HJC (or MHJC) model was only validated against some data from the literature [3]. This study aims to reveal the accuracy of the MHJC model in predicting the ballistic perforation resistance of concrete slabs impacted by ogive-nose steel projectiles using standard material tests to calibrate the constitutive relation. Material tests Cylinders with diameter 100 mm and height 200 mm and 100 mm cubes were used. Three different concretes were tested, dubbed C35, C75 and C110, where the number refers to nominal unconfined compressive strength in MPa. All specimens were painted with a speckled pattern (see Fig. 1(a)) for use with DIC. The engineering strain e was estimated by tracking the relative vertical displacement Δℓ of five opposing subset pairs throughout the test and dividing by the initial distance ℓ0 between the subsets as illustrated in Fig. 1(a). 
The engineering stress was obtained by dividing the force (synchronised with the images) by the initial loaded area. All resulting curves are plotted in Fig. 1, where each curve is the average of three tests. Table 1 summarises the results, where ρ0 is the density, fcyl the cylinder compressive strength, fcube the cube compressive strength, and ft the tensile splitting strength. Component tests 3.1 Setup Custom-made wooden moulds were used to cast slabs with nominal dimensions 625 mm × 625 mm × 50 mm (see Fig. 2(a) and (b)). 12 slabs were cast for each concrete quality and numbered C35-1 to C35-12 (and correspondingly for C75 and C110). Plastic tubes were inserted through 12 equally spaced cut-outs for bolt holes. A reinforcement bar with diameter 8 mm looping on the outside of the bolt holes was added to provide a lifting point and to restrain shrinkage. Thus, the central part of the slabs to be impacted by projectiles was plain concrete. In the test rig, the slabs were fixed with four massive clamps, although holes for a bolted connection were available (for subsequent work). While the boundary conditions are important for distributed loads like shock waves [11], they are thought to be of minor importance for ballistic impact if the in-plane distance between individual shots is large enough [12]. Here, only one shot in the centre of the slab is allowed before it is replaced. The 196 g projectiles were manufactured from Arne tool steel and heat treated to 53 Rockwell C after machining (geometry shown in Fig. 2(c)). The ogive-nose projectiles with caliber-radius-head 3 were mounted in a sabot and launched by a compressed air gun [10]. Two synchronised Phantom v2511 high-speed cameras filmed the tests. Accurate optical measurements based on the high-speed images gave the initial velocity vi and the residual velocity vr. This also gives the velocity-time curve of the projectile when it is not obscured. Results A typical perforated slab is shown in Fig. 2(d), exemplified by slab C75-11. The sliced cross-section shows that there is hardly any tunnelling region between the two craters. The tests further show that the spalling craters on the entry side were smaller than the scabbing craters on the exit side. This was consistent for all slabs and all concretes and in line with previous work [2]. The crater sizes and mass loss after impact increased with increasing concrete strength, as also observed in [13]. The average mass loss for all tests was 0.5 kg, 0.6 kg and 1.0 kg for the C35, C75 and C110 concretes, respectively. Higher strength means more brittle in this case, and thus more fragmentation. This result may also be seen in Fig. 3, where the dust cloud and the number of debris appear largest for the C110 concrete at roughly equal vi. Based on the measured vi and vr, ballistic limit curves were estimated. This was done by least squares fits of the model constants in the generalised Recht-Ipson model [14], vr = a(vi^p − vbl^p)^(1/p), where a and p are considered empirical constants. Due to scatter in the experimental data at impact velocities close to the ballistic limit, especially for the C110 concrete, it was hard to determine the ballistic limit velocities vbl exactly. Thus, vbl was taken as the highest impact velocity not giving complete perforation (only scabbing), while a and p were fitted to the experimental data. The results of the least squares fits are given in Table 1 for all three concretes, and the curves are plotted in Fig. 4(a)-(c) along with the experimental data points.
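As an illustration of this fitting procedure, the sketch below performs a least squares fit of a and p with vbl fixed to the highest non-perforating impact velocity, as described above; the (vi, vr) pairs are hypothetical stand-ins, not the measured data.

```python
import numpy as np
from scipy.optimize import curve_fit

def recht_ipson(vi, a, p, vbl):
    """Generalised Recht-Ipson model: vr = a*(vi^p - vbl^p)^(1/p) for vi > vbl."""
    return a * np.maximum(vi**p - vbl**p, 0.0)**(1.0 / p)

# hypothetical (vi, vr) pairs in m/s, standing in for one slab series
vi = np.array([150.0, 170.0, 200.0, 250.0, 300.0])
vr = np.array([ 40.0,  75.0, 120.0, 190.0, 250.0])

vbl = 140.0  # fixed a priori, as in the text
popt, _ = curve_fit(lambda v, a, p: recht_ipson(v, a, p, vbl), vi, vr, p0=[1.0, 2.0])
print("a = %.3f, p = %.3f" % tuple(popt))
```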
A modest linear increase in ballistic limit velocity with increasing compressive strength is observed. Fig. 4(d) shows a plot of the ballistic limit velocity versus the cube compressive strength fcube. Here, a tripling of the compressive strength increased the ballistic limit velocity by approximately 27%. One possible reason for this result is that the tensile splitting strength ft increased by only 55% from C35 to C110. The pitch angle was less than 1.5° in all tests. Constitutive relation and calibration All numerical simulations presented in this study use the MHJC model to describe the constitutive behaviour of the concretes. A complete description of the model is given in [3]. The cylinder tests in Fig. 1(b) were simulated using an axisymmetric model in the explicit finite element solver LS-DYNA [15]. The model shown in Fig. 5(a) has rigid top and bottom plates and a concrete cylinder with 4-node elements (50 × 200 elements, 1.0 mm element size, element type 15, hourglass type 6) with 2D automatic surface-to-surface contact and a coefficient of friction of 0.4. The top plate was given a constant velocity vp, and the force and the engineering strain e were logged exactly as in the experiments. Time scaling with the strain rate sensitivity parameter C = 0 was used, and the energy ratio remained close to unity for all simulations. The material constants for density, unconfined compressive strength and tensile strength were taken from Table 1. The reference strain rate was 10−5 s−1 based on the material tests, while the crush limit Pcrush was assumed equal to fcyl/3 and the volumetric crushing strain equal to fcyl/(3K1), where K1 is the initial bulk modulus. The pressure-volumetric strain constants K1, K2, K3, Plock and µlock were taken from Ref. [9]. The shear modulus G, the pressure hardening constants B and N, the damage constants α and β, and the minimum strain to failure εfp,min were all obtained by inverse modelling of the cylinder compression tests and are given in Table 2. This approach has been successful in previous work [16]. The resulting engineering stress-strain curves match the experiments well and are plotted as dashed lines in Fig. 1(b). Setup of impact simulations To minimise mesh dependency, the same element type and size were used for the impact simulations in a computational cell approach [17]. The elements were set to erode when the equivalent strain exceeded 1.0 or the time step dropped below 1/1000th of the initial value, thus preventing mesh distortion. The concretes were represented by the calibrated MHJC model with C = 0.04, and the projectile by an elastic-plastic model (*MAT_003 in LS-DYNA) with linear isotropic hardening. The projectile had a Young's modulus of 204000 MPa, a Poisson ratio of 0.33, a yield stress of 1900 MPa and a tangent modulus of 15000 MPa. The setup is sketched in Fig. 5(b). Results Results in terms of ballistic limit curves are shown in Fig. 4(a), (b) and (c) for C35, C75 and C110, respectively. The relative differences between the concrete qualities were maintained in the simulation results, and the ballistic limit velocities were determined by a least squares fitting of a, p and vbl in the Recht-Ipson model. The values are listed in Table 1. The results are, as seen, reasonably accurate given the rather uncomplicated model. The qualitative results are also satisfying, as illustrated by Fig. 5(c), where the volumetric strain is shown as fringe plots at various instants during the penetration process.
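For readers unfamiliar with the LS-DYNA keyword format, the projectile material model named above (*MAT_003, i.e. *MAT_PLASTIC_KINEMATIC) might be specified along the following lines; the density value and unit system (tonne-mm-s, so stresses in MPa) are our assumptions, not values given in the paper:

```
*MAT_PLASTIC_KINEMATIC
$ *MAT_003 with BETA = 1.0 for linear isotropic hardening; RO is an assumed
$ tool-steel density in tonne/mm^3, the remaining values follow the text
$     MID        RO         E        PR      SIGY      ETAN      BETA
        1  7.85E-09  204000.0      0.33    1900.0   15000.0       1.0
```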
A parametric study was carried out on the C110 concrete, where fcyl and ft were doubled and halved, changing only one parameter at a time. Doubling fcyl from 112.5 MPa to 225 MPa increased the ballistic limit by a meagre 1.6%, while halving it reduced vbl by 11.6% from 140.3 m/s. The corresponding numbers for doubling and halving ft were 20.8% and 21.6%, respectively. This result suggests that the tensile strength is far more influential on the perforation resistance of thin concrete slabs than the compressive strength. The modest increase in the experimental results in Fig. 4(d) supports this. Further, omitting strain rate sensitivity in the material by setting C = 0.0 reduced vbl by 10.9%, while using C = 0.08 increased vbl by 3.4%. Finally, increasing the friction coefficient from 0.0 to 0.4 increased the ballistic limit by 14.0%. Concluding remarks The material test results were in line with expectations for the C35, C75 and C110 concretes. The DIC measurements enabled reasonable material parameters to be obtained, as illustrated by the experimental and numerical comparison in Fig. 1(b). The ballistic limit of the slabs increased almost linearly with concrete strength, where tripling fcyl increased vbl by only 27%. For these thin slabs, the tensile strength appears to be the most dominant material parameter. This was also suggested by the finite element simulations, where ft was found to be by far the most influential of the parameters studied. In general, the simulations gave good conservative results. The MHJC model is easy to calibrate and implement, and it provides a good and reliable alternative to more advanced concrete models for which accurate material parameter sets may be difficult to obtain.
2,786
2021-01-01T00:00:00.000
[ "Materials Science", "Engineering" ]
Detecting and Mitigating Smart Insider Jamming Attacks in MANETs Using a Reputation-Based Coalition Game Security in mobile ad hoc networks (MANETs) is challenging due to the ability of adversaries to gather the intelligence necessary to launch insider jamming attacks. The solutions that prevent external attacks on MANETs are not applicable for defense against insider jamming attacks. There is a need for a formal framework to characterize the information required by adversaries to launch insider jamming attacks. In this paper, we propose a novel reputation-based coalition game in MANETs to detect and mitigate insider jamming attacks. Since there is no centralized controller in MANETs, the nodes rely heavily on the availability of transmission rates and a reputation for each individual node in the coalition to detect the presence of an internal jamming node. The nodes form a stable grand coalition in order to make a strategic security defense decision, maintain the grand coalition based on node reputation, and exclude any malicious node based on its reputation value. Simulation results show that our approach provides a framework to quantify the information needed by adversaries to launch insider attacks. The proposed approach will improve a MANET's defense against insider attacks, while also reducing the incorrect classification of legitimate nodes as jammers. Introduction Mobile ad hoc networks (MANETs) are self-organized networks which require distributed, reliable, and flexible operation providing interdependency and rational decision-making. MANETs are vulnerable to jamming attacks due to the shared nature of the wireless medium. There are two main categories of jamming attacks: external jamming and internal/insider jamming. Several research efforts [1-4] have focused on external jamming attacks. This type of attack is launched by a foreign adversary that is not privy to network secrets such as the network's cryptographic credentials and the transmission capabilities of the individual nodes of the network. These types of attacks can be relatively easy to counter through cryptography-based techniques, spread spectrum methods such as Frequency-Hopping Spread Spectrum (FHSS) [5] and Direct Sequence Spread Spectrum (DSSS) [5, 6], Ultra-wideband (UWB) technology [7], antenna polarization, and directional transmission methods [8]. Smart insider attacks, on the other hand, are much more sophisticated in nature because they are launched from a compromised node that belongs to the network. The attacker exploits the knowledge of network secrets it has gathered to adaptively target critical network functions. This makes it very hard for legitimate nodes to securely restore a new communication channel.
Owing to the manner of interaction between nodes in a network, game theory has been used extensively to solve interesting research problems facing MANETs. These games are broadly categorized as cooperative and noncooperative games. A cooperative game is played between nodes that have a mutual relationship with each other, while a noncooperative game is played between nodes that do not coexist mutually. There have been several efforts to use noncooperative games to model security in wireless networks [9-12]. To the best of our knowledge, little work has been done on using cooperative or coalitional games to ensure security in MANETs. A coalition game is a form of cooperative game that is formed when more than two nodes agree to form an alliance in order to achieve a better probability of success. The cooperation of nodes in the network depends on each individual node's experience and the previous history records it has gathered. Individual nodes by themselves tend to be weak against attacks but can achieve a higher level of security when they form a coalition. In this paper, we present a reputation-based coalition game-theoretic approach to detect and mitigate insider attacks on MANETs. In our approach, nodes implement a reputation mechanism based on transmission rates. The reputation of a node is the collection of ratings maintained by other nodes about the given node [13]. The reputation mechanism can be first-hand or second-hand depending on whether the reputation values are collected directly or relayed. The choice of first-hand versus second-hand impacts the reliability of the reputation values. We adopt first-hand reputation because nodes within the transmission range are best equipped to provide reliable information [13, 14]. Different from existing works [15, 16], which made use of an alibi-based protocol and a self-healing protocol, respectively, to either detect or recover from a jamming attack, we make use of a reputation-based coalition game to ensure security in the network. Those approaches are too generalized and might not be implementable for the mobile ad hoc network for which our system is modeled. Our model instead follows a game-theoretic approach by (1) implementing a coalition formation algorithm, (2) maintaining the coalition via a reputation mechanism, (3) identifying the insider attackers by setting up a reputation threshold, and (4) excluding the attackers from the coalition by rerouting their paths and randomly changing the channel of transmission. This method is fully distributed and does not rely on any trusted central entity to operate at optimal performance. The rest of this paper is organized as follows: in Section 2, we present relevant works that are closely related to our approach; in Section 3, we present the network and jammer model; Section 4 describes the proposed defense model; in Section 5, we provide the simulation and results of the model; and finally, in Section 6, we conclude and present future work. Related Work Previous research has devoted great effort to security in mobile ad hoc networks. There is a plethora of works that have used techniques other than game theory to prevent security attacks in MANETs. Li et al. [16] designed a protocol to protect self-healing wireless networks from insider jamming attacks. The protocol is not applicable to MANETs, as the pairwise key design in the protocol works best in a centralized system. Some other works have focused only on node selfishness and not on intentional malicious acts or jamming attacks.
Marti et al. [17] categorized nodes according to a dynamically measured behavior; a watchdog mechanism identifies the misbehaving nodes and a path-rater mechanism helps the routing protocols avoid these nodes. The research showed that the two mechanisms make it possible to maintain the total throughput of the network at an acceptable level, even in the presence of a high number of misbehaving nodes. However, the operation of the watchdog is based on an assumption which is not always true, the promiscuous mode of the wireless interface. Also, the selfishness of a node does not seem to be punished by either the watchdog or the path-rater mechanism; in other words, the misbehaving nodes still enjoy the possibility of generating and receiving traffic. Also, Michiardi and Molva [18] used a reputation mechanism they termed CORE, an acronym for collaborative reputation mechanism. They suggested a generic mechanism based on reputation to enforce cooperation among the nodes of a MANET to prevent selfish behavior. The only challenge with this mechanism is that it only works for node selfishness, whereas there is a greater risk of denial of service under malicious node attacks. Furthermore, a reputation mechanism was also used by Cheng and Friedman in P2P networks, where the notion of Sybil-proofness was formalized using a static graph formulation of reputation [19]. According to the authors, this model cannot be generalized because the reputation functions did not depend on the state of the network at previous time steps as well as the current state of the network. Buchegger and Le Boudec [20] described the use of a self-policing mechanism based on reputation to enable mobile ad hoc networks to keep functioning despite the presence of misbehaving nodes. They explained how second-hand information is used while mitigating contamination by spurious ratings. Their survey pointed out that a reputation system is effective as long as the number of misbehaving nodes is not too large. Other works have used noncooperative games to model security scenarios as well as the corresponding defense strategies to such attacks [13, 21-25]. Most of these works focused on two-player games where all legitimate nodes are modeled as a single node and all attacker nodes are modeled as a single node too; this is only valid for centralized networks, whereas MANETs are self-organized networks. Thamilarasu and Sridhar formulated jamming as a two-player, noncooperative game to analyze the interaction between attackers and monitoring nodes in the network. The mixed-strategy Nash equilibrium was computed and the optimal attack and detection strategies were derived [22]. Researchers have also used cooperative game theory in the form of coalition games to ensure security in MANETs. The majority of these works have focused only on node selfishness and not on intentional malicious acts or jamming attacks. Yu and Liu presented a joint analysis of cooperation stimulation and security in autonomous mobile ad hoc networks under a game-theoretic framework [26]. Their results, however, show that the proposed strategies only stimulate cooperation among selfish nodes in autonomous mobile ad hoc networks under noise and attacks, which does not properly address intentional malicious attacks. Han and Poor [27] used a coalition game in which boundary nodes use cooperative transmission to help backbone nodes in the middle of the network, and in return the backbone nodes are willing to forward the boundary nodes' packets. Saghezchi et al.
[28] proposed a credit scheme based on a coalitional game model; the authors provided credit to the cooperative nodes proportional to the core solution of the game, which distributes the common utility among the players in a way that satisfies all players. Mathur et al. [29] studied the stability of the grand coalition when users in a wireless network are allowed to cooperate while maximizing their own rates, which serve as their utility function. Our approach is unique in that (1) each node in the MANET is defined by a security characteristic function for the coalition formation, (2) each node uses a reputation mechanism to accurately detect insider jamming attacks, (3) each node maintains a history of transmission rates for the nodes in the coalition, and (4) the combination of transmission rates and reputation values for nodes in the coalition is used to detect an insider attacker and exclude it from the coalition. Network and Jammer Model 3.1. Network Model. We consider a model of the system as a reputation-based coalition game with imperfect information. The game is repeated at each iteration until the nodes arrive at their destination. The model consists of n nodes (1, 2, . . . , n) and up to (n/2) − 1 attackers (0, 1, . . . , (n/2) − 1), so that the number of attackers does not exceed the number of legitimate nodes. An attacker is able to join the coalition because it acts like a regular node at the beginning, which permits it to become a member of the coalition. On joining the coalition, a new node has a reputation value of zero and starts cooperating by sharing its transmission rate with all the nodes in its range of transmission. Each node builds and maintains two tables. The tables contain an accumulated history of the transmission rates and the reputation of all neighboring nodes, based on their willingness to share their transmission rate with their neighbors. The transmission rate is broadcast periodically during a time interval T. This transmission rate is then stored according to our AFAT algorithm [30]. Nodes that share their transmission rates with neighboring nodes receive a positive reputation from those neighbors, which update their reputation tables for the node accordingly. Nodes that refuse to share their transmission rate receive a negative reputation. A node whose negative reputation value exceeds a preset threshold is tagged as an attacker and excluded from the coalition. Coalition Formation Model. A coalition game is an ordered pair ⟨N, v⟩, where N = {1, 2, . . . , n} is the set of players and v is the characteristic function. Any subset of N is called a coalition, and the set involving all players is called the grand coalition. The characteristic function v: 2^N → R assigns to any coalition S ⊂ N a real number v(S), which is called the worth of coalition S. By convention, v(∅) = 0, where ∅ denotes the empty set [31]. Let n ≥ 2 denote the number of players in the game, numbered from 1 to n, and let N denote the set of players, N = {1, 2,
Coalition Formation Model. A coalition game is an ordered pair ⟨N, V⟩, where N = {1, 2, ..., n} is the set of players and V is the characteristic function. Any subset of N is called a coalition, and the coalition containing all players is called the grand coalition. The characteristic function V : 2^N → R assigns to any coalition S ⊆ N a real number V(S), called the worth of the coalition S. By convention, V(∅) = 0, where ∅ denotes the empty set [31].

Let n ≥ 2 denote the number of players in the game, numbered from 1 to n, and let N = {1, 2, ..., n} denote the set of players. A coalition S is a subset of N, S ⊆ N, and the set of all coalitions is denoted by 2^N. The set N itself is also a coalition, the grand coalition. For example, with two players (n = 2) there are four coalitions: ∅, {1}, {2}, {1, 2}. With three players there are eight coalitions: ∅, {1}, {2}, {3}, {1, 2}, {1, 3}, {2, 3}, {1, 2, 3}. For n players, the set of coalitions 2^N has 2^n elements. A game with transferable utility (TU) involves a universal currency that can be freely exchanged among the players; a game lacking such a currency is called a game with nontransferable utility (NTU) [31]. In addition, G = (N, V) is called a superadditive game if, for all S, T ⊆ N with S ∩ T = ∅,

V(S ∪ T) ≥ V(S) + V(T).

A payoff vector x = (x_1, ..., x_n) is called feasible if it distributes the worth of the grand coalition among the players completely [31]; that is,

∑_{i∈N} x_i = V(N).

A payoff vector is called individually rational if it offers each player at least as much as that player can obtain individually [31]; that is,

x_i ≥ V({i}) for every i ∈ N.

The coalition formation process starts with nodes forming small disjoint coalitions with neighboring nodes in their transmission range; these coalitions gradually grow until the grand coalition is formed through the testimony of intersecting nodes. The final outcome of the formation process is a stable grand coalition comprising all nodes in the network: all the smaller coalitions are merged through intersecting nodes, which belong to more than one coalition at a time. Our coalition formation process depends on the transmission rate table maintained as in the previous work [30].

In [30], an accumulative feedback adaptation transmission (AFAT) rate scheme was proposed. The design follows a decentralized approach in which neighboring nodes communicate their transmission rates to one another, and this knowledge lets a node adjust its own rate accordingly. In other words, AFAT maximizes the nodes' transmission rates so as to meet application-specific bandwidth requirements. Under AFAT, each node adjusts its transmission rate based on the history of its neighbors' rates; the rates are kept in a transmission rate table that is updated periodically [30].

Our network model combines this with a characteristic function and the coalition formation model described in [31, 32]. Our security characteristic function consists of three parameters that capture node mobility in the MANET: the support rate, determined by the neighbors in the node's transmission range; the maximum transmission rate in the coalition, provided by AFAT; and the maximal admitting probability, which serves as the cooperation probability of the whole coalition.
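As a small illustration of these definitions (not from the paper), the following Python sketch enumerates the 2^n coalitions for a toy characteristic function and checks the superadditivity condition; the example worth function is an arbitrary assumption chosen so that the check passes.

```python
from itertools import chain, combinations

def coalitions(players):
    """All 2^n subsets of the player set, from the empty set to the grand coalition."""
    s = list(players)
    return [frozenset(c) for c in chain.from_iterable(
        combinations(s, r) for r in range(len(s) + 1))]

def is_superadditive(players, v):
    """Check V(S | T) >= V(S) + V(T) for all disjoint coalitions S, T."""
    subsets = coalitions(players)
    return all(v(s | t) >= v(s) + v(t)
               for s in subsets for t in subsets if not (s & t))

# Toy worth function (assumed for illustration): quadratic in coalition size,
# so pooling players never hurts and superadditivity holds.
players = {1, 2, 3}
v = lambda s: len(s) ** 2
assert v(frozenset()) == 0           # convention V(empty set) = 0
print(len(coalitions(players)))      # 8 coalitions for n = 3
print(is_superadditive(players, v))  # True
```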
Nodes can testify for each other, so a coalition has integrity that individual nodes lack; any node that does not belong to the coalition is not considered trustworthy. Algorithm 1 shows the coalition formation steps.

Algorithm 1: Coalition formation.
(1) Start for all nodes
(2) Begin the 1st round of formation
(3) Pick a node with the highest V(S)
(4) Broadcast the forming option to the neighboring nodes in the network
(5) if V(S) is beyond the threshold and ≥ 2 nodes match then
(6) Form a small coalition
(7) else
(8) Do not pick any node
(9) end if
(10) Update the transmission rate table in AFAT [30] with the rates of the newest members
(11) Begin the 2nd round
(12) Pick a node with the highest security value V(S)
(13) if the first option has been matched successfully then
(14) Pick the next best option available
(15) else
(16) Broadcast the forming option to the neighbors again
(17) end if
(18) if there is an intersecting node (a node that belongs to more than one small coalition) then
(19) Merge the small coalitions
(20) else
(21) Rebroadcast the forming option to the network
(22) end if
(23) end

There are n nodes in the network. For any coalition S ∈ 2^N, the number of nodes in it is |S|, and any node in the coalition has |S| - 1 nodes that can testify for it. Let N_t(i) be the set of nodes in node i's transmission range. Therefore, at time slot t, the support rate A_i(t) of a node i is the number of coalition members in N_t(i) that can testify for it, at most |S| - 1.

The transmission rate R_S(t) of coalition S at time t is also part of the security function. The nodes' sharing of their transmission rates is key to admittance into a small coalition: to form a coalition with any node, one needs to know the maximum available transmission rate. The maximum transmission rate ensures that a node matches the best nodes, in terms of transmission rate, before settling for the next best option, as seen in the coalition formation algorithm; it is given by the largest rate reported in the coalition, R_S(t) = max_{i∈S} r_i(t). The larger the transmission rate of a node, the more likely it is to quickly find a match. These transmission rates are stored according to AFAT [30].

The third parameter of the characteristic function is the maximal admitting probability. Nodes in the network have different admitting probabilities, so it is necessary to pick the highest one as a reference for the whole coalition: every node in the coalition was admitted with a certain probability, and the maximal admitting probability, P_S = max_{i∈S} p_i, serves as the cooperation probability of the whole coalition. Hence, a larger coalition size yields a higher cooperation probability.

Coalition formation is a dynamic process, performed iteratively until all nodes belong to a coalition. No matter where a node is located in the network, it still has neighbors that can testify about it. From the coalition formation algorithm, at each round of formation every coalition member tries to find a partner, so the convergence time of formation is short, which speeds up coalition formation. The grand coalition is eventually formed when two conditions are met: an intersecting node is present to aid the merging, and V(N) is at least as large as the individual payoff of every disjoint smaller coalition.
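A minimal executable sketch of the formation rounds in Algorithm 1 follows; the security values, threshold, and neighbor relation are assumed toy inputs, since the paper delegates them to the characteristic function, and the greedy pairing below is one plausible reading of steps (3)-(9).

```python
def form_coalitions(nodes, v, neighbors, threshold):
    """One pass of Algorithm 1: greedy pairing, then merging via intersecting nodes.

    nodes:     iterable of node ids
    v:         security value of a candidate coalition (set -> float)
    neighbors: node id -> set of node ids in transmission range
    threshold: minimum v() required to form a small coalition
    """
    coalitions = []
    unassigned = set(nodes)
    # Round 1: each node tries to form a small coalition with its best neighbor.
    for i in sorted(unassigned, key=lambda x: -v({x})):
        if i not in unassigned:
            continue
        candidates = [j for j in neighbors[i] if j in unassigned and j != i]
        best = max(candidates, key=lambda j: v({i, j}), default=None)
        if best is not None and v({i, best}) >= threshold:
            coalitions.append({i, best})
            unassigned -= {i, best}
    # Round 2: repeatedly merge coalitions linked by an intersecting neighborhood.
    merged = True
    while merged:
        merged = False
        for a in coalitions:
            for b in coalitions:
                if a is not b and any(neighbors[x] & b for x in a):
                    a |= b
                    coalitions.remove(b)
                    merged = True
                    break
            if merged:
                break
    return coalitions
```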
A coalition approach is needed to detect insider attacks. As stated earlier, we are interested in a single coalition, the grand coalition shown in Figure 1, to which all nodes in the network should belong.

From the coalition formation algorithm, at each round of formation every coalition member tries to find a partner; the coalition therefore forms quickly, meaning the convergence time is short, and the size keeps growing until the grand coalition is reached or all misbehaving nodes are identified. It is worth explaining how large the coalition can be. The grand coalition is eventually formed by merging the smaller coalitions that share members: these intersecting nodes are the condition for merging the smaller coalitions into a grand coalition. The maximal admitting probability is the cooperation probability of the whole coalition; the larger the coalition, the more tolerant and robust it is, and therefore the higher its cooperation probability. A node has no fixed limit on the number of neighbors in its range, because all nodes are moving (as the name mobile ad hoc network implies); there is no fixed set of neighbors for a particular node. In our model the grand coalition can be of any size from three nodes upward, as seen in the simulation section, whose three cases consist of different numbers of legitimate and malicious nodes.

For any node i ∈ S with |S| > 1, its security payoff share is its equal share of the coalition's worth, x_i(S) = V(S)/|S|. The coalition game has a nonempty core: a core exists when, for each coalition, the sum of the payoff shares of its members is at least the worth of that coalition, and from (3) and (4) we can deduce that this condition holds here, so the game satisfies the core concept for coalition games [31].

Admitting a Node into the Grand Coalition. A new node is accepted into the grand coalition based on its ranking in a smaller coalition. To be admitted, the node should build up a good reputation while it is part of the small coalition. A new node can still be denied access to the grand coalition even when it was part of a smaller coalition: this happens when the node is temporarily out of range of the intersecting node at the time its smaller coalition merges into the grand coalition. In essence, then, the new node is not totally new to some nodes in the coalition. This process continues as long as there are intersecting nodes to testify about new nodes, which makes the grand coalition grow and, as stated earlier, provides more robust security in the network.

Incorporating these three parameters, we write the characteristic function by weighting each parameter. The proposed characteristic function is

V(S) = α A_S(t) + β P_S + γ R_S(t),

where α, β, and γ are weight parameters with α + β + γ = 1. These weight parameters provide variability in the characteristic function of the nodes. Because of the mobility in our model, it is important to keep track of the neighbors of a node at a given time; α weighs the support rate parameter, which accounts for a node's number of neighbors. Our assumption is that nodes move slowly, so a node's neighbors cannot change rapidly.
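A small Python sketch of this weighted security characteristic function follows; the weighted-sum form and the example weights are assumptions made for illustration (the extraction lost the paper's exact display), with the three parameters computed as described above.

```python
def security_value(coalition, rates, admit_prob, neighbors,
                   alpha=0.4, beta=0.3, gamma=0.3):
    """Weighted security characteristic function V(S) (assumed weighted-sum form).

    coalition:  set of node ids S
    rates:      node id -> current transmission rate (from AFAT tables)
    admit_prob: node id -> admitting probability
    neighbors:  node id -> set of node ids in transmission range
    """
    assert abs(alpha + beta + gamma - 1.0) < 1e-9  # weights sum to one
    if len(coalition) < 2:
        return 0.0  # convention: singletons and the empty set have no worth
    # Support rate: coalition members in range who can testify, averaged over S.
    support = sum(len(neighbors[i] & coalition - {i}) for i in coalition) / len(coalition)
    # Maximum transmission rate in the coalition (AFAT).
    max_rate = max(rates[i] for i in coalition)
    # Maximal admitting probability = cooperation probability of the coalition.
    coop_prob = max(admit_prob[i] for i in coalition)
    return alpha * support + beta * coop_prob + gamma * max_rate
```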
The weight β scales the maximal admitting probability. The value assigned to β depends on the size of the coalition; if the coalition is very large (say, about 100 nodes), it can be appropriate to make β larger than the other weights.

The transmission rate is affected by two major factors: the propagation environment and the degree of congestion. Depending on these two factors, we assign the weight γ to the maximum transmission rate. The three main parameters affecting the payoff are therefore the support rate, the cooperation probability, and the transmission rates of the nodes, weighted according to the dynamics of those variables. If the coalition refuses to admit some nodes, those nodes did not meet the requirements for joining the coalition, regardless of whether they are malicious or not.

Network Assumptions. We assume n mobile nodes with m attackers, where m < n/2 (i.e., the number of attackers does not exceed the number of legitimate nodes). We present our work under the following assumptions:

(i) Nodes cannot easily generate identities that could be exploited to launch a Sybil attack; hence, we do not consider Sybil attacks in this paper.
(ii) All players (nodes) are rational; i.e., they always choose the strategy that benefits them most.
(iii) Individual nodes have weak security and jointly obtain higher security by joining a coalition.
(iv) There is no hierarchy, leader-follower structure, or centralized mechanism in the system.
(v) The goal of the game is to form a stable grand coalition; any node unable to join the grand coalition is designated as malicious.
(vi) Nodes move slowly, because fast movement causes frequent changes in a node's neighbors, which can adversely affect node reputations.
(vii) A node's continued membership in the grand coalition depends on its reputation value.
Jammer Model. Liao et al. classified attacks on wireless ad hoc networks as palpable or subtle: palpable attacks have a conspicuous impact on network functions, with intolerable effects on users, whereas subtle attacks cause damage that is invisible or harder to perceive. According to their classification, palpable attacks include jamming, traffic manipulation, blackhole, and flooding attacks, while subtle attacks include eavesdropping, traffic monitoring, grayhole, wormhole, and Sybil attacks [33].

The jammer starts out as a member of a smaller coalition and has therefore earned a good reputation from its neighboring nodes. Recall that the grand coalition is formed only when there is an intersecting node between the smaller coalitions (i.e., a node that belongs to more than one coalition, as explained in the coalition formation process); the intersecting node serves as a referee for the other nodes. The attacker, having met all the criteria to join the coalition, starts out as an eavesdropper: it passively monitors the network and even shares its transmission rate with all neighbors within its transmission range in the coalition. At this stage the attacker still takes part in crucial network tasks such as routing and packet forwarding, and thereby earns a good reputation. After gathering information about the channels its neighbors transmit on, the attacker stops sharing its own transmission rate, and from that point its reputation decreases at every time slot.

The jammer then launches its palpable attack by intentionally sending a high-powered interference signal on the channel carrying the most traffic, attempting to disrupt communication. The jammer is thus an intelligent jammer that has acted as an "undercover agent" in the coalition. It initiates its attack as soon as it has enough information in its history table; the key requirement is that it has gathered the transmission rates shared by the other nodes within its transmission range. It also monitors communication in the coalition and initially participates in network functions before launching its attack. The aim of jamming a selected channel is to disable that channel, thereby causing a jamming attack against all the nodes in the coalition. The complexity of the attack lies in the fact that the jammers' movement may hinder the coalition's detection capability. A jamming attack differs from normal interference or noise in that the jammer sends a high-powered signal to disrupt communication on a selected channel about which it has gathered enough information.
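The jammer's behavior can be summarized as a three-state machine: eavesdrop, then go silent, then jam. The sketch below is an assumed formalization of the description above; the state names and the `enough_info` trigger are illustrative, not the paper's notation.

```python
from enum import Enum

class JammerState(Enum):
    EAVESDROP = 1   # cooperate, share rates, build reputation
    SILENT = 2      # stop sharing; reputation starts to decay
    JAMMING = 3     # high-powered interference on the busiest channel

class InsiderJammer:
    def __init__(self, info_needed=10):
        self.state = JammerState.EAVESDROP
        self.rate_history = {}          # channel -> observed rates
        self.info_needed = info_needed  # assumed threshold for "enough information"

    def observe(self, channel, rate):
        self.rate_history.setdefault(channel, []).append(rate)

    def enough_info(self):
        return sum(len(v) for v in self.rate_history.values()) >= self.info_needed

    def step(self):
        """Advance the state machine; returns the channel to jam, if any."""
        if self.state == JammerState.EAVESDROP and self.enough_info():
            self.state = JammerState.SILENT        # stops broadcasting its rate
        elif self.state == JammerState.SILENT:
            self.state = JammerState.JAMMING
        if self.state == JammerState.JAMMING and self.rate_history:
            # Target the channel with the most observed traffic.
            return max(self.rate_history, key=lambda c: len(self.rate_history[c]))
        return None
```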
Figure 2 shows the presence of two jammers in a coalition of ten nodes. The jammers first became part of two smaller coalitions, which then merged into a grand coalition; the node marked in yellow is the intersecting node for both coalitions. The first jammer has three legitimate nodes in its transmission range and is capable of jamming the channels on which they broadcast their transmission rates; the second jammer has two legitimate nodes in its transmission range. This scenario shows that there can be more than one jammer; our simulation results later show how these malicious nodes are excluded from the coalition.

Maintaining the Coalition through Reputation

Here we present a maintenance method that uses node reputation to track the full history of each node's cooperation as it broadcasts its transmission rate. Reputation, in the context of cooperation, is defined as the goodness of a node as perceived by the other nodes in the network: a higher reputation value indicates that the node is cooperative, while a smaller value indicates misbehavior. A node's reputation is maintained by its neighbors, who monitor its behavior. We define good behavior as the timely broadcast of the transmission rate, and misbehavior as refusal to broadcast the transmission rate in a time slot. Every node monitors, and is in turn monitored by, its neighbors. A new node joining the network is neither trusted nor mistrusted but is assigned a neutral reputation. All reputations are valid for a time period t_v. There is an upper threshold Θ_u and a lower threshold Θ_l, with the neutral value Θ_n satisfying Θ_l < Θ_n < Θ_u.

Reputation is increased at a rate μ and decreased at a rate ν, where μ and ν are real numbers with μ, ν < 1. Both μ and ν must be chosen carefully. If μ is very large compared with ν, a node may cooperate, build a high reputation in a short time, and then refuse to share its transmission rate for a long time; it may also lack motivation to keep cooperating after reaching the upper threshold Θ_u because of the high rate of increase. On the other hand, if reputation decreases at a very low rate ν, a node can stay in the coalition long enough to exploit the network infrastructure, while decreasing it at a very high rate unjustly punishes a node that misbehaves only because of network congestion. It is possible to set μ equal to ν, which makes reputation increase and decrease at the same rate and ensures fairness.

Algorithm 2 shows the monitoring process and how reputation is increased or decreased depending on a node's behavior.

Algorithm 2: Reputation monitoring and update.
(1) Assign values for μ and ν
(2) Start for all nodes
(3) Node i checks its transmission rate table to assign a reputation value for neighbor j
(4) if j shares its transmission rate then
(5) increase the reputation value v_{i,j}(t) at rate μ
(6) else
(7) set v_{i,j}(t) = 0 once j's refusals exceed the network tolerance [34]
(8) end if
(9) if j refuses to share its transmission rate then
(10) decrease its reputation at rate ν
(11) end if

In Algorithm 2, n_{i,j} is the number of observations made by node i about node j's refusal to share its transmission rate; ψ is the tolerance of the network, i.e., the number of refusals tolerated per reputation value before a node's reputation is reduced; m_{i,j} is the number of observations made by node i of node j sharing its transmission rate within the time period t; and b is the broadcast factor of the network.
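A minimal sketch of this reputation update follows, assuming the symbols introduced above (μ, ν, the neutral value, and the thresholds); the numeric defaults are illustrative only.

```python
def update_reputation(rep, shared, mu=0.1, nu=0.1, upper=0.9):
    """One reputation update per Algorithm 2 (sketch with assumed defaults).

    rep:    current reputation value (new nodes start at the neutral value)
    shared: True if the neighbor broadcast its transmission rate this slot
    """
    if shared:
        rep = min(upper, rep + mu)   # reward cooperation, capped at upper threshold
    else:
        rep = max(0.0, rep - nu)     # penalize refusal to share
    return rep

def classify(rep, lower=0.2):
    """Below the lower threshold a node is treated as a jammer."""
    return "jammer" if rep < lower else "regular"

# Example: a node starts at the neutral value 0.5, cooperates for five slots,
# then goes silent for eight slots and crosses the lower threshold.
r = 0.5
for shared in [True] * 5 + [False] * 8:
    r = update_reputation(r, shared)
print(r, classify(r))  # ~0.1, "jammer"
```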
Jammer's Exclusion from the Coalition. The exclusion of a jammer from the coalition must account for false positives, which occur when a legitimate node is classified as a jammer because it is unable to share its transmission rate, either due to an impaired wireless environment or because it is out of range at a particular time slot, a situation that often arises in a mobile system where nodes are constantly in motion. We adopt reputation management to encourage trustworthy behavior from nodes in the coalition; reputation profiles are predictive of a node's actions, and the implementation of reputation systems is of particular importance in games where repeated interactions between multiple players are likely. Furthermore, because the attack consists of carefully monitoring the network and then turning against it once enough information has been gathered, the support of all nodes in the coalition is needed to properly exclude a malicious node.

As explained above, each node starts out with the same reputation value, which increases as the node cooperates and decreases when it refuses to cooperate; a node joining a small coalition starts with a reputation value of zero, and the reputation is updated according to (10). Nodes in the coalition maintain a monitor for observations; reputation records for first-hand information about the routing and forwarding behavior of other nodes and about their publishing of transmission rates; and a path manager, to adapt their behavior according to reputation and to act against misbehavior. The coalition excludes the jammer by following Algorithm 3.

Algorithm 3: Jammer exclusion from the coalition.
(1) A node j is tolerated until its reputation falls below Θ_l
(2) Classify misbehaving nodes: jammer if v_{i,j} < Θ_l; regular if v_{i,j} ≥ Θ_l
(3) if v_{i,j} falls below Θ_l then
(4) Node i sends an alarm message
(5) All nodes change their channel of transmission
(6) The accused node's payoff decreases owing to bad testimony
(7) Node j attempts to jam the communication channel with the best transmission rate
(8) The jammer records little or no success because of the proactive step taken by the coalition
(9) The neighbors of node j blacklist it and exclude it from their small coalition
(10) Nodes with reputation greater than Θ_l regroup
(11) else
(12) No alarm is sent and nodes continue their transmission
(13) end if
(14) Nodes with v_{i,j} greater than Θ_l are retained
(15) Continue transmission

The jammer exclusion algorithm aims to reduce the number of false positives. A malicious node that has been excluded from the coalition cannot be redeemed. Algorithm 3 provides the self-dependency and self-organization usually required in mobile ad hoc networks.
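The following sketch combines the classification and exclusion steps of Algorithm 3; the alarm and channel-switch actions are reduced to return values, and the threshold follows the assumptions above.

```python
def exclude_jammers(reputation, lower=0.2):
    """Sketch of Algorithm 3: classify nodes, raise an alarm, and exclude jammers.

    reputation: node id -> current reputation value
    Returns (excluded, retained, alarm_raised).
    """
    excluded = {v for v, r in reputation.items() if r < lower}   # classified as jammers
    retained = {v for v, r in reputation.items() if r >= lower}  # regular nodes regroup
    alarm_raised = bool(excluded)  # triggers a coordinated channel switch
    return excluded, retained, alarm_raised

reps = {"A": 0.8, "B": 0.75, "C": 0.1}   # C stopped sharing its rate
excluded, retained, alarm = exclude_jammers(reps)
print(excluded)  # {'C'} is blacklisted and cannot be readmitted
print(retained)  # {'A', 'B'} switch channels and continue transmitting
```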
Simulation Scenarios and Parameters. We implemented our approach in the NS2 simulator. The results cover three scenarios. The first scenario focuses on network throughput and delay and shows how coalition size affects these two metrics. The second shows how varying the reputation parameters affects the jammer's performance. The third focuses on varying the weights (α, β, γ) of the security characteristic function. The simulation parameters are shown in Table 1.

Scenario One: Network Throughput and Delay. For this scenario, we show the network throughput and the delay over time for three cases with different coalition sizes (5, 10, 20), in order to show that delay decreases significantly, within a very short time, as the coalition size increases.

The first case consists of five nodes (1, 2, 3, 4, 5): four legitimate nodes and one jammer. Figure 3 shows the throughput for this case. Owing to the small ratio of jammers to legitimate nodes, the jammer's throughput remains considerably high until about 3 ms, when it decreases sharply: by that point the jammer has been excluded from the coalition, and its throughput collapses.

Figure 4 shows the network delay for the first case while the coalition is under attack. The spike at the beginning marks the sharp increase in delay caused by the jamming attack; the delay improves as the coalition regroups after excluding the jammer.

In the second case there are ten nodes (1, 2, ..., 10): eight legitimate nodes and two jammers. Figure 5 displays the throughput of the jammers and of the network during the attack. The jammers' throughput drops sharply right after 1 ms, because a larger number of neighboring nodes can observe the jammers' activities. After 1.5 ms, the jammers, though excluded from the coalition, still try to jam the network, but their throughput is soon reduced to almost nothing.

Figure 6 shows the network delay for the second case under attack. Although there is still a spike at the beginning of the attack, the delay is much smaller, because the coalition has more nodes than in the previous case, providing a more robust defense against attacks. After some time the delay drops to zero, the ideal delay for any network.

In the third case, the network throughput and delay over time are again shown.
There are twenty nodes (1, 2, ..., 20): sixteen legitimate nodes and four attackers. From the results in Figure 7, the attackers' throughput drops after 0.5 ms. Evidently, increasing the number of nodes in the coalition improves the network throughput substantially. This is because our system relies on reputation values assigned by a node's neighbors, and the more neighbors a node has, the sooner an alert is raised when its reputation crosses the threshold. Figure 8 shows the network delay for the third case while the coalition, again, is under attack; the spike is several times smaller than in the second case, confirming that the more nodes there are in the coalition, the better the results.

Scenario Two: Reputation. This scenario shows how reputation affects both legitimate nodes and jammers, and how reputation can serve as the main criterion for classifying nodes and detecting jammers.

Figure 9 compares the reputations of a regular node and a jammer. A regular node retains its reputation value by sharing its transmission rate in every time slot, while the insider jammer's reputation decreases once it stops cooperating. The jammer's nearest neighbor computes the reputation at every time slot; the computation follows (9) and (10) in Algorithm 2.

Figure 10 shows the number of observations made by the nodes for cooperative, suspicious, and malicious nodes. A node is deemed suspicious if its reputation value is close to the lower reputation threshold. As seen in the figure, the number of observations increases with coalition size. This figure highlights the importance of the support rate parameter, as only the neighbors of a node can make genuine observations about its activities in the coalition.

Figure 11 shows the average payoff of the insider jammer after detection for different reputation decrease rates ν. As the figure shows, increasing ν increases the punishment of a jammer through a larger drop in its reputation score, which in turn reduces the jammer's average payoff; ν = 0.7 yields a large reduction in the jammer's payoff.

Scenario Three: Security Characteristic Function. This scenario shows the outputs for different values assigned to the weights of the security characteristic function and how these weights affect their respective parameters.

Figure 12 illustrates the network overhead when the support rate weight α is varied for different coalition sizes. Overhead is any combination of excess or indirect computation time, memory, bandwidth, or other resources required to attain a goal; our goal here is to have as many neighbors as possible testify for a node, so the network overhead should be kept as low as possible. This is achieved by choosing a suitable value for α; here, the network overhead changes only slightly as the number of neighbors increases.
Figure 13 illustrates the admitting probability for different coalition sizes and values of β. When β is increased, the probability of admitting a node into a coalition also increases, which tends to allow more malicious nodes to gain access to the coalition. This parameter therefore also needs to be chosen carefully; for optimal results it is better to set it to 0.3, although the value can be chosen according to the peculiarities of the network.

Figure 14 illustrates the degree of congestion when the transmission rate is varied for different coalition sizes and values of γ, the weight of the maximum transmission rate. With more nodes in the network, congestion tends to grow once they start communicating. As γ increases, the degree of congestion grows slowly, as seen in Figure 14; the highest degree of congestion occurs when γ is set to 0.8 for a coalition of 80 nodes.

Conclusion and Future Work

We have shown through simulation that a reputation-based coalitional game can help prevent insider attacks in a mobile ad hoc network. We presented a coalition formation algorithm and showed how nodes can be admitted into a coalition using a modified security characteristic function. We devised a mechanism that keeps track of the transmission rates and reputations of individual nodes in the network, and we showed how the jammer's actions can be prevented and how it is excluded from the coalition. In future work, we would like to show, through simulations and experiments, that this model scales to thousands of nodes, which would further demonstrate that the algorithm works best with many nodes in the coalition. We would also like to investigate cooperative attacks, in which excluded nodes form a coalition of their own with the aim of jamming communication in their previous coalition.

Notations
μ, ν: Factors responsible for increasing and reducing the reputation value (rates of increase and decrease)
Θ_l, Θ_n, Θ_u: Lower, neutral, and upper threshold values, respectively
ψ, b: Tolerance factor of the network and broadcast factor
x_i(S): Payoff share of node i
v_{i,j}(t): Reputation value of node j as maintained by node i
v*_{i,j}(t): Previous reputation value of node j as maintained by node i

Figure 1: A coalition of ten (10) nodes with no malicious node.
Figure 9: Reputation of both regular and jammer nodes over time.
Figure 10: Number of observations made for all nodes.
Figure 11: Average payoff of the insider jammer for different reputation decrease rates.
Figure 12: System overhead percentage with different numbers of neighboring nodes.
Figure 13: Admitting probability for different coalition sizes and beta values.
Figure 14: Degree of congestion when the transmission rate is varied.
Table 1: Parameters for simulation.
K-cut on paths and some trees

We define the (random) $k$-cut number of a rooted graph to model the difficulty of the destruction of a resilient network. The process works like the cut model of Meir and Moon, except that now a node must be cut $k$ times before it is destroyed. We prove the first-order terms of the expectation and variance of $\mathcal{X}_{n}$, the $k$-cut number of a path of length $n$. We also show that $\mathcal{X}_{n}$, after rescaling, converges in distribution to a limit $\mathcal{B}_{k}$, which has a complicated representation. The paper then briefly discusses the $k$-cut number of some trees and general graphs. We conclude with some analytic results that may be of independent interest.

We call the (random) total number of cuts needed to end this procedure the k-cut number and denote it by K(G_n). (Note that in traditional cutting models, nodes are removed as soon as they are cut once, i.e., k = 1; in our model, a node is only removed after being cut k times.) One can also define an edge version of this process: instead of cutting nodes, each time we choose an edge uniformly at random from the component containing the root and cut it once; if the edge has been cut k times, we remove it. The process stops when the root is isolated. We let K_e(G_n) denote the number of cuts needed for the process to end.

Throughout human history, secret societies of very different structures have existed [14]. Nonetheless, most such societies have a few leaders who are critical for the organization to function properly. The k-cut process can be seen as a simplified model of the destruction of a resilient secret network: the graph G_n represents the structure of the network, and the root node represents the leader. We assume that active members of the network are chosen uniformly at random to be investigated by the authorities, and that a member stops operating after having been investigated k times. The network breaks down completely when the root (leader) stops working. Thus the random number K(G_n) models how much effort it takes to destroy the network.

Remark 1. A model of similar flavor was introduced for the destruction of terrorist cells with tree-like structures [15]. In that model, each node of a tree is removed in one step, independently at random with some fixed probability, and the quantity studied is the probability that the root node (leader) is separated from all the leaves (operatives). This model has been studied for deterministic trees [6] and for conditioned Galton-Watson trees [10].

Our model can also be applied to botnets, i.e., malicious computer networks consisting of compromised machines, which are often used for spamming or attacks. The nodes of G_n represent the computers in a botnet, and the root represents the bot-master. The effectiveness of a botnet can be measured by the size of the component containing the root, which indicates the resources available to the bot-master [8]. Taking down a botnet means reducing the size of this root component as much as possible. If we assume that infected computers are targeted uniformly at random and that it takes at least k attempts to fix a computer, then the k-cut number measures how difficult it is to completely isolate the bot-master.

The case k = 1 with G_n a rooted tree has aroused great interest among mathematicians in the past few decades. The edge version of the one-cut model was first introduced by Meir and Moon [29] for the uniform random Cayley tree.
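As a quick illustration of the process (not part of the paper), here is a minimal Monte Carlo sketch of the k-cut number of a path: nodes are struck uniformly at random, a node dies at its k-th cut, and everything below a dead node is discarded.

```python
import random

def k_cut_path(n, k, rng=random):
    """Simulate the k-cut number X_n of a path with nodes 1..n (1 = root)."""
    cuts_received = [0] * (n + 1)  # cuts_received[j] = times node j has been cut
    alive_up_to = n                # nodes 1..alive_up_to are still attached to the root
    total_cuts = 0
    while alive_up_to > 0:
        j = rng.randint(1, alive_up_to)  # pick a surviving node uniformly at random
        cuts_received[j] += 1
        total_cuts += 1
        if cuts_received[j] == k:
            # Node j is destroyed together with everything below it on the path.
            alive_up_to = j - 1
    return total_cuts

rng = random.Random(1)
n, k, reps = 10_000, 2, 20
est = sum(k_cut_path(n, k, rng) for _ in range(reps)) / reps
print(est, n ** (1 - 1 / k))  # X_n grows like n^(1-1/k), up to a constant
```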
Janson [23, 24] noticed the equivalence between one-cuts and records in trees and studied them in binary trees and conditioned Galton-Watson trees. Later, Addario-Berry, Broutin, and Holmgren [1] gave a simpler proof of the limit distribution of one-cuts in conditioned Galton-Watson trees. For one-cuts in random recursive trees, see Meir and Moon [30], Iksanov and Möhle [22], and Drmota, Iksanov, Moehle, and Roesler [12]. For binary search trees and split trees, see Holmgren [19, 20].

The k-cut number of a tree

One of the most interesting cases is G_n = T_n, where T_n is a rooted tree with n nodes. There is an equivalent way to define K(T_n). Imagine that each node is given an alarm clock. At time zero, the alarm clock of node j is set to ring at time T_{1,j}, where (T_{i,j})_{i≥1,j≥1} are i.i.d. (independent and identically distributed) Exp(1) random variables. After the alarm clock of node j rings for the i-th time, we set it to ring again after a further time T_{i+1,j}. By the memoryless property of exponential random variables (see [13, pp. 134]), at any moment each surviving clock is equally likely to ring next. Thus, if we cut a node that is still in the tree when its alarm clock rings, and remove the node together with its descendants once it has been cut k times, we recover exactly the k-cut model.

How can we tell whether a node is still in the tree? When node j's alarm clock rings for the r-th time, for some r ≤ k, and no node above j has already rung k times, we say that j has become an r-record; and when a node becomes an r-record, it must still be in the tree. Thus, summing the number of r-records over r ∈ {1, ..., k}, we again obtain the k-cut number K(T_n). One node can be a 1-record, a 2-record, etc., at the same time, so it can be counted multiple times. Note that if a node is an r-record, then it is also a j-record for every j ∈ {1, ..., r - 1}.

To be more precise, we define K(T_n) as a function of (T_{i,j})_{i≥1,j≥1}. Let

G_{r,j} := T_{1,j} + T_{2,j} + ... + T_{r,j},   (1.1)

i.e., G_{r,j} is the moment when the alarm clock of node j rings for the r-th time. Then G_{r,j} has a gamma distribution with parameters (r, 1) (see [13, Thm. 2.1.12]), which we denote by Gamma(r); in other words, G_{r,j} has the density function

f_r(x) = x^{r-1} e^{-x} / Γ(r),  x ≥ 0,   (1.2)

where Γ(z) denotes the gamma function [11, 5.2.1]. Let

I_{r,j} := [G_{r,j} < min over ancestors i of j of G_{k,i}],   (1.3)

where [S] denotes the Iverson bracket, i.e., [S] = 1 if the statement S is true and [S] = 0 otherwise. In other words, I_{r,j} is the indicator random variable for node j being an r-record. Let

K_r(T_n) := ∑_{j=1}^{n} I_{r,j},  K(T_n) := ∑_{r=1}^{k} K_r(T_n).

Then K_r(T_n) is the number of r-records and K(T_n) is the total number of records.

The k-cut number of a path

Let P_n be a one-ary tree (a path) consisting of n nodes, labeled 1, ..., n from the root to the leaf. Let X_n := K(P_n) and X_n^r := K_r(P_n). In this paper we mainly consider X_n, and we let k ≥ 2 be a fixed integer. The first motivation for this choice is that, as shown in Section 5, P_n is the fastest graph to cut down (we make this statement precise in Lemma 12), so X_n provides a universal stochastic lower bound for K(G_n). Moreover, our results on X_n extend immediately to some trees of simple structure; see Section 5. Finally, as shown below, X_n generalizes the well-known record number of permutations and behaves very differently for k = 1, the usual cut model, and k ≥ 2, our extended model.

The name record comes from the classic definition of records in random permutations. Let σ_1, ..., σ_n be a uniform random permutation of {1, ..., n}. If σ_i < min_{1≤j<i} σ_j, then i is called a (strictly lower) record.
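The clock representation lends itself to direct computation: the sketch below counts r-records on a path by sampling the ring times G_{r,j} as partial sums of Exp(1) variables. This is a small assumed illustration, not the paper's code.

```python
import itertools
import random

def records_on_path(n, k, rng):
    """Count r-records, r = 1..k, on a path with nodes 1..n using Gamma clocks."""
    # ring[j][r-1] = G_{r,j+1}: time of the r-th ring of node j+1.
    ring = [list(itertools.accumulate(rng.expovariate(1.0) for _ in range(k)))
            for _ in range(n)]
    counts = [0] * k
    min_gk_above = float("inf")  # min of G_{k,i} over the nodes above the current one
    for j in range(n):
        for r in range(k):
            # Node j+1 is an (r+1)-record if it rings r+1 times before any
            # node above it has rung k times.
            if ring[j][r] < min_gk_above:
                counts[r] += 1
        min_gk_above = min(min_gk_above, ring[j][k - 1])
    return counts  # counts[r-1] = K_r(P_n); sum(counts) = X_n

rng = random.Random(7)
print(records_on_path(10_000, 2, rng))  # one-records dominate; K_2 is only ~log n
```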
Let K_n denote the number of records in σ_1, ..., σ_n. Let W_1, ..., W_n be i.i.d. random variables with a common continuous distribution. Since the relative order of W_1, ..., W_n also gives a uniform random permutation, we can equivalently define σ_i as the rank of W_i. As gamma distributions are continuous, we may in fact take W_i = G_{k,i}. Thus being a record in a uniform permutation is equivalent to being a k-record, and K_n has the same distribution as X_n^k; moreover, when k = 1, K_n has the same distribution as X_n.

Starting from Chandler's article [7] in 1952, the theory of records has been widely studied for its applications in statistics, computer science, and physics; for more recent surveys on this topic, see [3], [4], [32], and [5]. A well-known and surprising result on K_n, due to Rényi [33], is that the indicators (I_{k,j})_{1≤j≤n} are mutually independent. It follows easily that

(K_n - log n) / √(log n) → N(0, 1) in distribution,

where N(0, 1) denotes the standard normal distribution [13, pp. 111].

The following theorem shows that, for k ≥ 2, only the one-records actually matter.

Theorem 1. For r ∈ {1, ..., k - 1},

E X_n^r ∼ η_{k,r} n^{1-r/k},   (1.4)

where the constants η_{k,r} are defined by

η_{k,r} := (k!)^{r/k} Γ(r/k) / ((k - r) Γ(r)).   (1.5)

Therefore E X_n ∼ η_{k,1} n^{1-1/k}; in particular, when k = 2, E X_n ∼ √(2πn).

Theorem 2 gives the first-order term of the variance of X_n, which is of order n^{2-2/k}, with a leading constant expressed through constants γ_k and λ_k defined there. The previous two theorems imply that the correct rescaling parameter is n^{1-1/k}. However, unlike the record number of permutations, the limit distribution of X_n / n^{1-1/k} has a rather complicated representation.

Theorem 3. Let (U_j, E_j)_{j≥1} be mutually independent random variables with E_j distributed as Exp(1) and U_j distributed as Unif[0, 1]. Define the random variables (S_p)_{p≥1} and the k-cut distribution B_k as in (1.8), with the convention that an empty product equals one. Then

X_n / n^{1-1/k} → B_k in distribution.

Remark 2. S_p also admits an equivalent recursive definition.

Remark 3. It is easy to see that X^e_{n+1} := K_e(P_{n+1}) has the same distribution as X_n, by treating each edge of a path of length n + 1 as a node of a path of length n.

Outline

In Sections 2, 3, and 4 we prove Theorems 1, 2, and 3, respectively. In Section 5 we discuss some easy extensions of our results to other graphs, including binary trees, split trees, and Galton-Watson trees. Finally, in Section 6 we collect some auxiliary results used in our proofs.

The expectation

In this section we prove Theorem 1. Since E X_n^k ∼ log(n) is well known, we only prove (1.4) for r < k. Throughout this paper, the notation O(f(z)) denotes a function g(z) such that there exists a constant C > 0 with |g(z)| ≤ C f(z) for all z in a given set S; sometimes we do not explicitly state the set S when it is clear from the context.

Proof. Condition on G_{r,i+1} = x ≥ 0. For I_{r,i+1} = 1, i.e., for node i + 1 to be an r-record, we need G_{k,1}, ..., G_{k,i} all greater than G_{r,i+1}. Since these i random variables are i.i.d. Gamma(k), the probability of this event equals P{Gamma(k) > x}^i. Since G_{r,i+1} is distributed as Gamma(r), using its density function (1.2),

E I_{r,i+1} = ∫_0^∞ (x^{r-1} e^{-x} / Γ(r)) P{Gamma(k) > x}^i dx,

where the estimation of this integral comes from Lemma 16. Summing over i then gives (1.4), with η_{k,r} defined in (1.5).

The variance

In this section we prove Theorem 2. First we estimate E[I_{1,i+1} I_{1,j+1}] for j > i ≥ 0. For the moment we condition on G_{1,i+1} = x and G_{1,j+1} = y. For I_{1,i+1} I_{1,j+1} = 1 to happen, both node i + 1 and node j + 1 must be one-records. Recalling the definition of one-records in (1.3), this event can be expressed in terms of the independent gamma ring times of the nodes above i + 1 and j + 1, and since all these random variables are independent, the conditional probability factors into a product of gamma tail probabilities. It then follows from G_{1,i+1} and G_{1,j+1} having the Exp(1) distribution that the unconditional expectation is a double integral of this product against e^{-x-y} dx dy; the next lemmas estimate this integral.

Proof. In this case max(x, y) = y, and thus by (3.1) the inner probability simplifies. Let Poi(c) denote a Poisson distribution with mean c.
Using the connection between Poisson processes and gamma distributions [13, Section 3.6.3], the inner integral can be expressed through Poisson probabilities. It then follows from Lemma 16 that only the summand for ℓ = 0 contributes to the leading term.

Let x_1 = a^β / k! and y_1 = b^β / k!. Then the following estimate holds for all a ≥ 1 and b ≥ 1.

Proof. In the case of A_{1,i,j}, max(x, y) = x and y - x < 0. Thus, by (3.1) and Lemma 16, the integrand involves the upper incomplete gamma function Γ(ℓ, z) [11, 8.2.2]. Let S be the integration area of (3.3). We choose S_0, an appropriate subset of S, on which the integrand of (3.3) is well approximated by exp(-(a x^k + b y^k)/k!); we then show that the part outside S_0 can be absorbed into the error term of (3.2). More precisely, let x_0 = a^{-α} and y_0 = b^{-α} for a suitable constant α > 0; S_0 is the part of S on which x < x_0 and y < y_0. Note that the shape of S_0 differs according to whether a < b or a > b; see Figure 1. We split the integral into two parts. Throughout the proof of this lemma, the O(f(a, b)) notation applies to (a, b) ∈ [1, ∞)^2.

For A_{1,1}, i.e., the part inside S_0, we can approximate the integrand well using Lemma 15. Assume for now that a > b, i.e., case (i) in Figure 1. Since exp(-(a x^k + b y^k)/k!) is monotonically decreasing in both x and y, the contribution of S \ S_0 is controlled, where the last step uses ξ_k(a^2, b) = O(1) by Lemma 18 and the identity a x_0^k / k! = x_1. For case (ii) in Figure 1, we can further divide S \ S_0 into two parts, as depicted by the dotted line, and obtain the same bound. Together with (3.5), this shows that (3.6) is valid regardless of the order of a and b, and putting (3.4) and (3.6) together gives the main estimate.

Note that Γ(k, x)/Γ(k) is monotonically decreasing in x. Therefore, if x > x_0, then by Lemma 15 and the identity Γ(k, 0) = Γ(k), the contribution outside S_0 is exponentially small. Thus, when a > b (case (i) in Figure 1), the claimed bound follows, where we again use Lemma 15; a similar analysis for a < b shows that the bound holds regardless of the order of a and b. It then follows from (3.7) and (3.8), from Lemma 18, and from the fact that e^{-x_1/2} and e^{-y_1/2} are exponentially small, that the error terms are negligible.

We are now ready to finish the proof of Theorem 2. By Lemma 2, the variance can be written as a double sum, and this double sum can be approximated by an integral because ξ_k(a, b) is monotonically decreasing in both a and b (see Lemma 18); the last step follows from Lemma 19. Plugging (3.10) and (3.11) into (3.9) yields the claimed first-order term, where γ_k and λ_k are defined in Theorem 2.

Convergence to the k-cut distribution

By Theorem 1 and Markov's inequality [13, Thm. 1.6.4], X_n^r / n^{1-1/k} → 0 in probability for r ∈ {2, ..., k}. So instead of proving Theorem 3 for X_n, it suffices to prove it for X_n^1. The idea of the proof is to condition on the positions and values of the k-records and to study the distribution of the number of one-records between two consecutive k-records. We use (R_{n,j})_{j≥1} to denote the k-record values and (P_{n,j})_{j≥1} the positions of these k-records. To define them precisely, recall that G_{r,j} is the moment when the alarm clock of node j rings for the r-th time; see (1.1). Let R_{n,0} := 0 and P_{n,0} := n + 1, and define (R_{n,p}, P_{n,p}) recursively as the value and position of the p-th k-record, setting P_{n,p} = 1 and R_{n,p} = ∞ once no further k-record exists. Note that R_{n,1} is simply the minimum of n i.i.d. Gamma(k) random variables. Recall that [S] = 1 if S is true and [S] = 0 otherwise. According to (P_{n,j})_{j≥1}, we can split X_n^1 into the sum

X_n^1 = ∑_{p≥1} B_{n,p},   (4.2)

where B_{n,p} is the number of one-records with positions in (P_{n,p}, P_{n,p-1}); here I_{1,j} is the indicator of j being a one-record, and we use the fact that a k-record is also a one-record. Figure 2 gives an example of (B_{n,p})_{p≥1} for n = 12: it depicts the positions of the k-records and the one-records, and shows the values and the summation ranges of (B_{n,p})_{p≥1}.
Recall that T_{i,j} is the lapse of time between the (i-1)-th and the i-th ring of node j's alarm clock; see (1.1). Conditioning on (P_{n,j})_{j≥1} and (R_{n,j})_{j≥1}, for j ∈ (P_{n,p}, P_{n,p-1}) we must have ∑_{1≤i≤k} T_{i,j} < R_{n,p-1} (otherwise j would have become a k-record), and for j to be a one-record we need T_{1,j} < R_{n,p}. Since (T_{i,j})_{i≥1,j≥1} are i.i.d. Exp(1) before the conditioning, the conditional probability of being a one-record follows, where the last step uses Lemma 15. The conditional distribution of B_{n,p} is then binomial, where Bin(m, q) denotes a binomial distribution with parameters m and q. When R_{n,p-1} is small and P_{n,p-1} - P_{n,p} is large, this is roughly a simpler binomial.

Therefore, to simplify the computations, we first study a slightly modified model. We say a node j is an alt-one-record if I*_j = 1. As in (4.2), we can write X*_n as the sum of the corresponding block counts (B*_{n,p})_{p≥1}; conditioning on (R_{n,j}, P_{n,j})_{n≥1,j≥1}, B*_{n,p} has exactly the distribution in (4.3). Figure 3 gives an example of (B*_{n,p})_{p≥1} for n = 12, showing the positions of the alt-one-records as well as the values and summation ranges of (B*_{n,p})_{p≥1}. Note that the positions and numbers of one-records and alt-one-records are not necessarily the same.

Most of this section is devoted to showing the convergence of X*_n / n^{1-1/k} (Proposition 1). We argue at the end of this section that X_n^1 / n^{1-1/k} and X*_n / n^{1-1/k} converge to the same limit. We then show that if we choose p large enough, the leftovers, i.e., ∑_{j>p} B_j and ∑_{j>p} B*_{n,j} / n^{1-1/k}, are negligible.

Proof of Proposition 1

The first step in proving (4.7) is to construct a coupling by defining all the random variables under study on one probability space. We keep P_{n,0} = n + 1 and do not change the definition of any other random variables. Recall that R_{m,1} is the minimum of m independent Gamma(k) random variables; let M(m, t) have the distribution of R_{m,1} conditioned on R_{m,1} > t.

Lemma 4. The density function of H_m converges pointwise to the density function of H. The lemma also holds with H_m replaced by H'_m.

Proof. We prove the lemma only for H_m; a similar argument works for H'_m. We show that for all fixed x ≥ t, P{H_m > x} converges: setting y_m = x/r_m for the appropriate scaling r_m, the computation gives the pointwise convergence of the density functions.

We define the auxiliary random variables (S_{n,j})_{j≥1} as in (4.11). The second step is to show the joint convergence in distribution

(S_{n,1}, ..., S_{n,p}) → (S_1, ..., S_p),   (4.12)

where (S_p)_{p≥1} are defined by (1.8) in Theorem 3; moreover, the joint density function of (S_{n,1}, ..., S_{n,p}) converges pointwise to the joint density function of (S_1, ..., S_p). The lemma also holds with S_{n,j} replaced by S*_{n,j}.

Proof. We prove the lemma only for S_{n,j}; the same argument works for S*_{n,j}. Let F = σ((U_j)_{j≥1}) denote the sigma-algebra generated by (U_j)_{j≥1}. To prove Lemma 5, we condition on F and treat (U_p, P_{n,p}, L*_{n,p}, L_{n,p})_{p≥0,n≥1} as deterministic numbers. If we can show the convergence in distribution in (4.7) conditionally on F, i.e., that for all (x_1, ..., x_p) ∈ R^p,

P{S_{n,1} > x_1, ..., S_{n,p} > x_p | F} - P{S_1 > x_1, ..., S_p > x_p | F} → 0,

then the unconditional convergence follows for all fixed (x_1, ..., x_p).

Recall that R_{n,1} is the minimum of n i.i.d. Gamma(k) random variables and P_{n,0} = n + 1; see (4.1). Let f_{n,1}(·) and f_1(·) denote the density functions of S_{n,1} and S_1, respectively. It follows from Lemma 4, with (E_j)_{j≥1} i.i.d. Exp(1) random variables, that for all y_1 ∈ R,

f_{n,1}(y_1) → f_1(y_1).   (4.13)

For p > 1, we condition on S_{p-1} = y_{p-1} ∈ [0, ∞).
We will apply Lemma 4 to P_{n,p}. Recall that R_{n,p} is the minimum of P_{n,p-1} - 1 i.i.d. Gamma(k) random variables restricted to (R_{n,p-1}, ∞); see (4.1). Using the definitions of L*_{n,p} and S_{n,p} in (4.11), and noting (4.9), let f_{n,p}(·|y_{p-1}) and f_p(·|y_{p-1}) denote the conditional density functions of S_{n,p} given S_{n,p-1} = y_{p-1} and of S_p given S_{p-1} = y_{p-1}, respectively. It follows from Lemma 4 that for all y_p ∈ [0, ∞),

f_{n,p}(y_p | y_{p-1}) → f_p(y_p | y_{p-1}).   (4.14)

Then, by (4.13) and (4.14), for all (y_1, ..., y_p) ∈ [0, ∞)^p,

g_{n,p}(y_1, ..., y_p) := f_{n,p}(y_p | y_{p-1}) f_{n,p-1}(y_{p-1} | y_{p-2}) ··· f_{n,1}(y_1)

converges pointwise; in other words, the joint density function of (S_{n,1}, ..., S_{n,p}) converges pointwise to the joint density function of (S_1, ..., S_p). Thus, by Scheffé's lemma [17, pp. 227], we have the convergence in distribution in (4.12).

It is now easy to finish the proof of Proposition 1 using the following lemma.

Proof. For all fixed ε > 0, the concentration bound follows, where the second step uses Chernoff's bound [31, pp. 43].

We apply Lemma 6 to B*_{n,j} by taking m = P_{n,j-1} - P_{n,j}, ℓ_m = L*_{n,j}, p_m = y_j / L*_{n,j}, and c = y_j; note that (4.9) applies to L*_{n,p}. It follows from Lemma 6 that, conditioning on the event A_{y_1,...,y_p} defined in (4.15), the block counts concentrate. Combining the two preceding displays, and letting g*_{n,p}(y_1, ..., y_p) and g*_p(y_1, ..., y_p) be the joint density functions of (S*_{n,1}, ..., S*_{n,p}) and (S_1, ..., S_p), respectively, we obtain that, jointly and conditionally on F = σ((U_j)_{j≥1}),

(B*_{n,1}, ..., B*_{n,p}) / n^{1-1/k} → (B_1, ..., B_p) in distribution,

and the convergence also holds without conditioning on F, by the same argument as for Lemma 5. This completes the proof of Proposition 1.

The leftovers

In this subsection we show that, for p large enough, ∑_{s>p} B_s, ∑_{s>p} B*_{n,s} / n^{1-1/k}, and ∑_{s>p} B_{n,s} / n^{1-1/k} are all negligible.

Lemma 7. For all ε > 0 and δ > 0, there exists p ∈ N such that P{∑_{s>p} B_s ≥ ε} < δ.

To deal with ∑_{s>p} B*_{n,s} and ∑_{s>p} B_{n,s}, the next lemma allows us to choose an appropriate p.

Lemma 8. Uniformly for all p ∈ N and n ∈ N,

P{ P_{n,p}/n ∈ [e^{-3p/2}, e^{-p/2}] and P_{n,p} R_{n,p}^k < k! (3p/2) } = 1 - O(p^{-2}),

where the proof uses (4.18) in Lemma 7. On the other hand, by Lemma 5 and the definition of S_p (see (1.8)), S_p^k is stochastically dominated by k! W_p, where W_p is distributed as Gamma(p); it then follows from (4.18) that P{P_{n,p} R_{n,p}^k ≥ k! (3p/2)} is small.

Lemma 9. For all ε > 0 and δ > 0, there exist p ∈ N and n_0 ∈ N such that for all n > n_0, P{∑_{s>p} B*_{n,s} / n^{1-1/k} ≥ ε} < δ.

Proof. Let A_p denote the event in (4.19), for a p chosen later. We condition on A_p(m, y) for (m, y) satisfying

(m, y) ∈ S* := {(m, y) ∈ R^2 : n e^{-3p/2} ≤ m ≤ n e^{-p/2}, m y^k ≤ k! (3p/2)}.   (4.23)

(If (m, y) ∉ S*, the event A_p(m, y) is empty.) Note that this conditioning changes the distribution of G_{r,j}, for j < m, from Gamma(k) to Gamma(k) restricted to (y, ∞). Thus, by the definition of I*_j in (4.4) and by Lemma 15, for n large enough and all (m, y) ∈ S*, the relevant tail probability is bounded, where the second inequality uses [11, 8.10.11]. Together with (2.1) in Lemma 1 and Theorem 1, for n large enough and all (m, y) ∈ S*, the conditional expectation of the leftover sum can be made arbitrarily small by taking p large enough. This implies that there exist p ∈ N and n_0 such that the claim holds for all n > n_0, and we are done, since by Lemma 8, P{A_p^c} = O(p^{-2}).

Lemma 10. For all ε > 0 and δ > 0, there exist p ∈ N and n_0 ∈ N such that for all n > n_0, P{∑_{s>p} B_{n,s} / n^{1-1/k} ≥ ε} < δ.

Proof. We again condition on A_p(m, y), as defined in (4.22) in Lemma 9, for (m, y) satisfying (4.23). Let (E'_s)_{s≥1} be i.i.d. Exp(1) random variables.
By our conditioning, for j < m, the distribution of T_{1,j+1} changes from that of E'_1 to that of E'_1 conditioned on E'_1 + ··· + E'_k > y. Let f(x) be the density function of T_{1,j+1} conditioned on A_p(m, y). Then, by Lemma 15, f(x) can be bounded separately for x ≥ y and for x < y. By (4.23), m y^k < k! (3p/2) and m ≥ e^{-3p/2} n, so for n large enough, y < 1/2; thus, by [11, 8.10.11], f(x) ≤ 2 e^{-x}. The claim then follows by (4.24) in Lemma 9, and from this point the proof follows the same argument as Lemma 9.

Finishing the proof of Theorem 3

By Proposition 1 and Lemma 9, for all x > 0 and δ > 0 there exist ε > 0, p ∈ N, and n_0 ∈ N such that, for all n ≥ n_0, the contribution of the blocks beyond the first p to X*_n / n^{1-1/k} is under control. On the other hand, we can choose ε small enough, and by Lemma 7 we can choose p such that P{∑_{j>p} B_j ≥ ε} < δ/3. Thus, by Proposition 1,

X*_n / n^{1-1/k} → B_k in distribution.

Now we fill the gap between X*_n and X_n^1, as promised.

Lemma 11. There exists a coupling under which X*_n and X_n^1 differ by a negligible amount. (In the following proof we construct (P_{n,j}, R_{n,j})_{j≥0} as in (4.1); in other words, we do not use the coupling constructed in the previous subsection.)

Proof. Recall that (T*_{i,j})_{i≥1,j≥1} are the i.i.d. Exp(1) random variables used, together with (P_{n,j}, R_{n,j})_{j≥0}, to define X*_n. We modify (T_{i,j})_{i≥1,j≥1} by letting T_{i,j} = T*_{i,j} for all i ∈ N and all j outside {P_{n,j}}_{j≥0}, unless there is a discrepancy, i.e., unless for some p ≥ 1 with P_{n,p-1} < j < P_{n,p} the record conditions disagree. This may change the values of (B_{n,j})_{j≥1} but not their distribution. Let J_{n,p} denote the number of discrepancies between P_{n,p-1} and P_{n,p}. Then, with this coupling, for all fixed p ∈ N, the difference between X_n^1 and X*_n is bounded by the number of k-records, the leftovers, and the discrepancies, as in (4.26). By Theorem 1, X_n^k / n^{1-1/k} → 0 in probability, and it follows from Lemma 9 and Lemma 10 that, choosing p large enough, the last two terms of (4.26), divided by n^{1-1/k}, are negligible. Thus it suffices to consider ∑_{1≤j≤p} J_{n,j}. It follows from Lemma 15 and the series expansion of the incomplete gamma function (see (6.1)) that the conditional expectation of J_{n,p} is bounded in terms of P_{n,p-1} R_{n,p}^k, almost surely for n large enough. In other words, for all fixed p ∈ N, sup_{n≥1} E J_{n,p} < ∞; hence ∑_{1≤i≤p} J_{n,i} / n^{1-1/k} → 0 in probability, and by (4.26) we are done.

Remark 4. It is not obvious how to compute E[B_k] directly from the representation B_k = ∑_{p≥1} B_p (see (1.6)). In fact, it is not difficult to show that ((X_n^1 / n^{1-1/k})^2)_{n≥1} is uniformly integrable, so the mean (and, by Theorem 2, the variance) of X_n / n^{1-1/k} converges to that of B_k. We leave the details to the reader.

Some extensions

In this section we briefly discuss some easy implications of our main results.

A lower bound and an upper bound for general graphs

Let G_n be the set of rooted graphs with n nodes. Recall that for G_n ∈ G_n, K(G_n) denotes the k-cut number of G_n. The following lemma shows that P_n, a path of length n rooted at one endpoint, is the easiest graph in G_n to break down.

Lemma 12. For all G_n ∈ G_n,

K(P_n) ≼ K(G_n),   (5.1)

i.e., the left-hand side is stochastically dominated by the right-hand side [18, pp. 68]. Therefore,

min_{G_n ∈ G_n} E K(G_n) ∼ η_{k,1} n^{1-1/k},

where η_{k,1} is as in Theorem 1.

Proof. Let T_n be an arbitrary spanning tree of G_n, with the root of G_n marked as the root of T_n. It is not difficult to see that K(T_n) ≼ K(G_n): adding edges to a graph cannot decrease the k-cut number. Consider the simple case where T_n consists of only two paths joined at the root. If we disconnect one of these paths from the root and attach it to the leaf of the other path, we can only decrease the number of records in the tree.
In other words, we have a coupling that implies K(P_n) ≼ K(T_n). For a more complicated T_n, we can repeat this transformation on subtrees consisting of a root with two paths attached, until the whole tree becomes a path. In other words, K(P_n) ≼ K(T_n) for all trees, which proves (5.1). The second claim of the lemma follows directly from Theorem 1 (see, e.g., [18, Theorem 2.15, pp. 71]).

The most resilient graph is obviously K_n, the complete graph on n vertices. Thus we have the following upper bound.

Proof. Let S_n be the star: the tree with one root and n - 1 leaves. Obviously K(K_n) has the same distribution as K(S_n), so we may prove the lemma for K(S_n) instead. Let Y be the time when the root's alarm clock rings for the k-th time, and let W_{1,n}, ..., W_{n-1,n} be the numbers of cuts the leaves receive. Conditioned on the event Y = y, the W_{i,n} are i.i.d. with the distribution min(Poi(y), k), since each node can receive at most k cuts. In other words, conditioning on Y = y, K(S_n)/n converges by the law of large numbers, and (5.2) follows. Since K(S_n)/n ≤ k, i.e., it is bounded, the convergence of moments also holds; we omit the computation of the last step.

Path-like graphs

If a graph G_n consists only of long paths, the limit distribution of K(G_n) should be related to B_k, the limit distribution of K(P_n)/n^{1-1/k} (see Theorem 3). We give a few simple examples, whose details we leave to the reader.

Example 1 (Long path). Let (G_n)_{n≥1} be a sequence of rooted graphs such that G_n contains a path of length m(n) starting from the root, with n - m(n) = o(n^{1-1/k}). Since it takes at most k(n - m(n)) cuts to remove all the nodes outside the long path,

K(P_{m(n)}) ≼ K(G_n) ≼ K(P_{m(n)}) + k · o(n^{1-1/k}).

Together with Lemma 12, this implies that K(G_n)/n^{1-1/k} converges in distribution to B_k.

Example 2 (Caterpillar). Let T^{[ℓ]}_n be a tree with n nodes consisting of a path connected to the root such that ℓ - 1 leaves are attached to each node on the path, except the last node, which may have between 1 and ℓ leaves attached; we call T^{[ℓ]}_n an ℓ-caterpillar and call this path, of length n/ℓ + O(1), the spine. The number of one-records in T^{[ℓ]}_n is about ℓ times the number of one-records in P_{⌈n/ℓ⌉}; therefore, it is not difficult to show that K(T^{[ℓ]}_n), suitably rescaled, converges accordingly.

Example 3 (Curtain). Let ℓ ≥ 2 be a fixed integer, and let T^{(ℓ)}_n be a graph consisting only of ℓ paths connected to the root, the first ℓ - 1 of them of length (n-1)/ℓ; we call T^{(ℓ)}_n an ℓ-curtain. Cutting T^{(ℓ)}_n is very similar to cutting ℓ separate paths of length n/ℓ; therefore, the limit can be expressed through ℓ i.i.d. copies of B_k.

Deterministic and random trees

The approximation given in Lemma 1 can be used to compute the expectation of k-cut numbers in many deterministic or random trees. We give three examples: complete binary trees, split trees, and Galton-Watson trees.

Complete binary trees

Let T^{bi}_n be a complete binary tree with n = 2^{m+1} - 1 nodes, i.e., of height m. Observe that for a node at depth i (at distance i from the root) to be an r-record, it must be cut r times before any of the i nodes above it has been cut k times; this has exactly the same probability as the event that the (i+1)-th node of a path is an r-record. Hence the random variable I_{r,i+1} in Lemma 1 is also the indicator that a node at depth i is an r-record.
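Since depth i of the complete binary tree contains 2^i nodes, E K_r(T^bi_n) can be evaluated numerically by summing 2^i E I_{r,i+1} over the depths. The sketch below does this with a plain trapezoidal rule; it is an assumed illustration, not the paper's computation.

```python
import math

def gamma_tail(k, x):
    """P(Gamma(k,1) > x) = e^(-x) * sum_{l<k} x^l / l!  (integer k)."""
    return math.exp(-x) * sum(x**l / math.factorial(l) for l in range(k))

def e_record_indicator(r, k, i, grid=20_000, xmax=30.0):
    """E I_{r,i+1} = int_0^inf x^(r-1) e^(-x)/Gamma(r) * P(Gamma(k)>x)^i dx,
    evaluated by a simple trapezoidal rule (numerical sketch)."""
    h = xmax / grid
    total = 0.0
    for s in range(1, grid):
        x = s * h
        total += x**(r - 1) * math.exp(-x) / math.gamma(r) * gamma_tail(k, x)**i
    return total * h

def expected_records_binary(m, r, k):
    """E K_r(T^bi_n) for a complete binary tree of height m: 2^i nodes at depth i."""
    return sum(2**i * e_record_indicator(r, k, i) for i in range(m + 1))

print(expected_records_binary(m=10, r=1, k=2))  # one-records dominate for k >= 2
```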
Deterministic and random trees

The approximation given in Lemma 1 can be used to compute the expectation of k-cut numbers in many deterministic or random trees. We give three examples: complete binary trees, split trees, and Galton-Watson trees.

Complete binary trees

Let $T^{bi}_n$ be a complete binary tree with $n = 2^{m+1} - 1$ nodes, i.e., of height $m$. Observe that for a node at depth $i$, i.e., at distance $i$ from the root, to be an r-record, we require that it has been cut $r$ times before any of the $i$ nodes above it has been cut $k$ times. This has exactly the same probability as the event that the $(i+1)$-th node in a path is an r-record. Hence the random variable $I_{r,i+1}$ in Lemma 1 is also the indicator that a node at depth $i$ is an r-record. Then by Lemma 1, for $r \le k$, the expected number of r-records follows by summing $\mathbb{E}[I_{r,i+1}]$ over the $2^i$ nodes at each depth $i$. Thus only the one-records matter, as in the case of $P_n$.

Split trees

Split trees were first defined by Devroye [9] to encompass many families of trees that are frequently used in the analysis of algorithms, e.g., binary search trees. The random split tree $T^{sp}_n$ has parameters $b, s, s_0, s_1, \mathcal{V}$ and $n$, which are required to satisfy the inequalities (5.3). To define the random split tree, consider an infinite $b$-ary tree $\mathcal{U}$. The split tree $T^{sp}_n$ is constructed by distributing $n$ balls among the nodes of $\mathcal{U}$. For a node $u$, let $n_u$ be the number of balls stored in the subtree rooted at $u$. Once the $n_u$ are all decided, we take $T^{sp}_n$ to be the largest subtree of $\mathcal{U}$ such that $n_u > 0$ for all $u \in T^{sp}_n$. Let $\mathcal{V}_u = (V_{u,1}, \ldots, V_{u,b})$ be the independent copy of $\mathcal{V}$ assigned to $u$, and let $u_1, \ldots, u_b$ be the child nodes of $u$. Conditioned on $n_u$ and $\mathcal{V}_u$: if $n_u \le s$, then $n_{u_i} = 0$ for all $i$; if $n_u > s$, then
$$(n_{u_1}, \ldots, n_{u_b}) \sim \mathrm{Mult}\big(n_u - s_0 - b s_1;\, V_{u,1}, \ldots, V_{u,b}\big) + (s_1, \ldots, s_1),$$
where Mult denotes the multinomial distribution and $b, s, s_0, s_1$ are integers satisfying (5.3).

In the setup of split trees (and other random trees), we obtain $K(T^{sp}_n)$ by picking a random tree $T^{sp}_n$ and then a random k-cut of it. We let $K_r(T^{sp}_n)$ be the total number of r-records, just as we did for fixed trees. In the study of split trees, the following condition, condition A, is often assumed; it constrains $N$, the number of nodes in $T^{sp}_n$. Holmgren [20, Thm. 1.1] also showed that for $k = 1$, condition A implies that $K(T^{sp}_n)$, after normalization, converges to a weakly 1-stable distribution.

Lemma 14. Let $T^{sp}_n$ be a split tree defined as above. Assuming condition A, we have the estimate (5.5) for $1 \le r \le k$.

Proof. We call a node good if its depth is close to the typical depth; otherwise we say it is bad. Let $B^{sp}_n$ be the number of bad nodes in $T^{sp}_n$. It is known that there are not many bad nodes; more specifically, [21] gives the bound (5.6). Let $X^{sp}_n$ be the number of r-records that are also good nodes. By (5.6), the contribution of the bad nodes is negligible, so it suffices to prove the lemma for $X^{sp}_n$. By Lemma 1 and the definition of good nodes, we obtain the corresponding bound; taking expectations, we get (5.5).

Galton-Watson trees

A Galton-Watson tree $T^{gw}$ is a random tree that starts with the root node and recursively attaches a random number of children to each node in the tree, where the numbers of children are drawn independently from the same offspring distribution $\xi$. A conditional Galton-Watson tree $T^{gw}_n$ is $T^{gw}$ conditioned to have size $n$. Conditional Galton-Watson trees have been well studied; see, e.g., [25]. We assume throughout that $\mathbb{E}\xi = 1$ and $\sigma^2 \overset{def}{=} \mathrm{Var}(\xi) \in (0, \infty)$.

Let $Z_i(T^{gw}_n)$ be the number of nodes at depth $i$ (at distance $i$ from the root), and let $H(T^{gw}_n)$ be the height of $T^{gw}_n$. Then, by Lemma 1, conditioning on $T^{gw}_n$, the expected number of records is a weighted sum of the $Z_i(T^{gw}_n)$. It has been shown [24, Theorem 1.13] that the required bound on $\mathbb{E}\,Z_i(T^{gw}_n)$ holds uniformly for all $i \ge 1$ and $n \ge 1$. It is also well known that $H(T^{gw}_n)$ is of order $\sqrt{n}$; more precisely, there exist constants $C'$ and $c'$ for which the standard tail bound holds uniformly for all $n \ge 1$. In fact, we conjecture that $n^{1 - \frac{1}{2k}}$ is actually the right order of $\mathbb{E}\,K(T^{gw}_n)$.

Let $v_0, \ldots, v_{n-1}$ be the nodes of $T^{gw}_n$ in depth-first order. (In other words, $v_0$ is the root of the tree. Assuming that $v_0$ has $d$ subtrees $T_1, \ldots, T_d$ attached to it, $v_1, \ldots, v_{n-1}$ are the nodes of $T_1$ in depth-first order, followed by the nodes of $T_2$ in depth-first order, and so on, until $T_d$; this continues recursively.) Let $D_n(i)$ be the depth of $v_i$. As is well known, when $\mathbb{E}\xi = 1$ and $\sigma^2 = \mathrm{Var}(\xi) \in (0, \infty)$,
$$\left( \frac{D_n(\lfloor nt \rfloor)}{\sqrt{n}} \right)_{t \in [0,1]} \xrightarrow{d} \left( \frac{2}{\sigma}\, e(t) \right)_{t \in [0,1]},$$
where $e(t)$ is a Brownian excursion. See [28] for details.
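The depth process above is easy to simulate. Below is a sketch for the special case of Poisson(1) offspring (so $\sigma^2 = 1$), using the classical cycle-lemma construction of a conditioned Galton-Watson tree; the function `cond_gw_depths` and this particular sampling route are illustrative choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

def cond_gw_depths(n):
    """Depths, in depth-first order, of a Galton-Watson tree with
    Poisson(1) offspring conditioned to have exactly n nodes."""
    # Poisson(1) degrees conditioned on summing to n-1: balls in boxes
    deg = rng.multinomial(n - 1, np.ones(n) / n)
    # cycle lemma: rotate so the Lukasiewicz walk stays nonnegative until the end
    walk = np.cumsum(deg - 1)
    deg = np.roll(deg, -(int(np.argmin(walk)) + 1))
    # recover depths with an explicit stack of remaining-children counts
    depths, stack = np.zeros(n, dtype=int), [int(deg[0])]
    for i in range(1, n):
        while stack[-1] == 0:          # leave exhausted subtrees
            stack.pop()
        depths[i] = len(stack)
        stack[-1] -= 1
        stack.append(int(deg[i]))
    return depths

n = 40000
d = cond_gw_depths(n)
print(d.max() / np.sqrt(n))   # the height is of order sqrt(n)
```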
Therefore, the expected number of k-records in $T^{gw}_n$, conditioned on $T^{gw}_n$, satisfies a bound expressed through the depth process, and it is natural to expect the corresponding first-order asymptotics; this has indeed been proved by other methods [24, Theorem A.1]. As a result, we conjecture that the analogous asymptotics hold for the number of r-records for every $r \in \{1, \ldots, k\}$, and consequently we further conjecture that $K(T^{gw}_n)$, normalized by $n^{1 - \frac{1}{2k}}$, converges as well.

some auxiliary results

In this section, we collect some lemmas that are used in the previous sections.

Lemma 18. For $a > 0$, $b > 0$ and $k \ge 2$, the quantity $\xi_k(a, b)$ is finite; moreover, $\xi_k(a, b)$ is monotonically decreasing in both $a$ and $b$.
Estimates for Unimodular Multipliers on Modulation Hardy Spaces

Here $\Delta_x$ is the Laplacian and $e^{it|\Delta_x|^{\alpha/2}}$ is the multiplier operator with symbol $e^{it|\xi|^\alpha}$ (see [1] for its definition). The cases $\alpha = 1, 2, 3$ are of particular interest because they correspond to the (half-)wave equation, the Schrödinger equation, and (essentially) the Airy equation, respectively. Unimodular Fourier multipliers generally do not preserve any Lebesgue space $L^p$, except for $p = 2$. The $L^p$-spaces are not the appropriate function spaces for the study of these operators, and the so-called modulation spaces are good alternative classes for the study of unimodular Fourier multipliers. The modulation spaces $M_{p,q}(\mathbb{R}^n)$ were first introduced by Feichtinger [2-4] to measure the smoothness of a function or distribution in a way different from $L^p$ spaces, and they are now recognized as a useful tool for studying pseudodifferential operators [5-7]. We will recall the precise definition of modulation spaces in Section 2 below. Recently, the boundedness of unimodular Fourier multipliers $e^{it|\Delta|^{\alpha/2}}$ on the modulation spaces has been investigated in [1, 8-15]. In particular, one has the following results.

Introduction

A Fourier multiplier is a linear operator $T_\mu$ whose action on a test function $f$ on $\mathbb{R}^n$ is formally defined by
$$T_\mu f = \mathcal{F}^{-1}\big( \mu \, \mathcal{F} f \big).$$
The function $\mu$ is called the symbol or multiplier of $T_\mu$. In this paper, we will study the unimodular Fourier multipliers with symbol $e^{it|\xi|^\alpha}$ for $\alpha \in \mathbb{R}_+$. They arise when one solves the Cauchy problem for dispersive equations. For example, for the solution $u(t, x)$ of the Cauchy problem
$$i\,\partial_t u + |\Delta|^{\alpha/2} u = 0, \quad (t, x) \in \mathbb{R}_+ \times \mathbb{R}^n, \qquad u(0, x) = u_0(x), \tag{2}$$
we have the formula $u(t, x) = \big( e^{it|\Delta|^{\alpha/2}} u_0 \big)(x)$. Here $\Delta = \Delta_x$ is the Laplacian and $|\Delta|^{\alpha/2}$ is the multiplier operator with symbol $|\xi|^\alpha$ (see [1] for its definition). The cases $\alpha = 1, 2, 3$ are of particular interest because they correspond to the (half-)wave equation, the Schrödinger equation, and (essentially) the Airy equation, respectively.

Unimodular Fourier multipliers generally do not preserve any Lebesgue space $L^p$, except for $p = 2$. The $L^p$-spaces are not the appropriate function spaces for the study of these operators, and the so-called modulation spaces are good alternative classes for the study of unimodular Fourier multipliers. The modulation spaces $M_{p,q}(\mathbb{R}^n)$ were first introduced by Feichtinger [2-4] to measure the smoothness of a function or distribution in a way different from $L^p$ spaces, and they are now recognized as a useful tool for studying pseudodifferential operators [5-7]. We will recall the precise definition of modulation spaces in Section 2 below.

Theorem A (see [11]). Let $s \in \mathbb{R}$, $1 \le p, q \le \infty$, $\alpha > 1/2$ and $\alpha \ne 1$. One has, for $t \ge 1$, the corresponding $M^s_{p,q}$ boundedness estimate with polynomial growth in $t$. Here (and throughout this paper), we use the notation $A \preceq B$ to mean that there is a positive constant $C$, independent of all essential variables, such that $A \le C B$.

Theorem B (see [15]). Let $0 < p < 1$, $0 < q \le \infty$, $s > n(1/p - 1)$ and $t \in \mathbb{R}$. Then $e^{it|\Delta|^{\alpha/2}}$ is bounded from $M^s_{p,q}$ to $M_{p,q}$ if and only if the indices satisfy the condition given in [15].

In this paper, we use a different method from [15] to prove our main theorem (Theorem 1), which, in particular, uses the modulation Hardy spaces $(H^p, \ell^q_s)$ that will be defined later in Section 2. In particular, the inequality of Theorem 1 holds for all $t > 0$ if $\alpha$ is a positive even number. We want to make a few remarks on Theorem 1. First, (iii) in Theorem 1 says that when $n = 1$, compared to the case $n \ge 2$ in (i), one obtains a larger range of $p$ and a smaller range of $\alpha$. We do not know if there is a unified formula regarding $p$ and $\alpha$ for all dimensions $n \ge 1$.
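Numerically, a unimodular multiplier is just a pointwise phase factor on the frequency side. The following discrete sketch (a periodic grid and the NumPy FFT; the grid parameters are arbitrary illustration values) shows why only $L^2$ is preserved: the $L^2$ norm is unchanged while, for instance, the sup-norm is not.

```python
import numpy as np

def unimodular_multiplier(u, t, alpha, L=200.0):
    """Apply the Fourier multiplier with symbol exp(i*t*|xi|^alpha) to samples
    of u on a periodic grid of physical length L (a discrete surrogate)."""
    n = u.size
    xi = 2 * np.pi * np.fft.fftfreq(n, d=L / n)      # angular frequency grid
    return np.fft.ifft(np.exp(1j * t * np.abs(xi) ** alpha) * np.fft.fft(u))

# Gaussian initial datum; alpha = 2 is the (periodized) Schrodinger evolution
x = np.linspace(-100, 100, 4096, endpoint=False)
u0 = np.exp(-x ** 2)
u1 = unimodular_multiplier(u0, t=1.0, alpha=2.0)
dx = x[1] - x[0]
print(np.sum(np.abs(u0) ** 2) * dx, np.sum(np.abs(u1) ** 2) * dx)  # L2 preserved
print(np.max(np.abs(u0)), np.max(np.abs(u1)))    # the sup-norm is not preserved
```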
Second, in the proof we will see that, in the low-frequency part of the definition of $(H^p, \ell^q_s)$, the fractional Schrödinger semigroup has a growth of order $t^{n|1/p - 1/2|}$ as $t$ grows, but it gains an arbitrary regularity. In the high-frequency part, the semigroup can be controlled at each piece of its frequency decomposition. This phenomenon was also observed, more precisely, in [1, 15] (see also [11]). Thirdly, the case $\alpha = 1$ was studied in [8, 16]. Since the $L^p$ norm is dominated by the $H^p$ norm and the Riesz transforms are bounded on $H^p$, by the Riesz transform characterization of $H^p$ (see Section 2), we easily obtain the following corollary.

In the next theorem, we state some mixed-norm estimates. We then consider a linear Cauchy problem with negative power and give the growth rate of the solution to that Cauchy problem in the modulation spaces.

Preliminaries

2.1. The Definitions. The modulation space was originally defined by Feichtinger in 1983 on locally compact Abelian groups. When the group is $\mathbb{R}^n$, the modulation space can be equivalently defined by using the unit-cube decomposition of the frequency space (see Appendix A in [13]; also [14, 17]). The following definition is based on the unit-cube decomposition introduced in [13]. Let $\rho$ be a fixed nonnegative function in $\mathcal{S}(\mathbb{R}^n)$ with support in the cube $[-4/5, 4/5]^n$ that satisfies $\rho(\xi) = 1$ for any $\xi$ in the cube $[-2/5, 2/5]^n$. By a standard constructive method, we may assume that
$$\sum_{k \in \mathbb{Z}^n} \rho_k(\xi) = 1 \quad \text{for all } \xi \in \mathbb{R}^n,$$
where $\rho_k$ is the $k$-shift of $\rho$, defined by $\rho_k(\xi) = \rho(\xi - k)$. For each $k \in \mathbb{Z}^n$, we use $\rho_k(\xi)$ as the symbol of a smooth projection $\Box_k$ on the frequency space; precisely, for any $f \in \mathcal{S}'(\mathbb{R}^n)$, we have $\widehat{\Box_k f} = \rho_k \hat{f}$. Let $X$ be a Banach space of measurable functions on $\mathbb{R}^n$ with quasi-norm $\|\cdot\|_X$. We define the modulation space
$$(X, \ell^q_s) = \Big\{ f \in \mathcal{S}'(\mathbb{R}^n) : \|f\|_{(X, \ell^q_s)} = \big\| \big( \langle k \rangle^s \, \|\Box_k f\|_X \big)_{k \in \mathbb{Z}^n} \big\|_{\ell^q} < \infty \Big\},$$
where $\langle k \rangle = (1 + |k|^2)^{1/2}$. By definition, we have the natural inclusions between these spaces, and it is known that the definition of the modulation space $(X, \ell^q_s)$ is independent of the choice of $\rho$. In this paper, we are particularly interested in the cases $X = L^p$ and $X = H^p$, where $L^p$ is the Lebesgue space and $H^p$ is the real Hardy space. For all $0 < p, q \le \infty$, we call $(L^p, \ell^q_s)$ the modulation space and $(H^p, \ell^q_s)$ the modulation Hardy space.
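The unit-cube decomposition can be imitated on a discrete grid. The sketch below is a crude surrogate of the $(L^p, \ell^q_s)$ norm: it uses a sharp frequency cutoff in place of the smooth bump $\rho$, so it only illustrates the structure of the definition, not a faithful computation of the norm.

```python
import numpy as np

def modulation_norm(u, p, q, s, L=200.0):
    """Discrete surrogate of the (L^p, l^q_s) norm: unit-cube frequency
    projections via FFT, an L^p norm of each piece, then a weighted l^q sum."""
    n = u.size
    xi = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
    dx = L / n
    U = np.fft.fft(u)
    ks = np.arange(int(np.floor(xi.min())), int(np.ceil(xi.max())) + 1)
    terms = []
    for k in ks:
        # sharp cutoff standing in for the smooth bump rho(. - k)
        piece = np.fft.ifft(np.where(np.abs(xi - k) <= 0.5, U, 0))
        lp = (np.sum(np.abs(piece) ** p) * dx) ** (1 / p)
        terms.append((1 + k * k) ** (s / 2) * lp)
    return np.sum(np.array(terms) ** q) ** (1 / q)

x = np.linspace(-100, 100, 4096, endpoint=False)
u = np.exp(-x ** 2) * np.cos(5 * x)
print(modulation_norm(u, p=2, q=1, s=0))
```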
The operator $e^{it|\Delta|^{\alpha/2}}$ is a convolution operator, and it is well known that the Riesz transform is bounded on the $H^p$ spaces for any $0 < p < \infty$.

Lemma 8. Let $0 < p < \infty$ and $t \ge 1$. Suppose that there is an integer $N > 0$ such that the symbol estimates hold for all test functions, in one form for $|k| < N$ and in another for $|k| \ge N$, with exponents $\gamma_1 \ge \gamma_2 \ge 0$ and a real parameter $s$. Then for $f \in H^p \cap L^2$ the corresponding modulation space estimate holds, where $\epsilon$ is an arbitrary positive number.

Proof. The case $p \ge 1$ is proved in [11], so it suffices to show the lemma for $0 < p \le 1$. By the Riesz transform characterization of $H^p$, we obtain the claimed bound for $|k| \ge N$. By checking the Fourier transform, we obtain an identity which extends the bound to $|k| \ge N - 1$, and a similar argument gives the bound for $|k| < N - 1$, for any $f \in H^p \cap L^2$. The rest of the lemma follows easily from the definition of the modulation spaces.

Lemma 9 (see [18, 19]). Let $\Omega \subset \mathbb{R}^n$ denote an open set and $\phi \in C^\infty_0(\Omega)$. If the phase $\Phi \in C^\infty(\Omega)$ and the rank of its Hessian matrix is at least $\kappa > 0$ for all $\xi \in \operatorname{supp}(\phi)$, then the stationary phase estimate (52) holds.

Proof. The case $\alpha = 2$ is known [20]; it then suffices to prove the claim for $0 < \alpha < 2$ and large $t$. Let $\Gamma$ be a standard radial bump function supported in a fixed annulus and satisfying the usual dyadic partition identity for all $\xi \ne 0$. Noting the support condition of $\tilde{\rho}$, we split the integral over three sets $A_j$, $j = 1, 2, 3$. For $\xi \in A_1$ we use polar coordinates, where $d\sigma$ is the induced Lebesgue measure on the unit sphere $S^{n-1}$. When $n$ is even, integrating by parts $n/2$ times in the inner integral gives the required decay; when $n$ is odd, we integrate by parts $(n+1)/2$ times in the inner integral and again obtain the required decay for odd $n$. For $\xi \in A_2$, without loss of generality we assume $|\xi_1| \ge |\xi|/n$; performing integration by parts in the $\xi_1$ variable a suitable number of times, we similarly obtain the bound. For $\xi \in A_3$, invoking Lemma 9 and noting that $A_3$ contains no more than $10/|1 - \alpha| + \log 2$ of the dyadic pieces, the desired bound is easy to check. The lemma is proved. This lemma can also be found in Section 4.2 of [11].

Proof of Theorem 1

The operator under study is a convolution operator with symbol $\rho(\xi - k)\, e^{it|\xi|^\alpha}$. This symbol is a $C^\infty$ function on $\mathbb{R}^n \setminus \{0\}$ with compact support. Clearly, for any $t > 0$ and $\xi \ne 0$, the required derivative bounds hold for $|k| \le 10$, so Lemma 11 implies the following estimate.

Proof. The proof uses the same idea as in the case $p \ge 1$, which was presented in [11]; for the convenience of the reader, we present it here. Let $\Omega$ be the kernel of the localized operator. By Lemma 14 and (46), to prove the proposition it suffices to show the corresponding kernel estimate. For simplicity, we prove the case $n = 2$; the proof for $n \ge 3$ is tedious but shares the same idea. First we study the case $t|k|^{\alpha - 2} > 1$. For $j = 1, 2$ and $k = (k_1, k_2)$, we introduce auxiliary cutoffs, with one definition when $|k_j| \ge 2$ and another when $|k_j| < 2$; also, for $j = 1, 2$ and $m \in \mathbb{N}$, we define sets whose basic properties are easy to check. We then write the kernel as a sum of four pieces $\Omega_1, \ldots, \Omega_4$. It is easy to check that if $t|k|^{\alpha - 2} \ge 1$ and $\xi \in \operatorname{supp}(\rho_k)$, the phase function is nondegenerate, so Lemma 9 applies. Observe also the easy fact that, on the first region, for $\xi \in \operatorname{supp}(\rho_k)$ and any integer $M$, the relevant derivative bounds hold. Performing integration by parts in both the $\xi_1$ and $\xi_2$ variables $M$ times each, with $M$ large enough, an easy computation bounds $\Omega_1$. The estimates for $\Omega_2$ and $\Omega_3$ are exactly the same, so we only estimate $\Omega_2$: integrating by parts in the $\xi_1$ variable $M$ times, with $M$ suitably large, a simple computation gives the bound. These estimates on $\Omega_j$, $j = 1, 2, 3, 4$, give the claim provided $t|k|^{\alpha - 2} \ge 1$.

We now turn to the case $t|k|^{\alpha - 2} < 1$. For $j = 1, 2$ and $k = (k_1, k_2)$, let the auxiliary quantities be as defined above; for $j = 1, 2$ and $m \in \mathbb{N}$, we define analogous sets, whose basic properties are again easy to check. Using the same argument as before, we obtain the same bound. This completes the proof of Proposition 18.

We are now in a position to prove Theorem 1.

Proof. By an argument involving interpolation and duality, it suffices to consider the case $p \le 1$. Using Proposition 18, the inequality in (76) and the definition of the modulation spaces, we easily obtain (ii) in Theorem 1. To show (i) and (iii) in Theorem 1, by Proposition 18 and the definition of the modulation spaces, it suffices to prove the estimate (101). Again by Lemma 14, the proof of the inequality in (101) can be reduced to proving the corresponding bound for $t \ge 1$.

We show (iii) first; the proof for $n = 1$ illustrates the method. When $n = 1$, by Hölder's inequality and the Plancherel theorem we bound the first term, and for the second term, performing integration by parts, we obtain the required decay.

Now we return to (i) of Theorem 1; we prove only the case $p < 1$. Using Hölder's inequality and the Plancherel theorem, we obtain the first bound. For $i, j \in \{1, 2, \ldots, n\}$, we introduce suitable sets and split the expression accordingly; to show (102), it then suffices to bound each piece. Using the Leibniz rule, for any positive integer $M$, we expand the derivatives, and an easy induction argument shows that, for $m \ge 1$, each resulting factor is a homogeneous function of degree $\alpha - m$. We first estimate each of the pieces indexed by $0 \le m \le M - 1$. Recall that we assume $\alpha \in (0, 2)$. Let $r = 2/\alpha$, so that the conjugate exponent is $r' = 2/(2 - \alpha)$.
By the choice of $r$ and the assumption, it is easy to see that the required index inequality holds. Therefore, by Hölder's inequality, we obtain the bound on the first piece. For each $m = 1, 2, \ldots, M - 1$, by the choice of $r$, the assumption on $s$, and an easy computation, it is not difficult to see that we may choose a number in the interval $[0, n/2)$ satisfying the two inequalities needed below. By Hölder's inequality and Pitt's theorem, we then bound each of these pieces, and combining all the estimates gives the claim. It remains to estimate the last piece. It is easy to see that the choice of $r$ and the condition in the theorem imply the remaining index inequality, so by Hölder's inequality and Pitt's theorem again we obtain the bound. This completes the proof of (102). When $\alpha = 2, 4, \ldots$, for any integer $k$ the symbol factors through a $C^\infty$ function, and it is then trivial to see that the estimate holds for all $t > 0$. This proves (i) in Theorem 1.

Proof of Theorem 3

Recall that the function $\rho$ chosen in the definition of the modulation space is flexible. We may define a function $\Phi$ on $\mathbb{R}^n$ and take $\eta \in \mathcal{S}(\mathbb{R}^n)$ with $\eta = \mathcal{F}^{-1}(\Phi)$. Let $k \in \mathbb{Z}^n$. It is easy to see that the orthogonality relation holds, where $\delta(k) = 1$ if $k = 0$ and $\delta(k) = 0$ if $k \ne 0$; similarly for the companion identity. Suppose that we have some $t$ for which the converse estimate holds; by the choice of $\eta$, we obtain a lower bound. On the other hand, the relevant oscillatory integral has a phase function $\Phi(t, \xi)$ whose critical point is at
$$\xi^* = -\frac{x}{2t}. \tag{137}$$
Thus, by the stationary phase method (see [19, Proposition 3, page 334]), an easy computation gives the asymptotics as $t \to \infty$. The resulting inequality then implies the conclusion.

Operators $T_{\alpha,\beta}(t)$. Let $\alpha \ne 1$ and $\beta \in \mathbb{R}$. In [23], to investigate the absolute convergence of multiple Fourier series, Wainger studied the oscillating multipliers $T_{\alpha,\beta}(t)$, whose symbol combines the oscillation $e^{it|\xi|^\alpha}$ with a power weight. In [24], Miyachi proved that, in the case $\alpha > 0$ and $\alpha \ne 1$, for $0 < p < \infty$, boundedness holds if and only if the indices satisfy the stated relation. By Theorem 1 and its proof, we not only obtain the boundedness of $T_{\alpha,\beta}(t)$ on $(H^p, \ell^q_s)(\mathbb{R}^n)$ for any $s \in \mathbb{R}$, but also gain a regularity of order $\beta$ if $\beta > 0$.

Theorem 19. Let $0 < p < \infty$ and $\alpha \ne 1$. One has the analogous estimate for $t \ge 1$.

Proof. The proof of Theorem 19 is the same as the proof of Theorem 1; we skip it.

Let $1 \le p \le \infty$. For $\xi \in A_1$ we have the pointwise bound, which shows the estimate for $|k| \le 10$. Now, if $0 < p \le 1$, we obtain the bound for all $1 \le q \le \infty$, the last inequality following from Lemma 13. From these discussions, we obtain the following corollary.

Proof. By the definition of the modulation space, we begin from the defining norm. If the ratio in question is at least 1, we write the norm accordingly; by the Minkowski inequality, treating separately the ranges $|k| \le 10$ and $|k| \ge 9$, we obtain the claim in this case. If the ratio is at most 1, then, as in the previous case, the bounds for $|k| \le 10$ and for the complementary range combine to give the claim. The theorem is proved.

Schrödinger Equation. Consider the Cauchy problem for the linear free Schrödinger equation; its formal solution is $u(t, x) = (e^{it\Delta} u_0)(x)$. By Theorem 1, we obtain the growth rate, as $t \to \infty$, of the solution to the linear free Schrödinger equation.

Linear Cauchy Problem with Negative Power. We start with the following linear Cauchy problem with negative power; its formal solution is given by the corresponding multiplier operator.

Proposition 26. Let $t > 0$ and $1 \le p \le 2$. One has the stated estimate.

Proof. We only prove the case of odd $n$, since the proof for even $n$ is similar. For any fixed $t$ we write the operator as a composition involving the Riesz potential of the relevant order. We first show the kernel estimate: to this end, by Young's inequality, it suffices to show the corresponding $L^1$ bound, and it suffices to consider the case $t \ge 1$.
By the same argument as in the proof of Theorem 1, the Schwarz inequality gives the first bound. Performing integration by parts $(n + 1)/2$ times on the second term, we may, without loss of generality, write it in a normalized form, where $\tilde{\chi}(\xi)$ is a $C^\infty$ function supported in $[-1, 1]$ and the remaining factor satisfies the stated bounds. Choosing a small $\epsilon > 0$, Schwarz's inequality and Pitt's theorem yield the second bound. Combining these estimates, we control the difference involving $u_0$ and $|\Delta|^{-\beta/2} u_0$, where the norm is taken in the $L^1$ Sobolev space of order $-\beta$. On the other hand, we have the easy energy estimate, and an interpolation yields the claim for all $1 \le p \le 2$. Since the exponent is an arbitrary number larger than $\lfloor n/2 \rfloor + 1$, the proposition now follows from Lemma 13.

Proof. The almost orthogonality (identity (46)) and the energy estimate give the required bound; thus the lemma follows from Lemma 13.

Now we are ready to give the proof of Theorem 5.

Proof. The proof of (i) can be obtained from Propositions 18 and 26 and the definition of the modulation spaces. Similarly, the proof of (ii) follows from Proposition 18, Lemma 27 and the definition of the modulation spaces. Our proof follows the same method as [8], or, more precisely, the idea introduced in the earlier paper [13].

Now we give the proof of Theorem 6.

Proof. In the proof, the letters $C_j$, $j = 1, 2, 3$, denote positive constants that are independent of all essential variables. We write the Cauchy problem in the equivalent integral form; the last inequality holds by the choice of parameters. Fix an $\epsilon > 0$ such that
STARE velocities: importance of off-orthogonality and ion motions

Abstract. A 3.5-h morning event of joint EISCAT/STARE observations is considered and the differences between the observed STARE velocities and the electron drift components (EISCAT) are studied. We find that the STARE-Finland radar velocity was larger than the EISCAT convection component for a prolonged period of time. In addition, a moderate 5-20° offset between the EISCAT convection azimuth and the corresponding STARE estimate was observed. We show that both the STARE-Finland radar velocity "overspeed" and the offset in the azimuth can be explained by fluid plasma theory, if the ion drift contribution to the irregularity phase velocity is taken into account under the condition of moderately off-orthogonal backscatter. We call such an explanation the off-orthogonal fluid approach (OOFA). In general terms, we found that the azimuth of the maximum irregularity phase velocity V_ph is not collinear with the V_E×B electron flow direction, but differs from it by 5-15°. Such an azimuth offset is the key factor, not only for the explanation of the Finland velocity overspeed, but also for revising the velocity cosine rule traditionally accepted in the STARE method at large flow angles. We argue that such a rule is only a rough approximation. The application of the OOFA to the STARE l-o-s velocities gives reasonable agreement with the EISCAT convection data, implying that ion motions and the non-orthogonality of backscatter are important to consider for VHF auroral echoes. The data set discussed had STARE velocity magnitudes 1.5-2 times smaller than the electron V_E×B velocities, as was found earlier by Nielsen and Schlegel (1983).

Key words. Ionospheric irregularities; plasma waves and instabilities; auroral ionosphere

Introduction

Auroral coherent radars have proven to be useful instruments for the monitoring of plasma convection in the high-latitude ionosphere. Currently, the Super Dual Auroral Radar Network (SuperDARN) of HF radars is widely used for the mapping of convection on a global scale (Greenwald et al., 1995). These radars use information on the Doppler velocity of the F-region coherent echoes. The Scandinavian Twin Auroral Radar Experiment (STARE) VHF radars (Greenwald et al., 1978; Nielsen, 1989) represent another coherent system that is also in use for convection studies, e.g. Kosch and Nielsen (2001). The STARE measurements are limited to a portion of the auroral oval over northern Scandinavia. STARE radars rely on velocity measurements of the E-region echoes. The temporal and spatial resolutions of the STARE radars are superior to those of the SuperDARN radars. However, there is a fundamental difficulty within the STARE method, stemming from the fact that E-region plasma wave irregularities do not propagate at the E×B/B² velocity (below we call it the V_E×B velocity) along the flow; rather, it seems, their velocity is "limited" around the ion-acoustic speed of the medium, C_s. Nielsen and Schlegel (1983, 1985) attempted to surmount this problem through "calibration" of the observed VHF velocities using the true electron drifts measured independently by the EISCAT incoherent scatter facility. It was shown that the proposed semi-empirical method of convection estimation, termed the ion-acoustic approach (IAA), performs reasonably well most of the time.
However, a more thorough examination of Nielsen and Schlegel's (1985) data shows that for some individual measurements the IAA predictions are relatively poor. The reasons for such disagreements have not been analysed yet, though several more recent publications give some clues to the problem. For example, Haldoupis and Schlegel (1990), see their Figs. 8 and 9, reported a rather complicated relationship between the STARE line-of-sight (l-o-s) velocity along the electron flow and the ion-acoustic speed. Nielsen et al. (2002) found that the l-o-s velocity along the electrojet can be larger than the ion-acoustic speed and, moreover, that it changes with the flow angle (the flow angle is the angle between V_E×B and the radar wave vector), even within the cone of the unstable Farley-Buneman (F-B) waves. For observations at large flow angles, Kustov and Haldoupis (1992) and Koustov et al. (2002) reported that STARE velocities can be less than the plasma convection component, which also might cause errors in convection estimates. To further refine the STARE method, a more thorough investigation of the relationship between the E-region irregularity velocity and the plasma convection is required.

In this study we consider one joint STARE and EISCAT event for which the IAA reduction agrees reasonably with the EISCAT measurements, but examination of the Finland radar l-o-s velocity observed at large flow angles shows that it was quite often larger than the cosine component of the plasma drift measured by EISCAT. We call this phenomenon the Finland velocity "overspeed". The discovered overspeed effect is highly unexpected and inconsistent with the assumptions of the IAA method. We attempt to interpret the Finland velocity data from a different point of view; namely, we explore the possibility of echo reception from larger E-layer heights (larger than the height of exact orthogonality), where the backscatter is effectively non-orthogonal and where the ion motions may contribute significantly to the irregularity velocity. We argue that the observed Doppler velocity, V_ph(k), is only a component of the maximum possible irregularity phase velocity V_ph. The latter vector has an offset of 5-15° from the V_E×B velocity vector, depending on height. This V_ph-to-V_E×B azimuth offset is the key idea that allows us to explain the Finland velocity overspeed phenomenon. We also argue that the velocity cosine rule at large flow angles, traditionally assumed in the STARE measurements, is good only as a first-order approximation. The V_ph-to-V_E×B azimuth offset can lead to velocity overspeed in some cases and to velocity underspeed in others, depending on the relative orientation of the vectors and the radar beam. Moreover, we expect that, under certain observational conditions, the measured Doppler velocity and the V_E×B component along the radar beam can be of opposite sign.

Basics of STARE methodology

VHF coherent radars are sensitive to the meter-scale electrojet irregularities. In the original STARE method, it was assumed that the velocity of these irregularities along a specific radar beam is simply the component of the plasma V_E×B drift (Greenwald et al., 1978; Reinleitner and Nielsen, 1985). So, by merging l-o-s velocities from two different directions (radar beams), one can infer the total plasma convection vector (the stereoscopic technique; see the sketch at the end of this section). On the other hand, it is well known that the electrojet irregularities can be either of Type 1 or Type 2.
The Type 1 irregularities are quite strong plasma fluctuations excited along the electron flow direction within a limited cone of aspect and flow angles (in-cone irregularities). These irregularities are excited when the plasma drift exceeds the Farley-Buneman instability threshold of 300-400 m/s (the ion-acoustic speed at E-region heights). It is generally accepted that Type 1 irregularities move approximately with the ion-acoustic speed, so that one cannot directly use Doppler measurements from such directions for the stereoscopic derivation of plasma convection. The Type 2 (out-of-cone) irregularities are relatively weak plasma fluctuations that can be seen at large flow angles and/or at increased off-orthogonal angles, and it is widely accepted that their velocity is close to the "cosine" component of the V_E×B electron drift along the radar beam. For the STARE experiment, the Norway radar quite often sees Type 1 irregularities, while the Finland radar typically sees Type 2 irregularities, since the former radar observes close to the L-shell directions while the latter observes perpendicular to the L-shell directions.

It is well established now that the plasma temperatures, and thus the ion-acoustic speed in the E-region, increase with the ambient electric field. A number of authors (e.g. St.-Maurice et al., 1981; Robinson, 1986; Robinson and Honary, 1990; St.-Maurice, 1990) suggested that plasma heating and the VHF velocity limitation are products of enhanced F-B plasma fluctuations. Experimental data on Type 1 velocities and on electron and ion temperatures confirm this idea, but only to some extent (see, e.g. Haldoupis and Schlegel, 1990; Haldoupis et al., 1993). Nielsen and Schlegel (1985) carefully established the V_ph-to-V_E×B relationship for observations along the electrojet. Their parabolic regression formula is well in line with the philosophy of plasma wave heating (though it does not deny other mechanisms) but, more importantly, it allows one to estimate the plasma convection component along the flow even if Type 1 echoes occur. Nielsen and Schlegel (1985) proposed a new approach for convection estimation in the case of fast flows, the IAA method. In this approach, the convection component along the flow estimated from the empirical V_ph-to-V_E×B relationship is merged with a velocity component from the other radar that simultaneously observes echoes at large flow angles (there is typically a ~60° difference in the azimuths of the radars' wave vectors k). It is important to stress that the IAA method assumes that the large-flow-angle Type 2 velocity is the cosine component of the plasma convection V_E×B.

It is clear that uncertainties in the STARE convection estimates may potentially arise from a lack of precise knowledge of the relationship between the velocity of Type 1 and Type 2 waves and the convection, and from violation of the "cosine" rule for the Type 2 irregularities. Both of these questions require further investigation to refine and expand the IAA method. In this study we focus on the irregularity phase velocity at large flow angles.
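The stereoscopic merging step itself is a small linear-algebra exercise: two l-o-s components along non-parallel beam directions determine the horizontal drift vector. A minimal sketch follows; the beam azimuths and velocity values are made-up illustration numbers, not data from this event.

```python
import numpy as np

def merge_los(az1_deg, v1, az2_deg, v2):
    """Recover a horizontal drift vector from two line-of-sight components.
    az*_deg: beam azimuths (degrees east of north); v*: l-o-s velocities
    (positive along the beam direction). Solves k_i . V = v_i."""
    k = np.array([[np.sin(np.radians(az1_deg)), np.cos(np.radians(az1_deg))],
                  [np.sin(np.radians(az2_deg)), np.cos(np.radians(az2_deg))]])
    return np.linalg.solve(k, np.array([v1, v2]))   # V = (east, north)

# illustrative beams ~60 degrees apart in azimuth, as for STARE
V = merge_los(30.0, 400.0, 90.0, -250.0)
speed = np.hypot(*V)
azimuth = np.degrees(np.arctan2(V[0], V[1]))        # east of north
print(speed, azimuth)
```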
Experimental setup

We consider data gathered by the STARE radars (operating frequencies 143.8 and 140 MHz for the Norway and Finland radars, respectively) between 00:00 and 04:00 UT on 12 February 1999. Figure 1 shows the orientations of the Finland beam 3 and Norway beam 4. Data from these beams were selected for the simple reason that their intersection at E-layer altitudes covers the magnetic flux tubes where EISCAT measurements of the electric field (the large dot in Fig. 1) were available. The lines crossing the STARE beams indicate ranges of 600 and 900 km, assuming a mean backscatter altitude of 110 km. The distances from the STARE radar sites at Hankasalmi, Finland and Midtsandan, Norway, to the EISCAT E-layer collecting area are 885 km (bin 26) and 750 km (bin 17), respectively. The one-way 3-dB STARE antenna beam width is 3.2°. During the event, both radars were collecting data with 15-km range resolution, covering the range interval of 495-1245 km. The STARE velocity and power were measured using the standard single-to-double pulse pattern (Greenwald et al., 1978) with 20-s averaging.

The EISCAT UHF radar was run in the CP-1K mode, with the Tromso antenna pointed along the local magnetic flux line and the Kiruna and Sodankyla receiver beams oriented toward a common volume at a height of ~250 km. Such a configuration of the EISCAT beams allowed us to perform tri-static electric field measurements. The diameter of the EISCAT beam spot is ~1 km in the E-layer and ~2.6 km in the F-layer, meaning that the E-layer (F-layer) horizontal projection of the EISCAT scattering volume has an area about 3 orders (2 orders) of magnitude smaller than the collecting areas of the STARE radars.

Since only large-scale variations of the electric field are mapped up to the F-region heights from the E-region (the parallel attenuation length in electric field mapping, Kelley, 1989), the EISCAT-measured electric field actually corresponds to a larger "effective" area (with roughly the same velocities as at the EISCAT spot), larger by perhaps 1.5-2 times. This is in contrast with the electron density and temperature measurements, which correspond exactly to the EISCAT spot in the E-region. This means that the EISCAT F-layer velocity data are more appropriate for direct comparison with the STARE data than the E-layer electron density and temperature data.

The electron density and electron/ion temperature measurements were also made by EISCAT in both the E- and F-regions. The altitude resolution of the density and temperature measurements was ~3.1 km below ~180 km and ~22 km above ~180 km. The EISCAT convection data were available with 1-min resolution, while the electron density and temperature data had 2-min averaging. In our presentation below we adopted a common 4-min averaging for all data (with the exception that 10-min N(h) profiles were used for the calculation presented in Fig. 4).

Event overview

The early morning of 12 February 1999 was a moderately disturbed period. The local magnetic perturbations over Scandinavia detected by the IMAGE magnetometers were ~100 nT prior to 01:00 UT and stronger, ~350-400 nT, afterwards, between 02:00 and 03:00 UT. Both STARE radars detected backscatter in a broad band of ranges covering the EISCAT spot and stretching all the way to the E-layer radio horizon.
Figure 2 shows the STARE Norway and Finland data (for the ranges of their intersection) and the ionospheric parameters measured by EISCAT for the whole period under study. Panel (a) illustrates the STARE Finland (green) and Norway (light blue) echo SNRs in beams 3 and 4, respectively. The Norway SNRs were decreased by 2.1 dB to account for the difference in the radar distances to the scattering point (assuming an R⁻³ factor of power attenuation). Orange open circles show the mean EISCAT electron density between 103 and 123 km, the height interval of the largest volume cross sections (see the description of Fig. 4 below). We present the electron densities in logarithmic units adjusted to the values of SNR, so that if the echo power variations were purely a product of electron density changes (SNR ∝ N²), one would see this relationship directly. A 20-dB SNR corresponds to a density of 0.23×10¹¹ m⁻³. A doubling (halving) of electron density would make a 6-dB positive (negative) change on the SNR scale.

There are two Norway SNR enhancements, around 01:00-01:15 and 02:15-02:40 UT. The overall SNR increase from the first to the second event is ~10 dB. It corresponds well to the electron density increase by a factor of ~3 (see the mean EISCAT electron densities at 01:03-01:15 and 02:15-02:40 UT). Such a correlation between the electron density and SNR is well known, e.g. Williams et al. (1999). It can be clearly seen under the condition of strong plasma flow (Oksman et al., 1986; Nielsen et al., 1988), which is the case for the considered event.

The two short (5-7 min) drops in ionisation centred at 00:55 and 02:10 UT are not reflected in the SNR. These were also not detected by the IMAGE magnetometers (data are not presented here). We suggest that these density drops were very localised and were not seen in the SNR due to the STARE radars' collecting areas being 3 orders of magnitude larger than the EISCAT collecting area. Possible exotic refraction effects, which one might expect in an area of decreased structured ionisation, were not seen, in our opinion, due to the prevailing backscatter from the surrounding background plasma.

The SNR variations show some correlation with the E-field (through a change in the F-B/GD turbulence level, see Nielsen et al., 1988). For example, between 00:30 and 00:45 UT, panel (a), when there were no significant changes in the electron density, both Norway and Finland SNRs show a gradual increase in response to the E-field increase (see the V_E×B velocity in panel (e), dark blue line). The SNR decrease after 02:40 UT correlates well with the E-field decrease. One can also notice that the Norway SNRs (observations along the flow) are not as sensitive to the E-field variations between 01:15 and 01:45 UT as the Finland SNRs (observations perpendicular to the flow), which is expected (Nielsen et al., 1988).

In panels (b) and (c) we show the STARE l-o-s velocities (again green and light blue for Finland beam 3 and Norway beam 4, respectively), together with the EISCAT convection components along each beam (dark blue). The black open circles in panels (b) and (c) are the ion-acoustic speed as estimated from the EISCAT temperatures T_e and T_i at 111 km, assuming an electron and ion specific heat ratio of 1. If one assumes that the electrons are adiabatic with 3 degrees of freedom (heat ratio of 5/3, Farley and Providakes, 1989), then the ion-acoustic velocity becomes ~15% higher for the cases when T_e ~ T_i, and ~30% higher for the cases when T_e >> T_i (these values are not shown in Fig. 2).
Panel (d) shows the azimuth of the plasma flow according to EISCAT (dark blue lines) and according to the STARE "stereoscopic" cosine-rule method. Panel (e) shows the behaviour of the total EISCAT and total STARE velocities (dark blue and green lines).

The Finland velocities (Fig. 2b) were positive at all times, typically smaller in magnitude than the Norway velocities and smaller than the ion-acoustic speed (black open circles). The maximum Finland velocity of ~700 m/s was reached between 02:10 and 02:15 UT. One can conclude that the Finland radar observed backscatter from out-of-cone irregularities.

A striking feature of the Finland data is that the velocities were almost never smaller than the EISCAT velocity component, and they significantly exceeded the EISCAT convection component between 00:45 UT and 01:15 UT (while still being less than the ion-acoustic speed). The difference reached a remarkable factor of 2. We call this effect the Finland velocity "overspeed". It is important to note that the STARE and EISCAT flow azimuths show the greatest deviations, of ~20° (panel d), during the times of the Finland velocity overspeed. The data for these periods disagree with the notion that, outside the F-B instability cone, the Doppler velocity is simply the cosine component of the electron flow. This effect will be explored later.

The STARE Norway velocities, Fig. 2c, were negative all the time and well above the expected nominal F-B instability threshold of 400 m/s (Nielsen and Schlegel, 1985). Velocities reached unusually large values of 800 m/s at ~01:00 UT and even larger values (~1100 m/s) at ~02:10 UT. In spite of the large magnitudes, the Norway velocities were close to the "isothermal" ion-acoustic speed at 111 km (however, they were less than the "adiabatic" ion-acoustic speed) and smaller than the EISCAT velocity component along this beam. According to EISCAT, the electron flow was mostly eastward (azimuth of 70-75°), which gives a flow angle of 37-42° for this radar. One can conclude that the Norway radar observed in-cone irregularities most of the time if the electrons were "isothermal", and out-of-cone irregularities if the electrons were "adiabatic".

Fig. 3. The EISCAT E-layer electron density contours in units of 10¹⁰ m⁻³. Vertical lines limit the two intervals of data which were used in the modelling.

As a whole, the Norway data are consistent with the observations of Nielsen and Schlegel (1985), except that the electron drifts and STARE velocities were much stronger in our case (EISCAT drifts were as large as 3000 m/s).

The EISCAT electron density distribution in the E-layer for the entire event is given in Fig. 3.
An obvious feature here is two ~10-min lift-ups of the E-layer around ~01:00 and ~02:10 UT, seen as "holes" in the electron density contours. For the first event, the E-layer height increase was around 10 km. For the second event, the density behaviour was more complicated. The electron density holes most probably corresponded to narrow, zonally-oriented structures which may be associated with weak auroral arcs. Due to cloudiness, no good optical data were available at the FMI all-sky camera network at KIL, KEV and ABK, but keograms show some weak luminosity enhancements at these times. The E-region height increase around 01:00 and 02:10 UT will be a supporting point in the explanation of the Finland velocity overspeed effect. The vertical lines centred at 00:50 and 02:25 UT indicate two intervals for which the electron density data were selected for modelling purposes. These periods correspond to the depleted and background ionospheres, respectively.

Convection estimates from STARE data

One might think that the unusually large velocities of both STARE radars would lead to serious errors in electron drift estimates. Figures 2e, d show the irregularity drift velocity magnitude and azimuth derived through the standard STARE merging method (green lines) and the electron flow velocity magnitude and azimuth through the IAA method (red line). For IAA, the parabolic formula with a limiting velocity of 400 m/s for the Norway data (Nielsen and Schlegel, 1985) and the measured Finland (out-of-cone) velocity V_ph(k) were used. Also shown in panel (e) is the magnitude of the EISCAT V_E×B velocity (dark blue line). One can clearly see that the standard STARE data merging gives a reasonable estimate of the flow azimuth, with some (5-20°) clockwise offset with respect to the EISCAT azimuth, Fig. 2d. In terms of magnitude, the merged STARE velocity (green line) is smaller than the EISCAT velocity most of the time. The IAA method (red line) gives velocity azimuths closer to the EISCAT measurements, by 6-7°, as expected (Nielsen and Schlegel, 1985). One can conclude that the strong Finland radar velocities did not affect the IAA convection estimates in a significant way, because the Norway radar velocity was always the largest component in determining the resultant estimate.

Off-orthogonal fluid approach (OOFA)

In an attempt to understand the reasons for the observed differences between the EISCAT convection and the Finland velocity (Fig. 2b) and the IAA convection estimates (Figs. 2d, e), we consider the potential impact of STARE signal collection from various heights on the irregularity drift velocity, and the velocity reaction to the electron density redistribution in the ionosphere. We assume that for the out-of-cone irregularities (large flow angles), the linear fluid formula for the irregularity phase velocity V_ph is appropriate (Fejer and Kelley, 1980),

V_ph = (V_e + R V_i) / (1 + R),   (1)

where R = R_0 (cos²ψ + (Ω_e²/ν_e²) sin²ψ) and R_0 = ν_e ν_i / (Ω_e Ω_i); here ν_e,i and Ω_e,i are the electron and ion collision frequencies with neutrals and the gyrofrequencies, V_e,i are the electron and ion drift velocities, and ψ is the off-orthogonal (or aspect) angle. A coherent radar measures the component of this velocity, V_ph(k), along a specific beam direction, i.e. along the radar wave vector k.

In the past, various researchers assumed that the STARE aspect angles over the EISCAT spot are around zero, so that the factor R in Eq. (1) is small and the ion term is negligible.
We argue here that such approximations are not always good enough, since electrojet irregularities can be excited within an extended range of heights, ~95-125 km (e.g. Pfaff et al., 1984), so that the effective backscatter layer can be 15-20 km thick. For this reason, we propose to call our approach to auroral echo velocity interpretation the off-orthogonal fluid approach (OOFA).

One can define the effective aspect angle and the effective backscatter height of observations as a power-normalised aspect angle and height, respectively,

ψ_eff = ∫ P(h) ψ(h) dh / ∫ P(h) dh,   (2)

h_eff = ∫ P(h) h dh / ∫ P(h) dh.   (3)

OOFA modelling

In these formulas, P(h) ∝ <(δN/N)²> (N(h)/N_max)² exp(−a² tan²ψ(h)) is the relative backscatter power (or the relative volume cross section) at a specific height, where the local aspect angle ψ(h) assumes a certain value. The power depends on the fractional electron density fluctuation amplitude <(δN/N)²>^1/2 (Oksman et al., 1986), which, for simplicity, is assumed to be height independent (as in the measurements, for example, by Pfaff et al., 1984). The power also depends on the E-layer electron density N(h). The parameter a defines the strength of the power attenuation with aspect angle. We assume a ~ 50, which, for aspect angles between 0 and 3°, corresponds to a mean attenuation of ~10 dB/°, in agreement with experimental data (Fejer and Kelley, 1980). We assume that the aspect angle attenuation is independent of wavelength (Farley et al., 1981).

In the model calculations we use two electron density profiles observed by EISCAT around 02:25 and 00:50 UT (Fig. 3) for the regular and depleted ionospheres, respectively. The smoothed EISCAT N(h) profiles (labelled (1) and (2)) are shown in both panels of Fig. 4 (green lines); the upper panel is for the Finland (F) radar while the lower is for the Norway (N) radar. Also, we adopt a linear variation of the aspect angle with height, with gradient dψ/dh of ~0.07°/km and ~0.08°/km for the Finland and Norway radars, respectively (dψ/dh ~ 1/(R_E sin v), where R_E is the Earth's radius and v is the angle between the vectors from the Earth's centre to the radar site and to the backscatter point; Uspensky et al., 1986). The height of zero aspect angle was assumed to be 100 km. For the selected ψ(h) and N(h) profiles one can determine P(h) and then the effective aspect angle and height, ψ_eff and h_eff, from Eqs. (2) and (3).

The blue lines (1) and (2) in Fig. 4 show the relative volume cross section profiles for both radars. The differences between the Norway and Finland curves are not large. The obtained magnitudes of the effective aspect angle and effective height are indicated in the lower right of each panel. According to Fig. 4, the effective aspect angle for both radars in the regular ionosphere (profiles 1) is ~0.8°, with a mean backscatter altitude of ~111-112 km. For the depleted ionosphere (profiles 2), the effective aspect angle is ~1° and the mean backscatter altitude is ~114 km. One can conclude that, in spite of the assumed exact geometric orthogonality at 100 km, the effective aspect angles are not zero, although not too far from the aspect-angle instability cone predicted by linear fluid theory. The mean backscatter altitude is 111-114 km, noticeably higher than the 100-km height of zero aspect angle. One more feature is that the higher location of the depleted N(h) profile, by ~10 km with respect to the regular one (see Fig. 3), leads to only a small increase in the height of the cross section profile, by 2.5-3.5 km.
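The computation behind Eqs. (2) and (3) is straightforward to reproduce. Below is a minimal numerical sketch in which a made-up Gaussian N(h) profile stands in for the smoothed EISCAT profiles of Fig. 4; the profile parameters are illustrative assumptions, not the measured data.

```python
import numpy as np

h = np.arange(95.0, 126.0, 0.5)                   # heights, km
N = np.exp(-((h - 110.0) / 8.0) ** 2)             # toy N(h)/N_max profile
psi = np.abs(0.07 * (h - 100.0))                  # aspect angle, deg (Finland-like)
a = 50.0

# relative volume cross section P(h), with (dN/N)^2 held constant in height
P = N ** 2 * np.exp(-a ** 2 * np.tan(np.radians(psi)) ** 2)

psi_eff = np.trapz(P * psi, h) / np.trapz(P, h)   # Eq. (2)
h_eff = np.trapz(P * h, h) / np.trapz(P, h)       # Eq. (3)
print(psi_eff, h_eff)   # roughly 0.5-1 deg and above 110 km for such profiles
```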
Figure 5a illustrates the effective aspect angle as a function of the assumed height of the electron density profile maximum in the range 95-125 km (the height interval where irregularities can exist). Dashed lines are the absolute values of the geometric aspect angle at the EISCAT spot for the Finland (F) and Norway (N) radars, with exact orthogonality assumed at 100 km. Solid lines are the effective aspect angles, which are ~0.35° in the best case, with the N(h) profile maximum at the altitude of perfect orthogonality. A conclusion that can be drawn from Figs. 4 and 5 is that, in real conditions with exact orthogonality at a certain height, the auroral backscatter can never be treated as perfectly orthogonal. The effective aspect angle gradually approaches the geometric aspect angle once the latter exceeds 0.5-0.7°.

The non-orthogonality of backscatter can have a significant impact on the measured velocity. An important effect is that the phase velocity changes with height (Fig. 5b), since the ion term in Eq. (1) becomes more significant at the top of the electrojet layer. This term contributes more significantly if the aspect angle is non-zero (Makarevitch et al., 2002), which is exactly the case for the Finland radar observations over the EISCAT spot. The ion motion is responsible for a shift of the direction of the maximum irregularity velocity away from the V_E×B direction (see the three red lines in Fig. 5b for aspect angles of 0.5, 1.0 and 1.5°). This effect can be as large as 20°. We also show in Fig. 5b that the growth rate of the F-B instability changes with height (blue line), and the direction of preferential instability excitation rotates with height in the opposite direction; however, this effect is not a concern for this study. For the calculations of Fig. 5b, the semi-empirical model for ion-neutral collision frequencies of Huuskonen et al. (1989) was used. Electron collision frequencies were computed using the approach of Schlegel (1983).

In this section we attempt to predict the temporal variations of the Finland and Norway velocities by adopting the OOFA concept. For the modelling, we assumed that the backscatter altitude and the effective aspect angle vary with time, as shown in Figs. 6a and b. We select ψ_eff and h_eff in such a way that the lowest and highest altitudes and aspect angles are matched (in magnitude and time) with the calculations presented in Fig. 4 for the background and depleted ionospheres. Gradual temporal changes in the curves are exponential. For the assumed parameters we found the relative azimuth turn of the irregularity phase velocity vector V_ph with respect to the mean azimuth of V_E×B (~75°, dashed line, Fig. 6c), and the ratio between the vector magnitudes (Fig. 6d), which is close to 0.5.

Fig. 7. (a) The standard merged STARE irregularity drift velocity and EISCAT V_E×B velocity azimuth (blue and green lines, respectively), together with the IAA-predicted electron flow azimuth (dark red dashed line) and the OOFA-predicted electron flow azimuth (grey line); the dotted line is the IAA(Norway)/OOFA(Finland)-predicted electron flow azimuth. (b) The standard merged STARE and EISCAT flow velocity magnitudes (green and blue lines), together with the overlaid IAA-predicted electron velocity magnitude (dark red dashed line), the OOFA-predicted electron velocity magnitude (grey line), and the IAA/OOFA-predicted electron velocity magnitude (dotted line).
The rather minor changes in |V_ph|/|V_E×B| with the aspect angle recognizable in this diagram are due to two competing factors. Indeed, the V_ph magnitude decreases with the aspect angle through the denominator in Eq. (1); at the same time, the ion term in the numerator increases the V_ph magnitude. One might think that the expected low phase velocities for the Finland and Norway radars would lead to a serious underestimation of the total velocity.

Does OOFA give reasonable convection estimates?

We now try to predict the magnitude and azimuth of V_E×B from the original STARE velocities within the OOFA method, to show that one can still obtain reasonable convection estimates. Figures 7a and b show the EISCAT V_E×B electron flow azimuth and magnitude (blue lines) and the stereoscopic STARE V_ph convection estimates (green lines). The overlaid red lines in Figs. 7a and b are the convection estimates according to the IAA method. We show the OOFA-predicted electron flow azimuth and magnitude by grey lines. In spite of the different physics, both the IAA and OOFA methods correspond to the EISCAT V_E×B data reasonably well, with the IAA method slightly underestimating the magnitude. Note also the differences between the EISCAT magnitudes and the IAA-predicted magnitudes around 00:40 and after 02:35 UT (with no such differences for the OOFA predictions), when the measured Norway velocity drops below the suggested morning limiting velocity of 400 m/s or less (Nielsen and Schlegel, 1985). Dotted lines in Figs. 7a and b show the results of merging the IAA velocity estimates for the Norway beam with the OOFA velocity estimates for the Finland beam. The latter case improves the prediction of the electron velocity magnitude but increases the offset of the electron flow azimuth. All three methods give slightly different but reasonable estimates for the convection magnitude and azimuth. It is a surprise to the authors that OOFA gives good velocity estimates; previous attempts with a simpler approach were not so successful (e.g. Kustov et al., 1989; Kustov and Haldoupis, 1992). One can say that the STARE phase velocity underestimation found by Nielsen and Schlegel (1983, 1985) can be explained, to a significant extent, by simple linear fluid theory, without invoking velocity saturation at the ion-acoustic velocity.

Can the Finland l-o-s velocity be above the convection velocity?

We believe that the Finland velocity overspeed is a product of the non-collinearity of the V_E×B and V_ph vectors. Figures 8a-d explain our idea. In Fig. 8a one can see that V_ph is the result of the electron and ion drift vector contributions, V_e and R·V_i. The resultant vector is reduced by a factor of ~2 due to the 1/(1+R) term in Eq. (1). The relationship between the observed Doppler velocity, the maximum irregularity phase velocity and the plasma drift component along a specific beam depends strongly on the beam orientation. Figures 8b-d show three different situations for the STARE Finland beam 3. Here the maximum irregularity phase velocity V_ph and its component along beam 3 are shown, together with the corresponding EISCAT V_E×B velocity and its component along beam 3.
Figure 8b illustrates the more typical situation, when the EISCAT V_E×B component is larger than the expected velocity of the electrojet irregularities. Here the observations are not very close to the perpendicular to the V_E×B flow direction (i.e. not close to the E-field direction). However, if observations are performed much closer to the electric field direction, as shown in Fig. 8c, it is possible for the irregularity phase velocity component (i.e. the observed Doppler velocity) to be larger than the V_E×B component. Figure 8d illustrates an even more exotic case, when the convection V_E×B component and the V_ph component are of different signs. One needs to have the beam oriented very close to the electric field direction for this case; this certainly may not happen very often, nor for a long period of time.

Inspection of Fig. 2d and/or Fig. 7a shows that the Finland Doppler velocity overspeed around 01:00 UT was seen when the EISCAT V_E×B vector was only 5-7° from the beam 3 normal (dotted horizontal line in Fig. 2d). This is in full agreement with the expectation of Fig. 8c. During the second electric field enhancement, around 02:10 UT, the EISCAT V_E×B vector was ~10-11° from the beam 3 normal. This means that both the V_ph and V_E×B vectors were slightly rotated, although the situation was still similar to the one shown in Fig. 8c. We see only a slight STARE overspeed and/or a nearly cosine-rule relation between the velocity components. We will discuss this feature later.

Discussion

In this study we first of all reconfirmed that STARE convection estimates show significant electron flow underestimates if the standard stereoscopic technique (using the simplest fluid plasma theory conclusions) is applied. We then showed that the IAA reduction technique of Nielsen and Schlegel (1985) gives a reasonable improvement of the convection estimates, both in terms of magnitude and direction, but still there were some differences that needed explanation.

Fig. 9. "Underspeed" areas for the positive (+) and negative (-) Doppler velocity components and the expected range of azimuth offsets between the V_E×B and V_ph vectors; the darker shading is the "overspeed" area and the darkest shading is the area with opposite signs of the phase velocity and the electron flow velocity component.

Our point in pushing a different approach is that some details within the IAA model are purely empirical, and justification for them often does not exist. For example, the approach assumes a slightly varying limiting velocity in the parabolic regression formula. If this velocity is related to the threshold of the F-B plasma instability, it is not clear why it changes so much. The ion-acoustic speed according to EISCAT can be 600 m/s or even larger (Fig. 2), and this is not reflected in the IAA methodology.

Generally, several effects (pointed out in the literature earlier) can contribute to the deviation of the radar-observed Doppler velocity from the simplest linear fluid formula for the irregularity phase velocity. They are:

(a) The V_ph(k) velocity saturation at the ion-acoustic speed for directions close to the V_E×B velocity (Nielsen and Schlegel, 1983, 1985; Robinson, 1986; Robinson, 1993).

(b) Kinetic effects that, in addition, allow the largest growth rates to occur at slightly off-orthogonal directions (Wang and Tsunoda, 1975; Schlegel, 1983), so that the irregularity phase velocity can be depressed due to off-orthogonality.
(c) The echo collection from a range of heights (Uspensky, 1985; Uspensky and Williams, 1988; Kustov et al., 1989, 1990, 1994). The effect can be described quantitatively by the altitude integration approach (AIA); the AIA model predicts some phase velocity decrease even at ranges with zero aspect angle at a certain height (Uspensky et al., 1994). The off-orthogonal fluid approach (OOFA) described in this study is a further improvement of the AIA approach.

(d) The flow angle saturation of the irregularity power spectrum (Janhunen, 1994a, b), where it is suggested that the macroscopic (i.e. radar-observed) irregularity velocity is less than the V_E×B velocity due to strong turbulence development. In such an environment the radar picks up echoes from those parts of the backscatter volume where the turbulent electric field happens to have a favourable direction for quick growth of the observed unstable waves. Oppenheim et al. (1996) also simulated the F-B instability and described related saturation effects. They found that the saturated wave phase velocity was less than that predicted by linear theory but above the acoustic speed. Similarly to Janhunen (1994b), they found that the dominant direction of saturated wave propagation obeys k·E < 0 and is thus shifted counterclockwise (when viewed from above) from the V_E×B vector, i.e. in the direction of the maximum of the linear growth rate, as shown in our Fig. 5b. Note that the OOFA method discussed in this study predicts the opposite direction of rotation, a clockwise V_ph rotation from the V_E×B electron flow direction.

(e) Neutral wind effects, which can modify the irregularity velocity through the ion term in Eq. (1); a numerical illustration of this term is given in the sketch after this list. Two cases of the velocity contribution are possible. The first is pure backscatter orthogonality over the EISCAT spot; for this scenario, the neutral wind contribution to the Doppler velocity can amount to only ~1% (the R_0 term, Eq. 1). In the second case, with non-zero effective aspect angles, the increase in R raises the neutral wind contribution to ~50% of the wind velocity magnitude. For the event under study, there were no neutral wind measurements in the area of interest. However, if one assumes that the southeastward neutral wind was of the order of 200 m/s (e.g. Tsunoda, 1988), its contribution to the irregularity velocity would be ~100 m/s. Such a wind velocity addition can be very important if the convection velocities are moderate or small. In our case of fast flows, with convection velocities of ~2000 m/s, a positive contribution to the irregularity velocity cannot be significant: perhaps less than a few percent in the |V_ph|/|V_E×B| ratio (Fig. 6d), and less than a ~10° increase in the irregularity velocity azimuth (Fig. 6c). The latter would worsen the mutual agreement of the EISCAT azimuth and the OOFA-predicted flow azimuth. Due to the reasonable agreement of the EISCAT and OOFA flow azimuths, we suggest that the real neutral wind was not so strong, or that the neutral wind height profile was below (or higher than) the backscatter volume cross section height profiles.
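As mentioned in item (e), the size of the ion-motion term in Eq. (1) is easy to explore numerically. The sketch below treats V_e and V_i as given input vectors and uses made-up values of R_0 and of the gyro-to-collision frequency ratio; it only illustrates how the azimuth offset between V_ph and V_e grows with the aspect angle, not the full height-dependent calculation behind Fig. 5b.

```python
import numpy as np

def v_ph(v_e, v_i, psi_deg, r0=0.1, omega_nu=100.0):
    """Linear fluid phase velocity, Eq. (1): V_ph = (V_e + R*V_i)/(1 + R),
    with R = R0 * (cos^2(psi) + (Omega_e/nu_e)^2 * sin^2(psi))."""
    psi = np.radians(psi_deg)
    r = r0 * (np.cos(psi) ** 2 + omega_nu ** 2 * np.sin(psi) ** 2)
    return (np.asarray(v_e) + r * np.asarray(v_i)) / (1.0 + r)

v_e = np.array([2000.0, 0.0])      # electron drift along x, m/s (illustrative)
v_i = np.array([300.0, -300.0])    # a slower, rotated ion drift (illustrative)
for psi in (0.0, 0.5, 1.0, 1.5):   # aspect angles, deg
    v = v_ph(v_e, v_i, psi)
    offset = np.degrees(np.arctan2(v[1], v[0]))   # azimuth turn from V_e
    print(psi, np.hypot(*v) / np.hypot(*v_e), round(offset, 1))
```

With these toy numbers, both the magnitude reduction and the azimuth rotation of V_ph grow as the aspect angle increases, which is the qualitative behaviour exploited by OOFA.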
For observations at large flow angles with the Finland radar, we discovered that its velocity was larger than the cosine component of the electron drift for an extended period of time. The effect is important to focus on, since nonlinear dissipative mechanisms cannot push irregularities faster than the plasma convection velocity, the driving factor for electrojet instabilities. Thus, we suggest that the overspeed effect signals a violation of the cosine rule for the irregularity phase velocity. A similar conclusion can be reached from the data of Nielsen et al. (2002) for low electron drifts. In our case, the effect was observed during unusually strong plasma flows. (It is a puzzle that a similar velocity overspeed can be seen for some observations of F-layer backscatter; Davies et al., 1999, their Fig. 4.)

We found that the period of strong STARE-Finland velocity overspeed coincided with the times of the E-region lifting up, and argued that the STARE echoes were coming from greater heights. At these heights, the ion contribution to the velocity of E-region irregularities is increased, especially in view of the fact that the aspect angles of observations are also larger here. The significance of the ion motions for the velocity of E-region decametre irregularities has been discussed recently by Uspensky et al. (2001) and Makarevitch et al. (2002).

We also argued that for a proper interpretation of STARE velocities, the non-orthogonality of backscatter should always be considered. In the past, there were attempts to include this effect into consideration. For example, Ogawa et al. (1982), Nielsen (1986) and Makarevitch et al. (2002) expressed the velocity decrease with the aspect angle in terms of the linear fluid theory formula by replacing the electron-neutral collision frequency with an increased anomalous frequency. In spite of the generally accepted possibility of non-orthogonal backscatter, the STARE echoes were often treated as received at zero aspect angles. Contrary to this, we assumed in this study that any auroral radar (even if it has a height with a zero aspect angle) can receive a lot of power from neighbouring heights, so that the measured velocity and the effective height of the scatter do not correspond to the height of the perfect aspect angle. To illustrate the effect we considered the measured electron density profiles and assumed aspect angles, together with the known magnetic anisotropy of the auroral radar backscatter, Figs. 4a, b and Fig. 5a. We demonstrated that the effective aspect angle of measurements can be between 0.4 and 1.0° and the effective height can be 10–15 km above the height of zero aspect angle.

The OOFA method is helpful in understanding other previously published data. Nielsen (1986) reported on the change in Doppler velocity with the aspect angle and interpreted this change in terms of anomalous collision theory. If one assumes that the scatter is actually slightly off-orthogonal, then for a geometrical aspect angle of 0° (as assumed by Nielsen, 1986) one should assign 0.77° (see our Fig. 4a), and for the geometrical aspect angle of 0.8° one should assign 1° of off-orthogonality. Then the velocity decrease of 80–85% reported by Nielsen (1986) is in agreement with our calculations presented in Fig. 6, where the aspect angle change from 0.8 to 1° corresponds to a V_ph change by a factor of ∼0.85. In rocket measurements of Bahnsen et al. (1978) and Primdahl and Bahnsen (1985), it was found that the wave phase velocity was ∼460 m/s for an electric field of ∼70 mV/m.
These data can be simply explained if one assumes that the bulk of the unstable waves are mainly off-orthogonal waves at 0.5–1°. This suggestion does not seem unreasonable, since the rocket instrument detects waves at various aspect angles.

We would like to mention that, from the plasma physics point of view, waves are generally more difficult to excite at off-orthogonal directions. However, it is well known that the F-B instability grows faster at ∼0.1–0.5° of aspect angle (Wang and Tsunoda, 1975; Schlegel, 1983). A similar conclusion was reached by Janhunen (1994a, b) for a marginal flow angle in his 2-D and 3-D F-B instability simulations.

One general conclusion from our consideration is that the observed Finland Doppler velocity can be smaller or larger than (or can even have the opposite sign of) the EISCAT V_E×B electron drift component, depending on the V_E×B azimuth with respect to the radar beam. In Fig. 9 we show the azimuths of observations for which we should have the overspeed effect, the regular underspeed relationship, and for which the polarities of the Doppler velocity and the convection component should be different. For the computation, we used the ratio |V_ph|/|V_E×B| = 0.5, which is close to our estimates. The light grey shading shows a range of possible angles between V_ph and V_E×B in the ionosphere. The solid line that starts at the point of 90° and runs into the RHS quadrant shows the directions for which the exact cosine dependence holds, i.e. V_ph = V_E×B cos Θ (Θ is the flow angle, see definition in Sect. 1). There are no other points on the whole plot where the exact cosine dependence would be in effect. Another line, which also starts from the point of 90° and runs into the LHS quadrant, reflects the situation with exact equality of the two component magnitudes, but of opposite signs. In the RHS quadrant with the light grey shading, both measured components of V_ph and V_E×B are positive, and the Doppler velocity should be smaller than the V_E×B component. In the LHS quadrant with light grey shading, both V_ph and V_E×B components are negative, and the Doppler velocity magnitude should be smaller than the V_E×B component magnitude. The darker shaded area is the one where the Doppler velocity is stronger than the V_E×B component, and the very dark shading corresponds to the area where the Doppler velocity and the V_E×B component have opposite signs.

One can conclude that, strictly speaking, there is no cosine-rule relationship between the Doppler velocity and the plasma convection, since V_ph and V_E×B never coincide in direction. In practice, this might be of secondary importance for many cases, but the effect is very essential for the F-B irregularity physics. For example, it can explain why Finland velocities can be much smaller than the EISCAT-measured convection component, as reported by Koustov et al. (2002).
The fact of non-collinearity between the V_ph and V_E×B vectors can also be used for the interpretation of the morning data of Haldoupis and Schlegel (1990) (see their Fig. 6a, and the morning data of Nielsen and Schlegel, 1985). Application of this ideology in the OOFA electron flow predictions showed reasonable results (note that the OOFA-predicted flow azimuth is the electron contribution to V_ph only). It is rather a surprise that the ion drift can softly contribute to and control the direction of the irregularity drift velocity vector in a situation where the largest linear growth rate is nearly 50° off that direction, Fig. 5b. The importance of the ion motion effect was stressed in a recent paper by Uspensky et al. (2001), where the authors found an evening clockwise turn of the irregularity drift velocity maximum with increasing height.

Effective off-orthogonality of auroral backscatter might be a factor for some F-region echoes. The reasonable agreement between the F-region l-o-s velocities and the electron drift velocities (e.g. Davies et al., 2000) probably means that the aspect angle dependence of the F-region phase velocity is much weaker than the aspect angle dependence of the power. For the E-region irregularities we have rather the opposite case; the power changes strongly with aspect angle, but not as strongly as the velocity does.

In this short morning case we have found a reasonable agreement between the EISCAT high-velocity electron flow data and the predictions of the convection from the OOFA method. Nevertheless, we are left with the impression that there still exist other linear and nonlinear effects open for study, which may allow the standard STARE stereoscopic velocity reduction to be modified for successful predictions of plasma convection.

Conclusions

In this study we found that:

1. The standard STARE data reduction based on the linear fluid plasma theory (with assumed zero aspect angles) gives a reasonable plasma drift azimuth estimate and underestimates the plasma drift magnitude, as was first discovered by Nielsen and Schlegel (1983, 1985).

2. The ion-acoustic approach with the fixed F-B threshold of 400 m/s applied to the same STARE data gives reasonable (slightly underestimated) values of the electron flow magnitude and a ∼10° offset in direction.

3. The considered event reveals that the velocity of the out-of-cone irregularities measured by the STARE-Finland radar is not always the cosine component of the plasma convection. At some moments, the velocity was significantly larger than the electron flow velocity, the "overspeed" effect.

4. The Finland radar velocity overspeed can be explained by fluid plasma theory arguments, if the ion drift contribution to the irregularity velocity and moderate off-orthogonality of backscatter are both taken into account.

5. The ion drift contribution (as predicted by the linear fluid theory) is more pronounced in the upper part of the auroral E-layer for observations nearly orthogonal to the flow.

6. Merging of STARE velocities by assuming that they are a product of the off-orthogonal scatter can give reasonable estimates of the true electron velocity magnitude and azimuth.

7. The 5–15° angle between the V_E×B plasma convection and the V_ph flow (magnitude and direction of the largest irregularity drift velocity) means that the cosine relationship between V_ph and V_E×B is only a rough, first approximation in data statistics.

8. A possible neutral wind contribution to the irregularity phase velocity was not significant in our case, due to the rather strong convection velocities.
Fig. 2. The STARE (Finland radar beam 3 and the Norway radar beam 4) and EISCAT parameters: (a) SNR, green line for Finland and light blue line for Norway; orange open circles are mean electron densities between 103 and 123 km (on a logarithmic scale); (b) Finland line-of-sight velocity, green line, and matched EISCAT V_E×B velocity component, dark blue line; open black circles are the ion-acoustic speed at 111 km according to EISCAT; (c) the same as in (b) but for the STARE-Norway beam 4; the light blue line is the Norway l-o-s flow velocity and the red line is the STARE-predicted electron velocity component, according to the IAA method; (d) the standard STARE merged flow velocity azimuth, green line, and the EISCAT V_E×B electron flow azimuth, blue line; the red line is the IAA STARE electron velocity azimuth; (e) the total EISCAT and STARE flow velocity, blue and green lines, together with the IAA-predicted electron flow velocity, red line.

Fig. 5. (a) Rectilinear aspect angles (absolute values) for the Finland (F) and Norway (N) radars at the EISCAT spot, dashed lines, and the effective aspect angles found from Eq. (2) as a function of altitude for a model parabolic N(h)-profile; (b) azimuth of the largest phase velocity V_ph with respect to the V_E×B azimuth, red lines, for aspect angles of 0.5, 1.0 and 1.5°, and the azimuth of the fastest F-B instability growth, blue line, derived from the linear fluid theory.

Fig. 8. A sketch illustrating the relationship between the V_E×B and V_ph velocity components projected onto STARE beam 3: (a) a vector diagram showing the electron drift and ion drift contributions to the irregularity phase velocity V_ph, according to the linear fluid theory; (b) a case when the V_ph l-o-s component is less than the V_E×B l-o-s component; (c) a case of "overspeed", when the V_ph l-o-s component is larger than the V_E×B l-o-s component; and (d) a case when the V_ph l-o-s component has a polarity different from that of the V_E×B l-o-s component.

Fig. 9. Areas of applicability of the cosine rule for an EISCAT/STARE velocity ratio of |V_ph|/|V_E×B| = 0.5; the light shading covers two "underspeed" areas for the positive (+) and negative (−) Doppler velocity components and an expected range of the azimuth offsets between the V_E×B and V_ph vectors; the darker shading is the "overspeed" area and the darkest shading is the area with opposite signs of the phase velocity and the electron flow velocity component.
A Fast Algorithm for Selective Signal Extrapolation with Arbitrary Basis Functions

Signal extrapolation is an important task in digital signal processing for extending known signals into unknown areas. Selective Extrapolation is a very effective algorithm to achieve this. Thereby, the extrapolation is obtained by generating a model of the signal to be extrapolated as a weighted superposition of basis functions. Unfortunately, this algorithm is computationally very expensive and, up to now, efficient implementations exist only for basis function sets that emanate from discrete transforms. Within the scope of this contribution, a novel efficient solution for Selective Extrapolation is presented for utilization with arbitrary basis functions. The proposed algorithm mathematically behaves identically to the original Selective Extrapolation, but is several orders of magnitude faster. Furthermore, it is able to outperform existent fast transform domain algorithms, which are limited to basis function sets that belong to the corresponding transform. With that, the novel algorithm allows for an efficient use of arbitrary basis functions, even if they are only numerically defined.

I. INTRODUCTION

The extrapolation of signals is a very important area in digital signal processing, especially in image and video signal processing. Thereby, unknown or not accessible samples are estimated from known surrounding samples. In image and video processing, signal extrapolation tasks arise e.g. in the area of concealment of transmission errors as described in [1] or for prediction in hybrid video coding as shown in [2]. In general, signal extrapolation can be regarded as an underdetermined problem, as there are infinitely many different solutions for the signal to be estimated, based on the known samples. According to [3], sparsity-based algorithms are well suited for solving underdetermined problems, as these algorithms are able to cover important signal characteristics even if the underlying problem is underdetermined. These algorithms can be applied well to image and video signals, as in general natural signals are sparse [4] in certain domains, meaning that they can be described by only a few coefficients. As has been shown in [5], [6], out of the group of sparse algorithms the greedy sparse algorithms are of interest, as these algorithms are able to robustly solve the problem. One algorithm out of this group is e.g. Matching Pursuits from [7]. Another powerful greedy sparse algorithm is the Selective Extrapolation (SE) from [8]. SE iteratively generates a model of the signal to be extrapolated as a weighted superposition of basis functions. In the past years, this extrapolation algorithm has also been adopted by several others, like [9], [10], to solve extrapolation problems in their contexts. Unfortunately, SE as it exists up to now is computationally very expensive. This holds except for the case that basis function sets are regarded that emanate from discrete transforms. In such a case, the algorithm can be efficiently carried out in the transform domain. The functions of the Discrete Fourier Transform (DFT) [11] are one example for such a basis function set. Using this set, an efficient implementation in the Fourier domain exists by Frequency Selective Extrapolation (FSE) [8]. If basis function sets are regarded that do not emanate from discrete transforms, or overcomplete basis function sets, or even only numerically defined basis functions, such transform domain algorithms cannot exist.
Although Fourier basis functions have proven to form a good set for a wide range of signals, there also exist signals where other basis function sets lead to better extrapolation results. This holds, for example, for the case that the support area on which the extrapolation is based is very unequal, or in the case that very steep signal changes occur, as e.g. in artificial signals. Fig. 1 shows three examples of such signals. The left column shows the original signal, the second column shows a distorted signal with the area to be extrapolated marked in black. The signals in the third column result from applying FSE, which utilizes Fourier basis functions. In the last column, Selective Extrapolation is carried out with different basis function sets. In the first row, the basis function set results from the union of the functions from the Discrete Cosine Transform (DCT) [12] and the Walsh-Hadamard Transform (WHT) [13]. In the second row, a binarized version of the DFT functions is used in order to reconstruct the steep changes in this artificial signal. In the third row, the basis function set emanates from the union of DFT functions and binarized DFT functions. The three examples have in common that the used basis function sets produce significantly better subjective as well as objective results than the Fourier-based extrapolation does. But they also have in common that for such sets no efficient transform domain implementation can exist, which would be necessary for a fast implementation.

Within the scope of this contribution we want to introduce a novel spatial domain solution for SE, which is called Fast Selective Extrapolation (FaSE). This algorithm is able to generate a model of the signal for arbitrary basis functions in the same way as the original SE, even in the case that the basis function set does not possess any structure and the basis functions are only numerically defined, or in the case that an overcomplete basis function set is regarded. But at the same time, the algorithm is very fast, as it can efficiently trade computational complexity versus memory consumption.

The paper is organized as follows: first, SE will be reviewed for the general case of complex-valued basis functions. With that, an overview of the algorithm is given and the computationally most expensive steps are pointed out. After that, the novel Fast Selective Extrapolation is presented in detail and its complexity is compared to SE. Finally, simulation results are given for proving the abilities of the novel algorithm.

II. REVIEW OF SELECTIVE EXTRAPOLATION

For the presentation of Selective Extrapolation (SE), a scenario as shown in Fig. 2 is regarded. There, signal parts which have to be extrapolated are subsumed in loss area B. For extrapolating the signal, surrounding correctly received signal parts are used. These signal parts form the support area A. The two areas together form the so-called extrapolation area L, which is of size M × N samples and is depicted by the spatial coordinates m and n. The signal in L is denoted by s[m, n], but is only available in the support area A. The extrapolation of square blocks is used for presentational reasons at this point only. In general, arbitrarily shaped regions can be extrapolated. In addition to that, in general, the used basis functions can as well be larger than the regarded extrapolation area. In such a case, the extrapolation area has to be padded with zeros to be of the same size as the basis functions.
But, for presentational reasons, we also assume subsequently that the extrapolation area and the basis functions have the same size. As described in [8], SE aims at generating a parametric model g[m, n] of the signal as a weighted superposition of basis functions. To weight the individual basis functions, one expansion coefficient ĉ_k is assigned to each basis function ϕ_k[m, n]. The challenge is to determine which basis functions to use for the model and to calculate the corresponding weights. SE solves this problem iteratively, whereby in every iteration one basis function is selected and the corresponding weight is estimated. This is achieved by successively approximating signal s[m, n] in support area A and identifying the dominant basis functions of the signal. In doing so, the signal can be continued well into area B, if an appropriate set of basis functions is used.

Initially, model g^(0)[m, n] is set to zero, and with that the initial approximation residual is equal to the original signal. At the beginning of each iteration, in general the ν-th iteration, a weighted projection of the residual onto each basis function is conducted. For every basis function, this leads to the projection coefficient, which results from the quotient of the weighted scalar product between the residual and the basis function and the weighted scalar product between the basis function and itself. In this context, the weighting function has two tasks. Firstly, it is used to mask area B from the calculation of the scalar product, as there is no information available about the signal there. Secondly, using the function ρ[m, n] it can control the influence different samples have on the model generation, depending on their position. For instance, samples far away from loss area B can get a smaller weight and, due to this, a weaker influence on the model generation compared to the samples close to area B. In [14], an exponentially decreasing weight is proposed, with ρ̂ controlling the decay.

After the projection coefficients have been calculated for all basis functions, one basis function has to be selected to be added to the model in the actual iteration. The choice falls on the basis function that minimizes the weighted distance between the approximation residual r^(ν−1)[m, n] and the projection p_k^(ν) ϕ_k[m, n] onto the according basis function. In this process, again the weighting function w[m, n] from above is used. Hence, the index u^(ν) of the basis function to be added in the ν-th iteration is:

u^(ν) = argmin_k Σ_{(m,n)∈L} w[m, n] · |r^(ν−1)[m, n] − p_k^(ν) ϕ_k[m, n]|²   (7)

Subsequent to the basis function selection, the corresponding weight has to be determined. In this process it has to be noted that, although the basis functions may have been orthogonal with respect to the complete extrapolation area L, they cannot be orthogonal anymore if the scalar products are evaluated in combination with the required weighting function. This effect is not considered in the original paper [8]; it is called orthogonality deficiency and is described in detail in [15]. In [16], fast orthogonality deficiency compensation is proposed to efficiently estimate the expansion coefficient by taking only the fraction γ of the projection coefficient:

ĉ_{u^(ν)} = γ · p_{u^(ν)}^(ν)

The factor γ is between zero and one and depends on the extrapolation scenario, as described in detail in [16].
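To make the review concrete, below is a minimal Python sketch of one SE iteration as just described: weighted projection of the residual onto every basis function, selection by the weighted distance criterion of (7), and the γ-scaled coefficient estimate. It is an illustrative reconstruction under simplified assumptions (real-valued signals), not the reference implementation; all names are ours.

```python
import numpy as np

def se_iteration(r, phi, w, gamma=0.7):
    """One Selective Extrapolation iteration.

    r     : (M, N) approximation residual
    phi   : (K, M, N) stack of basis functions
    w     : (M, N) weighting function (0 in loss area B)
    gamma : orthogonality deficiency compensation factor from [16]
    Returns (selected index u, coefficient c_hat).
    """
    best_u, best_dist, best_p = 0, np.inf, 0.0
    for k in range(phi.shape[0]):
        # Projection coefficient: quotient of weighted scalar products
        num = np.sum(w * r * phi[k])
        den = np.sum(w * phi[k] * phi[k])
        p_k = num / den
        # Weighted distance between residual and projection, Eq. (7)
        dist = np.sum(w * (r - p_k * phi[k]) ** 2)
        if dist < best_dist:
            best_u, best_dist, best_p = k, dist, p_k
    return best_u, gamma * best_p  # c_hat per the compensation rule

# Model/residual update per the following paragraph:
#   g += c_hat * phi[u];  r -= c_hat * phi[u]
```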
After one basis function has been selected and the corresponding weight has been determined, the model and the residual have to be updated by adding the selected basis function to the model generated so far:

g^(ν)[m, n] = g^(ν−1)[m, n] + ĉ_{u^(ν)} · ϕ_{u^(ν)}[m, n]   (9)

The approximation residual can be updated in the same way and results in

r^(ν)[m, n] = r^(ν−1)[m, n] − ĉ_{u^(ν)} · ϕ_{u^(ν)}[m, n]   (10)

The above described iterations are repeated until the predefined number of I iterations is reached. Finally, area B is cut out of the model and is used for replacing the lost signal. Alg. 1 shows the pseudo code of SE, giving a compact overview of this algorithm. Regarding this code and taking into account the equations above, the weighted projection onto all the basis functions in every iteration can be identified as the computationally most expensive step. To obtain the projection, a weighted scalar product between the residual and every basis function has to be carried out, leading to a large number of multiplications and additions. Compared to this, the actual basis function selection, the expansion coefficient estimation, and the model and residual update have a very small complexity.

III. FAST SELECTIVE EXTRAPOLATION

In order to solve the dilemma of the huge computational complexity of SE, we propose a novel formulation of this algorithm that also operates in the spatial domain but is as fast as the transform domain algorithms which have been mentioned at the beginning. With that, the advantages of both approaches are combined: the high speed of transform domain algorithms and the independence from certain basis function sets, offered by the spatial domain SE algorithm. The high speed of the novel algorithm results from the fact that the weighted scalar products only have to be evaluated once, prior to the first iteration. In the successive iterations they can be replaced by a recursive calculation. The novel algorithm is called Fast Selective Extrapolation (FaSE) and is outlined in detail for the general complex-valued scenario subsequently. If only real-valued signals and basis functions are regarded, the conjugate complex operations can just be discarded.

Although the principal behavior of FaSE is similar to SE, not the residual r[m, n] in the spatial domain is regarded, but rather the weighted scalar products between the residual and the basis functions. This yields

R_k^(ν) = Σ_{(m,n)∈L} w[m, n] · r^(ν)[m, n] · ϕ_k*[m, n]   (11)

for depicting the weighted scalar product between the residual and the basis function with index k in the ν-th iteration. This scalar product has to be evaluated explicitly only once. This has to be done for the initial step, where the residual is equal to the original signal, leading to

R_k^(0) = Σ_{(m,n)∈L} w[m, n] · s[m, n] · ϕ_k*[m, n]   (12)

After the initial R_k^(0) have been determined, all subsequent calculations can be carried out with respect to the weighted scalar products, and no explicit evaluation of the scalar products is necessary anymore. Using R_k^(ν) and exploiting the fact that the square root is a monotonically increasing function for positive arguments, the basis function selection from (7) can be simplified to

u^(ν) = argmax_k ( |R_k^(ν−1)|² / Σ_{(m,n)∈L} w[m, n] · |ϕ_k[m, n]|² )   (13)

Using the expression R_{u^(ν)}^(ν−1) for the weighted scalar product between the selected basis function and the residual from the previous iteration, the estimate for the expansion coefficient results in

ĉ_{u^(ν)} = γ · R_{u^(ν)}^(ν−1) / Σ_{(m,n)∈L} w[m, n] · |ϕ_{u^(ν)}[m, n]|²   (14)

Here, again, fast orthogonality deficiency compensation is used to derive the estimate for the expansion coefficient from the projection coefficient. Finally, the update of the model in every iteration can be carried out according to (9).
For the subsequent iterations, the weighted scalar products can be updated by applying definition (11) to the residual update from (10), yielding

R_k^(ν) = R_k^(ν−1) − ĉ_{u^(ν)} · Σ_{(m,n)∈L} w[m, n] · ϕ_{u^(ν)}[m, n] · ϕ_k*[m, n]   (15)

Obviously, the weighted scalar product between the residual and a certain basis function can be easily updated from one iteration to the other by subtracting the weighted scalar product between the actual basis function and the selected one, further weighted by the estimated expansion coefficient. Since the update only incorporates the weighted scalar product between two basis functions and is independent of the actual residual, it can be carried out very fast by calculating the different weighted scalar products of all basis functions in advance.

This novel formulation of the SE algorithm has two advantages. First of all, the residual now does not have to be calculated explicitly in every iteration step anymore; rather, the weighted scalar products between the residual and the basis functions are updated. But more important is the fact that the most complex calculations can be carried out in advance and can be tabulated. Namely, these are the weighted scalar products between every two basis functions, and one over the square root of the weighted scalar product between a basis function and itself. This leads to the matrix

C(k, l) = Σ_{(m,n)∈L} w[m, n] · ϕ_k[m, n] · ϕ_l*[m, n]   (16)

containing the weighted scalar products between every two basis functions, and the vector

D_k = 1 / sqrt( Σ_{(m,n)∈L} w[m, n] · |ϕ_k[m, n]|² )   (17)

holding the inverse of the square root of the weighted scalar products. Obviously, C(k, l) and D_k are independent of the input signal and the residual. Hence, they only have to be calculated once and do not have to be recalculated for every extrapolation process. Thus, they can either be computed at the beginning of the extrapolation process or read from storage. During the whole computation, they are kept in memory. Furthermore, as C(k, l) is of size |D|² and D_k has length |D|, the memory consumption is manageable without any problems. Here, the expression |D| denotes the cardinality of dictionary D that contains all possible basis functions. Regarding the two equations above, one can see that they both depend on the weighting function. If different weighting functions are used, C(k, l) and D_k have to be adapted according to the weighting function. But, regarding typical signal extrapolation tasks, as e.g. error concealment or prediction, the same patterns or only a small number of different patterns occur. Therefore, this also is no problem, as C(k, l) and D_k can be calculated for the different patterns in advance as well. During the generation of C(k, l), the complex symmetry of this matrix can be exploited, and only (|D|² + |D|)/2 weighted scalar products have to be actually calculated.

Using these pre-calculated and tabulated values, the basis function selection from (13) can be rewritten as

u^(ν) = argmax_k |R_k^(ν−1) · D_k|   (18)

In addition to that, the estimation of the expansion coefficient from (14) can also be expressed very compactly by

ĉ_{u^(ν)} = γ · R_{u^(ν)}^(ν−1) · D_{u^(ν)}²   (19)

Furthermore, the update of the weighted scalar products between the residual and all possible basis functions from (15) can also be formulated very efficiently by

R_k^(ν) = R_k^(ν−1) − ĉ_{u^(ν)} · C(u^(ν), k)   (20)

Regarding the three equations above, one can recognize that, instead of evaluating the weighted scalar products explicitly in every iteration step, only one value has to be read from memory for every calculation. Thus, the very high computational load of the original spatial domain SE is traded against an increased memory consumption. But as the memory consumption still is easily manageable, this is a quite reasonable exchange.
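The complete FaSE iteration loop then reduces to table look-ups, as the following Python sketch illustrates. It mirrors (12) and (18)–(20) under the same simplifying assumptions as the SE sketch above (real-valued data); the variable names are ours and the snippet is a study aid, not the authors' implementation.

```python
import numpy as np

def fase(s, phi, w, gamma=0.7, iterations=250):
    """Fast Selective Extrapolation, spatial domain, real-valued case.

    s   : (M, N) signal, arbitrary values in loss area B (w masks them)
    phi : (K, M, N) dictionary of basis functions
    w   : (M, N) weighting function, zero in B
    """
    K = phi.shape[0]
    flat = phi.reshape(K, -1)
    wf = w.ravel()
    # Tables per (16)/(17): computed once, reusable for all blocks
    C = (flat * wf) @ flat.T              # C[k, l] = sum w * phi_k * phi_l
    D = 1.0 / np.sqrt(np.diag(C))         # D[k]    = 1 / sqrt(C[k, k])
    # Initial weighted scalar products, Eq. (12)
    R = (flat * wf) @ s.ravel()
    g = np.zeros_like(s)
    for _ in range(iterations):
        u = np.argmax(np.abs(R * D))      # selection, Eq. (18)
        c = gamma * R[u] * D[u] ** 2      # coefficient, Eq. (19)
        g += c * phi[u]                   # model update, Eq. (9)
        R -= c * C[u]                     # scalar-product update, Eq. (20)
    return g  # cut area B out of g to conceal the loss
```

Note how no residual image and no per-iteration scalar products appear inside the loop; each iteration touches only the K-element vectors R and D and one row of C, which is exactly the complexity-for-memory trade described above.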
The novel FaSE implementation has the further advantage that no divisions are required. With that, this implementation is suited better for fixed-point or integer implementations than the original SE. In such a scenario, D_k could be calculated with high accuracy and then quantized to integer or fixed-point values. Thus, no expensive divisions have to be carried out within the iteration loop, and the effect of error propagation due to a restricted word length can be reduced. Depending on the architecture on which the extrapolation is carried out and the regarded application, it may be preferable to store D_k² instead of D_k and to calculate |·|² instead of |·|. By using this modification, the complexity could be reduced a little bit more, if the platform on which the extrapolation runs directly supports the relevant operations. Nevertheless, at this point a sufficiently high computational accuracy is assumed for the above outlined calculations. For a hardware implementation or an implementation on a digital signal processor, finite-word-length effects have to be considered, and further research is necessary for determining the required bit-depth of the tables and the impact of fixed-point arithmetic.

In order to give a final overview of FaSE, Alg. 2 and 3 show the pseudo code for generating the tabulated values and for the actual model generation. The table generation is separated from the model generation for emphasizing again that the generation of the tables only has to be carried out once. Regarding the operations that have to be carried out within the iteration loop, one can recognize that only very simple operations have to be performed, which can furthermore be processed very fast. The only computationally expensive operation is the initial calculation of R_k^(0), but compared to the original SE, this complex step has to be carried out only once instead of in every iteration.

IV. COMPLEXITY EVALUATION

Regarding the two previous sections, one can recognize that FaSE is able to outperform the original SE, since the computational complexity within the iteration loop is reduced and since as many calculations as possible are carried out in advance and are tabulated. To quantify the complexity of SE and FaSE, the number of operations is regarded that is necessary for generating the model by each of the algorithms. In Tab. I, for SE, FaSE and the table generation for FaSE, the number of operations is listed, depending on the extent M, N of extrapolation area L, the dictionary size |D|, and the number of iterations I to be carried out. Here, the operations are separated into three groups: the number of multiplications (MUL), the number of additions (ADD), and the number of other operations (OTHER). Fig. 3 plots the resulting operation counts over the number of iterations. This plot only shows the overall number of operations, i.e. the sum of MUL, ADD and OTHER, in order to give a rough impression of the overall complexity and to compare the different algorithms. The fact that complex operations like divisions require more processing time than a simple multiplication is omitted for this plot. It can be easily recognized that the number of operations that is necessary for generating the model by SE is several orders of magnitude larger than for FaSE. The plot further shows the number of operations that is required for generating the tabulated C(k, l) and D_k, indicated by a rhomb. In addition to that, the number of operations for the table generation is displayed as a dashed line over the complete iteration range.
It has to be noted that the table generation is independent of the iterations and this illustration is only chosen for comparing the complexity of the table generation with SE. Therewith, it can be recognized that the table generation requires roughly as many operations as 1000 iterations of SE would require. Since the number of iterations for generating the model can easily reach values larger than 200, as has been shown in [16], the expenses for generating the tables amortize even after a small number of blocks. Taking into account that in typical scenarios a large number of blocks is extrapolated with the same weighting function, the complexity for generating the tables very soon becomes negligible.

V. RESULTS FOR ARBITRARY BASIS FUNCTIONS

In order to support the complexity evaluation from the previous section, the processing time for SE and FaSE is further examined. The first results presented are for arbitrary two-dimensional basis functions. In this case, only the original SE and the novel FaSE can be used, as transform domain algorithms like FSE cannot deal with arbitrary basis functions. For the runtime evaluation, the model generation has been implemented in C, compiled with gcc 4.3.2 and optimizations -O3, and the simulations have been carried out on an Intel Core2 processor, equipped with 8 GB RAM. In order to reduce the influence of the operating system, multiple runs of the simulations have been conducted and the computation has been limited to the usage of only one single core. For the simulations, a block of size 16×16 samples is extrapolated from its surrounding samples. Furthermore, different sizes of extrapolation area L between 48×48 and 96×96 samples are regarded. Fig. 4 shows the extrapolation time per block for different numbers of candidate basis functions and for 250 iterations performed for model generation. For this plot, the cardinality of the dictionary is selected to be of the same size as the extrapolation area. Thus, D varies between |D| = 2304 basis functions of size 48×48 and |D| = 9216 basis functions of size 96×96. Comparing the two curves of SE and FaSE, one can easily recognize that FaSE is about 250 times faster than the original SE, independently of the problem size. This is due to the fact that for FaSE the computationally expensive weighted scalar products only have to be evaluated once, namely prior to the first iteration. In the later iterations, the expensive steps can be avoided by making use of the tabulated values and avoiding the update of the residual. For these evaluations, the calculation time for generating the tabulated values is not considered, as they only have to be computed once and can be stored. The very high computational cost of the weighted scalar products can also be seen from these runtime measurements. Note that the operation counts of Fig. 3 cannot be directly translated into the processing time shown in Fig. 5, since not all regarded operations consume the same processing time and since the analytical evaluation cannot account for optimizations introduced by the compiler. Fig. 6 shows the processing time for generating the tables for different dictionary sizes |D| and for different sizes of extrapolation area L. Comparing these results with the ones shown in Fig. 5, one can recognize that for an extrapolation area of size 64×64, a dictionary size of |D| = 4096 and 250 iterations, the table generation only takes as long as SE would roughly need for extrapolating 6 blocks. This corresponds well to the theoretical results presented in the analytical evaluation.
The discrepancy follows from the fact that different operations consume unequal amounts of processing time, while in the analytical evaluation only the absolute number of operations has been counted.

Since the proposed novel spatial domain solution does not affect the model generation principle of SE, still a very high extrapolation quality can be achieved. Due to the acceleration of the algorithm, very good extrapolation results can now be achieved at a manageable complexity for arbitrary basis functions. To prove this, Table II shows the average extrapolation quality in terms of PSNR and the processing time for extrapolating 126 blocks of size 16×16 samples in every image from the Kodak image database. For comparison, the Total Variation Image Reconstruction (TV) algorithm from [17], the patch-based algorithm from [18] that uses Stochastic Factor Graphs (SFG), and the simple but very fast Spatial-Domain Interpolation (SDI) from [19] are regarded. The comparison has been carried out in MATLAB R2008b, and again only one core of the above-mentioned computer has been used. Apparently, FaSE provides the highest extrapolation quality among the considered algorithms, with only SFG coming close. But at the same time, it is the second-fastest algorithm.

VI. MODIFICATIONS FOR TRANSFORM-BASED BASIS FUNCTION SETS

As aforementioned, for FaSE the weighted scalar products only have to be evaluated prior to the first iteration. In the case that the regarded basis function set contains a subset of basis functions that emanate from a discrete transform, as e.g. functions of the DCT or the DFT, the explicit evaluation of the weighted scalar products can be simplified by replacing the summation over the product between the weighted signal and the basis function by the corresponding transform coefficient of the weighted signal, which can be obtained through a fast transform. To give an example, the idea is extended for the case that the basis function set contains some basis functions which emanate from the DFT. In this case, a basis function is defined by

ϕ_k[m, n] = e^{j2π(mμ_k/M + nη_k/N)}   (21)

with vertical frequency μ_k and horizontal frequency η_k. Then, the summation from (12) becomes

R_k^(0) = DFT{ w[m, n] · s[m, n] }[μ_k, η_k]   (22)

i.e. the DFT coefficient of the weighted signal at frequency (μ_k, η_k). Thus, the weighted scalar products for many basis functions can be efficiently evaluated simultaneously by making use of fast transforms like the Fast Fourier Transform [20], or, respectively, a fast transform that is appropriate to the regarded basis functions. It has to be noted that the utilization of fast transforms is only reasonable if a large number of transform domain coefficients has to be calculated at the same time. The fast transforms only speed up the parallel calculation of many coefficients. The calculation of just a single coefficient would take as long as the explicit evaluation of the weighted scalar product.

The above described property could also be used for speeding up the table generation in (16). Regarding again the example of a subset of DFT basis functions, the product between a basis function and a conjugate complex second one is equal to a basis function whose horizontal and vertical frequencies result from the difference of the original frequencies:

ϕ_k[m, n] · ϕ_l*[m, n] = e^{j2π(m(μ_k−μ_l)/M + n(η_k−η_l)/N)}

Hence, (16) can also be expressed by the corresponding coefficients from the DFT. For other transform-based basis function sets, similar properties exist.

In addition to the results for arbitrary basis functions shown in Section V, the performance of FaSE and SE is compared to a transform domain algorithm. For this, FSE is regarded, which utilizes Fourier functions for extrapolation.
Here the circumstance has to be considered that, as described in [8], FSE does not generate a complex-valued model. FSE selects in every iteration step one basis function and its corresponding conjugate complex one, in such a way that the model always is real-valued. Hence, in most cases two basis functions are selected in an iteration, with the exception of the real-valued constant basis function and the function with the highest possible alternation. Thus, the number of iterations has to be doubled for SE and FaSE for a fair comparison, as they select only one basis function per iteration. Fig. 7 shows the processing time per block for the different approaches with |D| = 4096 Fourier basis functions of size 64×64. For these simulations, the initial scalar products for FaSE are expressed by the transform coefficients according to (22). Although FaSE needs twice the number of iterations as FSE for generating the model, it is still significantly faster than FSE, and furthermore several orders of magnitude faster than the original spatial domain SE.

Taking all the results from the two previous sections into account, the following recommendations can be given. In the case that the Selective Extrapolation is carried out with Fourier basis functions or other basis function sets that are based on a discrete transform, one can decide either to use a transform domain algorithm or the novel FaSE. If always the same extrapolation scenario is considered, the tables only have to be calculated once and the time gain of FaSE prevails; otherwise the transform domain algorithm is the better choice, as no calculation of the tables is necessary. If the extrapolation process is carried out with basis functions for which no transform domain implementation is possible, FaSE should be preferred over the original SE. FaSE is able to efficiently trade computational complexity versus memory consumption, as the expensive operations only have to be carried out once. Thus, the actual iterations for generating the model become very simple and very fast.

VII. CONCLUSION

Within the scope of this contribution, we presented Fast Selective Extrapolation for image and video signal extrapolation. For this, Selective Extrapolation, a powerful signal extrapolation algorithm, has been reviewed and its most complex parts have been identified. The novel algorithm behaves mathematically identically to the original algorithm, but is able to outperform it in speed by several orders of magnitude by effectively trading memory consumption versus processing time. Furthermore, the novel algorithm is able to outperform existent fast transform domain extrapolation algorithms, which are moreover limited to certain basis function sets. With that, it opens the door for further research on carrying out the extrapolation with different basis function sets. Up to now, the extrapolation has only been computationally manageable for special basis function sets that are based on discrete transforms. But by using Fast Selective Extrapolation, the extrapolation can be carried out for arbitrary basis functions, which may even be only numerically defined. This ability allows for further research on extrapolation with signal-adapted basis functions, obtained through the Karhunen-Loève Transform [21], [22], which has not been computationally feasible up to now. Although the algorithm has been introduced only for two-dimensional data sets, it can be extended straightforwardly to three dimensions by making use of the ideas from [23] and to four dimensions by using [24].
There, a three-dimensional or, respectively, a four-dimensional model is generated in the same way as described above for two dimensions.
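As a complement to the Section VI discussion, the following Python fragment sketches how the initial weighted scalar products of (12) can be obtained for a DFT subset via a fast transform, per (22). The frequency indexing and normalization conventions here are our assumptions; only the principle (one FFT of the weighted signal replaces |D| explicit scalar products) is taken from the text.

```python
import numpy as np

def initial_scalar_products_fft(s, w):
    """Initial R_k^(0) for a full DFT basis set, per Eq. (22).

    For DFT basis functions phi_k = exp(j*2*pi*(m*mu_k/M + n*eta_k/N)),
    the weighted scalar product sum(w * s * conj(phi_k)) is exactly the
    2-D DFT coefficient of the weighted signal at (mu_k, eta_k), so one
    FFT yields all M*N products at once.
    """
    return np.fft.fft2(w * s)  # entry [mu, eta] equals R_k^(0)

# Sanity check against the explicit summation for one frequency pair
M, N, mu, eta = 8, 8, 3, 5
rng = np.random.default_rng(0)
s, w = rng.standard_normal((M, N)), rng.random((M, N))
m, n = np.meshgrid(np.arange(M), np.arange(N), indexing="ij")
phi = np.exp(2j * np.pi * (m * mu / M + n * eta / N))
explicit = np.sum(w * s * np.conj(phi))
assert np.allclose(initial_scalar_products_fft(s, w)[mu, eta], explicit)
```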
High-exposure-durability, high-quantum-efficiency (>90%) backside-illuminated soft-X-ray CMOS sensor

We develop a high-quantum-efficiency, high-exposure-durability backside-illuminated CMOS image sensor for soft-X-ray detection. The backside fabrication process is optimized to reduce the dead-layer thickness, and the Si-layer thickness is increased to 9.5 μm to reduce radiation damage. Our sensor demonstrates a high quantum efficiency of >90% in the photon-energy range of 80–1000 eV. Further, its EUV-regime efficiency is ∼100% because the dead-layer thickness is only 5 nm. The readout noise is as low as 2.5 e− rms and the frame rate as high as 48 fps, which makes the device practical for general soft-X-ray experiments.

Backside-illumination (BSI)-scheme CMOS image sensors have been preferable for soft-X-ray sensing, 1,2) as they offer high readout speed compared to CCD image sensors. In this regard, we recently developed a BSI CMOS soft-X-ray image sensor (SBSA) based on a commercial CMOS sensor without antireflection (AR) coating. 3) This sensor demonstrated a high QE, a low readout noise of 2.6 e− rms, and a high frame rate of 48 fps, along with photon-energy resolving capability in the range of 80–1000 eV. As regards practical application, we demonstrated fluorescence detection from highly oriented graphite and a polymethyl methacrylate film using a simple setup, wherein the samples faced our CMOS sensor without a monochromator. However, the sensor durability after soft-X-ray exposure was unsatisfactory because of the small Si-layer thickness of 3.5 μm. The QE also required further improvement, particularly for the small-absorption-length regime.

In this study, we developed a new CMOS sensor with further improvements to the backside process to afford a thicker Si layer of 9.5 μm; we call this sensor the SP3 sensor. This soft-X-ray/EUV-regime SP3 image sensor is also based on the Gpixel BSI CMOS image sensor, 4,5) the GSENSE400SQBSI, which incorporates minor revisions in the peripheral circuits relative to the previous sensor, the GSENSE400BSI. We made two changes to the backside fabrication process for the SP3 relative to the SBSA: the silicon thickness was changed from 3.5 to 9.5 μm to suppress radiation damage, and the implantation energy was decreased by one digit to reduce the non-sensitive-layer thickness. Our CMOS sensor adopts a rolling shutter and a high-dynamic-range (HDR) scheme using the double-conversion-gain method, and has 2048 (H) × 2048 (V) 11 μm pixels. In the low-gain mode, the conversion gain is 1/18 that of the high-gain mode, and all saturated electrons can be read out. The high- and low-gain-mode readout time and frame rate are 21 ms and 48 fps, respectively, and the HDR-mode frame rate is 24 fps. The dark current was 3.7 e−/s/pixel at −7 °C.

The sensor's QE was measured at the BL-10 beamline 6-8) of the NewSUBARU synchrotron facility (University of Hyogo), in the same way as in the previous paper. 3) We set the exposure time to 2 ms, for which the signal was not saturated even in the high-gain mode. The sensor temperature was 42 °C, as measured by a vacuum-operating embedded temperature sensor. The SBSA and SP3 sensor QEs are shown in Fig. 1. The standard electron-hole pair creation energy of 3.66 eV was used for the QE calculations. 9,10) The high-gain-mode conversion gains were calculated as 1.85 and 1.27 digital number (DN)/e− for the SBSA and SP3 sensors, respectively. It can be observed that the SP3 sensor QE is >100% at several photon energies due to measurement errors.
Importantly, the 80–1000 eV range QE is >90%. The solid curve denotes the fitting of the simplest dead-layer model, omitting the cloud size and transient region. 3) The Si thicknesses of the SBSA and SP3 sensors were estimated as 3.5 and 9.5 μm, respectively, as per the QE drop above 700 eV and the backside process condition. Importantly, we note that the SP3 sensor exhibits the highest recorded QE for the energy range of 80–1000 eV to the authors' knowledge. 11-13)

Figure 2 shows the energy resolution measurement results of the SBSA and SP3 sensors. In the measurement, we used the photon-counting event called the single event, which is widely used in the hard-X-ray regime to obtain the energy resolution. 14,15) The experimental setup was the same as that for the QE measurements. The sensor exposure time was 2 ms, and the irradiated photon number in the central region was made too large (>100 photon/pixel) to record single events. Thus, weak scattered photons around the center were counted as single events. The photon-counting-event threshold was set as 4 e−, corresponding to 7 and 5 DN for the SBSA and SP3 sensors, respectively. This threshold was 1.6 times the readout noise. In some cases, the single events were found to "share" their electrons among neighboring pixels. As the DN of these electron-sharing events indicated smaller energy than the photon energy, these events were omitted as per the threshold. The vertical axis in Fig. 2 represents the normalized DN frequency of these photons. The single events in ∼100 image frames were counted under each photon-energy condition. The obtained energy resolution was 70 eV at the photon energy of 1000 eV for both the SBSA and SP3 sensors. We note that below 450 eV, the SBSA-sensor energy distribution exhibits a large low-energy tail. In contrast, that of the SP3 exhibits no low-energy tail at 200 eV, because the dead-layer thickness is reduced from 27 to 5 nm. Thus, the SP3 sensor is able to resolve low-energy fluorescences corresponding to carbon (284 eV) and boron (188 eV). Here, we note that EUV photons (92 eV) are also resolved by the sensor.

Next, we evaluated the sensor durability against EUV and soft-X-ray photons. In general, sensor displacement damage 16,17) can be ignored because the irradiated photon energy is <1000 eV. Because the peripheral circuits are not X-ray-irradiated and the pixel array has only N-type MOS transistors and no PNPN structure, single-event latch-up is also not a problem. Because there are no memory components and dynamic circuits in the pixel array, the occurrence of a single-event upset 18,19) is also negligible. Therefore, only the total ionizing dose (TID) 16,20,21) needs to be considered. The TID measurement results for various soft-X-ray energies are shown in Fig. 3. The dark current increases with the TID, corresponding to the number of irradiated photons per pixel. Again, in this case, the experimental setup was almost identical to that for the QE measurements. The monochromator exit slit was opened to a 1 mm width, from the 0.001 mm used for the QE measurement, to increase the dosage. The irradiated intensity profile was evaluated using a low-gain-mode image with the minimum sensor exposure time (36 μs). The irradiation time of this durability test was set as 10–2000 s.
The TID (photon/pixel) was calculated using the intensity profile and the irradiation time. The applied energies were 92, 108, 600, and 1000 eV. The dark-current increase was measured as a function of the dosage. Here, we note that silicon dioxide and silicon nitride insulator/dielectric layers can undergo charge-up, trapping photogenerated holes. These deplete the Si-SiO2 layer interface and activate its generation-recombination centers. Subsequently, the dark current increases, along with the dark-current shot noise. This phenomenon can occur at both the Si rear and front sides.

Figure 3 shows the durability evaluation results for three CMOS sensors: the AR-coated VIS, the SBSA, and the SP3. Here, we recall that the SBSA and SP3 sensors do not use any AR coating. The Si-layer thicknesses of the SP3, SBSA, and VIS sensors were 9.5, 3.5, and 4 μm, respectively. As per the results, the sensors were classified into three groups (Groups 1–3) according to the degree of their durability (Fig. 3). The VIS sensor can be classified under Group 1, corresponding to low durability. The SBSA at 92 eV and the SP3 sensor at 1000 eV are categorized under Group 2 (medium durability). The SBSA sensor at 108 eV and the SP3 sensor at 92–600 eV fall under Group 3 (high durability). In these devices, charge-up occurs in three regions: the illuminated-surface layer, the MOS-transistor gate insulator, and the oxide layer on the photodiode's front side. The VIS sensor used had a thick AR coating (several hundred nanometres), causing strong charge-up. On the other hand, because the SP3 and SBSA sensors had only a native-oxide layer instead of AR coatings on the illuminated (backside) surface, the illuminated-surface charge-up is possibly small. The charge-ups at the gate insulator and the oxide layer on the photodiode's front side are caused by photons penetrating the silicon layer. These charge-ups can be reduced with the use of thicker silicon layers to suppress photon penetration. Figure 4 shows the calculated device transmittances for Si thicknesses of 3.5, 9.5, and 40 μm using the IMD software. 22) The transmittances change significantly at ∼100 eV and 1840 eV due to the silicon L- and K-absorption edges, respectively. For the Si thickness of 3.5 μm, the Si layer can only block 100–500 eV photons (transmittance of <0.1%). Conversely, photons with energies of <100 and >500 eV can penetrate the silicon layer and cause damage. In contrast, the transmittance of the 9.5-μm-thick silicon layer is <0.1% below 800 eV, which indicates the possibility of only a small amount of charge-up on the front side.

The above discussion focuses on the photodiode dark current, but we note that MOS transistor damage also occurs; the reference signal level shifts negatively after X-ray illumination of the Group 1 and 2 devices. The negative-shift amount is ∼10 DN in the SP3 sensor for 1000 eV photons at an irradiation amount of 1.6 × 10^8 photon/pixel. Because these sensors were measured using correlated double sampling (CDS), 23) a simple threshold-voltage shift of the source-follower amplifier transistor might have been compensated, and the reference-level shift may have been suppressed. However, if radiation damage slightly degrades the CDS, such a reference-level shift can occur. Here, we emphasize that this reference-level shift was not observed in the Group 3 devices. Summing up the X-ray TID results, we note that the Group 3 devices exhibit a slight dark-current increase due to native-oxide-layer charge-up.
This dark-current increase is negligible for most applications, particularly when the sensor is cooled. The Group 2 devices show a non-negligible dark-current increase because the front-surface oxide of the photodiode is charged up by penetrating X-ray photons. The Group 1 devices suffer from a severe dark-current increase due to charge-up of the AR dielectric layers.

As regards durability, for EUV ptychography 24,25) for EUV lithography mask inspection, for example, a durability of 6 × 10^10 EUV photon/pixel is normally required, assuming that the incident photon number at the sensor center is ∼10 000 photon/pixel/s and that the device is operated 8 h d−1 and 200 d yr−1. In our study, the SP3 exhibited a dark-current increase of 300 electron/pixel/s at 42 °C when subjected to 6 × 10^10 EUV photon/pixel (Fig. 3). This dark-current value is acceptable for EUV ptychography. In addition, we measured the dark-current temperature dependence of the devices. The dark current decreased by half for every temperature reduction of 5.7 °C. Thus, the dark-current increase at −10 °C is <1/560 of that at 42 °C, corresponding to 0.5 electron/pixel/s after 1 yr of usage at −10 °C; this increase is negligible for most practical applications.

In summary, our SP3 sensor exhibits the highest QE among all devices in the 80–1000 eV energy range. The device dead-layer thickness is only 5 nm, and its energy-resolving performance indicates that oxygen, nitrogen, carbon, and boron fluorescences can be identified. EUV photons can also be resolved. The device tolerance to soft-X-ray photons is also improved due to the thick Si layer of 9.5 μm. Importantly, the dark-current increase after 1 yr of usage for EUV ptychography is also negligible. The SP3 sensor also affords a high frame rate of 48 fps and a low readout noise of 2.5 e− rms, which are practical for general soft-X-ray experiments, 26-34) indicating a potentially wide range of applications, as it affords a high QE in the vacuum-ultraviolet regime and efficient low-energy electron sensing. This sensor could provide highly sensitive soft-X-ray imaging under low-readout-noise and high-frame-rate conditions. In the future, we plan to develop a vacuum cooling system, apply the sensor in actual synchrotron experiments, and evaluate the line spread function, which is an important parameter for spatial resolution.
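The dead-layer QE model and the Si-transmittance argument above can be reproduced with a simple absorption calculation. The Python sketch below assumes the usual two-layer model, QE(E) ≈ exp(−t_dead/λ(E)) · (1 − exp(−t_Si/λ(E))), where λ(E) is the photon attenuation length in silicon; the λ values in the table are rough placeholders, not measured data, and in practice one would take them from tabulated optical constants such as those behind the IMD software.

```python
import numpy as np

# Rough placeholder attenuation lengths in silicon, micrometres;
# real values should come from tabulated optical constants (e.g. IMD/CXRO).
ATT_LEN_SI_UM = {92: 0.35, 200: 0.10, 600: 0.9, 1000: 2.6}

def qe_dead_layer(energy_ev, t_dead_um, t_si_um):
    """Two-layer dead-layer QE model (cloud size / transient region omitted,
    as in the paper's 'simplest' fit): photons must survive the dead layer
    and then be absorbed within the sensitive Si thickness."""
    lam = ATT_LEN_SI_UM[energy_ev]
    return np.exp(-t_dead_um / lam) * (1.0 - np.exp(-t_si_um / lam))

def si_transmittance(energy_ev, t_si_um):
    """Fraction of photons penetrating the full Si layer (front-side dose)."""
    return np.exp(-t_si_um / ATT_LEN_SI_UM[energy_ev])

for e in (92, 200, 600, 1000):
    # SBSA: 27 nm dead layer, 3.5 um Si; SP3: 5 nm dead layer, 9.5 um Si
    print(f"{e:5d} eV  QE(SBSA)={qe_dead_layer(e, 0.027, 3.5):.2f}  "
          f"QE(SP3)={qe_dead_layer(e, 0.005, 9.5):.2f}  "
          f"T_Si(9.5um)={si_transmittance(e, 9.5):.1e}")
```

With these placeholder inputs the sketch reproduces the qualitative behaviour reported above: near-unity QE for the 5 nm dead layer at 92 eV, and a 9.5 μm Si transmittance that stays below 0.1% up to several hundred eV but rises to the percent level at 1000 eV, consistent with the SP3 falling into Group 2 only at that energy.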
IL-6 deficiency attenuates p53 protein accumulation in aged male mouse hippocampus

Our earlier studies demonstrated slower age-related memory decline in IL-6-deficient than in control mice. Therefore, in the present study we evaluated the effect of IL-6 deficiency and aging on the expression of p53, connected with the accumulation of age-related cellular damage, in the hippocampus of 4- and 24-month-old IL-6-deficient C57BL/6J (IL-6KO) and wild type control (WT) mice. The accumulation of p53 protein in the hippocampus of aged IL-6KO mice was significantly lower than in aged WT ones, while the p53 mRNA level was significantly higher in IL-6-deficient mice, which indicates that the effect was independent of p53 transcription. The presence of few apoptotic cells in the hippocampal dentate gyrus, and the lack of changes in the levels of pro-apoptotic Bax, anti-apoptotic Bcl-2, and p21 protein in aged animals of both genotypes, point to low transcriptional activity of p53, especially in aged WT mice. Because the amount of p53 protein did not correlate with the level of Mdm2 protein, its main negative regulator, a mechanism other than the Mdm2-dependent one was involved in the p53 build-up. Significantly higher mRNA levels of the autophagy-associated genes Pten, Tsc2, and Dram1 in IL-6KO mice, in conjunction with a significantly lower amount of Bcl-2 protein in 4-month-old IL-6KO mice, suggest that lack of IL-6/STAT3/Bcl-2 signaling could account for better autophagy performance in these mice, preventing excessive accumulation of proteins. Taken together, the attenuated p53 protein build-up, the absence of enhanced apoptosis, and the transcriptional up-regulation of autophagy-associated genes imply that IL-6 deficiency may protect the hippocampus from age-related accumulation of cellular damage. Electronic supplementary material: the online version of this article (10.1007/s10522-019-09841-2) contains supplementary material, which is available to authorized users.

Introduction

Interleukin 6 (IL-6) is a small signaling glycoprotein with a diverse set of actions that depend on the target cell type. It has been shown that, in the central nervous system (CNS), it regulates neuronal and synaptic functions, as well as behavior (Erta et al. 2012; Gruol 2015). Within the CNS, IL-6 is mainly synthesized by astrocytes, and to a lesser extent by microglia and neurons (Gruol 2015). Under basal conditions, IL-6 mRNA and protein are expressed in limited amounts in several brain regions (Aniszewska et al. 2015; Gadient and Otten 1994; Schobitz et al. 1993). IL-6 exerts biological effects either via the membrane-bound IL-6 receptor (classical signaling) or via the soluble form of the IL-6 receptor (trans-signaling), and the latter has been suggested to be of particular importance in the CNS (Erta et al. 2012; Heinrich et al. 2003; Spooren et al. 2011). Both signaling pathways depend on the membrane-bound gp130 subunits. Formation of the IL-6/IL-6R/gp130 complex activates tyrosine kinases of the Janus kinase (JAK) family and triggers a sequence of events resulting in gp130 phosphorylation, followed by activation of the signal transducer and activator of transcription-3 (STAT3). In addition to activation of the STAT3 pathway, the RAS/mitogen-activated protein kinase (MAPK), phosphatidylinositol-3 kinase (PI3K) and insulin receptor substrate (IRS) pathways can be activated by the IL-6/IL-6R/gp130 complex (Boulanger et al. 2003; Erta et al. 2012; Heinrich et al. 2003).
Aging is a natural process with a gradual decline of many normal biological functions of cells, such as DNA repair, regulation of cell proliferation, and immune response (Feng et al. 2007). Long-lived neuronal cells are likely to accumulate mutations in the DNA, leading to impaired cellular functions. Moreover, neurons are characterised by high metabolic activity and high consumption of oxygen. Therefore, these cells are exposed to higher levels of oxidative stress in comparison with other cell types (Best 2009). DNA damage that exceeds a threshold is associated with apoptosis or senescence (Best 2009; Brady and Attardi 2010). Importantly, IL-6 expression, which increases with age and in neurodegenerative diseases, has been recognized as an accelerator of senescence (Erschler 1993; Godbout and Johnson 2004; Tha et al. 2000). Under stressful conditions such as inflammation, brain injury and certain CNS diseases, the IL-6 level rises significantly, both in the periphery and in the CNS (Erta et al. 2012; Gruol 2015). Increased expression of IL-6 in normal aging (Godbout and Johnson 2004; Marsland et al. 2006; Weaver et al. 2002), augmented in certain neurodegenerative diseases, has been shown to interfere with cognitive functions (Bermejo et al. 2008; Cacquevel et al. 2004; Maggio et al. 2006; McAfoose and Baune 2009; Müller et al. 1998; Luterman et al. 2000; Trapero and Cauli 2014). Importantly, GFAP-IL-6 transgenic mice, in which elevated levels of IL-6 in the CNS are produced by astrocytes, exhibited progressive behavioural, physiological, as well as anatomical abnormalities, particularly in the hippocampus and cerebellum, developing earlier in homozygous than in heterozygous mice (Campbell et al. 1998; Heyser et al. 1997; Gruol 2015).

The tumor suppressor protein p53 has long been recognized to suppress cancer through the induction of cell-cycle arrest or apoptosis in response to different cellular stress signals (Brady and Attardi 2010). However, studies have demonstrated that the function of p53 extends beyond the capacity to trigger cell-cycle arrest and programmed cell death, and novel activities, such as the regulation of metabolism, autophagy and the oxidative status of the cell, are emerging (Brady and Attardi 2010; Chumakov 2007; Rufini et al. 2013).

The present study expanded our earlier experiments, performed on 4- and 24-month-old IL-6-deficient mice, assessing the influence of IL-6 deficiency on cognitive processes. In the previous study we demonstrated the attenuation of learning ability in the Morris water maze, as well as the attenuation of recognition memory in IL-6KO young adult mice, in comparison to IL-6-producing mice. However, the age-related progression of these alterations was slower in the IL-6KO group than in controls. Moreover, IL-6-deficient mice demonstrated better retrieval of acquired information, more pronounced when the delay between learning completion and testing was longer. Because age-related accumulation of cellular damage resulting in increased p53 expression (Brady and Attardi 2010) may be involved in age-related memory decline, we investigated the effect of IL-6 deficiency and aging on p53 protein abundance in the hippocampus, a key structure for learning and memory processes.

Materials and methods

All procedures were approved by the Local Animal Ethics Committee in Białystok, Poland and were performed in compliance with the European Communities Council Directive 2010/63/EU.
Naïve, male 4-month-old (young adult) and 24-month-old (aged) IL-6-deficient mice C57BL/6J IL-6−/−TMKopf (IL-6KO) and reference wild type (WT) animals (C57BL/6J), originally purchased from the Jackson Laboratory (USA), were obtained from the Centre for Experimental Medicine of the Medical University of Białystok. The mice were maintained in a temperature-controlled environment (22 ± 1 °C) at 45-55% humidity, with a 12 h light-dark cycle beginning at 7 a.m., and were housed in polycarbonate cages, five animals per cage, with water and commercial food (Labofeed H Standard, Morawski, Poland) available ad libitum. Mice were sacrificed by cervical dislocation. No sedation was used. Brains were immediately excised manually and transferred into 10% phosphate-buffered formalin. Subsequently, brains were processed into paraffin blocks collected for histological and immunohistological examination. For molecular biology analyses, hippocampi taken from mice after excision, under a ×3 magnifying glass, were immediately placed in a sterile Eppendorf tube, frozen directly in liquid nitrogen and stored at −80 °C until further processing. One randomly taken left or right hippocampus was used for Western blot, while the other was used for quantitative real-time PCR. The genotype of the mice was confirmed by polymerase chain reaction as described previously (Bonda et al. 2013).

Immunofluorescence

The sections, after deparaffinization and rehydration, were pre-treated with proteinase K solution (1:800) and blocked with 10% donkey serum in phosphate buffered saline (PBS), pH 7.4, for 1 h at RT. The primary antibody against GFAP (ab7260, Abcam) was applied at a 1:500 dilution in PBS for 90 min at RT. Next, the sections were washed and incubated with a biotin-conjugated secondary antibody (Donkey Anti-Rabbit IgG, 711-065-152, Jackson ImmunoResearch Laboratories) at a 1:200 dilution in PBS for 1 h at RT, followed by washing with PBS containing Tween 20. Subsequently, the sections were incubated with streptavidin-Alexa Fluor® 488 (S32354, Life Technologies) at a dilution of 1:1000 in PBS for 40 min at RT in the dark, washed, and counterstained with HOECHST 33258 (Sigma-Aldrich) in PBS for 2 min at RT in the dark. Slides were cover-slipped in Dako Mounting Medium and evaluated using a fluorescence microscope (Olympus BX 41) with Olympus UPlanFLN 40×/0.75 and Olympus PlanCN 20×/0.40 objectives.

TUNEL method

After deparaffinization, the sections were subjected to fluorescein terminal deoxynucleotidyl transferase-mediated dUTP nick-end labelling (TUNEL) using the ApopTag® Fluorescein In Situ Apoptosis Detection Kit (Millipore, S7110) according to the manufacturer's procedure. Counterstaining of cell nuclei was performed using HOECHST 33258 (Sigma-Aldrich). All slides were analysed and photographed using a fluorescence microscope (Olympus BX 41).

RNA isolation and quantitative real-time polymerase chain reaction (qRT-PCR)

Homogenization of the hippocampus was performed using a TissueLyser (Qiagen) for 2 min at 30 Hz with one 5 mm stainless steel bead (Qiagen, Cat. No. 69989). Total RNA was isolated using the RNeasy Lipid Tissue Mini Kit (Qiagen, Cat. No. 74804) and a QIAcube apparatus (Qiagen) according to the manufacturer's protocols. RNA was dissolved in 40 µl of RNase-free water. Its quantity was evaluated using a NanoDrop 2000c Spectrophotometer (Thermo Fisher Scientific, Inc., Wilmington, DE, USA) immediately after isolation.
RNA quality, including the 28S/18S ratio and the RNA integrity number (RIN), was assessed with a 2100 Bioanalyzer System (Serial No. DE72905449) and an RNA 6000 Nano Kit (Agilent Technologies Inc., Santa Clara, CA, USA, Cat. No. 5067-1511) according to the manufacturer's protocol. 500 ng of total RNA was transcribed into cDNA using the RT2 First Strand Kit (Qiagen, Cat. No. 330404) in a Labcycler (Model No. 1120240193; SensQuest GmbH, Göttingen, Germany) according to the manufacturer's protocol. cDNA was stored at −80 °C until qRT-PCR using RT2 SYBR Green qPCR Mastermix (Qiagen, Cat. No. 330502) and Custom RT2 PCR Array plates (Qiagen, Cat. No. 330171). Information regarding the primers used in the assay is presented in Table 1. The amplification reaction was performed in a 25-µl reaction mixture. Each sample was run in duplicate. The qRT-PCR cycling conditions were as follows: first cycle, 95 °C for 10 min, followed by 45 cycles of 95 °C for 15 s and 60 °C for 1 min. RT-qPCR was performed in a Roche LightCycler 480 apparatus with software for evaluation of the baseline and cycle threshold (Ct). The presence of a single peak at the melting temperature for each gene was confirmed by inspection of the melting curves. The expression level was quantified as Ct values normalized to the mean of the two reference control genes, transferrin receptor (Tfrc) and phosphoglycerate kinase 1 (Pgk1) (Boda et al. 2009), using the equation ΔCt = Ct_group − Ct_ref. The fold-change (FC) in the mRNA level was calculated as FC = 2^−ΔΔCt, where ΔΔCt equals the difference between the normalized expression of the gene in the IL-6KO mice (ΔCt_IL-6KO) and its normalized expression in the corresponding age-matched WT animals (ΔCt_WT) (IL-6KO vs. WT) (Schmittgen and Livak 2008). When the difference in expression within a genotype was calculated, the young adult group of the appropriate genotype was taken as the control (24- vs. 4-month-old). For the statistical analyses, logarithmically transformed FC values were used (log2(FC)).

Statistics

Statistical analyses were performed using Statistica 13.0 and GraphPad Prism 5. All data were first [...]

p53 and Mdm2 expression

The amount of p53 protein in the hippocampus of the young adult groups was low and comparable, while in aged animals it increased only in WT ones (Fig. 1a). Evaluation with the Kruskal-Wallis test yielded H(4,24) = 13.78, p < 0.05, and Dunn's post hoc test revealed a significant increase of the p53 protein amount in aged WT mice in comparison with 4-month-old WT and 24-month-old IL-6KO mice (p < 0.01 and p < 0.05, respectively). In aged IL-6KO mice the level of p53 protein was similar to that of the IL-6KO young adult ones. Analysis of p53 mRNA expression in hippocampal cells revealed a higher level of its transcript in 24-month-old IL-6KO mice than in age-matched WT animals, but the difference was not significant. In 4-month-old IL-6KO mice the level of p53 mRNA was only slightly higher than in age-matched WT ones, as it was in both aged groups in comparison with the respective young adult group (Fig. 1c). GLM analysis revealed significant influences of genotype and age on the parameters assessed by Western blot and qRT-PCR. The abundance of p53 protein was both genotype- and age-dependent (Table 2, p = 0.0011 for genotype, p = 0.0283 for age, and p = 0.0099 for the genotype*age interaction), while the amount of p53 mRNA transcript turned out to be only genotype-dependent (Table 3, p = 0.0359).
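For concreteness, here is a minimal sketch of the ΔΔCt fold-change computation used throughout these comparisons, as described in the Methods above. The Ct values are hypothetical placeholders, not measured data; Tfrc and Pgk1 play the role of the two reference genes.

```python
# Minimal sketch of the DeltaDeltaCt fold-change computation described above.
# Ct values below are illustrative placeholders, not measured data.
import math

def delta_ct(ct_gene, ct_refs):
    # Normalize against the mean of the reference genes (Tfrc, Pgk1).
    return ct_gene - sum(ct_refs) / len(ct_refs)

# Hypothetical Ct values for one target gene in one animal per group.
dct_il6ko = delta_ct(ct_gene=24.1, ct_refs=[20.0, 20.4])   # IL-6KO sample
dct_wt    = delta_ct(ct_gene=25.3, ct_refs=[20.1, 20.3])   # age-matched WT sample

ddct = dct_il6ko - dct_wt       # DeltaDeltaCt (IL-6KO vs. WT)
fc = 2 ** (-ddct)               # fold change, FC = 2^-DDCt
log2_fc = math.log2(fc)         # log2(FC), the value used for the statistics
print(f"FC = {fc:.2f}, log2(FC) = {log2_fc:.2f}")
```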
To determine whether the increase in the p53 protein level resulted from a diminished action of its main negative regulator, the Mdm2 protein was examined. Analysis of the Mdm2 Western blot quantitation revealed no differences in its amount between the young adult groups (Fig. 2a). In aged mice the level of Mdm2 was moderately decreased in IL-6KO mice and slightly decreased in WT animals (Fig. 2a). ANOVA of the Mdm2 protein amount yielded F(3,20) = 4.336, p = 0.0165, and Bonferroni post hoc test revealed a significantly decreased Mdm2 protein level in 24-month-old IL-6KO mice in comparison with 4-month-old IL-6-deficient ones (p < 0.05). Comparison of the four groups revealed that Mdm2 mRNA expression was higher in both IL-6KO groups than in the respective WT groups, and lower in both aged groups in comparison with the genotype-matched young adult groups (Fig. 2c). The Wilcoxon signed rank test revealed a significantly higher level of Mdm2 transcripts in 4- and 24-month-old IL-6KO mice than in age-matched WT animals (p < 0.05) and a significantly lower level of Mdm2 transcripts in 24-month-old than in 4-month-old WT animals (p < 0.05). When the genotype and age effects on Mdm2 abundance were assessed by GLM, significant differences were found for the age factor regarding Mdm2 protein levels (Table 2, p = 0.0063) and for genotype and age regarding Mdm2 mRNA transcript amounts (Table 3, p = 0.0026 and p = 0.0006, respectively).

Fig. 1 caption: Amount of p53 protein (a) and its mRNA transcripts (c) in the hippocampus of 4- and 24-month-old IL-6-deficient (IL-6KO) and wild type control (WT) mice. The level of mRNA expression was defined as log2(FC), where FC stands for the fold-change difference in mRNA level between the indicated groups. Bars represent mean ± SEM obtained from six animals in each group. Expression of p53 protein was low and comparable in the 4-month-old groups of both genotypes. In 24-month-old WT mice the amount of p53 protein increased significantly in comparison with 4-month-old WT (**p < 0.01) and 24-month-old IL-6KO (*p < 0.05) animals (Kruskal-Wallis with Dunn's post hoc test). According to GLM, the accumulation of p53 protein was both genotype- and age-dependent (p < 0.005 and p < 0.05, respectively). (c) There were no significant differences in the p53 mRNA levels between the four groups of mice; however, according to GLM analysis the p53 mRNA level turned out to be influenced by genotype (p < 0.05). (b) A representative immunoblot for p53 protein is shown together with α-tubulin as a loading control. M, molecular weight marker.

p21 protein

Expression of p21 protein, a mediator of p53-dependent cell-cycle arrest, was examined to determine the potential consequences of the age-associated increase in the p53 protein level. The amount of p21 protein was comparable in all tested groups, indicating that neither IL-6 deficiency nor aging affected its expression in hippocampal cells (Supplementary material Fig. S1A). GLM analysis showed a lack of significant effects of genotype, age or their interaction on the p21 protein level (Table 2).

Apoptosis and its markers

Because an increased level of p53 protein may suggest enhanced apoptosis, the abundance of its markers, the Bax and Bcl-2 proteins, was evaluated. The amount of both proteins was lower in 4- and 24-month-old IL-6KO mice in comparison with the respective WT groups (Supplementary material Fig. S2). ANOVA of the Bcl-2 protein amount yielded F(3,20) = 4.016, p = 0.0218, and Bonferroni post hoc test revealed a significant difference (p < 0.05) only between 4-month-old IL-6KO and WT animals (Supplementary material Fig. S2B). Moreover, in both genotypes aging had no effect on the Bax and Bcl-2 protein levels.
Similarly, as for p21, GLM analysis showed a lack of significant effects of genotype, age or their interaction on the Bax and Bcl-2 protein levels (Table 2). To confirm the results obtained from the molecular studies, visualisation of apoptotic cells using the TUNEL method was performed on hippocampal slices. Microscopic examination of the dentate gyrus in 12 sections (2 from each of 6 animals per group) showed the presence of only single apoptotic cells in the hippocampal dentate gyrus of both aged groups, indicating that programmed cell death was not enhanced (Fig. 3a).

GFAP and IL-6 mRNA

Because aging is associated with hyperplasia and hypertrophy of glial cells, the major source of IL-6 in the CNS, evaluation of glial cell abundance was performed by Western blot protein quantification and by tissue staining with an antibody directed against glial fibrillary acidic protein (GFAP), present in astrocytes. Although the GFAP protein level was higher in both aged groups than in the respective young adult groups, the differences were not significant (Fig. 4a). Tissue staining of GFAP showed comparable intensity in the hippocampal dentate gyrus of both young adult groups and a mild increase in aged IL-6KO and WT animals (Fig. 4c). GLM evaluation, however, revealed a lack of significant influence of age, genotype or their interaction on the abundance of GFAP in this brain structure (Table 2). Analysis of the IL-6 mRNA quantitation revealed its significant up-regulation in 24-month-old WT mice in comparison with 4-month-old WT animals (p = 0.0455, Wilcoxon signed rank test). Although the difference in the mRNA for IL-6 was statistically significant, the amount of this cytokine in aged WT mice increased by only about 35% in comparison with younger animals (Fig. 4d).

mRNA quantitation of selected p53-dependent genes

The results of qRT-PCR are presented in Fig. 5. Statistical evaluation of the Phosphatase and tensin homologue (Pten) mRNA level by the Wilcoxon signed rank test revealed significantly higher expression of Pten in both 4- and 24-month-old IL-6-deficient mice in comparison with age-matched control WT mice (p = 0.0020 and p = 0.0161, respectively). Aging was associated with down-regulation of Pten mRNA expression, which was statistically significant in 24-month-old IL-6KO mice in comparison with 4-month-old IL-6KO ones (p = 0.00149, Wilcoxon signed rank test).

Fig. 2 caption: Amount of Mdm2 protein (a) and its mRNA transcripts (c) in the hippocampus of 4- and 24-month-old IL-6-deficient (IL-6KO) and wild type control (WT) mice. The level of mRNA expression was defined as log2(FC), where FC stands for the fold-change difference in mRNA level between the indicated groups. Bars represent mean ± SEM obtained from six animals in each group. There were no significant differences in the Mdm2 protein level in young adult mice, and aging was associated with a decrease in its amount, which was statistically significant in IL-6-deficient mice in comparison with age-matched WT controls (**p < 0.01, ANOVA with Bonferroni post hoc). GLM analysis revealed a significant influence of age on the Mdm2 protein abundance (p < 0.001). (c) In both 4- and 24-month-old IL-6-deficient mice the Mdm2 mRNA level was significantly higher in comparison with the respective control WT animals (*p < 0.05), and aging was associated with a significant decrease in Mdm2 mRNA in WT controls (*p < 0.05, Wilcoxon signed rank test). According to GLM analysis, the expression of Mdm2 mRNA was both genotype- and age-dependent (p < 0.005 and p < 0.001, respectively). (b) A representative immunoblot for Mdm2 protein is shown together with α-tubulin as a loading control.
M, molecular weight marker.

Analysis of Tuberous sclerosis 2 (Tsc2) mRNA showed significantly higher expression of its transcript in both 4- and 24-month-old IL-6KO mice in comparison with age-matched WT controls (p = 0.0022 and p = 0.0269, respectively, Wilcoxon signed rank test). Evaluation of Tsc2 mRNA expression within [...] Also, Sestrin 1 (Sesn1) mRNA expression was higher in IL-6KO mice than in the respective WT groups. Statistical analysis with the Wilcoxon signed rank test revealed a significant difference only between the 24-month-old IL-6KO and WT groups (p = 0.049). Moreover, aging was associated with a decrease in Sesn1 mRNA expression, which was not significant in IL-6KO and significant in WT mice (p = 0.0098) in comparison with the appropriate genotype-matched younger group. Analysis of the Damage-regulated autophagy modulator 1 (Dram1) mRNA quantification revealed that in both young adult and aged IL-6KO mice the level of the Dram1 transcript was higher in comparison with age-matched WT animals; however, the difference was significant only between the young adult groups of mice (p = 0.0322, Wilcoxon signed rank test). Aging was associated with a non-significant decrease in Dram1 mRNA expression in IL-6KO mice, but not in WT ones, in which it remained at a level similar to that of 4-month-old WT mice. GLM analysis revealed a significant influence of genotype and/or age on the amount of mRNA transcripts for the selected genes (Table 3). Transcription of Pten (p = 0.0008), Tsc2 (p = 0.007), and Dram1 (p = 0.0385) was influenced by genotype. Moreover, transcription of Pten (p = 0.0076) and Sesn1 (p = 0.0020) was influenced by the age factor.

Discussion

Our study revealed significantly attenuated accumulation of p53 protein in the hippocampus of 24-month-old mice with IL-6 deficiency. Because in 4-month-old IL-6KO and age-matched WT mice the p53 protein levels were low and comparable, this may indicate that the significant increase in p53 protein abundance only in WT mice was associated with the higher level of IL-6 in senescent animals.

Fig. 4 caption: (a) The amount of glial fibrillary acidic protein (GFAP) in the hippocampus of 4- and 24-month-old IL-6-deficient (IL-6KO) and wild type control (WT) mice was comparable in both young adult groups and non-significantly higher in both aged groups. Bars represent mean ± SEM obtained from six animals in each group. (b) A representative immunoblot for GFAP protein is shown together with α-tubulin as a loading control. M, molecular weight marker. (c) Tissue staining of GFAP, an astrocytic marker (green), showed similar intensity in both 4-month-old groups and a moderate increase in both 24-month-old groups (magnification ×20). (d) IL-6 mRNA in the hippocampus was significantly higher in 24-month-old WT mice in comparison with 4-month-old WT ones (*p < 0.05, Wilcoxon signed rank test). The level of mRNA expression was defined as log2(FC), where FC stands for the fold-change difference in mRNA level.

Fig. 5 caption: Bars represent mean ± SEM obtained from six animals in each group. Levels of Pten mRNA and Tsc2 mRNA were significantly higher in both 4- and 24-month-old IL-6KO mice in comparison with age-matched WT controls (**p < 0.01 and *p < 0.05, respectively). Aging was associated with a significant decrease in the Pten mRNA and Tsc2 mRNA levels (**p < 0.01 and *p < 0.05, respectively) in IL-6KO animals (Wilcoxon signed rank test).
IL-6 deficiency was associated with a statistically significant increase in Sesn1 mRNA in aged animals in comparison with the respective WT group (*p < 0.05), while aging was associated with a significant decrease of its mRNA transcript in WT control mice (**p < 0.01, Wilcoxon signed rank test). Deficiency of IL-6 resulted in significantly higher expression of Dram1 mRNA in 4-month-old mice (*p < 0.05), while aging diminished the Dram1 mRNA level, but the effect was not significant (Wilcoxon signed rank test). GLM analysis revealed an influence of genotype on the transcription of Pten, Tsc2, and Dram1 (p < 0.0005, p < 0.0005 and p < 0.05, respectively), and an influence of age on the transcription of Pten and Dram1 (p < 0.01 and p < 0.005, respectively).

The accumulation of p53 protein in WT animals was significantly higher, while in IL-6KO mice it did not change over time. Since in IL-6KO mice the significantly higher expression of p53 mRNA was not accompanied by an increase in the amount of p53 protein in these animals, this indicates that the attenuation of p53 protein accumulation in aged IL-6KO mice was independent of p53 gene transcription. A similar lack of a substantial increase in p53 mRNA levels in aged mice was also described by others. Edwards et al. (2007) evaluated p53 mRNA levels in 5- and 25-month-old C57BL/6J mice in whole-brain homogenates and reported a lack of significant difference in p53 gene expression between young and old brains.

The p53 gene becomes activated in response to myriad cellular stress signals. In cells under potent stress, p53 triggers irreversible programs of apoptosis or senescence, while under conditions of mild stress the same protein elicits a protective, pro-survival action to maintain genome integrity and viability in cells with reparable damage. The exact cell fate depends on the cell type, the environmental milieu and the nature of the stress (Brady and Attardi 2010). A high level of p53 protein suggests potent stress. Upon DNA damage or other stressors, activation of p53 leads to a transient expression of the cyclin-dependent kinase inhibitor (CKI) p21. Subsequently, this either triggers G1 cell-cycle arrest or leads to a chronic state of senescence or to apoptosis (Georgakilas et al. 2017). Our data demonstrated a lack of differences in the amount of p21 between IL-6-deficient and IL-6-producing mice, indicating that the increased p53 protein amount in the hippocampus of aged WT mice did not affect p21 protein expression. Further evaluation of the programmed cell-death markers, pro-apoptotic Bax and anti-apoptotic Bcl-2, revealed that neither Bax nor Bcl-2 showed significant differences in protein amount when compared between the genotypes of aged animals. The observed lack of changes in the apoptotic markers in senescent animals was in accordance with the presence of single apoptotic cells detected in the hippocampus of aging mice of both genotypes, confirming that the increased level of p53 protein was not associated with enhanced apoptosis. Despite the higher p53 accumulation in the hippocampus of aged WT animals, the lack of changes in the expression of its downstream protein targets may suggest attenuation of its transcriptional activity. Post-translational modifications such as phosphorylation and acetylation have been associated with increased, while methylation with decreased, p53 protein transcriptional activity (Ivanov et al. 2007). Importantly, in our study the increased level of p53 protein observed in aged control WT mice did not affect the expression of the p21, Bax, and Bcl-2 proteins.
Therefore, the lack of significant effects of the high p53 protein level on its cellular targets in aged WT mice may indicate that the methylated p53 form constituted its majority. Moreover, our previous study evaluating the significance of IL-6 deficiency in the mouse myocardium showed suppression of p53 protein accumulation in aged IL-6KO mice in comparison with WT ones, and that the increased amount of p53 protein in the cardiomyocytes of aged mice expressing endogenous IL-6 constituted a cytoplasmic pool (Bonda et al. 2019). This may explain why the increased accumulation of p53 protein in aged animals was not accompanied by a concurrent increase of its transcriptional activity.

The significantly lower level of p53 protein in aged IL-6KO than in WT mice may point to the involvement of IL-6 in the mechanisms regulating the amount of p53 protein in hippocampal cells. A major mechanism regulating p53 protein levels involves a reciprocal relationship with the murine double minute 2 (Mdm2) protein. Under normal conditions, Mdm2 binds the p53 protein, blocking its activity and promoting its transport for ubiquitin-proteasome degradation. A variety of cellular stressors have been shown to stabilize the p53 protein via Mdm2 degradation and transcription of p53 target genes (Engel et al. 2007). In the current study, aged IL-6KO mice demonstrated a significantly decreased Mdm2 level in comparison with young adult ones, while in aged WT mice Mdm2 was at the same level as in younger controls. Similarly, in our previous study performed on the mouse myocardium, the expression of Mdm2 protein was lower in aged IL-6KO than in young adult mice, whereas in WT control animals the age-related decline in Mdm2 was less pronounced and did not reach the statistical threshold (Bonda et al. 2019). In both studies the abundance of Mdm2 was at the same level in the young adult groups. Regarding Mdm2 mRNA, its level was significantly higher in IL-6KO than in WT mice in both age groups, but this was not followed by a higher amount of the protein product; meanwhile, aging was associated with decreased Mdm2 mRNA expression in both genotypes, followed by only a slight diminution of the protein amount in WT and a significant one in IL-6KO animals. Because the absence of endogenous IL-6 was accompanied by a lower amount of p53 protein in the hippocampus, not dependent on its ubiquitin-proteasome degradation, this may suggest that IL-6 is involved in other mechanisms responsible for p53 accumulation.

Therefore, we evaluated the expression of selected genes involved in autophagy, especially since multiple p53 target genes have been shown to influence this process. On the one hand, p53-mediated transcriptional up-regulation of AMPK, Pten, and sestrins has been demonstrated to activate autophagy. On the other hand, cytoplasmic p53 has been shown to suppress autophagic flux through an unknown mechanism (Rufini et al. 2013). Moreover, it has also been shown that IL-6 influences autophagy through both inhibitory and stimulatory actions (Qin et al. 2015). The IL-6/STAT3 signaling pathway was demonstrated to inhibit autophagy in U937 cells, while it activated this process in pancreatic cancer cells (Kang et al. 2012). In our setting, IL-6 deficiency was associated with up-regulation of the autophagy-related genes Pten, Tsc2, Sesn1, and Dram1, which was more pronounced in young adult mice.
Moreover, in 4-month-old IL-6-deficient mice the amount of Bcl-2 protein, which is suggested to take part in IL-6-dependent autophagy regulation, was significantly decreased in comparison with age-matched WT controls. Taking into consideration that in young adult IL-6KO mice the p53 protein level was low and the expression of Pten, Tsc2, Sesn1, and Dram1 was up-regulated, this may suggest that lack of IL-6/STAT3/Bcl-2 signaling could account for better autophagy performance. In aged animals of both genotypes the expression levels of the assessed autophagy-related genes were lower in comparison with the respective young adult groups, which may lead to an increased accumulation of altered protein forms.

In the CNS, IL-6 is mainly synthesized by astrocytes (Erta et al. 2012; Gruol 2015), and under stressful conditions these cells may become a significant source of reactive oxygen species (ROS) affecting the function of neurons. Senescence and aging are associated with an increase in the level of oxidatively damaged proteins, lipids and DNA (Rufini et al. 2013), and aging astrocytes have been shown to present an increased mitochondrial oxidative metabolism leading to an age-dependent increase in hydrogen peroxide generation and NF-κB signalling in the cytosol, as well as to its translocation to the nucleus (Jiang and Cadenas 2014). Because the GFAP protein is a reliable astrocytic marker, we compared its amount in young adult and aged animals of both genotypes. The moderate increase in the intensity of GFAP staining in both aged groups was in accordance with the 35% increase of the IL-6 mRNA amount in the hippocampus of aged WT animals. Therefore, the synthesis of IL-6, increased under normal conditions, could account for a rather low level of age-related oxidative stress, and this weak genotoxic stress was insufficient to activate apoptosis. However, the diminished level of p53 protein in the hippocampus of aged mice not producing endogenous IL-6 might be associated with a slower progression of age-related changes. Moreover, the higher expression of genes associated with autophagy in IL-6KO mice points to the involvement of IL-6 in the age-related accumulation of cellular damage in the hippocampus.
Relativistic equations with singular potentials

The first part of this paper concerns the study of the Lorentz force equation
$$\left( \frac{q'}{\sqrt{1-|q'|^2}}\right)' = \overrightarrow{E}(t,q) + q' \times \overrightarrow{B}(t,q)$$
in the relevant physical configuration where the electric field $\overrightarrow{E}$ has a singularity at zero. By using Szulkin's critical point theory, we prove the existence of T-periodic solutions provided that T and the electric and magnetic fields interact properly. In the last part, we employ both a variational and a topological argument to prove that the scalar relativistic pendulum-type equation
$$\left( \frac{q'}{\sqrt{1-(q')^2}}\right)' + q = G'(q) + h(t)$$
admits at least a periodic solution when $h \in L^1(0,T)$ and G is singular at zero.

Introduction

The main scope of this paper is to investigate the existence of T-periodic solutions of the relativistic Lorentz force equation
$$\left( \frac{q'}{\sqrt{1-|q'|^2}}\right)' = \overrightarrow{E}(t,q) + q' \times \overrightarrow{B}(t,q). \qquad (1.1)$$
Here, $\overrightarrow{E}$ and $\overrightarrow{B}$ denote, respectively, the electric and magnetic fields, and are given in terms of maps $V : [0,T] \times (\mathbb{R}^3\setminus\{0\}) \to \mathbb{R}$ and $W : [0,T] \times \mathbb{R}^3 \to \mathbb{R}^3$. By a solution of Eq. (1.1) we mean a function $q = (q_1, q_2, q_3) \in C^2$ satisfying (1.1) and such that $|q'(t)| < 1$ for all t.

The Lorentz force equation (1.1) models the motion, in a relativistic regime, of a slowly accelerated charged particle under the influence of an electromagnetic field. The relativistic nature of Eq. (1.1) shows in its left-hand side, which involves the relativistic momentum introduced by Poincaré in [11], with the velocity of light in vacuum and the charge-to-mass ratio normalized to one, for simplicity. Meanwhile, the presence of an electromagnetic field is reflected by the Lorentz force $\overrightarrow{E}(t,q) + q' \times \overrightarrow{B}(t,q)$ on its right-hand side. It represents one of the most significant equations of Mathematical Physics (see e.g., [9]). Nonetheless, a rigorous mathematical variational approach for its study was developed only recently in [3,4] (see also [5] for the case $\overrightarrow{B} \equiv 0$), where the maps V and W are assumed to be of class C¹, while the relevant cases, including configurations of electric fields coming from the physical models and consisting of singular electric potentials, had remained open. An early result concerning these singular models was achieved recently in [10] via a topological method.
In that work, the authors consider the case where the electric field $\overrightarrow{E}$ is a sufficiently large $L^1$-perturbation of a Coulomb electric potential, or more specifically, $\overrightarrow{E}(t,q) = -\nabla V(q) - h(t)$ with $h \in L^1([0,T], \mathbb{R}^3)$. The potential V is assumed to be singular at zero and, for some γ ≥ 1 and c > 0, to satisfy the inequality $q \cdot \nabla V(q) \le -c/|q|^{\gamma}$ if |q| is small enough. On the other hand, the magnetic field $\overrightarrow{B}$ is supposed to be bounded, with a singularity at zero of lower order than the singularity $|q|^{-\gamma-1}$. Applying a global continuation theorem, the existence of a T-periodic solution is guaranteed only when the mean value $\bar h$ of h is greater than the supremum of $C(t) := \limsup_{|q|\to\infty} |\overrightarrow{B}(t,q)|$. In particular, this result clearly fails if, e.g., h is identically zero. We emphasize in addition that the approach to the singular problem using variational methods had still remained open.

The aim of this paper is to fill the observed gaps by developing the variational framework needed to address Eq. (1.1), as well as other relativistic singular problems, establishing the landmark for future investigations on these topics. To be more precise, we show not only that Eq. (1.1) can be studied using a variational approach even when the electric field $\overrightarrow{E}$ is singular, widening the range of possible choices of V and covering the case untreated in [3,4]; but also we prove that the topological argument carried out in [10] can be employed in such a way as to handle other kinds of relativistic problems in which a singular term appears.

As a first step to study (1.1) variationally, we derive a new version of the Mountain Pass Theorem, which is of independent interest and allows one to identify critical points of functionals which possess singularities (see Theorem 2.1). Our abstract result relies upon the idea developed in [1] to address the study of a relativistic spherical pendulum. In particular, it will be essential to impose on the action functional I that I(q_n) blows up when {q_n} converges uniformly to a function which "touches" the singular set of I (see (2.1) below and compare with Lemma 5.1 in [1]). In this regard, we thank the anonymous referee who brought to our attention the paper [6], in which this condition is also used to provide the existence of solutions for another type of relativistic singular problem, namely, a relativistic Keplerian problem in the plane.

Thus, by using our abstract result, we derive the existence of a T-periodic solution for Eq. (1.1). To be more precise, we assume that V is dominated by the function −c/|q| (c is a positive constant) when q is located in a neighborhood of the origin, while the sum of the magnitudes of V(t,q) and ∇_q V(t,q) tends to 0 uniformly in t ∈ [0,T] as q approaches infinity. Also, we suppose that W is bounded, and that its modulus and the sum of the magnitudes of the components of its gradient at q converge to 0 uniformly in t as q goes to infinity. Thus, if in addition there exists c₀ > 0 such that a further condition relating c₀, T and the fields holds, we prove that Eq. (1.1) admits a T-periodic solution. Observe that the periodic solution provided by the former result could be trivial: for instance, if V and W depend only on the variable q, V is of class C² in ℝ³\{0}, and there exists ξ ∈ ℝ³\{0} such that V(ξ) < 0 and ∇V(ξ) = 0, then q = ξ is a constant solution. In this case, we also prove that if the matrix given by (3.4) is positive definite, then Eq. (1.1) has a periodic solution which is different from the constant solution ξ.
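To fix ideas, here is a simple model case (our illustration, not taken from the paper) of an electric potential fitting the singular framework just described, namely a Coulomb-type potential:
$$V(q) = -\frac{c}{|q|}, \qquad \overrightarrow{E}(q) = -\nabla V(q) = -\,c\,\frac{q}{|q|^{3}}, \qquad c > 0,$$
which is singular at the origin, is dominated by $-c/|q|$ near zero (with equality), and satisfies $|V(q)| + |\nabla V(q)| = c/|q| + c/|q|^{2} \to 0$ as $|q| \to \infty$, consistent with the decay assumptions listed above.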
Anyway, in order not to weigh this introduction down with too many details, we prefer to specify each hypothesis and to state our main results in Sect. 3. The remaining part of the paper is motivated by the study of the spherical pendulum in [1]. We study the existence of periodic Lipschitz solutions q(t) ∈ ℝ of the scalar relativistic pendulum-type equation (1.3), where h ∈ L¹(0,T), the singular function G dominates the function 1/|q| as q ∼ 0, and its first derivative is bounded when q is far away from the singularity at 0. It is worth noting that, unlike (1.1), [...]

Local mountain pass for singular non-smooth functionals

In [3], a Mountain Pass Theorem without compactness conditions is given for the Szulkin critical point theory [13]. We give here a generalization of it which will be useful to handle functionals having singularities. Assume also that
$$\lim_{n\to\infty} I(q_n) = +\infty, \qquad (2.1)$$
for every sequence {q_n} ⊂ Λ whose distance dist(q_n, E\Λ) converges to zero. Let also K be a compact metric space, K₀ ⊂ K a closed subset and γ₀ : K₀ → Λ a continuous map. Consider the set [...]; then, for every ε > 0 and γ ∈ Γ_Λ such that [...], there exist γ_ε ∈ Γ_Λ and q_ε ∈ γ_ε(K) ⊂ E satisfying [...].

Remark 2.2. Notice that the continuity of Ψ on its closed domain implies that Ψ is lower semicontinuous in E.

Proof. Let Γ be defined as [...], which is a complete metric space endowed with the uniform distance. Since Λ is open in E and K is compact, the set Γ_Λ = {γ ∈ Γ : γ(t) ∈ Λ for every t ∈ K} is open in Γ. Consider the functional Υ : Γ_Λ → (−∞, +∞] given by [...]. Observe that every γ in the domain of Υ, that is, verifying Υ(γ) < +∞, satisfies γ(t) ∈ Dom Ψ for every t ∈ K. Hence, the continuity of Ψ on its closed domain implies that I ∘ γ is continuous on the compact K, and we have Υ(γ) = max_{t∈K} I(γ(t)). [...] contradicting (2.4). The claim has been proved, and thus there exists μ ∈ (0, μ₀) such that γ_{ε,μ} lies in the interior of N_μ. In the sequel we fix this constant μ and denote γ_{ε,μ} = γ_ε. We conclude the proof by showing the existence of t_ε ∈ T such that [...]. Indeed, assume by contradiction that for every t ∈ T there exists φ_t ∈ E\{γ_ε(t)} such that [...]. We can repeat the argument in the proof of Theorem 1 in Section 2 of [3] to deduce, for every sufficiently small δ > 0, the existence of γ* ∈ Γ such that [...]. The first inequality allows us to choose δ > 0 such that γ* ∈ N_μ (recall that γ_ε lies in the interior of N_μ), and then the second inequality contradicts (2.5) and completes the proof.

The relativistic Lorentz force equation

Consider the relativistic Lorentz force equation (1.1) with the electric and magnetic fields as above. In order to study it, we denote by W^{1,∞}(0,T) the space of all Lipschitz functions on [0,T] (or, equivalently, the absolutely continuous functions on [0,T] with bounded derivatives), and we consider the associated Banach space. We consider also the subspace E of all T-periodic vector functions q ∈ W^{1,∞} (i.e., q ∈ W^{1,∞} such that q(0) = q(T)). Let also K be the convex and closed set given by [...]. Following [3], the Lagrangian action I : Λ → (−∞, +∞] associated with the problem of the existence of T-periodic solutions of the Lorentz force equation (1.1) is given by I = Ψ + F, where the functionals Ψ and F are defined by [...]. Since Ψ is a proper convex function which is continuous on its domain K (the proof is similar to that of Lemma 2 in Section 3 of [3]), and F is a function of class C¹ on K_Λ, Szulkin's critical point theory from [13] is applicable to I. Recall what is understood by a critical point in this theory.
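As a side note, the elementary inequality $1 - \sqrt{1-|p|^2} \ge \tfrac12 |p|^2$ for $|p| \le 1$, invoked in the coercivity estimates below, admits a one-line verification (our addition; the inequality itself is the one used in the proofs). For $s \in [0,1]$,
$$\Bigl(1-\frac{s}{2}\Bigr)^{2} = 1 - s + \frac{s^{2}}{4} \;\ge\; 1 - s \quad\Longrightarrow\quad \sqrt{1-s} \;\le\; 1-\frac{s}{2},$$
and taking $s = |p|^{2}$ gives $1-\sqrt{1-|p|^{2}} \ge \tfrac12 |p|^{2}$.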
By an argument similar to the one in Theorem 2 of Section 3 in [3], we have that the critical points q ∈ K_Λ of I are exactly the T-periodic solutions of (1.1). The following lemma will be essential to control the singularity of V at q = 0.

Lemma 3.2. Assume that W is bounded and V satisfies the following hypothesis: [...]. Since [...] and W is bounded, the first and second integrals of I(q_n) are bounded. Hence, to prove the lemma, it suffices to show that [...]. Let t₀ ∈ [0,T] be such that q(t₀) = 0. Two cases can occur: either q ≡ 0, or (up to replacing the zero t₀ by another zero) we can assume that there exists t₁ ∈ (t₀, T] such that q(t₀) = 0 < |q(t)| ≤ ε₀ for every t ∈ (t₀, t₁]. In the first case, q ≡ 0, we have |q_n(t)| ≤ ε₀ for every t ∈ [0,T] provided that n is large enough. Thus, the above hypothesis implies [...] for n large enough. By Fatou's lemma, we deduce lim sup [...], which shows that lim_{n→∞} ∫₀ᵀ V(t, q_n) dt = −∞, and the lemma is proved in this case. In the second case, observe that [...] is also bounded from above for all n. Therefore, to conclude the proof, it suffices to show that
$$\lim_{n\to\infty} \int_{|q_n|\le\varepsilon_0} \frac{1}{|q_n|}\,dt = \infty.$$
In order to prove it, since ‖q_n′‖_∞ ≤ 1, we note that
$$\int_{t_0}^{t_1} \frac{q_n \cdot q_n'}{|q_n|^2}\,dt = \log|q_n(t_1)| - \log|q_n(t_0)|.$$
Consequently, using that q_n(t₁) converges to q(t₁) ≠ 0 and q_n(t₀) to q(t₀) = 0, we deduce the claim, and the proof is concluded.

Remark 3.4. Observe that, as a consequence of the above lemma, the functional I satisfies condition (2.1) of Theorem 2.1. Indeed, let {q_n} ⊂ E be a sequence satisfying lim_{n→∞} dist(q_n, E\Λ) = 0, and assume by contradiction that condition (2.1) does not hold. Then, up to a subsequence, we can suppose that {I(q_n)} is bounded from above. In particular, q_n ∈ K_Λ. By using this and choosing p_n ∈ E\Λ such that ‖q_n − p_n‖ = dist(q_n, E\Λ), we get the uniform boundedness of p_n on [0,T], which, together with the fact that each p_n vanishes at some point in [...]

In the incoming results we will use the direct sum decomposition E = Ē ⊕ Ẽ, writing q = q̄ + q̃ with q̄ the mean value of q.

Proof. Let {q_n} be a sequence satisfying (3.1) and (3.2). Since q_n = q̄_n + q̃_n with ‖q̃_n‖ = ‖q̃_n‖_∞ + ‖q̃_n′‖_∞ ≤ T + 1, to deduce the boundedness of {q_n} in E it suffices to show that {q̄_n} is bounded. Suppose by contradiction that, up to a subsequence, |q̄_n| converges to infinity. Choosing φ = q̃_n in (3.2), we obtain [...]. Since |q_n(t)| = |q̄_n + q̃_n(t)| converges to infinity, [...]. Therefore, again by (V∞) and (W∞), we would obtain from (3.1) that [...], a contradiction proving that the sequence {q̄_n} is necessarily bounded. By the compact embedding of E into C([0,T], ℝ) we can assume, up to subsequences, that q_n(t) → q(t) uniformly on [0,T]. Since each q_n is Lipschitz with Lipschitz constant ‖q_n′‖_∞ ≤ 1, we deduce that q is also Lipschitz with Lipschitz constant smaller than or equal to one; i.e., q ∈ K. By Lemma 3.[...]

Proof. Recalling that π²/T² is the second eigenvalue of the periodic problem associated with the operator −q″(t), we deduce by its variational characterization that [...]. Thus, using this and the inequality $1 - \sqrt{1-|p|^2} \ge \tfrac12 |p|^2$ for every |p| ≤ 1, we obtain for every q ∈ Ẽ ∩ K_Λ that [...]. Condition (3.3) then implies that [...]. On the other hand, by (V∞), when ‖q‖ converges to infinity we have [...], and we can choose ρ > 0 such that the boundary ∂B_ρ of the ball B_ρ in Ē of center zero and radius ρ satisfies sup [...]; that is, I verifies the geometry of Rabinowitz's saddle point theorem [12].
Therefore, since dim Ē < ∞, we can apply Theorem 2.1 with K the closed ball B_ρ in Ē of center zero and radius ρ, K₀ the boundary in Ē of this ball, and γ₀ the identity function on K₀, to deduce the existence of a sequence {q_n} ⊂ E such that [...].

Observe that the periodic solution given by Theorem 3.6 can be trivial. Indeed, if, for instance, V and W do not depend on t, then q = ξ is a constant solution if and only if ∇V(ξ) = 0. In this section we give a sufficient condition in order to obtain a second solution of (1.1).

Proof. As has been mentioned, since V and W depend only on the variable q, the hypothesis ∇V(ξ) = 0 means that q = ξ is a constant (thus periodic) solution of (1.1). Moreover, the condition on the positive definiteness of the matrix given by (3.4) implies that the functional F presents a strict local minimum at q = ξ. Since any constant is trivially a local minimum of the function Ψ, we deduce that I = Ψ + F has a strict local minimum at q = ξ. In particular, there exists r₀ > 0 such that [...].

Theorem 3.7. Assume that V and W depend only on the variable q and satisfy conditions [...]. On the other hand, by (V∞), when the constant η ∈ ℝ³ converges to infinity we have [...].

Periodic oscillations of a relativistic pendulum-type equation

This section is devoted to the study of the scalar equation (1.3). Our main existence result is achieved by means of both variational and topological arguments. We divide the section into two subsections.

Existence via a variational approach

Following [1], in this subsection we study the existence of T-periodic solutions of (1.3), where h ∈ L¹(0,T) and the singular function G : ℝ\{0} → ℝ satisfies the hypothesis [...]. Since the function G has a singularity at q = 0, we work in the subset [...]. The functional I related to problem (1.3) is given for every q ∈ Λ by (4.1). Similarly to the previous section, the critical points q of I in E are exactly the T-periodic solutions of the equation.

Proof. Since {q_n} is bounded in C([0,T], ℝ), the first, second and fourth integrals of I(q_n) (I is given by (4.1)) are bounded. Hence, to prove the lemma, it suffices to show that ∫₀ᵀ G(q_n) dt converges to infinity. By hypothesis (G₀), there exists ε₀ > 0 such that [...]. Let t₀ ∈ [0,T] be such that q(t₀) = 0. Two cases can occur: either q ≡ 0, or (up to replacing the zero t₀ by another zero) we can assume that there exists t₁ ∈ (t₀, T] such that q(t₀) = 0 < |q(t)| ≤ ε₀ for every t ∈ (t₀, t₁]. In the first case, q ≡ 0, we have |q_n(t)| ≤ ε₀ for every t ∈ [0,T] provided that n is large enough. Thus, the above hypothesis implies [...]. By Fatou's lemma, we deduce lim inf [...], which shows that lim_{n→∞} ∫₀ᵀ G(q_n) dt = +∞, and the lemma is proved in this case. In the second case, we have [...]. Using that G(s) is bounded from below for 0 ≤ t ≤ T and ε₀ < |s| ≤ sup_n ‖q_n‖_∞ < ∞, we have that ∫_{|q_n|>ε₀} G(q_n) dt is also bounded from below for all n. Therefore, to conclude the proof, it suffices to show that
$$\lim_{n\to\infty} \int_{|q_n|\le\varepsilon_0} \frac{1}{|q_n|}\,dt = +\infty.$$
In order to prove it, since ‖q_n′‖_∞ ≤ 1, we note that
$$\int_{t_0}^{t_1} \frac{q_n q_n'}{q_n^2}\,dt = \log|q_n(t_1)| - \log|q_n(t_0)|.$$
Consequently, using that, as n tends to infinity, q_n(t₁) converges to q(t₁) ≠ 0 and q_n(t₀) to q(t₀) = 0, we deduce the claim, and the proof is concluded.

Remark 4.4. As in Remark 3.4, the above lemma implies that the functional I given by (4.1) satisfies condition (2.1) required in Theorem 2.1. As in the previous section, we will use the direct sum decomposition (4.2).

Proof. Let {q_n} ⊂ E be a sequence satisfying (4.3) and (4.4).
Observe that for every q ∈ K_Λ we have ‖q′‖_∞ = ‖q̃′‖_∞ ≤ 1, which, together with the existence of a zero of q̃ in [0,T] (a consequence of the zero mean value of q̃), implies that ‖q̃‖_∞ ≤ T (4.5). Hence, in order to prove that {q_n} is bounded in E, it suffices to show that {q̄_n} is bounded. To this aim, choosing w = q̄_n + q̃_n in (4.4), it follows that (4.6) [...]. This implies that the sequence {q̄_n} is bounded. Indeed, otherwise we can assume, up to subsequences, that |q̄_n| converges to ∞. Thus, by (4.5), q_n = q̄_n + q̃_n stays away from zero for large n; that is, there exist s₀, n₀ > 0 such that |q_n| ≥ s₀ for every n ≥ n₀. By assumption (G∞), there exists η > 0 such that [...]. By (4.6) we infer, for every n ≥ n₀, that
$$\bar q_n^{\,2}\, T \le \eta T |\bar q_n| + |\bar q_n| \|h\|_{L^1} + \varepsilon_n |\bar q_n|,$$
i.e., $|\bar q_n| \le \eta + T^{-1}\|h\|_{L^1} + \varepsilon_n T^{-1}$ (for every n ≥ n₀), contradicting the convergence of |q̄_n| to ∞ and proving that the sequence {q̄_n}, and thus the sequence {q_n}, is bounded in E. By the compact embedding of E into C([0,T], ℝ) we can assume, up to subsequences, that q_n(t) → q(t) uniformly on [0,T]. Since each q_n is Lipschitz with Lipschitz constant ‖q_n′‖_∞ ≤ 1, we deduce that q is also Lipschitz with Lipschitz constant smaller than or equal to one; i.e., q ∈ K. By Lemma 4.3 and (4.3), we have [...], concluding the proof.

Proof. Taking into account the decomposition given by (4.2) and using (4.5), we have [...]. By this, and since $1 - \sqrt{1-s^2} \ge \tfrac12 s^2 \ge 0$, we deduce that the functional I defined by (4.1) satisfies [...]. We claim that this inequality implies that I is bounded from below on E. Indeed, take a minimizing sequence {q_n} in E ∩ K_Λ of I, i.e., such that [...]. This means that there exists ε > 0 such that ‖q_n‖_∞ ≥ ε for every n. Therefore, using again that ‖q̃_n‖_∞ ≤ T, we obtain from (4.7) that [...]. This implies that inf_E I ∈ ℝ, and the claim is proved. On the other hand, for every q ∈ Ē we have [...], and thus, by (G∞), we have lim_{|q|→∞} [...]. In consequence, we can choose ρ > 0 such that the boundary ∂B_ρ of the ball B_ρ in Ē of center zero and radius ρ satisfies sup [...]; that is, I verifies the geometry of Rabinowitz's saddle point theorem [12]. By Lemma 4.5 (instead of Lemma 3.5) we can repeat the argument in the proof of Theorem 3.6 to deduce the existence of a subsequence (q_{n_k}) of (q_n) converging in C([0,T], ℝ) to a critical point q ∈ K_Λ of I with critical level I(q) = c.

Now we look for T-periodic solutions q ∈ E of Eq. (1.3) with h ≡ 0, i.e., (4.8). Firstly, observe that every constant ξ ∈ ℝ\{0} verifying ξ = G′(ξ) is a trivial (constant) T-periodic solution of (4.8). In this case, in order to find another solution, we apply the Mountain Pass Theorem to the functional I given by (4.1) with h ≡ 0; that is, [...]. Observe that it is of class C² (because G ∈ C²), with the first and second derivatives given for every q, w₁, w₂ ∈ E by [...]. In particular, the condition ξ = G′(ξ) implies that q = ξ is a critical point of F, and the hypothesis G″(ξ) > 1 means that the second derivative of F at q = ξ is positive definite. Thus, F has a strict local minimum at q = ξ. Taking into account that [...], we deduce that q = ξ is also a strict local minimum of I. Hence, there exist δ, r > 0 such that I(q) ≥ I(ξ) + δ when ‖q − ξ‖ = r.
As in the proof of Theorem 4.6, by assumption (G∞), for every q ∈ E we have [...].

Setting x := (q, p) and denoting by N_{f_λ} the Nemitskii operator associated with the function f_λ(t, x), the previous problem can be written as the first-order ordinary differential equation (4.9). If P : X → X is the projection given by [...], and X̃ := Ker P = {x ∈ X : (1/T)∫₀ᵀ x(t) dt = 0}, we can consider the operator K : L¹([0,T], ℝ²) → X̃, defining for each g ∈ L¹([0,T], ℝ²) the function Kg as the unique solution x ∈ X̃ of the equation [...]. Thus, using the previous notation, (4.9) turns into (4.10), where
• P has finite range,
• N_{f_λ} is continuous with N_{f_λ}(Ω) bounded in X,
• and K|_{X̃} : X̃ → C¹([0,T], ℝ²) is linear and continuous.
Thus, by the compact embedding of C¹([0,T], ℝ²) into C([0,T], ℝ²) (due to the Ascoli-Arzelà theorem), we have that T_λ : X → X is compact, and we can employ [7, Theorem 2] to address problem (4.10). For the sake of completeness, we recall it here in our particular case.
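Finally, as a compact restatement of the logarithmic blow-up mechanism behind Lemmas 3.2 and 4.3 above (our sketch, under the hypotheses stated there): if $|q_n'| \le 1$, $q_n \to q$ uniformly, $q(t_0) = 0$ and $q(t_1) \ne 0$, then
$$\int_{t_0}^{t_1} \frac{q_n \cdot q_n'}{|q_n|^2}\,dt = \log|q_n(t_1)| - \log|q_n(t_0)| \;\xrightarrow[n\to\infty]{}\; +\infty,$$
and, since $|q_n \cdot q_n'| \le |q_n|\,|q_n'| \le |q_n|$ pointwise,
$$\int_{t_0}^{t_1} \frac{dt}{|q_n|} \;\ge\; \int_{t_0}^{t_1} \frac{q_n \cdot q_n'}{|q_n|^2}\,dt \;\longrightarrow\; +\infty,$$
which is what forces $\int_0^T V(t,q_n)\,dt \to -\infty$ (respectively $\int_0^T G(q_n)\,dt \to +\infty$) under the domination of the potential by $-c/|q|$ (respectively $c/|q|$) near the singularity.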
Inclusive Fitness Theorizing Invokes Phenomena That Are Not Relevant for the Evolution of Eusociality

In this Formal Comment, the authors challenge the claims of a recent theoretical study that genetic relatedness is important in the evolution of eusociality. [...] the new individual becomes a migrant; now one of two things can happen: (iii) with probability f_s the individual is replaced by a solitary one that starts its own colony; or (iv) with probability f_e the individual is replaced by a eusocial one that goes back to the colony from which the migrant emerged; there the individual can either (v) stay with probability q, and thus increase the size of the colony headed by the solitary mother, or (vi) with probability 1 − q, the now eusocial individual leaves and starts its own eusocial colony. The model does not seem to describe a plausible biological scenario. Moreover, the migrant pool has the following strange property: as soon as a colony enters an individual into the migrant pool, that colony gets an individual back from the migrant pool. In contrast, if an individual dies or leaves a colony to form its own nest, then the size of the colony decreases. In LRQ's model, none of the migrant workers ever have a chance to take over the nest.

For the origin of eusociality, LRQ's model assumes a presocial species in which a fraction of offspring migrate to new nests, but subsequently all offspring leave to start nests of their own, rendering the initial migration useless. Eusociality arises via an extraordinary mutation that reduces the second "leaving" action while having no effect on the initial "migration" action. For interpreting the model in terms of mothers laying eggs into other nests, LRQ assume that as soon as a mother lays an egg into another nest, she receives an egg into her own nest. The magnitude of egg swapping that occurs in their model is astonishing: if r < 0.5 (as in their Fig 1), then the dominant mode of reproduction is that a mother lays her own eggs into other nests and, in return, raises the eggs of other mothers. It is hard to imagine that those phenomena occur in nature, let alone that they are relevant for the origin of eusociality.

LRQ cite two studies [7,8] justifying their biological assumptions, but neither study lends actual support. Field [7] reports only a small percentage of eggs being laid into other nests in three species (2%, 7%, and 0%-15%), while none of the other phenomena investigated in Field's paper (such as nest usurpation) are compatible with LRQ's mathematical equations. Abbot et al. [8] describe social parasitism among aphids, in which reproductive individuals invade another gall to reproduce there; again, this phenomenon is not compatible with LRQ's equations, in which only a single resident queen reproduces in each nest and is never replaced.

Why do LRQ investigate such models? They present NTW as saying that relatedness does not matter in general, but this is incorrect. Instead, NTW write: "Relatedness does not drive the evolution of eusociality. We can use our model to study the fate of eusocial alleles that arise in thousands of different presocial species with haplodiploid genetics and progressive provisioning. In some of those species eusociality might evolve, while in others it does not. Whether or not eusociality evolves depends on the demographic parameters of the queen (. . .), but not on relatedness. The relatedness parameters would be the same for all species under consideration" (Fig 1).
NTW have presented a simple model for the origin of eusociality, which requires no variation in relatedness. LRQ try to force variation of relatedness into this model unnaturally and for no biological reason. LRQ's approach also highlights a major problem of inclusive fitness thinking: "relatedness" is treated as an independent causal agent, such that one way of varying relatedness is equivalent to any other. This problem was addressed by NTW, who wrote, "It is possible to consider situations where all measures of relatedness are identical, yet cooperation is favoured in one case, but not in the other. Conversely, two populations can have relatedness measures on the opposite ends of the spectrum and yet both structures are equally unable to support evolution of cooperation. Hence, relatedness measurements without a meaningful theory are difficult to interpret." The relatedness discussions of NTW are more sophisticated than what is presented by LRQ. "Relatedness" is often used as inclusive-fitness jargon for population structure, giving the wrong impression that all aspects of population structure can be described by a single quantity. Decades of research [9][10][11][12][13] have shown that the effects of population structure on evolutionary dynamics are more intricate than such a view would suggest.

LRQ find that their Model 1 makes it harder for eusociality to emerge than the original model of NTW. But this effect can be understood immediately without recourse to "relatedness": solitary mothers use the mixing pool to gain eusocial offspring that help them reproduce, while eusocial mothers receive solitary offspring who do not help.

LRQ's Model 2, examining maternal control, is based on the following assumptions. Solitary mothers force eusocial migrants, who offer to help, away from their nest. Eusocial mothers receive solitary migrants, conscript some of them to help, but force others to leave and build their own solitary nests. According to this model, eusociality originates as follows: in the solitary ancestor species, there are already potential workers who move between nests and offer to help, but they are not admitted. The step to eusociality is a mutation in which a mother allows those workers to join her nest. Again, there does not seem to be any biological evidence in favor of such a model. Model 2 is directed at NTW's remark that workers can be seen as robots. But this remark indicates the difference in evolutionary dynamics when interactions occur between mother and offspring staying together, as opposed to independent agents coming together [14]. Everyone understands that mutations expressed in queens or workers can lead to different population genetic models. LRQ's finding that eusociality evolves more readily in Model 2 is easily explained: eusocial mothers conscript migrants, whereas solitary mothers force them away; now the migrant pool helps the eusocial mothers. Any conflict can be understood without invoking relatedness: eusocial mothers try to conscript solitary workers, but solitary individuals try to build their own nests. A feature of Model 2 is that eusociality evolves even for zero relatedness. Thus, LRQ actually propose in their Fig 1 that relatedness is neither necessary nor sufficient for the evolution of eusociality.

To summarize: (a) LRQ's first model adds that imaginary (and probably non-existing) presocial species with hypothetical trading of workers or eggs are less likely to evolve eusociality. (b) A more meaningful question, not studied by LRQ, would have been whether eusociality is more likely to evolve if the queen mates once or several times [15]. But again, a fundamental question here is: what is the condition for the rare emergence of eusociality in species with single mating? This question, never addressed by inclusive fitness theory, was studied by NTW.

In Model 3, LRQ abandon migration and compare two different fitness functions for the original model of NTW, which is a useful investigation in principle. LRQ make great efforts to minimize the fitness advantage that is needed for eusociality to emerge. They overlook, however, that the formula for the minimum threshold is already provided by NTW (see SI eq. 55). For the minimum threshold, the maximum advantage has to arise for the smallest colony size (m = 2), with no worker mortality. Then the condition is b − 2d > 2(b_0 − d_0). Here b and d are the reproductive rate and death rate of the queen if she has at least one helper; b_0 and d_0 are the corresponding quantities when she is alone. None of the numerical values obtained by LRQ are below the threshold given above. Hence, in their Model 3, LRQ do not contradict any result of NTW, but only provide a numerical confirmation.

In their attempt to uphold inclusive fitness, LRQ do not offer any inclusive fitness calculation for their first two models, where "relatedness" r varies. We are neither told what the inclusive fitness is for solitary or eusocial individuals, nor what relatedness is (in terms of identity by descent) for any pair of individuals. LRQ only try to calculate a version of Hamilton's rule for r = 1, but this calculation is incorrect: offspring who stay and offspring who leave are treated the same, ignoring differences in reproductive value, and there is no consideration of death terms. NTW's criticism of inclusive fitness is deeper than what is presented by LRQ, and, contrary to what LRQ suggest, the mathematical facts proven by NTW have never been answered or negated by inclusive fitness proponents. LRQ's paper demonstrates how inclusive fitness theorizing becomes an end in itself, which distracts from the biological questions at hand. In contrast, the mathematical theory of evolution is clear and powerful and shows that the concept of inclusive fitness is not needed to understand any phenomenon in evolutionary biology.
2,157.4
2015-04-01T00:00:00.000
[ "Biology", "Philosophy" ]
An Interdisciplinary Approach on the Mediating Character of Technologies for Recognizing Human Activity

In this paper, we introduce a research project investigating the relation of computers and humans in the field of wearable activity recognition. We use an interdisciplinary approach, combining general philosophical assumptions on the mediating character of technology with current computer science design practice. Wearable activity recognition is about computer systems which automatically detect human actions. Of special relevance for our research project are applications using wearable activity recognition for self-tracking and self-reflection, for instance by tracking personal activity data such as sports. We assume that activity recognition provides a new perspective on human actions; this perspective is mediated by the recognition process, which includes the recognition models and algorithms chosen by the designer, and the visualization to the user. We analyze this mediating character with two concepts which are both based on phenomenological thought, namely Peter-Paul Verbeek's theory on human-technology relations and the ideas of embodied interaction. Embedded in both concepts is a direction that leads to the role of technical design in how technology mediates. Following this direction, we discuss two case studies, covering both the prospective use practice of self-tracking and the design practice. The paper ends with prospects towards a better design, that is, how these technologies should be designed to support self-reflection in a valuable and responsible way.

Introduction

Technologies shape the world in which we live. How to conceptualize the role they play in our world is an interesting question from a philosophy of technology perspective. Technologies, especially those based on computing, have changed our daily life intensively in the last decades. Ubiquitous computing is a computer science research field which especially addresses the relations between the user and the computing system in everyday life (e.g., domestic, work, leisure and social networks). Wearable computing (e.g., smartphones or smartwatches) is one of these technological developments, which has changed everyday behavior in all of the mentioned settings, from domestic life to social networks. In this paper, we focus on wearable devices for activity recognition; this is part of current computer science research and concerns devices designed to detect human actions with the purpose of monitoring these actions or improving human-computer interaction. Human actions are thus seen as a measurable entity. We assume that activity recognition applications have the potential to influence or shape the everyday world. Especially interesting are applications using wearable activity recognition as a tool for self-tracking or self-monitoring. In this work, we focus on this application field.
Using wearable activity recognition devices for self-tracking is already common, for example in devices for monitoring sport activities like jogging or fitness exercises [1]. Commercial products (for example, the Nike Fuelband, Fitbit or Jawbone UP, or apps on smartphones as well as smartwatches) are quite common in tracking activities, especially sport activities. People have a growing interest in tracking or logging their everyday experiences [2], which can be seen for example in the lifestyle-oriented quantified self community [3]. A scientific investigation of self-tracking and life-logging with ubiquitous computing devices can originally be found in the work of Li et al., who used the term "personal informatics" for it [4]. There are also projects connected to the field of health behavior change and education, for example self-tracking in order to change unhealthy behavior (smoking, eating the wrong food or not drinking enough) or to increase healthy activities (doing more sports). The so-called persuasive computing field is a research area which is interested in general concepts for applying technical devices to motivate or persuade the user to change his behavior [5]. Most of the applications we mentioned here are specialized and are only of interest to a smaller group of people. However, it can be assumed that the "next step" to more serious and complex applications in mainstream settings will come soon [6].

Regarding our interest in the relation between persons and technical systems, wearable devices are especially interesting because of the close relationship they provide. Close means here both that the device has the potential to be near the body and that it is designed for long-term usage with the goal of 24/7 assistance. These factors are also described as "everywhere and anytime". Technically, this is possible because the sensors are small (and can therefore be integrated everywhere) and can be worn for a long time without needing to be recharged. This means they can be used day and night without being noticed as explicitly present. Accordingly, the activity can always be monitored in the background and can either be used as input for the interaction with the wearable device or for a retrospective evaluation of the user's behavior. We think the phrase "in interaction" describes this close relation to the wearable device, in which the user should not "step back" but rather fully integrate the device into daily behavior.

To summarize, the idea that technology is shaping the world in which we live will be analyzed in this paper with a focus on the technology of wearable activity recognition. In the applications of self-tracking, a new perspective on the user's actions is provided, which can be seen as an extended experience of the self. This new perspective is mediated by the technology, the interpretation algorithms and the way in which the detected activity data is visualized to the user.

The interdisciplinary analysis we follow stems on the one side from an ongoing research practice, where we use results from two case studies, and on the other side from philosophical concepts based on general thought about the mediating character of technology. We start with a scheme of a closed loop (as a first systematization), which illustrates the mediating character of wearable activity recognition in the way persons reflect on their actions.
In Interaction with Wearable Activity Recognition: Two Cases

In this section, we provide a first systematization of the mediating character of the technology of wearable activity recognition. We discuss the mediating role especially concerning the influences this technology has on the users' decisions to act. The systematization is realized by a feedback loop scheme, which is shown in Figure 1 (a similar feedback loop was introduced in [7]). Thereby, we give a deeper insight into the design of devices by introducing two cases, both of which are projects we are now or were previously related to. The cases are also used to explain the feedback loop shown in Figure 1.

[Figure 1. Automated feedback of an activity recognition system as a basis for reflection: a loop connecting Person, Action, Recognition System and Reflection.]

Case A (in Table 1) is about using wearable activity recognition to detect smoking behavior, with the further goal of increasing a person's awareness of his smoking habits. A motion sensor (accelerometer) is used which can be worn on the wrist like a watch to detect the typical movement of the arm while smoking [8]. The sensor makes long-term usage possible (low energy consumption) and so enables monitoring for up to two weeks. The goal of such an automatic detection of smoking habits is to combine it with information on the time spent and the number of cigarettes smoked, and to visualize this information for the user. This visualized information can then be used by persons whose smoking habits are tracked to find out more about this behavior, potentially with the result of becoming more aware of it or finding out triggers and, in the end, changing their behavior.

Table 1. The two cases presented based on the structure in Figure 1.

Recording Data
Case A: A wrist-worn device for motion tracking which can measure acceleration in three dimensions (raw sensor data).
Case B: A wrist-worn device for motion tracking which can measure acceleration in three dimensions (raw sensor data) [9].

Automated Classification
Case A: Specific features in the raw sensor data are seen as indicators for the activity of smoking. Characteristic for smoking is the frequent movement of the arm following a certain curved line.
Case B: Specific features in the raw sensor data are seen as indicators for working steps of an experiment in the laboratory. For example, pipetting is characterized by a specific position of the wrist.

Feedback
Case A: Feedback for the user is given by a visualization tool which shows a summary of the time, the total number of cigarettes smoked and the money spent on smoking. The information can be used to raise knowledge about the smoking behavior and reflect on it.
Case B: A protocol of the experiment can be exported which gives insights into the sequence of execution and the durations of certain working steps. The results can then be evaluated by comparing different executions and outputs of the experiment.
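To make the "Automated Classification" row of Table 1 concrete, a minimal sketch of the usual learning-based pipeline is given below: raw accelerometer windows are reduced to simple statistical features and fed to a classifier. This is an illustrative reconstruction only; the window length, the feature set and the classifier are our assumptions, not the actual pipeline used in Cases A and B.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

WINDOW = 128  # samples per window (assumed; depends on the sensor's sampling rate)

def windows(acc, labels, step=64):
    """Slice a (T, 3) accelerometer stream into overlapping windows.
    `labels` holds one integer-coded ground-truth activity label per sample."""
    for start in range(0, len(acc) - WINDOW, step):
        seg = acc[start:start + WINDOW]
        # the majority label of the window serves as its ground truth
        lab = np.bincount(labels[start:start + WINDOW]).argmax()
        yield seg, lab

def features(seg):
    """Simple per-axis statistics; real systems use richer feature sets."""
    return np.concatenate([seg.mean(axis=0), seg.std(axis=0),
                           np.abs(np.diff(seg, axis=0)).mean(axis=0)])

def train(acc, labels):
    X, y = zip(*[(features(s), l) for s, l in windows(acc, labels)])
    return RandomForestClassifier(n_estimators=100).fit(np.array(X), np.array(y))

# At runtime, the same windowing/feature code is applied to the live stream,
# and clf.predict(...) yields the detected activity (e.g., "smoking" vs "other").

Note that only activity types that appear, with labels, in the training data can ever be detected at runtime; this is exactly the "stereotype" limitation discussed later in the paper.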
Case B (in Table 1) is about a recognition system supporting scientists who work in a biological laboratory. In this project, possibilities for detecting single working steps in different experimental settings in a biological laboratory, and possibilities to visualize the data for the scientists, are explored. Thereby, the same wrist-worn sensor was used to detect, for example, the activities "using a pipette", "stirring" or "pouring". The structure of the system and the ways of interpretation are equal to the smoking detection case. The automatic detection results in a protocol of the experiment, which makes it possible to compare different executions of the experiment and makes it easier to reflect on possible errors made during the experiment.

Method: The Interdisciplinary Approach

Our approach is interdisciplinary: on the one hand, it belongs to the research of computer science, more specifically to the fields of human-computer interaction (HCI) and ubiquitous computing, and, on the other hand, to philosophy of technology in its direction of understanding and conceptualizing the relations between technology and human beings. Ubiquitous computing focuses on user-centric design and technical systems supporting everyday life. Thereby, it is the challenge of good computer system design (interaction design) to include an understanding of social and individual needs in order to design an efficient and functional system, but also to design technologies which serve the user in a good, valuable way. With respect to the philosophical direction, one interest is to understand how technology in general, and new technologies specifically, shape or mediate a person's perception of the world and the self.

Such an interdisciplinary approach is appropriate and necessary when thinking about computing technologies in everyday environments in a scientific way. Ubiquitous computing research has been aware of these requirements since its beginning; for example, Weiser referred to this in his vision [10]. That philosophical concepts can enrich computer science research is well accepted in this field. Phenomenological thoughts in particular are discussed and widely known in the community [11][12][13][14]. However, this interdisciplinary direction makes up only a small part of ubiquitous computing, compared to the amount of classical work driven by feasibility aspects.

On the other side, the philosophical research which addresses technology is, compared to the long philosophical tradition, quite new. Philosophy of technology includes theoretical approaches based on the question of what technology is (its essence) and on the capacity of artificial intelligence (AI), but empirical approaches as well, focusing on concrete human-technology relations. Especially when following the second line, philosophy of technology requires concrete cases and knowledge of the design practice to specify or even build its concepts.

To conclude: on the one side, concepts based on philosophical theories can help to specify models and classifications in the design process. On the other side, it is interesting for philosophy of technology to include current design practice and applications in its socio-technical concepts.

Background: The Mediating Character of Technology

We intend to show how the shaping or mediating character of wearable activity recognition can be conceptualized, especially the influences on the users' actions.
In this section, we introduce two approaches: the first belongs more to philosophy of technology research, but with a direction towards the practices of design; the second is a quite popular set of concepts located in HCI and ubiquitous computing which are discussed under the term embodied interaction. Starting with the philosophical background, we introduce Peter-Paul Verbeek's theory, which provides a "systemic analysis of the relations between human beings and technological objects, wherein he discusses the connection to contemporary industrial design" [15] (p. 3).

Verbeek's Thoughts on the Relation of Human Beings and Technology

In our approach, we are interested in the mediating character of technology, especially of technical systems of everyday use. Peter-Paul Verbeek provides a theoretical concept of the mediating role of technology, which he discusses under the heading "philosophy of technical artifacts" [15] (p. 9). Verbeek thereby follows the philosophical tradition of phenomenology but enriches it with hermeneutic thoughts and with concepts from actor-network theory. It is important for him to understand the mediating role not in the one-sided way that, he argues, characterizes the classical phenomenological theory (of technology). Those theories focus exclusively on the limiting function of technology, in the sense that technology is "alienating human beings from themselves and reality" [15] (p. 4). Verbeek, in contrast, assumes that there are both possibilities and limits of technology use, and that the task of finding these out should be one topic of a philosophy of technology. He says: "Many forms of technological mediation are possible that transform our access to the world in myriad ways, some of which open up to us new ways of access unavailable to 'naked-eye perception', and some of which narrow this access" [15] (p. 144).

He therefore provides a richer framework which makes it possible to understand the mediating character in two ways. First, the user's perception and interpretation of the real world can be mediated by the use of technology; second, the involvement with the technical device, in the way action and interaction are made possible, is mediated.
Mediated perception and interpretation describe how certain world-views are preferred through the use of, or involvement with, a technical system. Following the involvement direction, it is about how technology affects the way human existence is shaped, that is, "how it can make possible certain kinds of actions and inhibit others" [15] (p. 191). Thus, perception and actions are mediated by technology in a way that makes new perspectives and ways of behavior possible, not only concerning the concrete functions of the technical system, but also cultural and social circumstances. Technology changes perception in scientific research, i.e., how we see the world in a macro-perspective, but also, and more interestingly for our approach, it changes the everyday perception (micro-perspective) of people using technical systems. One famous example of a social impact is the round table as a technology where an authoritarian hierarchical order is reduced, because, in contrast to the rectangular table, it has no head of the table where the most important person sits. Two other examples: the development of the scanning tunneling microscope has changed the world-view in terms of physical understanding, the observing capacity of human beings being mediated in a way that new things become visible (in a new, smaller sphere); and Facebook is reshaping the social relationships of persons (because these relationships are structured and visualized in a certain way). The goal of Verbeek's investigation is not to show that there is a mediating effect, because for him it is clear there always is one; he wants to show how this mediating role can be understood by including all possible levels of impact, from micro to macro and from individual to social or cultural.

Based on his theoretical thought, he suggests that a philosophy of technology approach should address the possibilities and limits of technologies in all of their facets. Additionally, Verbeek focuses on engineering design as a further direction, which he calls the practical value of his approach [15] (p. 204). Thereby, he adapts some thoughts of a concept called "material aesthetics", which refers to a material-oriented design approach. One aspect Verbeek focuses on is how, for example, objects in their material impression acquire meaning for the user. One research project he refers to shows that people find technical objects special for them mostly because they have "memories that clung to them" [15] (p. 223). Therefore, the material aesthetics of a device can be more relevant to the everyday use of technology than the pure function the technology was planned for. Verbeek says: "Technologies are not merely functional products that also have dimensions of style and meaning" [15] (p. 235).

These examples of concrete design do not fit well with the technology and applications we address here, but Verbeek's approach builds a good theoretical background and also contains the questions that should be addressed in philosophy of technology.

Embodied Interaction

From these thoughts, we turn to an interdisciplinary theory referring especially to HCI and ubiquitous computing. We focus on a concept of embodied interaction which was first introduced to computer science research by Paul Dourish [11].
He grounds his concept on thoughts of the phenomenological tradition in philosophy. Embodiment is extensively discussed in philosophy, but we focus on the concepts which have already been applied to the field of HCI and ubiquitous computing. Linking phenomenological thoughts with computer science design has a tradition of its own, going back to the well-known book of Winograd and Flores [16]. In Dourish's concept of embodied interaction, he focuses on the role of how we participate within the world. That means: how (inter)actions in the world are related to how we perceive or create meaning of something, so that action and perception cannot be separated. "The world has meaning in how it is physically organized in relationship to our physical abilities, and in how it reflects a history of social practice" [11] (p. 9). The body is embedded in our experiences within the world, and in terms of the environment(s) that it creates. The mediating character of technology here is, in many aspects, similar to the approach of Verbeek.

After this brief introduction, we will highlight the influence his thoughts had and still have on current design practice (interaction design). One topic for Dourish in applying his concepts is how the context-awareness of technical systems can be implemented. Context is a topic which is widely discussed in the part of computer science concentrating on building everyday devices. It describes how situations (e.g., everyday settings) can be understood properly by an automatic interpretation system. Dourish argues that rationalistic approaches, which try to pin down the world in an ontology of static contexts, should be rejected, and that a dynamic construct of context should be preferred [17]. By a dynamic construct he means that context is dependent on personal preferences and cultural and social meaning, and can change over time. As a consequence for interaction design, this can mean including possibilities for the user to change context labels at runtime. For example, a classic context type is location. Location can be understood in terms of its GPS coordinates, its postal address, or the personal experience a person has with it (a cafe with the best espresso in town). Depending on the application, it can make sense for different reasons to choose one of them. However, this classification can also change over time when the meaning changes (e.g., the person has quit drinking espresso for health reasons).
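As a small illustration of what "changing context labels at runtime" could look like in code, the sketch below keeps the fixed, sensor-derived facets of a location (coordinates, postal address) apart from a user-editable semantic label. The class and field names are our own illustrative choices, not an API from the paper or from any ubiquitous computing framework.

from dataclasses import dataclass, field

@dataclass
class LocationContext:
    # Sensor- or registry-derived facets: stable, not user-editable.
    lat: float
    lon: float
    postal_address: str
    # User-assigned semantic labels: freely relabeled at runtime as meaning changes.
    labels: set[str] = field(default_factory=set)

    def relabel(self, old: str, new: str) -> None:
        """Replace a personal label, e.g. when the place's meaning changes."""
        self.labels.discard(old)
        self.labels.add(new)

cafe = LocationContext(49.01, 8.40, "Some Street 1", {"best espresso in town"})
# Later, after the user quits espresso, the same coordinates acquire a new meaning:
cafe.relabel("best espresso in town", "nice place to read")

The design point is only the separation: the automatic interpretation can keep relying on the stable facets, while the semantic layer stays under the user's control.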
Although Dourish names his approach embodied interaction, he does not go very deep into the phenomenological thoughts on the understanding of the body. Svanaes takes this up and refines the concept of the lived body for embodied interaction, especially by referring to the philosopher Merleau-Ponty [12]. Following him, the body has a dual nature: "on the one hand, we can see it as an object among other objects in the 'external' world. On the other hand, it exists to us as our experiencing/living body". The lived body as a concept argues against the problematic classic perspective of separating the body as an object from the mind as the sphere of subjectivity. Shusterman, who discusses the distinction in a deeper way, says: to the body belong, for example, the bones, the inner organs or the neuronal structure of the brain. The lived body instead is the dynamic experience of the body, which is located in living feeling and perception [18]. Following that, persons are mediated by the way the body is in the world, including its bodily perception (for example, tactile perception) but also, for example, by the fact that the body is vulnerable and mortal. For example, pain is neither a purely bodily phenomenon nor merely explainable as an interpretation by the cognitive system. Additionally, the way someone has experienced a situation, both consciously and unconsciously, with its emotions and feelings, influences further perception. Related to that, we want to mention one direction in this lived body discussion that concerns the role of habits and routines in daily life. Shusterman focuses on the "unreflected spontaneity of acting and feeling" which manifests in habits and routines, which are expressions of the lived body. In contrast to the measurement of pure bodily phenomena (e.g., heart rate, blood pressure), in the technology of wearable activity recognition the routines and habits themselves become part of a reflection. This reflection on the "unreflected spontaneity" is an interesting aspect which should be a topic of further discussion [18].

Critical Conclusion of the Concepts

Both concepts provide a richer understanding of the relation between technology and persons, with a direction towards the role of artifact design. Richer means here, in simple terms, taking more aspects into account than the "obvious" functions of the technical system. How technology shapes the world was the phrase we used in the beginning. This understanding of technology is related to the research that can be found under the term socio-technical systems. Classically, this research is focused on the cultural and social dimension of technology (e.g., the round table example), not only in use but also, for example, on the social dimension of production (e.g., new conditions of mass production) and disposal (e.g., radioactive material). Verbeek does not merely collect different dimensions of effects that technologies have; rather, he introduces a systematic order which is centrally oriented on the distinction between mediated perception and mediated action. We think this distinction has its weaknesses, because actions depend on perception, which determines the decisions and plans for actions. Referring to the concept of the lived body, which brings the two dimensions of perception and action into a closer relation, is one possibility to refine this.
We have focused on theories having an explicit direction towards the design of technical systems, which fits the interest of the interdisciplinary approach. One problem of both theories is that it remains to be seen to which further purpose such an analysis of the mediating character of technology will lead. The direction of engineering design Verbeek provides is quite vague and focuses only on one aspect, the material aesthetics. Additionally, by rejecting the dystopian judgment, which he does by criticizing the classical phenomenological concept of technology, his theory lacks a critical dimension.

However, these concepts, even better in combination, are a good starting point to analyze a new technology which can be predicted to be very influential in the everyday world. Accordingly, the theoretical background can also be used as a basis for a judgmental or ethical investigation. Before we show such a perspective in a final section (towards a better design), we discuss the case studies in two directions and give an idea of how the technology of wearable activity recognition is mediating.

How Is Wearable Activity Recognition Mediating?

Wearable activity recognition provides a new perspective on human actions, which is mediated by the technology. Analyzing this mediating character is the interest of this paper. We therefore follow the theoretical thoughts, especially of Verbeek, in analyzing the possibilities and limits of technologies: he asks which perceptions and actions a technology opens up and which it narrows. Our analysis includes two approaches: first, the mediating character seen from an application perspective, including the two case studies/projects, and second, the mediating character seen from the perspective of the design practice, which is also based on the experience gathered during the project work.

In the Use Practice: Mediated Reflection on Activity Data

Starting with the first, we focus on the applications of self-tracking and the possibilities for using the tracked data for further reflection and behavior change. In the closed loop scheme shown in Figure 1, we refer to this application: the reflection on activities, with the possibility of changing them, closes the loop.

In the named applications, the influences are part of the functions; that means the technology mediates the user's self-perspective in a way that changes behavior. However, there are also possibly unintended ways in which the technology mediates the perception of the user. For example, in Case B the wearable sensor is used for detecting working steps in the laboratory with the purpose of documenting the experiments. One result can be that scientists who use this tool adapt their behavior by reflecting on the documentation delivered by the tool. This behavior change is intended.
When using such a tool, it is clear that only what is detectable becomes part of the documentation. Following that, it is predictable that persons who use this tool in the laboratory will slightly adapt to a certain way of performing an activity, because some performances are more or less successfully detected. These influences are unintended, or at least not part of the function. For example, observations in the biological laboratory have shown that different performances are possible to complete a working step: stirring different fluids in a beaker can be done with a spoon or by moving the beaker and bringing the fluid to move. It can be expected that using such systems for documentation and reflection of results directs users towards a general way of performing tasks, or towards certain types of doing them. So using such systems on the one side facilitates a comparison of activities in the laboratory, but on the other side can also limit the diversity of performances. That may not be a problem in this application, but it should be kept in mind regarding other applications.

In Case A, the technology of wearable activity recognition is applied to detect smoking behavior with the goal of increasing the awareness of the smoking activity. This can help to decrease the amount of smoking and even aid quitting smoking. The technology mediates the perspective on the smoking activity of a person by combining it with additional information about the time of day, the duration and the money spent. When the user reflects on this mediated smoking information, it is possible that the person becomes more conscious of the smoking behavior in the future or can find out triggers which normally lead him to smoke. A simple trigger would be, for example, alcohol-related socializing with friends, which a person can find out when he sees in his data that, depending on the night (especially on Friday or Saturday), the number of cigarettes smoked increases. This perspective on smoking behavior is intended and enables reflection and behavior changes, e.g., being especially attentive when drinking alcohol on weekend nights. However, unintended behavior changes can also result. In some situations, it is imaginable that users perform in such a way that the activity is not detectable. For example, in order not to feel guilty the next day after smoking too much on a night in a bar, users could smoke with the other hand (the one without the wrist sensor) instead.

Our intention was to present some examples which show the complexity of the mediating character of wearable activity recognition. Next, we will focus on investigating the mediating character from a design practice perspective.

In the Design Practice: Getting the Everyday

One of the main technical challenges of wearable activity recognition is to design computing systems which are able to obtain an everyday understanding of human actions. This challenge of how to deal with everyday entities is a general problem in HCI research, and especially in ubiquitous computing, where the focus on everyday applications is intrinsic. The research referring to this question of computer-based detection of everyday entities is related to context-aware computing. It is mostly about the computable understanding of situations in which the technical system is in interaction with users, to make this interaction appropriate to these circumstances.
In this section, we want to show how an everyday understanding of action is included in wearable activity recognition technologies. It is an investigation based on current practice, with reference to the two cases. With focus on the cases, the question reads as follows: how do people usually smoke, and how are single procedures in the laboratory performed, named and understood? For the development of such recognition systems, the learning-based approach is the most common. It is based on the idea that the computing system learns from situations in which the activities are performed, that is, learning from realistic real-world data. In this concept, the distinction between the training phase of a system and the runtime is important. The training influences how the system works at runtime. The training data consist of a raw data stream and the related information on the activities, the so-called ground truth. Situations in which these data are acquired should be as realistic as possible, but they always have an artificial component, because an observational aspect is required to produce the annotations for the data (the ground truth). In Case A, a special cigarette lighter was developed to obtain the annotations, and in Case B the annotation was done by video observation. This training results in an ordinary understanding of doing things, or of different types of doing things, the so-called stereotypes. The stereotypes influence what is detectable and so mediate the user's actions and perception.

Thus, the mediating character of the technology is predetermined mainly by the learning-based algorithms, including the chosen probabilistic models, the classification labels and how the training data are acquired. In particular, the question of which real-world data the systems should be based on is discussed under the phrase "out of the lab into the wild" (of the everyday) [19].

Conclusions

The technology of wearable activity recognition provides a perspective on persons' actions which can result in intended behavioral changes (e.g., smoking cessation) but also in unintended influences, when, for example, persons adapt their way of doing things because the technology is only able to detect certain stereotypical performances. How these stereotypes are constructed in the design practice was shown in the last section. Bringing these dimensions of possible and existing use practices and the design practice together was the methodological orientation of this paper. A deeper philosophical discussion of how technology mediates the relation of a person to the world and the self is grounded in Verbeek's work and the concepts of embodied interaction. This directs one to an understanding of how the perspective on persons' actions is mediated, meaning how persons relate to their actions, which was the central goal of this paper.
Following that, we conclude: actions are an intrinsic part of how we experience the world, what we are aware of and how we see our role in social interaction. When certain activities are brought into view by wearable activity recognition devices, this influences how people see their actions and, following on that, how they experience similar situations in the future. The awareness regarding an activity (e.g., smoking) can change, as can the way certain activities are interpreted. Additionally, the characteristic of wearable technologies of providing a close, everyday and anytime relation intensifies the assumed influences on behavior and perception. What follows from that, especially from an ethical perspective, will be briefly discussed in the last section.

Prospect: With Philosophy of Technology Towards a Better Design

In this section, we show some consequences which can be drawn from the analysis of the mediating character of wearable activity recognition technologies. Towards a better design means to think about a computer science design which is valuable in an ethical and responsible way. Valuable design means that social responsibility, ethics and privacy should be an intrinsic feature of the technology. For that, it is useful to mention the prospect Verbeek gives in his theory: one goal of his analysis is to investigate the possibilities and limits of technologies, i.e., which perceptions and actions can be made possible and which are narrowed. Thus, using wearable activity recognition tools makes it possible to detect certain activities and visualize them for the user. This means some activities are preferably perceived by the persons, and other activities, which are not detectable, become less important. In short, only what is detectable comes into view. In the design practice, for example, Rogers follows concepts of embodiment and suggests an integration of such a wider perspective: "Opportunities are created, interventions installed, and different ways of behaving are encouraged. A key concern is how people react, change, and integrate these into their everyday lives" [19].

Regarding our applications, the question for a better design in a valuable way is how the system should be built to really support self-reflection. This can include the possibility for the user to influence the labeling process and the automatic interpretations. For example, having the possibility to reflect on the raw data itself when required increases the trust in the results an activity recognition system provides. Technologies which try to persuade the user to act in a certain way can help to motivate, e.g., to do sports, but can also be used to manipulate the user into acting in a way which is only useful for the provider, e.g., buying behavior detection with mechanisms that motivate the user to buy more.
Talking about valuable design regarding technologies which detect and store personal data, privacy concerns have to be discussed. One central aspect of this technology is that the raw data often contain a lot more information than is used for the specific function of the recognition system. Following that, not only the interpreted activities but also the raw data, for example the accelerometer data stream, should be subject to strict personal privacy. Regarding existing applications of, for example, sport tracking or eating behavior tracking, the data could potentially be used by health insurances to provide regular users with cheaper offers. This should be discussed with regard to privacy and justice concerns.

Wearable activity recognition is ongoing research which is strongly related to the questions of a philosophy of technology, in its conceptual interest in the role that technology has in our world but also in its critical dimension. We aimed to show in this paper that such an interdisciplinary approach referring to computer science design is promising.
7,865.2
2015-11-27T00:00:00.000
[ "Computer Science" ]
Capacity Issues and Efficiency Drivers in Brazilian Bulk Terminals

This paper presents an efficiency analysis of Brazilian bulk terminals built upon the conjoint use of Data Envelopment Analysis and the bootstrapping technique. Confidence intervals and bias-corrected central estimates were used as cornerstone tools, not only to test for significant differences in efficiency scores and their reciprocals, but also in returns-to-scale indicators provided by different DEA models. The results of the study suggest that most Brazilian bulk terminals present increasing returns to scale, that is, they are too small in size compared to the tasks performed, indicating a capacity shortfall. Results also suggest paths for improving efficiency levels in a scenario of low investments and capacity constraints: privatization and cargo specialization. A final contribution to the literature lies in the development of a simple methodology to assess returns to scale based on bootstrap results.

INTRODUCTION

There is a consensus that ports are a vital link in the trading chain due to their contribution to a nation's international competitiveness in the globalization scenario. In order to support trade-oriented economic development, port authorities around the globe have been under pressure to improve port efficiency (TONGZON, 1989; CHIN; TONGZON, 1998). In Brazil, accelerated economic growth has increased this demand for port services. Between 2006 and 2010, the physical aggregate throughput handled by Brazilian ports, measured in tons/year, grew at an average rate of 10% per year (CEL, 2009). This increasing demand for reliable services has put enormous pressure on the infrastructure of Brazilian ports.

Port operations management in Brazil was, until the mid-1990s, completely regulated and controlled by the federal government. Any investment in port infrastructure could be performed solely by Companhia Docas, a state-owned company. Only in 1993, when Brazilian Federal Law 8630, also known as the "Port Modernization" Law, was enacted, did the path for port privatization, leasing of terminals, installation of local port authorities and labor deregulation start to be paved (CURCINO, 2007). Although investments in capacity expansion were minimal from that period to these days, the comparison of several ports in terms of their overall efficiency has become an essential part of the Brazilian microeconomic reform agenda for sustaining economic growth based on foreign trade (FLEURY; HIJJAR, 2008).

In 2006, when a federal authority linked to the Transportation Ministry was created to allocate investments in the sector, the performance measurement of ports and terminals started to be conducted in a more systematic way. Traditionally, the performance of ports and terminals has been variously evaluated by attempts at calculating and seeking to optimize the operational productivity of cargo handling at the berth and in the terminal area (CULLINANE et al., 2006). In recent years, approaches such as Data Envelopment Analysis (DEA) have been increasingly utilized to analyze the production and performance of ports and terminals.
DEA is a non-parametric model based on linear programming (LP) used to address the problem of calculating relative efficiency for a group of Decision Making Units (DMUs) by using multiple measures of inputs and outputs. Applied studies that have used DEA, however, have typically presented point estimates of inefficiency, with no measure or even discussion of the uncertainty surrounding these estimates (CESARO et al., 2009). To solve these problems, bootstrap techniques have been introduced into DEA analysis (CESARO et al., 2009), allowing the sensitivity of efficiency scores relative to the sampling variation of the frontier to be analyzed and avoiding problems of asymptotic sampling distributions.

Inspired by the current debate in the Brazilian port sector, in which anecdotal evidence suggests a capacity shortfall (AGÊNCIA BRASIL, 2004; DOCTOR, 2003; SALES, 2001), this paper presents an analysis of Brazilian bulk terminals built upon the bootstrapping technique. The basic idea is to use confidence intervals and bias-corrected central estimates as cornerstone tools to assess the efficiency issue in bulk terminals, not only to test for significant differences in efficiency scores and their reciprocals (that is, their distance functions), but also in returns-to-scale indicators provided by different DEA models.

The results of this study are twofold. First, they corroborate that Brazilian bulk terminals are running short of capacity, highlighting the issue of how to achieve productivity gains in the short-middle term in the absence of capacity expansion. Second, they shed some light on this issue, not only demonstrating that public terminals tend to be less efficient than private ones, but also that riverine terminals tend to be more efficient than maritime ones, due to the specialization provided by handling and moving soybeans on from producers. This second aspect reinforces the role of port deregulation/privatization and cargo specialization in the quest for higher efficiency levels in Brazilian terminals.

The remainder of the paper unfolds as follows. In Section 2, previous studies regarding efficiency measurement in Brazilian ports/terminals are presented. Section 3 provides the data to be analyzed as well as additional information on the methodology used, such as DEA, returns-to-scale characterization and bootstrapping in DEA. Section 4 presents the results of the methodology applied to a sample of 53 bulk terminals in Brazil. Conclusions are given in Section 5.

PREVIOUS STUDIES

A growing number of studies have used DEA to benchmark port efficiency. The comprehensive literature review presented in Panayides et al. (2009) indicates that the number of ports/terminals researched in each study ranges from 6 to 104 (mean 28). According to Martín and Román (2001), although DEA obtains a single, dimensionless, overall index of efficiency, its essential differences from parametric approaches, such as Stochastic Frontier Analysis (SFA), are found in the very nature of the analytical approach. While SFA is stochastic and parametric, DEA uses linear programming techniques. Bootstrapping, however, is one of the most attractive solutions to address this major DEA drawback, that is, the absence of statistical properties (ASSAF, 2010). In recent years, as far as the Brazilian case is concerned, only two DEA-based studies have appeared in international peer-reviewed journals. Both addressed the issues of capacity constraints and the impact of contextual variables on efficiency estimates.

Rios and Maçada (2006) point out that, at the time of their paper, no such studies had thus far been conducted in Brazil. The authors analyzed the relative efficiency of 20 container terminals located in Mercosur during 2002, 2003 and 2004 by means of an input-oriented BCC model. Results indicate that 60% of the terminals were managerially efficient in this three-year period, probably reflecting the fact that the Brazilian terminals had reached record rates of cargo traffic, including higher value-added products such as automobiles. According to these authors, container traffic increased 23.1% during the period; in Argentina, the container sector had an increase of almost 17%. No further international peer-reviewed studies on the efficiency of Brazilian ports/terminals were conducted from 2006 to 2010.

More recently, Wanke, Barbastefano and Hijjar (2011) analyzed a mix of 25 major Brazilian container/bulk terminals (based on 2008 data). The authors found that the vast majority of Brazilian terminals presented increasing returns to scale, and that bulk terminals appeared to be proportionally smaller than container terminals. Additionally, terminals controlled by the private sector tended (although not statistically significantly at 0.05) to be more efficient than those controlled by the government. Statistical tests with efficiency levels were also performed against railroad connectivity and labor force qualification, albeit with inconclusive results, despite previous studies such as Turner, Windle and Dressner (2004), Cullinane and Song (2003), and Doctor (2003). On the other hand, Barros, Felício and Fernandes (2012) (...)

As implied in Cooper et al. (2001), the number of DMUs should be at least three times higher than the number of inputs and outputs in order to attain good discriminatory power in the efficiency estimates. The single input collected from each terminal is the aggregate number of loading hours (per year, all berths considered). As regards the outputs, two variables were collected: aggregate throughput per year (in tons) and number of loaded shipments per year. Contextual variables relate to the terminal ownership, whether public (1) or private (0), and to its geographical location, whether riverine (1) or maritime (0). With respect to the choice of the input/output variables used, readers should recall one of the aims of the paper, which is to assess different possibilities for increasing efficiency levels in a scenario of low investments and capacity constraints, considering all physical assets to be fixed in the short-middle term. Put in other words, the idea is to efficiently use all the available shipment capacity in the short term in order to relieve system pressure. Therefore, inputs and outputs were chosen in order to better understand how berth time is being used to achieve higher levels of production, both in terms of loaded shipments and aggregate throughput. Essentially, one should select an orientation according to which quantities (inputs or outputs) the decision-makers have most control over (COELLI, 1996).
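For reference, the standard input-oriented CCR envelopment program is presumably the kind of LP solved here for each DMU; the paper does not spell out its formulation, so the following is the textbook form rather than the authors' own notation:

\[
\begin{aligned}
\theta_o^{*} \;=\; \min_{\theta,\,\lambda}\;\; & \theta \\
\text{s.t.}\;\; & \sum_{j=1}^{n} \lambda_j\, x_{ij} \;\le\; \theta\, x_{io}, \qquad i = 1,\dots,m \ \text{(inputs)},\\
& \sum_{j=1}^{n} \lambda_j\, y_{rj} \;\ge\; y_{ro}, \qquad r = 1,\dots,s \ \text{(outputs)},\\
& \lambda_j \ge 0, \qquad j = 1,\dots,n.
\end{aligned}
\]

The BCC (VRS) model adds the convexity constraint \(\sum_{j=1}^{n} \lambda_j = 1\); dropping convexity altogether yields the free disposal hull (FDH) frontier, in which the intensities are restricted to unit vectors. In this paper's setting, m = 1 (loading hours) and s = 2 (throughput and loaded shipments).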
However, given that LP cannot suffer from such statistical problems as simultaneous equation bias, the choice of an appropriate orientation is not as crucial as it is in the econometric estimation case (COELLI, 1996). Furthermore, the choice of orientation has only minor influences on the scores obtained and their relative ranks (COELLI; PERELMAN, 1999).

Compared with the stochastic parametric frontier approach, DEA imposes neither a specific functional relationship between production outputs and inputs, nor any assumptions on the specific statistical distribution of the error terms (CULLINANE et al., 2006). The efficient frontier is the boundary of a convex polytope created in the space of inputs and outputs, in which each vertex is an efficient DMU (DULÁ; HELGASON, 1996). Another feature of DEA is that the relative weights of the inputs and the outputs do not need to be known a priori; these weights are determined as part of the solution of the linear problem (ZHU, 2003). A further possibility is the application of DEA to reduced models, in order to rank the effect of variables on efficiency scores (WAGNER; SHIMSHAK, 2007).

RETURNS-TO-SCALE CHARACTERIZATION

Scale inefficiency is due to either increasing or decreasing returns to scale (RTS). Although the constraint on the sum of intensities, Σ_{j=1..n} λ_j, actually determines the prevalent RTS type of an efficient frontier (ZHU, 2003), whether CRS or VRS, scale inefficiency at a given DMU can be assessed under both models. As pointed out by Cooper, Seiford and Tone (2007), while the CCR model simultaneously evaluates RTS and technical inefficiency, the BCC model evaluates technical efficiency separately.

As noted by Odeck and Alkadi (2001), the term Σ_{j=1..n} λ_j is also known as the Scale Indicator (SI_o) within the CCR model. So, even though the term CRS is used to characterize the CCR model, this model may be used to determine whether increasing, decreasing or constant RTS prevail at a given DMU, by making the input and output slacks explicit in the LP formulation. For instance, if a DMU's "input saving" efficiency is greater than its "output increasing" efficiency, increasing RTS prevails (ODECK; ALKADI, 2001). With respect to the BCC model, since its efficient frontier is strictly concave, the optimal solution will necessarily designate a given DMU as being in the region of constant, decreasing or increasing RTS.

Although the choice of orientation has only minor influences on the efficiency scores obtained and their relative ranks (COELLI; PERELMAN, 1999), it should be noted that input- and output-oriented models may give different results in their RTS findings (BANKER et al., 2004). Thus the result secured may depend on the orientation used (RAY, 2010). Increasing RTS may result from an input-oriented model, for example, while an output-oriented model may point to a different characterization (COOPER; SEIFORD; TONE, 2007), and the "real life" violation of the convexity assumption may or may not be involved.

BOOTSTRAPPING METHOD

According to Simar and Wilson (2004), none of the theoretical models presented are actually observed, including the efficient frontier (CCR, BCC or FDH) and its respective distance function to each DMU. Thus, all these elements must be estimated. Estimators are necessarily random variables, upon which statistical tests or, at least, confidence intervals (CIs) can be built to derive useful conclusions.
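To make these estimators concrete, here is a minimal sketch of the input-oriented CCR envelopment LP together with the Σλ-based scale indicator, written with scipy. It is an illustrative stand-in for whatever solver the authors used, with our own function names; the RTS reading of SI follows the Odeck and Alkadi logic cited above.

import numpy as np
from scipy.optimize import linprog

def ccr_input(X, Y, o):
    """Input-oriented CCR efficiency of DMU o.
    X: (n, m) array of inputs, Y: (n, s) array of outputs, one row per DMU."""
    n, m = X.shape
    s = Y.shape[1]
    c = np.zeros(n + 1)
    c[0] = 1.0                                  # minimise theta; z = [theta, lambda_1..n]
    A_ub, b_ub = [], []
    for i in range(m):                          # sum_j lambda_j * x_ji <= theta * x_oi
        A_ub.append(np.r_[-X[o, i], X[:, i]])
        b_ub.append(0.0)
    for r in range(s):                          # sum_j lambda_j * y_jr >= y_or
        A_ub.append(np.r_[0.0, -Y[:, r]])
        b_ub.append(-Y[o, r])
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(0, None)] * (n + 1), method="highs")
    theta, lam = res.x[0], res.x[1:]
    si = lam.sum()                              # scale indicator SI_o = sum of intensities
    # Caveat: alternative optima can make this reading ambiguous in practice.
    rts = "increasing" if si < 1 else "decreasing" if si > 1 else "constant"
    return theta, si, rts

On this Σλ reading, finding SI_o < 1 at most terminals is what underlies an increasing-RTS, capacity-shortfall diagnosis of the kind the paper reports. These are still only point estimates, which is what motivates the bootstrap discussed next.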
The importance of bootstrap-based approaches, such as those presented in Simar and Wilson (2004) and Wilson (2008), for estimation of the efficiency frontier should therefore be put into perspective. The discussions of RTS in different DEA models have been confined to "qualitative" characterizations, in the form of identifying whether they are increasing, decreasing or constant (BANKER et al., 2004; COOPER; SEIFORD; TONE, 2007). These bootstrap approaches, however, which are also useful in dealing with the asymptotic distribution of DEA/FDH estimators, can be used to implement statistical tests of constant versus varying returns to scale, of convexity, among other things (WILSON, 2009). For example, Daraio and Simar (2007) developed several conditional measures of efficiency, which also provide indicators for the type of RTS. The bootstrap methodology used in this study is detailed next.

The method used in this study departs from the one developed by Simar and Wilson (2004), also presented in Bogetoft and Otto (2010), which adapted the bootstrap methodology to the case of DEA/FDH efficiency estimators and uses a Gaussian kernel density function for random data generation. The procedure is a seven-step algorithm that generates pseudo datasets of inputs and outputs, from which additional pseudo estimates can be computed. In order to evaluate the adequacy of the convexity assumption imposed by DEA models, and to characterize the prevalent RTS within the sample of Brazilian bulk terminals, this methodological framework was applied. More precisely, 95% CIs were determined for the sets of estimators under the different frontiers.

INITIAL ESTIMATES

The efficiency rankings were calculated using DEA/FDH input-oriented models. In other words, the capacity of the bulk terminal is too small relative to the tasks that it performs.

BOOTSTRAPPED EFFICIENCY SCORES AND CONVEXITY ASSUMPTION

The bootstrapped CCR and BCC efficiency scores, as well as their respective 95% CIs, are presented in Figures 2 and 3 for each DMU. The procedures for computing these estimates, based upon 1,000 bootstrap replications for each efficient frontier, followed the discussions detailed in Simar and Wilson (2004) and Curi, Gitto and Mancuso (2011). Readers can easily note that public terminals tend to be less efficient than private ones (median of CCR bias-corrected estimates: 0.05 against 0.11; median of BCC bias-corrected estimates: 0.16 against 0.17), thus corroborating previous studies. The opposite is true for riverine terminals (median of CCR bias-corrected estimates: 0.34 against 0.08; median of BCC bias-corrected estimates: 0.40 against 0.16). Most of them are specialized in handling and moving soybeans on from producers, located in middle-eastern inland states, to the closest road/railway in order to reach the major export terminals, located at the ports of Santos and Paranaguá. The asymptotic nature of the CIs should also be noted, as their lower and upper bounds are not symmetrical around the central estimate.
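The mechanics of bootstrapping a DEA score can be sketched as follows, reusing the ccr_input function above. Note that this sketch uses naive resampling for brevity, which is known to be inconsistent for DEA frontiers; the paper relies on Simar and Wilson's smoothed (Gaussian-kernel) variant, so the code below only illustrates where the bias correction and the percentile CIs come from.

def bootstrap_ccr(X, Y, o, B=1000, seed=0):
    """Bias-corrected efficiency and a percentile 95% CI for DMU o.
    Naive resampling stand-in for the smoothed bootstrap used in the paper."""
    rng = np.random.default_rng(seed)
    theta_hat = ccr_input(X, Y, o)[0]           # original point estimate
    reps = []
    for _ in range(B):
        idx = rng.integers(0, len(X), len(X))   # resample the reference set
        Xb = np.vstack([X[idx], X[o][None, :]]) # keep DMU o itself scorable
        Yb = np.vstack([Y[idx], Y[o][None, :]])
        reps.append(ccr_input(Xb, Yb, len(Xb) - 1)[0])
    reps = np.asarray(reps)
    bias = reps.mean() - theta_hat              # bootstrap estimate of the bias
    lo, hi = np.percentile(reps, [2.5, 97.5])   # asymmetric, as noted in the text
    return theta_hat - bias, (lo, hi)

The asymmetry of (lo, hi) around the corrected estimate is exactly the behavior remarked on above, and taking reciprocals of these bounds gives CIs for the input distance functions with the order of the bounds reversed, as discussed next.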
With respect to the convexity assumption, the upper bounds of the 95% CIs for the FDH and BCC distance functions are given in Figure 4. The upper bounds for the CCR distance function were omitted in order to improve readability. It should be noted that taking reciprocals of the CI estimates, for the case of analyzing input distance functions instead of efficiency scores, requires reversing the order of the bounds; that is, the reciprocal of the upper bound for the distance function measure gives the lower bound for the efficiency score measure, and vice-versa (WILSON, 2009).

The convexity assumption is not statistically supported in seven DMUs when comparing, against each other, the upper bounds of the 95% CIs for the FDH, BCC, and CCR distance functions. More precisely, the convexity assumption does not hold, at 5% significance, in DMUs 6, 10, 19, 24, 28, 38, and 47. The "statistical rejection" of the convexity assumption appears to be related to a heterogeneous group of DMUs in terms of their efficiency scores, regardless of their size and contextual variables (cf. Figure 5). This group encompasses the two previously mentioned cases (6 and 24), where the original RTS characterizations were found to be discrepant. Further analyses to deal with both discrepant RTS characterizations are discussed in the next section.

CONCLUSIONS

Inspired by the current debate in the Brazilian port sector, in which anecdotal evidence suggests a capacity shortfall (AGÊNCIA BRASIL, 2004; DOCTOR, 2003; SALES, 2001), this paper presented an analysis of Brazilian bulk terminals built upon the bootstrapping technique. The basic idea of the study was to use confidence intervals and bias-corrected central estimates as cornerstone tools, not only to test for significant differences in efficiency scores and their reciprocals (that is, their distance functions), but also in the returns-to-scale indicators provided by different DEA models.

The results of this study suggest that most Brazilian bulk terminals are running short of capacity. According to Odeck and Alkadi (2001) and Ross and Droge (2004), a DMU may be scale inefficient if it experiences decreasing returns to scale by being too large in size, or if it fails to take full advantage of increasing returns to scale by being too small. Therefore, it can be suggested that the capacity of the bulk terminals in Brazil is too small relative to the tasks that they perform.

It can also be noted that public terminals tend to be less efficient than private ones, corroborating previous studies. As for riverine terminals, the results suggest they tend to be more efficient than maritime ones. Most of them are specialized in handling and moving soybeans from producers, located at mid-western inland states, to the closest road/railway in order to reach the major export terminals, located at the ports of Santos and Paranaguá.
Put in a broader perspective, the purposes of the analysis conducted within Brazilian bulk terminals are threefold. First, it was useful to corroborate empirical evidence regarding the fact that these terminals are running short of capacity, i.e., that increasing returns to scale prevail within this industry. Although Wanke, Barbastefano, and Hijjar (2011) reached the same conclusions, it is important to pinpoint the major methodological differences between both papers: here the analysis focused solely on bulk terminals, with input/output variables deliberately selected to assess different possibilities for increasing efficiency levels in a starting-point scenario of low investments, capacity constraints, and fixed assets in the short term. Also, the convexity assumption, very common in DEA studies, was not taken for granted here: bootstrapping was used to test this assumption among FDH, BCC, and CCR distance functions, allowing also the returns-to-scale characterization to be probabilistically evaluated under both types of frontiers.

Second, as mentioned before, the analysis revealed paths for increasing terminal efficiency within the ambit of this capacity-constrained scenario: simple non-parametric tests revealed substantial differences between public and private terminals and between riverine (more specialized) and maritime (less specialized) terminals in moving grains from producers to international markets. This suggests action plans for public authorities and private decision makers in order to better deal with the capacity shortage.

Third, it served as a basis to illustrate some operational thresholds that may emerge during similar analyses. One of them is related to the impact of rejecting the convexity assumption on finding that only one returns-to-scale characterization (either CCR or BCC) is statistically significant at a given DMU. The other relates to the additional analysis that should be performed when both returns-to-scale characterizations significantly diverge or neither of them is found to be significant at a given DMU. In such cases, respectively, the minimal confidence interval level upon which only one RTS classification remains significant, or the maximal CI level below which the first RTS classification becomes significant, should be determined. For both cases, a simple methodology presented as a flowchart was developed, constituting another contribution of the paper.

Future research should still address the capacity issue in Brazilian ports, possibly adopting a longitudinal perspective and involving tests for the most influential variables, in order to provide a full map of the efficiency drivers in this environment. Possible approaches in DEA or even SFA could also deal with the issue of efficiency decomposition in both financial and operational terms, taking into account handling costs, waiting time in queues, and service levels, issues that are critical for the competitiveness of Brazilian ports.
Figure 1 - Map of the Brazilian Bulk Terminals Researched. Source: The authors.

3.2 DATA ENVELOPMENT ANALYSIS

DEA is a non-parametric model first introduced by Charnes, Cooper and Rhodes (1978). It is based on linear programming (LP) and is used to address the problem of calculating relative efficiency for a group of Decision Making Units (DMUs) by using multiple measures of inputs and outputs. Given a set of DMUs, inputs and outputs, DEA determines for each DMU a measure of efficiency obtained as a ratio of weighted outputs to weighted inputs. Thus, the BCC model differs from the CCR model only in the adjunction of the convexity constraint $\sum_{j=1}^{n}\lambda_j = 1$. Regardless of the model orientation, whether input- or output-oriented, the two measures provide the same scores under constant returns to scale (CRS), but are unequal when varying returns to scale (VRS) are assumed for the efficient frontier (COOPER; SEIFORD; ZHU).

The bootstrap replications rest on iid (independent and identically distributed) draws from the probability density function used to define the respective kernel function. Once the B pseudo datasets of inputs and outputs for the n DMUs have been obtained, it is straightforward to estimate CIs for a given $DMU_o$, not only for the actual distance functions, but also for the efficiency scores and the RTS indicators. Table 4 summarizes how rejection of the convexity assumption impacts the RTS characterization under the same input orientation. These analyses were implemented in Maple 12, with 1,000 bootstrap replications, generated upon Gaussian kernel density functions, for each efficient frontier. Their results are discussed next.

Figure 7 - Simple Methodology to Assess RTS Characterization Based upon CIs. Source: The authors.

All the data in Table 1 relate to 2011, and their descriptive statistics are presented in Table 2.

Table 2 - Summary Statistics for the Sample (Year: 2011). Input measured: (I) loading hours (per year). Outputs measured: (O) loaded shipments (per year) and (O) aggregate throughput (tons/yr), besides contextual variables. Source: The authors.

Table 3 summarizes the envelopment models with respect to the orientations and frontier types (ZHU, 2003), where $DMU_o$ represents one of the n DMUs under evaluation, and $x_{io}$ and $y_{ro}$ are its ith input and rth output, respectively.

Table 3 - DEA Envelopment Models.

Although the weights are not crucial a priori (JENKINS; ANDERSON, 2003), DEA results heavily rely on the set of inputs and outputs used. The more variables (inputs and outputs) in the DEA, the less discerning the analysis is (JENKINS; ANDERSON, 2003). This fact demands higher concern for the variable selection process, given the large number of initial potential variables to be considered.
The BCC model (see Table 5) features variable returns to scale, which are more flexible and reflect managerial efficiency apart from purely technical limits (ROSS; DROGE, 2004). The vast majority (51 out of 53) of the Brazilian bulk terminals analyzed seems to be unambiguously experiencing IRS under both RTS characterizations. No terminal appears to be unambiguously experiencing DRS. Discrepancies between RTS characterizations were found in only two cases (DMU 6, the large iron ore terminal of Vale at Tubarão Port, and DMU 24, a relevant public riverine terminal for soybean transportation from producers located at the inland states of Mato Grosso and Rondônia); both are scale efficient, that is, located at the MPSS. According to Odeck and Alkadi (2001) and Ross and Droge (2004), a DMU may be scale inefficient if it experiences decreasing returns to scale by being too large in size, or if it fails to take full advantage of increasing returns to scale by being too small. So far, these results suggest that most Brazilian bulk terminals are running short of capacity.

The CCR model yields lower average efficiency estimates than the BCC model, with respective average values of 0.15 and 0.27. Also, the CCR model identifies more inefficient terminals (51 vs. 50) than the BCC model does. This result is not surprising, as the CCR model fits a linear production technology (WILSON, 2009).

The lower and upper bounds of the 95% CIs for the $SI_o$ and $u_o$ RTS indicators, as well as their respective bias-corrected central estimates, are given in Figure 6. The methodology used to analyze these results is synthesized in Figure 7. Within the CCR case, a given RTS characterization is considered to be statistically significant only if the lower and upper bounds of the confidence interval for the $SI_o$ indicator are both greater than 1 (DRS) or both smaller than 1 (IRS). On the other hand, when the BCC case is considered, a characterization is significant only if the lower and upper bounds of the confidence interval for the $u_o$ indicator are both greater than 0 (DRS) or both smaller than 0 (IRS). Both bounds equal to 1 or 0, respectively, strongly suggest CRS at a given significance level. As argued by Bogetoft and Otto (2010), since the connection between a given RTS characterization and its estimates is uncertain or stochastic, the hypothesis of a given characterization should be rejected if at least one of the estimated scale indicators falls outside such critical values.
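A minimal sketch of this decision rule, with illustrative CI bounds (the critical value is 1 for the CCR scale indicator $SI_o$ and 0 for the BCC indicator $u_o$):

```python
# Sketch of the decision rule described above: both CI bounds above the
# critical value -> DRS, both below -> IRS, otherwise CRS cannot be rejected.
def classify_rts(ci_lower, ci_upper, critical=1.0):
    if ci_lower > critical and ci_upper > critical:
        return "DRS"
    if ci_lower < critical and ci_upper < critical:
        return "IRS"
    return "CRS not rejected"

print(classify_rts(0.62, 0.91))   # -> IRS (both bounds below 1)
print(classify_rts(0.95, 1.08))   # -> CRS not rejected (interval spans 1)
```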
Second Hankel Determinant for a Class of Analytic Functions Defined by Fractional Derivative

Recommended by Vladimir Mityushev

By making use of the fractional differential operator $\Omega_z^\lambda$ due to Owa and Srivastava, a class of analytic functions $R_\lambda(\alpha, \rho)$ ($0 \le \rho \le 1$, $0 \le \lambda < 1$, $|\alpha| < \pi/2$) is introduced. The sharp bound for the nonlinear functional $|a_2 a_4 - a_3^2|$ is found. Several basic properties such as inclusion, subordination, integral transform, and Hadamard product are also studied.

Introduction

Let $\mathcal{A}$ denote the class of functions analytic in the open unit disc $\mathcal{U}$, and let $\mathcal{A}_0$ be the class of functions $f$ in $\mathcal{A}$ given by the normalized power series $f(z) = z + \sum_{k=2}^{\infty} a_k z^k$ ($z \in \mathcal{U}$). Also let $\mathcal{S}$, $\mathcal{S}^*(\beta)$, $\mathcal{CV}(\beta)$, and $\mathcal{K}$ denote, respectively, the subclasses of $\mathcal{A}_0$ consisting of functions which are univalent, starlike of order $\beta$, convex of order $\beta$ (cf. [1]), and close-to-convex (cf. [2]) in $\mathcal{U}$. In particular, $\mathcal{S}^*(0) = \mathcal{S}^*$ and $\mathcal{CV}(0) = \mathcal{CV}$ are the familiar classes of starlike and convex functions in $\mathcal{U}$ (cf. [2]).

Given $f$ and $g$ in $\mathcal{A}$, the function $f$ is said to be subordinate to $g$ in $\mathcal{U}$ if there exists a function $\omega \in \mathcal{A}$ satisfying the conditions of the Schwarz Lemma such that $f(z) = g(\omega(z))$, $z \in \mathcal{U}$. We denote the subordination by $f(z) \prec g(z)$ ($z \in \mathcal{U}$) or $f \prec g$ in $\mathcal{U}$. It is well known [2] that if $g$ is univalent in $\mathcal{U}$, then $f \prec g$ in $\mathcal{U}$ is equivalent to $f(0) = g(0)$ and $f(\mathcal{U}) \subset g(\mathcal{U})$.

For the functions $f$ and $g$ given by the power series $f(z) = \sum_{k=0}^{\infty} a_k z^k$ and $g(z) = \sum_{k=0}^{\infty} b_k z^k$, their Hadamard product (or convolution), denoted by $f * g$, is defined by $(f * g)(z) = \sum_{k=0}^{\infty} a_k b_k z^k$.

By making use of the Hadamard product, Carlson and Shaffer [3] defined the linear operator $L(a, c): \mathcal{A}_0 \to \mathcal{A}_0$ by $L(a, c)f(z) = \phi(a, c; z) * f(z)$, where $\phi(a, c; z) = \sum_{k=0}^{\infty} \frac{(a)_k}{(c)_k} z^{k+1}$ and $(\lambda)_k$ is the Pochhammer symbol, or shifted factorial, defined in terms of the gamma function by $(\lambda)_k = \Gamma(\lambda + k)/\Gamma(\lambda)$.

It can be readily verified that $L(a, a)$ ($a \notin \mathbb{Z}_0^-$) is the identity operator, that the operators $L(a, b)$ and $L(c, d)$ commute for $b, d \notin \mathbb{Z}_0^-$, and that the transitive property holds. Each of the following definitions will also be required in our present investigation.

Definition 1.1 (cf. [4, 5], see also [6]). Let the function $f$ be analytic in a simply connected region of the $z$-plane containing the origin. The fractional derivative of $f$ of order $\lambda$ ($0 \le \lambda < 1$) is defined by
$D_z^\lambda f(z) = \frac{1}{\Gamma(1 - \lambda)} \frac{d}{dz} \int_0^z \frac{f(\zeta)}{(z - \zeta)^\lambda}\, d\zeta$,
where the multiplicity of $(z - \zeta)^{-\lambda}$ is removed by requiring $\log(z - \zeta)$ to be real when $z - \zeta > 0$.

Using Definition 1.1 and its known extensions involving fractional derivatives and fractional integrals, Owa and Srivastava [5] introduced the fractional differintegral operator $\Omega_z^\lambda: \mathcal{A}_0 \to \mathcal{A}_0$ defined by $\Omega_z^\lambda f(z) = \Gamma(2 - \lambda)\, z^\lambda D_z^\lambda f(z)$ ($0 \le \lambda < 1$).

Definition 1.2 (cf. [7]). For the function $f$ given by (1.2) and $q \in \mathbb{N} := \{1, 2, 3, \ldots\}$, the $q$th Hankel determinant of $f$ is defined by $H_q(n) = \det\big(a_{n+i+j-2}\big)_{i,j=1}^{q}$ ($a_1 = 1$, $n \ge 1$).

We now introduce the following class of functions. Let $\mathcal{P}$ be the family of functions $p \in \mathcal{A}$ satisfying $p(0) = 1$ and $\Re\, p(z) > 0$ ($z \in \mathcal{U}$). It follows from (1.15) that $f \in R_\lambda(\alpha, \rho)$ admits a representation in terms of a function $p \in \mathcal{P}$, where $\alpha$ is real and $|\alpha| < \pi/2$. We note that the class $R_\lambda(\rho)$, corresponding to $\alpha = 0$, has been studied in [8].

It is well known (cf. [2]) that for $f \in \mathcal{S}$ given by (1.2), the sharp inequality $|a_3 - a_2^2| \le 1$ holds. This corresponds to the Hankel determinant with $q = 2$ and $n = 1$. For a given family $\mathcal{F}$ of functions in $\mathcal{A}_0$, the more general problem of finding sharp estimates for $|\mu a_2^2 - a_3|$ ($\mu \in \mathbb{R}$ or $\mu \in \mathbb{C}$) is popularly known as the Fekete-Szegő problem for $\mathcal{F}$. The Fekete-Szegő problem for the families $\mathcal{S}$, $\mathcal{S}^*$, $\mathcal{CV}$, $\mathcal{K}$ has been completely solved by many authors including [9-12].
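Written out for $q = 2$ and $n = 2$, the definition above gives the second Hankel determinant studied in the present paper (a direct expansion, shown here for clarity):

```latex
H_2(2) = \begin{vmatrix} a_2 & a_3 \\ a_3 & a_4 \end{vmatrix} = a_2 a_4 - a_3^2 .
```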
In the present paper, we consider the Hankel determinant for $q = 2$ and $n = 2$, and we find the sharp bound for the functional $|a_2 a_4 - a_3^2|$ for $f \in R_\lambda(\alpha, \rho)$. We also obtain some basic properties of the class $R_\lambda(\alpha, \rho)$. Our investigation includes a recent result of Janteng et al. [13]. We also generalize some results of Ling and Ding [8].

Preliminaries

To establish our results, we recall the following.

Lemma 2.1 (see [2]). Let the function $p \in \mathcal{P}$ be given by the series $p(z) = 1 + c_1 z + c_2 z^2 + \cdots$ (2.1). Then the sharp estimate $|c_n| \le 2$ ($n = 1, 2, 3, \ldots$) holds.

Lemma 2.2 (cf. [14, page 254], see also [15]). Let the function $p \in \mathcal{P}$ be given by the power series (2.1). Then $2c_2 = c_1^2 + x(4 - c_1^2)$ for some $x$, $|x| \le 1$, and $4c_3 = c_1^3 + 2(4 - c_1^2)c_1 x - c_1(4 - c_1^2)x^2 + 2(4 - c_1^2)(1 - |x|^2)z$ for some $z$, $|z| \le 1$.

Main results

We prove the following. Therefore, (3.4) yields the required representation of $a_2 a_4 - a_3^2$. Since the functions $p(z)$ and $p(e^{i\theta} z)$, $\theta \in \mathbb{R}$, are members of the class $\mathcal{P}$ simultaneously, we assume without loss of generality that $c_1 > 0$. For convenience of notation, we take $c_1 = c$, $c \in [0, 2]$. Using (2.3) along with (2.4), we get an expression which, after an application of the triangle inequality and replacement of $|x|$ by $\mu$, gives $F(c, \mu)$, say.

The choice of $\alpha = 0$ yields what follows.

Corollary 3.2. Let the function $f$ given by (1.2) be a member of the class $R_\lambda(\rho)$. Then the sharp bound (3.14) holds, with equality for the function given in (3.20).

Hence $f(z) \in R_\mu(\alpha, \rho)$, and the proof of Theorem 3.4 is complete.

Proof. Since the Hadamard product is associative and commutative, we obtain the required rearrangement. Therefore, applying Lemma 2.5, we get $\Re\!\left(e^{i\alpha}\, \frac{\Omega_z^\lambda (f * g)(z)}{z}\right) > \rho \cos\alpha$.

Then, the function $If$ defined by the integral transform has the stated property.

Proof. The integral transform $If$ can be written in terms of the Carlson-Shaffer operator.
YOUPI: Your powerful and intelligent tool for segmenting cells from imaging mass cytometry data

The recent emergence of imaging mass cytometry technology has led to the generation of an increasing amount of high-dimensional data and, with it, the need for suitable, performant bioinformatics tools dedicated to specific multiparametric studies. The first and most important step in treating the acquired images is the ability to perform highly efficient cell segmentation for subsequent analyses. In this context, we developed YOUPI (Your Powerful and Intelligent tool) software. It combines advanced segmentation techniques based on deep learning algorithms with a friendly graphical user interface for non-bioinformatics users. In this article, we present the segmentation algorithm developed for YOUPI. We have set a benchmark with mathematics-based segmentation approaches to estimate its robustness in segmenting different tissue biopsies.

Introduction

Immunohistochemistry and immunofluorescence are currently the most commonly used approaches for analyzing tissue biopsies. These techniques enable the use of four to six fluorescence-conjugated antibodies for detecting markers expressed in the same tissue, leading to a limited number of multiple staining combinations due to the low number of revelation channels that can be used together. Moreover, the spectral properties of fluorescence-associated antibodies can overlap. This overlap complicates the use of complex fluorescence-conjugated antibody combinations for later analysis in terms of diagnosis, prognosis, or treatment elaboration, thus restricting panels to a small number of fluorochromes with non-overlapping signals (1). However, these challenges have recently been overcome by evolving approaches, such as the recent fluorescence-associated approach CO-Detection by indEXing (CODEX) (2). Additionally, new approaches based on the use of metal-tagged antibodies have resulted in the development of technologies such as Multiplexed Ion Beam Imaging Technology (MIBI) (3) and Imaging Mass Cytometry (IMC) (4). They ensure the detection of cellular marker expressions that do not interfere with each other. These techniques make it possible to evaluate up to 40 markers on each cell simultaneously (5) while preserving high-resolution spatial information (6). The cell segmentation process extracts image information from the generated data, allowing each detected marker and its spatial coordinates to be associated with each cell (7). This step is essential for subsequent good-quality downstream analysis, such as in-depth immunophenotyping, which provides a powerful toolkit for understanding physiological processes and diagnostic methods.

Segmentation consists of annotating the image to assign specific pixels to objects and thus gather pixels according to already known criteria. When applied to biological data, segmentation is critical for working at a single-cell level. Cell segmentation remains a complex exercise, primarily because of the cells' irregular shapes, heterogeneous density, and unevenly distributed membrane marking (8). A few attempts have been made to handle the task with simple mathematical approaches, such as CellProfiler, which was developed using pixel intensity thresholds, the watershed method, or the propagation algorithm (9).
Pipelines based on the combination of the Ilastik (10) and CellProfiler (11) software and, more recently, the QuPath software (12) are some commonly used tools in research laboratories for the cell segmentation of IMC data. However, these solutions are time-consuming, false-positive detections are still frequently observed, and they are not available as easy-to-use kits for non-computer users. The need for robust, accurate segmentation techniques with quick, easy access remains challenging. Meanwhile, artificial intelligence is beginning to play an important role in scientific research, mainly through machine learning approaches. Techniques such as clustering, decision trees, and deep neural networks are already being used to segment magnetic resonance imaging (MRI) data (13). Machine learning is used to analyze and learn common characteristics from available data. Although cell segmentation remains a challenge even with machine learning, it is useful when dealing with the heterogeneity of cell shapes from different tissues (14). For decades, machine learning has been used for image analysis in the biomedical field for classification tasks (15). In our case, segmentation is a particular type of feature extraction. Due to recent advances in machine learning, we can now choose from a variety of network architectures, such as GAN (16), TRANSFORMER (17), and U-Net (18), according to the task required. If sufficient and varied data are provided, this type of network can capture the heterogeneity of cell shapes.

Here, we present YOUPI (an acronym for YOUr Powerful and Intelligent tool), an innovative tool for cell segmentation in tissue whose images are generated by an IMC. YOUPI works with a U-shaped neural network (19-21). Better known as "U-Net" (18), this type of network is adapted to answer the question of cell segmentation with biological data and is attractive since it requires only a small amount of training data. Therefore, the U-Net appears suitable for analyzing rare, precious tissue samples. We developed YOUPI to provide a tool with a friendly graphical user interface for obtaining cell segmentation masks.

Antibody conjugation

Carrier-free antibodies were conjugated to metal tags using the MaxPar® labeling kit (Fluidigm) following the manufacturer's instructions. Antibodies were stored at 500 µg/ml in a stabilizing solution (Candor Biosciences) at 4°C.

Tissue staining and IMC image acquisition

All samples issued from different patients were included in a registered autoimmune disease or tumor tissue collection, and the present study was conducted following national and institutional guidelines in compliance with the Helsinki Declaration and after approval by our institutional review board. Formalin-fixed, paraffin-embedded (FFPE) sections of 4 µm thickness from the salivary glands of patients with Sjögren's syndrome and from intestinal cancer, small cell lung cancer, and non-small cell lung cancer samples were cut and placed onto glass slides. Sections were de-paraffinized with xylene and carried through sequential rehydration from 100% ethanol to 70% ethanol before being transferred to a Tris buffer solution (TBS). Heat-induced antigen retrieval was performed in a water bath at 95°C for 30 min in Tris/EDTA buffer (10 mM Tris, 1 mM EDTA, pH 9). Slides were cooled to room temperature (RT) and subsequently blocked using phosphate-buffered saline (PBS) with 3% BSA for 30 min at RT. Each slide was incubated with 100 µl of the metal-tagged antibody cocktail (Table 1) overnight at 4°C.
Then, the slides were washed three times with PBS and labeled with a 1:500 dilution of Intercalator-Ir (Fluidigm) in TBS for 2 min at RT. Slides were briefly washed with H2O and air dried before IMC acquisition. Data were acquired on a Hyperion Imaging System™ coupled to a Helios Mass Cytometer (Fluidigm) at a laser frequency of 200 Hz and a laser power of 3 dB. For each recorded region of interest (ROI), stacks of 16-bit single-channel TIFF files were exported from MCD binary files using MCD™ Viewer 1.0 software (Fluidigm). To prepare for training the neural network, cell-based morphological segmentation was carried out using supervised pixel classification with the Ilastik toolkit (22) to identify nuclei, membranes, and backgrounds. CellProfiler software (11) was used to segment the resulting probability maps. Inputs of 16-bit TIFF images with their corresponding segmentation masks were uploaded to the histoCAT analysis toolbox (23) to open a data analysis session. Dimensionality reduction and unsupervised FLOWSOM clustering for 16-bit single images were performed using Cytobank on FCS files.

Workstation used for training

The model was trained for about 30 minutes on a workstation with 125 GB of RAM and an Intel Core i9-10920X CPU.

IMC training dataset pre-processing

The dataset used to train the neural network consisted of 81 randomly selected patches, including 30 patches from the salivary glands of a patient with Sjögren's syndrome, 20 patches from an intestine, five patches from a small cell lung cancer, and 26 patches from a non-small cell lung cancer. The heterogeneity of the tissues was expected to improve the capacity for detecting cells of various shapes. The trained model was evaluated on 90 new patches from salivary glands. Cell-based morphological segmentation was conducted using supervised pixel classification by Ilastik (22) to identify nuclei, membranes, and backgrounds, followed by CellProfiler analysis (11) to segment the resulting probability maps. Masks obtained from this segmentation were manually corrected, based on the co-detection of the nuclei signals and membrane signals, and used as the ground truth for the network training.

Training and characteristics of the neural network

YOUPI is based on the U-Net architecture. It consists of convolutional neural networks (CNN) arranged to perform semantic segmentation and contains two parts. The first, contracting one has a typical CNN architecture. Each block of this path consists of two 3×3 convolution layers in a row, followed by a rectified linear unit (ReLU) and a 2×2 max-pooling operation. This procedure is performed four times. The second, expansive path involves an upsampling of the feature map, followed by 2×2 convolutions at each stage. To enable precise localization, the feature map from the corresponding layer in the contracting path is cropped and concatenated onto the upsampled feature map. Two successive 3×3 convolutions follow this step, each ending on a ReLU. A 1×1 convolution is used for the final layer to reduce the feature map to the desired number of output channels (18). The output is in the form of an image whose pixels have values between 0 and 1. To obtain a binary image, all pixels below 0.5 are set to 0 (black), and all pixels of 0.5 or greater are set to 1 (white). The white part represents the cell, and the black part represents the background.
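A minimal sketch of this layout in Keras/TensorFlow (the packages used for training, as noted in the next section) may help. The depth, filter counts, and two-channel 128×128 input are illustrative assumptions rather than the exact YOUPI configuration, and "same" padding is used here instead of cropping for simplicity.

```python
# Minimal U-Net sketch following the description above; hyperparameters are
# illustrative, not the exact YOUPI configuration.
import tensorflow as tf
from tensorflow.keras import layers, Model

def conv_block(x, filters):
    # two 3x3 convolutions in a row, each followed by a ReLU
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return x

def build_unet(input_shape=(128, 128, 2), base_filters=64, depth=4):
    inputs = layers.Input(shape=input_shape)
    skips, x = [], inputs
    for d in range(depth):                       # contracting path
        x = conv_block(x, base_filters * 2 ** d)
        skips.append(x)
        x = layers.MaxPooling2D(2)(x)            # 2x2 max pooling
    x = conv_block(x, base_filters * 2 ** depth)
    for d in reversed(range(depth)):             # expansive path
        x = layers.Conv2DTranspose(base_filters * 2 ** d, 2,
                                   strides=2, padding="same")(x)
        x = layers.Concatenate()([skips[d], x])  # skip connection
        x = conv_block(x, base_filters * 2 ** d)
    # 1x1 convolution reduces the feature map to one output channel;
    # sigmoid yields per-pixel probabilities in [0, 1]
    outputs = layers.Conv2D(1, 1, activation="sigmoid")(x)
    return Model(inputs, outputs)

model = build_unet()
model.compile(optimizer="adam", loss="binary_crossentropy")
```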
For the training, we used a binary cross-entropy loss function defined as
$\mathcal{L} = -\frac{1}{N}\sum_{i=1}^{N}\left[y_i \log \hat{y}_i + (1 - y_i)\log(1 - \hat{y}_i)\right]$,
where $N$ stands for the total number of pixels in an image, $y_i$ represents the corresponding target value, and $\hat{y}_i$ is the predicted pixel probability. The cross-entropy loss compares the predicted probabilities with the ground truth values, and the loss is minimized during the training process. The network runs with the Adam optimizer, a stochastic gradient descent method, a batch size of 16, and 500 epochs with an early stop, using the Keras/TensorFlow packages in Python 3.

To evaluate the performance of the YOUPI tool, we checked the value of the Intersection over Union (IoU) metric, also known as the Jaccard index,
$\mathrm{IoU}(A, B) = \frac{|A \cap B|}{|A \cup B|}$,
where $A$ is the predicted mask, and $B$ is the ground truth. The IoU is one of the most frequently used metrics for evaluating a model of image segmentation (24). Mathematically, it represents the proportion of the area overlapping the target mask and the prediction output. Its value can vary between 0 and 1. The mean IoU of a patch is computed as the mean of the IoU of each cell in the patch.

2.6 YOUPI features

2.6.1 Cell segmentation mask generation

For each ROI, stacks of 16-bit single-channel TIFF files were exported from MCD files using MCD™ Viewer 1.0 software (Fluidigm). A first cell-based morphological segmentation was conducted, as described in section 2.4. A second cell segmentation method was performed using the OME-TIFF files of the markers stacked in a single TIFF file with ImageJ software (v.1.8.0_172). The TIFF file was opened with QuPath software (v.0.3.2), and the image type was set to fluorescence. The segmentation process was then run with the optimal parameters for each ROI (filter function, signal intensity threshold, etc.), based on the iridium channel.

FCS file exportation

A data analysis session was opened with the 16-bit TIFF images, and their corresponding previously generated segmentation masks were uploaded in histoCAT (23) to export data as FCS files.

Statistical analysis

To extract quantitative data, FCS files were uploaded with OMIQ software. Data are expressed as mean ± SEM. Statistical analyses were performed with GraphPad Prism (GraphPad Software, La Jolla, CA) using the Wilcoxon test for comparing paired values. Significant differences were estimated at p < 0.05.

Results

3.1 Elaboration of the dataset for pre-processing the neural network

3.1.1 Elaboration of the preliminary segmentation mask

The training dataset is the result of several steps. Once the preparation of the tissue slides is ready, IMC acquisition is performed (Figure 1). The acquisition produces a stack of images representing the same ROI with a different marker detected for each image. The MCD acquisition files from the Hyperion Imaging System™ are converted into TIFF format using an IMC file conversion tool (https://github.com/BodenmillerGroup/imctools). The marker panel was previously designed according to the tissue from which the IMC images were obtained. The membrane markers of interest are selected (Table 1; Supplementary Figure 1) and summed. In addition, the image with stained nuclei is also selected. A graphical interface allows for colorizing the summed membrane markers in green and the nuclei marker in red before summing the two colors. Ilastik and CellProfiler software are then used to obtain a preliminary segmentation mask. Once generated, the binary mask of segmentation is split into patches of 128×128 pixels before manual correction.
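The IoU and the 0.5 thresholding step described in the training section above are straightforward to compute on binary masks; a small NumPy sketch follows (per-cell averaging is simplified here to a whole-mask IoU, and the mask contents are random for illustration).

```python
# Sketch of the IoU (Jaccard index) and the 0.5 thresholding step; the
# per-cell mean IoU used in the paper is simplified to a whole-mask IoU.
import numpy as np

def binarize(prob_map, threshold=0.5):
    # pixels >= 0.5 become 1 (cell, white), the rest 0 (background, black)
    return (prob_map >= threshold).astype(np.uint8)

def iou(pred_mask, true_mask):
    pred = pred_mask.astype(bool)
    true = true_mask.astype(bool)
    intersection = np.logical_and(pred, true).sum()
    union = np.logical_or(pred, true).sum()
    return intersection / union if union > 0 else 1.0

pred = binarize(np.random.rand(128, 128))
truth = binarize(np.random.rand(128, 128))
print(f"IoU = {iou(pred, truth):.3f}")
```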
Segmentation from manual correction

Although time-consuming and tedious, manual correction remains optimal for achieving accurate results in cell segmentation.

Figure 1 - Graphical overview of the pre-processing steps to build the IMC training dataset: 1. preparation of biological tissues; 2. acquisition of images with the HYPERION IMC; 3. filtering of the images; 4. selection of the markers of interest; 5. summation of the markers of interest; 6. segmentation of raw images with Ilastik and CellProfiler; 7. splitting of segmented images into 128×128 pixel patches; 8. recovery of patches for manual correction; 9. correction with the interface, allowing for opacity control and a brush. IMC: Imaging Mass Cytometer.

To simplify the manual correction, a first tool was developed to superpose the segmentation mask generated by the Ilastik and CellProfiler software and the patches of the IMC images. The control of opacity improves the visibility of the cell borders (Figure 2). A second tool was developed that provides a black-and-white brush to manually correct segmentation errors. In the example shown in Figure 2, the Ilastik and CellProfiler software segmented two cells instead of one in the largest rectangle and three cells instead of one in the smallest rectangle. A manual correction was thus necessary to obtain a single cell for both rectangles. The 81 patches used to train the neural network were generated accordingly.

3.2 YOUPI development

3.2.1 Elaboration of the graphical interface

To improve the usability of the U-Net segmentation, a graphical interface was developed to select the IMC images among the stack (Figure 3). The user can choose markers from the panel to generate an image that displays membrane (in green) and nuclei (in red) to provide input for the neural network. Median and smooth filters are used to enhance the quality of the image, and the contrast and brightness are adjusted for each chosen marker. Depending on the intensity of the signals, an adjustment of ×2 to ×3 is used for the nuclei signal, and up to ×10 for membrane signals. This adjustment process is essential for ensuring accurate cell segmentation.

Input and output of the U-Net

The graphical interface allows users to build a colorized green-and-red image with the appropriate membrane and nuclei markers. When the image is considered sufficiently sharp, a simple click on the "Segmentation" button in the graphical interface is required. Once this button is pressed, different steps occur. The image is split into patches of 128×128 pixels. These patches are processed by the U-Net model to segment and generate binary cell segmentation masks. Once performed, the patches are gathered to rebuild the original image (Figure 4).

Post-processing steps

Once segmented, the image is ready for the post-processing steps. First, all cells in the mask whose coordinates do not show nuclei signals on the corresponding Hyperion acquisition are eliminated (Figure 5A). Second, all objects measuring only 1 or 2 pixels are removed, and the filter "fill holes" is used to fill abnormally holed nuclei (Figure 5B). Overall, when users click on the "Segmentation" button in the graphical interface, the image that appears has gone through all the post-processing steps.

CSV file generation for downstream analysis

Two types of CSV files can be generated from the post-processed segmented image.
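Before turning to the CSV outputs, the patch splitting and reassembly step used throughout this workflow can be sketched as follows; border handling for dimensions not divisible by 128 is omitted, and the arrays are illustrative.

```python
# Sketch of splitting an image into non-overlapping 128x128 patches and
# reassembling them; borders not divisible by 128 are ignored for brevity.
import numpy as np

def split_into_patches(image, size=128):
    h, w = image.shape[:2]
    return [image[i:i + size, j:j + size]
            for i in range(0, h - size + 1, size)
            for j in range(0, w - size + 1, size)]

def reassemble(patches, h, w, size=128):
    out = np.zeros((h, w), dtype=patches[0].dtype)
    k = 0
    for i in range(0, h - size + 1, size):
        for j in range(0, w - size + 1, size):
            out[i:i + size, j:j + size] = patches[k]
            k += 1
    return out

img = np.random.rand(256, 384)
patches = split_into_patches(img)
restored = reassemble(patches, 256, 384)
assert np.allclose(img, restored)   # round trip preserves the image
```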
Since cells are defined as a list of identified pixels, the centroid of each cell is accessible, and the mean and median intensity of all markers for each cell can be calculated according to the pixel intensities in the raw IMC image. Therefore, one CSV file (Supplementary Table 1) contains the centroid of each cell from the segmentation mask, followed by the mean intensity of all associated markers. A second CSV file (Supplementary Table 2) contains the centroid of each cell and, subsequently, the median intensity of all associated markers (Figure 6). The median intensity allows the aberrant high-intensity pixels that are frequently identified in IMC images to be ignored. These CSV files can then be used for downstream supervised or unsupervised analysis.

Metrics for evaluating the YOUPI tool

To evaluate the performance of the YOUPI tool, we computed the IoU (Jaccard index) introduced above, where the mean IoU of a patch is the mean of the IoU of each cell in the patch.

Figure 2 - Visual interface preview for manual segmentation correction: 1. a 128×128 pixel patch of the IMC image; 2. visualization of the segmentation mask generated by the Ilastik/CellProfiler pipeline to detect segmentation errors through the opacity management of the patch; 3. manual correction of the segmentation error. IMC: Imaging Mass Cytometer.

Figure 3 - Graphical interface of the YOUPI software: an overview of the graphical interface and the ease-of-use functions of the YOUPI software.

Figure 2 shows the efficient segmentation achieved using the YOUPI software, demonstrating its ability to segment multiple tissue types even without a training phase. To further assess global quality, we sought an additional metric. We developed biological metrics, including the number of cells, the percentage of real cells, and the rate of false-positive cells. Real cells are objects whose grayscale nuclei signal reaches at least the tenth percentile; below this threshold, objects are considered non-existent cells, while false-positive cells correspond to objects co-expressing mutually exclusive markers. Eighteen new ROIs from different tissues not used in the training dataset and from different batches of data (two ROIs of one batch and two of another batch from salivary glands of patients with Sjögren's syndrome, six from lung cancer, and eight from intestinal cancer) were segmented with the combination of Ilastik and CellProfiler, with QuPath, and with the YOUPI software, and the results were analyzed. Though not significantly different, the highest number of cells was found with the combination of Ilastik and CellProfiler software but was associated with the segmentation of non-existent cells (Figure 7A) and with the significantly lowest rate of real cells (91.2 ± 13.5%) compared with those detected with the YOUPI software (99.0 ± 2.1%, p < 0.01). There was no significant difference between the number of cells detected with the QuPath software and the YOUPI software. However, there were also significantly fewer real cells with the QuPath software (96.6 ± 6.5%) than with the YOUPI software (p < 0.05).
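A sketch of how the per-cell centroid and mean/median intensity tables described at the start of this section can be derived from a labeled segmentation mask follows; the column names and helper function are illustrative, not the exact YOUPI output format.

```python
# Sketch of deriving per-cell centroids and mean/median marker intensities
# from a labeled mask; names are illustrative, not the exact YOUPI format.
import numpy as np
import pandas as pd

def cell_table(label_mask, marker_images, statistic="mean"):
    """label_mask: 2D int array, 0 = background, 1..K = cell IDs.
    marker_images: dict of marker name -> 2D intensity array."""
    agg = np.mean if statistic == "mean" else np.median
    rows = []
    for cell_id in np.unique(label_mask):
        if cell_id == 0:
            continue                      # skip the background label
        ys, xs = np.nonzero(label_mask == cell_id)
        row = {"cell_id": int(cell_id),
               "centroid_x": xs.mean(), "centroid_y": ys.mean()}
        for marker, img in marker_images.items():
            row[marker] = agg(img[ys, xs])
        rows.append(row)
    return pd.DataFrame(rows)

# e.g. cell_table(mask, {"CD3": cd3_img, "CD20": cd20_img},
#                 statistic="median").to_csv("cells_median.csv", index=False)
```

Using the median rather than the mean, as in the second CSV file, makes each per-cell value robust to the occasional aberrant high-intensity pixels mentioned above.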
Gating on the real cells, the rates of CD20+ B cells (Figure 7B), CD3+ T cells (Figure 7B), CD68+ macrophages (Figure 7C), and PanKeratin+ epithelial cells (Figure 7D) were not significantly different between the YOUPI software and the Ilastik and CellProfiler and QuPath software. Only Ilastik and CellProfiler seemed to detect fewer CD3+ T cells. The rates of CD20 and CD3 double-positive cells were also evaluated. Since the CD20 and CD3 markers belong to the B and T lymphocyte lineages, respectively (25), one single lymphocyte cannot express both markers. As shown in Figure 7E, there was no significant difference in the rates of false double-positive cells for Ilastik and CellProfiler, QuPath, and YOUPI software (3.9 ± 6.3% vs. 4.0 ± 5.4% vs. 4.2 ± 5.5%, p > 0.05). Similarly, the rates of PanKeratin and CD8 double-positive cells were evaluated. PanKeratin is an epithelial cell marker (26), while CD8 is a T-cell subset marker (25). Both cannot be co-expressed by a single cell. Again, the rate of false PanKeratin and CD8 double-positive cells was low, and there were no differences among the three methods of segmentation (0.9 ± 1.7% vs. 0.9 ± 1.4% vs. 1.0 ± 1.1%, p > 0.05) (Figure 7D). Consistently, correlations between QuPath vs. YOUPI, Ilastik and CellProfiler vs. YOUPI, and Ilastik and CellProfiler vs. QuPath were evaluated (Supplementary Figure 3). Strong correlations were identified for all analyses except Ilastik and CellProfiler vs. YOUPI and Ilastik and CellProfiler vs. QuPath for the percentage of real cells. The percentages of real cells and all cell subsets were also compared on separate tissues, in which the total number of cells differs.

Discussion

We introduced YOUPI, a cell segmentation tool for images generated by the Hyperion IMC intended for non-computer-friendly users. YOUPI works with a U-Net. Three post-processing steps were added to obtain better control over the final segmentation mask. This mask provides access to single-cell data via CSV files for downstream supervised (manual gating strategy) or unsupervised (t-SNE, Phenograph, etc.) analysis, whose performance falls between that of Ilastik and that of QuPath (Supplementary Figure 5). The cell segmentation of tissue images can thus be performed using existing software, such as Ilastik and CellProfiler, or QuPath. Our experience indicates that they require specific skills in image processing (e.g., image format conversion) and analysis to efficiently achieve cell segmentation. They rely on multiple third-party tools (ImageJ, Python scripts, etc.) and are thus barely accessible to non-bioinformatician users. Ease of use has guided our development of YOUPI. In contrast to the other tools, the all-in-one graphical interface can be used intuitively by non-computer scientists. In addition, two CSV files are generated. The first is a CSV file containing the mean intensity of markers, which scientists commonly use. However, aberrant high-intensity pixels are frequently identified in IMC images. The second is a CSV file containing the median intensity, allowing these artifacts, which can impact downstream analysis, to be ignored. It was added to the YOUPI software during the post-processing step to remove aberrant pixels. Furthermore, among all existing tools or pipelines allowing for cell segmentation, the Ilastik and CellProfiler analysis of a given ROI requires approximately four hours of work, and the QuPath software requires thirty minutes.
With YOUPI, an inexperienced computer user obtains the segmentation mask in less than 10 minutes, including choosing markers of interest, thus making YOUPI a tool that is easy to use and delivers results quickly. The segmentation results of the U-Net network are obtained with a set of essential steps. Manual segmentation is a crucial aspect of training to obtain reliable data. To obtain prebuilt binary images, we decided to use masks generated by the Ilastik and CellProfiler software. Thus, QuPath was not useful since it does not consider membrane markers. Several weeks were required to manually correct the binary image patches. Although manual errors could have been made, the U-Net learned to perform segmentation as efficiently as Ilastik and CellProfiler, and as QuPath software, at detecting the number of cells and of false-positive cells based on the corrected images. It proved to be the most efficient at detecting real cells. Correlation analyses of all tissues together, as well as analyses on separate tissues, indicate that the Ilastik and CellProfiler detection method hardly matches QuPath and YOUPI software for the detection of real cells and some cell subsets. Segmentation results with YOUPI and QuPath are similar. The mean IoU value confirmed the robustness of the cell segmentation results obtained with YOUPI.

Although the CD3, CD20, CD8, and PanKeratin molecules are expressed on distinct populations of cells (25, 26), some CD3 and CD20 double-positive cells or PanKeratin and CD8 double-positive cells could nevertheless be detected. It has recently been found that CD3+ T lymphocytes can express CD20 in blood and tissue (27), with potential relevance to human diseases (28), indicating the possible existence of a few real CD3+ and CD20+ double-positive cells. Alternatively, with high cellular density and depending on the thickness of the tissue section, touching or overlapping cells may be detected, explaining the possibility of detecting unexpected, rare PanKeratin and CD8 double-positive cells due to the limitations of IMC technology.

Figure 6 - Overview of cell ID to obtain centroids and the mean/median of gray levels per marker. Based on the image from the post-processing segmentation, all cells were identified and assigned X and Y coordinates through their centroid. This information is written in a CSV file, which also contains the mean intensity of each marker for each cell. A second CSV file contains the median intensity of each marker for each cell. ID: identity.

Figure 7 - Biological metrics for evaluating cell segmentation performance. Eighteen ROIs were segmented with Ilastik and CellProfiler, QuPath, and YOUPI software.

In conclusion, the image patches segmented to train the YOUPI software came from different tissues (salivary gland, intestine, small cell lung cancer, and non-small cell lung cancer) to ensure the heterogeneity of cell types and shapes for training the U-Net. U-Net is a network specifically developed to analyze biomedical images, which allows for good performance from a restricted annotated dataset (18). Nonetheless, segmentation coming from a neural network will never be flawless. However, with the inclusion of additional post-processing steps (Figure 5), obvious mistakes the neural network could make are controlled; thus, YOUPI is safe. YOUPI's overall performance is comparable with that of other segmentation tools, despite relying on different approaches with different flaws.
The naive mathematical approaches of Ilastik and CellProfiler can lead to the identification of non-existent cells, while the empirical threshold approach of QuPath can result in oversized shapes. Different intensities of image colorization can impact the quality of segmentation masks generated by YOUPI's U-Net. It would be useful to apply these algorithms together to overcome their respective flaws. With YOUPI, the user can build an image that will be segmented according to the markers in which he or she is interested.

Data availability statement

The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

Ethics statement

The studies involving human participants were reviewed and approved by the Comité de Protection des Personnes Ouest VI, Brest, Boulevard Tanguy Prigent, 29200 Brest, France. Written informed consent for participation was not required for this study in accordance with the national legislation and the institutional requirements.

Author contributions

YS, PH, NF, and CJ designed the study, and YS, PH, and MR analyzed the generated data. PH performed the image acquisition, and YS developed the YOUPI tool with the support of NF. YS, PH, NF, and CJ wrote the manuscript, and J-OP participated in editing the text. All authors contributed to the article and approved the submitted version.
Text mining-based word representations for biomedical data analysis and protein-protein interaction networks in machine learning tasks

Biomedical and life science literature is an essential way to publish experimental results. With the rapid growth of the number of new publications, the amount of scientific knowledge represented in free text is increasing remarkably. There has been much interest in developing techniques that can extract this knowledge and make it accessible to aid scientists in discovering new relationships between biological entities and answering biological questions. Making use of the word2vec approach, we generated word vector representations based on a corpus consisting of over 16 million PubMed abstracts. We developed a text mining pipeline to produce word2vec embeddings with different properties and performed validation experiments to assess their utility for biomedical analysis. An important pre-processing step consisted in the substitution of synonymous terms by their preferred terms in biomedical databases. Furthermore, we extracted gene-gene networks from two embedding versions and used them as prior knowledge to train Graph-Convolutional Neural Networks (CNNs) on large breast cancer gene expression data and on other cancer datasets. The performance of the resulting models was compared to Graph-CNNs trained with protein-protein interaction (PPI) networks or with networks derived using other word embedding algorithms. We also assessed the effect of corpus size on the variability of word representations. Finally, we created a web service with a graphical and a RESTful interface to extract and explore relations between biomedical terms using annotated embeddings. Comparisons to biological databases showed that relations between entities such as known PPIs, signaling pathways and cellular functions, or narrower disease ontology groups correlated with higher cosine similarity. Graph-CNNs trained with word2vec-embedding-derived networks performed sufficiently well in the metastatic event prediction tasks compared to other networks. Such performance was good enough to validate the utility of our generated word embeddings in constructing biological networks. Word representations as produced by text mining algorithms like word2vec are therefore able to capture biologically meaningful relations between entities. Our generated embeddings are publicly available at https://github.com/genexplain/Word2vec-based-Networks/blob/main/README.md.

Introduction

The field of Natural Language Processing (NLP) is concerned with the development of methods and algorithms to computationally analyze and process human natural language. Solutions in this domain often have practical significance for everyday applications such as conversion between written and spoken language to enhance media accessibility, translation between languages, optical character recognition (OCR) for street sign detection in traffic assistance systems, or document/media content classification for recommendation systems. In biomedical research, NLP is of importance to extract reported findings, e.g., protein-protein interactions, from scientific texts. Several studies employed supervised machine learning algorithms to identify and extract knowledge from scientific literature [1-4], which requires extensive manually labeled datasets for training.
A novel approach was recently introduced that applied neural networks (NNs) to learn high-dimensional vector representations of words in a text corpus that preserve their syntactic and semantic relationships [5]. The word2vec method, proposed by Mikolov et al. [5], embedded words in a vector space by predicting their co-occurrence so that words with similar meaning had a similar numerical representation. Word2vec can effectively cluster similar words together and predict semantic relationships. Word embedding as produced by word2vec allows computing relations between words obtained from a large unlabeled corpus, e.g., using their vector cosine similarity. Several works in the biomedical research field have since adopted word vector representations for various tasks like named entity recognition (NER) [6, 7], medical synonym extraction [8], as well as extraction of chemical-disease relations [9], drug-drug interactions [10] or protein-protein interactions [11].

Many studies have used PubMed abstracts [12], citations, or full text articles as a standard resource to generate word embeddings. However, each study typically examines a specific analysis task with defined aims, and uses its input corpora, including PubMed, with different strategies, for example adding domain knowledge to obtain specialized embeddings, or applies different evaluation methods. PubMed [12] is a widely used basis for evaluating different word embedding strategies due to the large amount of valuable biomedical knowledge it contains. The novelty of such methods usually lies in the techniques used for corpus processing and/or the methods used to evaluate and validate their utility in downstream analysis. Therefore, the generated embeddings are assessed for their quality by considering particular pre-processing procedures, by using different sizes of the input corpus, and by applying different evaluation and validation techniques.

Different validation strategies have been proposed to assess the quality of word embeddings. Wang et al. evaluated the performance of word embeddings generated from four different corpora, namely clinical notes, biomedical literature (articles from PubMed Central (PMC)), Wikipedia, and news articles [13]. The evaluation was performed qualitatively and quantitatively. Their experimental results showed that embeddings trained on clinical notes are closer to human judgments of word similarity. They also demonstrated that word embeddings trained on general domain corpora are not substantially inferior in performance to those trained on biomedical or clinical domain documents. Zhang et al. assessed both the validity and utility of biomedical word embeddings generated using a sub-word information technique that combines unlabeled biomedical literature from PubMed with domain knowledge in Medical Subject Headings (MeSH) [14]. They evaluated the effectiveness of their word embeddings in BioNLP tasks, namely a sentence pair similarity task performed on clinical texts and biomedical relation extraction tasks. Their word embeddings led to better performance than the state-of-the-art word embeddings in all BioNLP tasks.

Several recent studies, particularly in molecular biology, have also considered word embeddings to represent biomedical entities and their functional relationships. Du et al.
trained a gene embedding from human genes using gene co-expression patterns in data sets from the GEO database and achieved an area under the curve (AUC) score of 0.72 in a gene-gene interaction prediction task [15]. Chen et al. employed NER tools to recognize and normalize biomedical concepts in a corpus consisting of PubMed abstracts. They trained four concept embeddings on the normalized corpus using different machine learning models. They assessed the concept embeddings' performance in both intrinsic evaluations on drug-gene interactions and gene-gene interactions and extrinsic evaluations on protein-protein interaction prediction and drug-drug interaction extraction. Their concept embeddings achieved better performance than existing methods in all tasks [16]. In other studies, word embeddings were used as input features to improve the performance of machine learning algorithms. Kilimci et al. used different document representations, including a term frequency-inverse document frequency (TF-IDF) weighted document-term matrix, the mean of word embeddings, and a TF-IDF weighted document matrix enhanced with the addition of mean vectors of word embeddings as features. They analyzed the classification accuracy of the different document representations by employing an ensemble of classifiers on eight different datasets. They demonstrated that the use of word embeddings improved the classification performance of texts [17].

Evaluating the validity of word embeddings by examining semantic relations can improve the transparency of word embeddings and facilitate the interpretation of the downstream applications using them. In many studies, word2vec has been applied to PubMed abstracts as a state-of-the-art method to generate word embeddings. However, their performance can differ significantly given different tasks for evaluation and validation. In this study, we applied word2vec to generate biomedical word embeddings using a corpus consisting of over 16 million PubMed abstracts and thoroughly validated their ability to capture biologically meaningful relations. Our generated word embeddings differ from previous work in the pre-processing phase, which included a new procedure, and in the methods used to evaluate their performance for biomedical analysis. The new pre-processing procedure consists of substituting synonymous terms of biomedical entities by their preferred terms. Our validation methods are similar to those used by Chen et al. [16]. However, their evaluations concentrated on genes by considering drug-gene and gene-gene interactions. Our assessment covers vector cosine similarity of relations in protein-protein interactions (PPIs), common pathways and cellular functions, or narrower disease ontology groups using existing knowledge in biomedical databases.

Most word embeddings have been trained using either the word2vec [5] or the GloVe [18] model, which uses information about each word's co-occurrence with its nearby words to represent it in a distinct vector. Word2vec employs negative sampling and sub-sampling techniques to reduce the computational complexity. Word embeddings learned by word2vec or other methods such as Skip-Gram or Continuous Bag-of-Words, which predict a word's context from raw text using a target word, are called static embeddings. Such static embeddings are useful for solving lexical semantic tasks, particularly word similarity and word analogy, and for representing inputs in downstream tasks [19].
More recent techniques in language modeling are unsupervised pre-trained language models such as BERT (Bidirectional Encoder Representations from Transformers) [20] and ELMo (Embeddings from Language Models) [21] that create contextualized word representations. Such models support fine-tuning on specific tasks and have shown effective performance improvements in diverse NLP tasks such as question answering and text classification. BioBERT [22] (Bidirectional Encoder Representations from Transformers for Biomedical Text Mining) was initialized with the BERT model to pretrain domain-specific language representations on large-scale biomedical articles. BioBERT outperformed the NLP state-of-the-art on a variety of biomedical and clinical tasks. However, training BERT is computationally expensive due to its high model complexity and the large amount of training data needed to achieve acceptable accuracy.

Moreover, convolutional neural networks (CNNs) are among the often applied deep-learning network architectures that have delivered good performance in image recognition and classification. CNN models have also been effective for NLP tasks such as text classification [23, 24]. They have been used in bioinformatics [25], namely in drug discovery and genomics [26], and motivated further progress on graph-structured prior information with promising results on the prediction of metastatic events [27, 28]. Graph Convolutional Neural Networks (Graph-CNNs) have proven to be effective in capturing structural information in graphs [29]. Graph-CNNs have been effectively used for a variety of NLP applications such as machine translation [30]. Information from word embeddings can be represented as edges in a graph with words as nodes. Thereby, Graph-CNNs are an effective way of exploiting such graphs and validating the embedding utility.

Our word embeddings were assessed for their biological utility on a metastatic event prediction task. We trained Graph-CNNs on cancer gene expression data with gene-gene networks derived from word2vec embeddings as prior knowledge to predict the occurrence of metastatic events. Graph-CNNs achieved comparable performance with word2vec-embedding-derived networks on liver, prostate and lung cancer data, and a slightly better performance on breast cancer data, compared to protein-protein interaction networks or networks derived using other word embedding algorithms.

Materials and methods

Word embeddings generated for this study were based on a corpus of 16,558,093 article abstracts from the public PubMed FTP repository.

Corpus pre-processing and training of word embeddings

Natural language text data usually require an amount of preparation before they can be fed into model training. For the purpose of pre-processing the text corpus, we implemented a pipeline that conducted the steps depicted in Fig 1. The first phase applied classical processing steps such as lowercasing, lemmatization, and removal of punctuation and numerical forms. We assumed that replacing synonyms of biomedical terms with their main terms could affect the similarity between words in a way that better captures functional relationships between biomedical entities. Thus, we introduced an optional step before starting training, in which synonymous terms were substituted by externally defined main terms. This corpus was then used to train the word2vec model.
For training we used the word2vec implementation in Gensim [31] with a context window of size 5, a minimum count of 5, and 300 dimensions for the generated vectors. For this study, we generated the following two word2vec embeddings: 'Embedding_v1', in which synonymous terms of genes, diseases, and drugs were substituted by their preferred terms from HUGO [32], MeSH [33], and DrugBank [34], respectively; and 'Embedding_v2', for which the same preprocessing strategies and the same training process were applied but without replacing synonyms. Moreover, we assigned type labels using the same biomedical databases mentioned above in order to filter similarities. This enabled us to compare similarities between entities in the obtained embeddings to existing knowledge in biomedical databases. Validation of word embeddings To provide an expedient tool for biological research, relative locations of terms within the vector space of the embedding should exhibit agreement with existing biological knowledge. Our validation addressed protein-protein interactions, signaling pathways and biological processes, drug targets and human diseases, which have been and continue to be of interest in many biomedical research projects. The conducted validation experiments therefore examined whether vectors of members within groups defined by the respective biological databases featured increased cosine similarities compared to randomly sampled entities. Signaling pathways, biological processes and human diseases. Reactome 72 [35] and TRANSPATH® 2020.2 [36] pathway-gene assignments as well as Gene Ontology (GO, release 2020-03-25) [37] biological process-gene assignments were extracted from the geneXplain platform [38] version 6.0. Disease terms covered by the embedding were mapped to 139 groups of the Human Disease Ontology version 2019-05-13 [39] with more than 5 and less than 1000 member diseases. We calculated medians, lower and upper quartiles of cosine similarities for gene pairs within pathways and biological processes with at least 10 and not more than 3000 genes, as well as for disease pairs within the 139 disease groups. In addition, we calculated medians, lower and upper quartiles for 2000 randomly sampled gene pairs and for 700 randomly sampled disease pairs that were not contained in selected groups (S1-S4 Files). Protein-protein interactions. Known protein-protein interactions were extracted from Reactome 63 for 4254 genes (16727 interactions) with vector representations in the embedding. For the purpose of comparison, we sampled 10,000 random gene pairs and the same number of gene pairs with known interactions (S5 File). Drug-gene associations. The DrugBank [34] database combines detailed drug information with comprehensive drug target information. We extracted genes associated with each drug reported in DrugBank with type target. Considering 5234 drugs and their target genes, we created drug pairs based on the common genes that the two drugs of each pair share. Drug-gene associations were obtained from DrugBank release 4.5.0, and cosine similarities of 50000 drug pairs with at least one shared target gene were compared to 50000 drug pairs without common target genes. Moreover, to examine the variability of the similarity distribution of drug pairs based on the number of genes they share, we sampled three drug pair groups (group 1: no shared genes, group 2: ≤ 5 genes, group 3: ≤ 9 genes) (S6 File).
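A condensed sketch of the training call and of the within-group versus random cosine-similarity comparison follows; the corpus file name, the example gene lists, and the sample sizes are placeholders (the study used 2000 random gene pairs and groups from Reactome, TRANSPATH® and GO):

```python
# Train word2vec with the stated parameters and compare cosine
# similarities of within-group gene pairs against random pairs.
import random
from itertools import combinations

import numpy as np
from gensim.models import Word2Vec
from gensim.models.word2vec import LineSentence

# Parameters from the text: window 5, min_count 5, 300 dimensions.
model = Word2Vec(LineSentence("pubmed_preprocessed.txt"),
                 vector_size=300, window=5, min_count=5, workers=8)
kv = model.wv

# Hypothetical pathway members and background genes present in the vocabulary.
pathway = [g for g in ["tp53", "mdm2", "cdkn1a", "atm"] if g in kv]
background = [g for g in ["gapdh", "actb", "brca1", "egfr", "psen1"] if g in kv]

within = [kv.similarity(a, b) for a, b in combinations(pathway, 2)]
rand = [kv.similarity(*random.sample(background, 2)) for _ in range(100)]

print("median within-pathway:", np.median(within))
print("median random pairs:  ", np.median(rand))
```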
The RMA (robust multi-array average) probe-summary algorithm [42] was applied to normalize each data set separately, after which the data sets were combined and further normalized using quantile normalization applied over all datasets. When more than one probe was associated with a gene, the probe with the highest average expression value was chosen, leading to 12179 genes. Training set classes consisted of 393 patients with metastasis within the first 5 years and 576 patients without metastasis between 5 and 10 years after biopsy. Other types of cancer. We further applied Graph-CNNs to classify normal vs liver, lung or prostate tumor tissue as well as to predict FOLFOX therapy sensitivity of colorectal cancers. Gene expression data of FOLFOX therapy responders and non-responders were obtained from GEO series GSE28702 [43]. Data sets were compiled from GEO series GSE6222, GSE29721, GSE40873, GSE41804, GSE45436, GSE62232 for liver cancer, GSE10799, GSE18842, GSE19188 for lung cancer, and GSE3325, GSE17951, GSE46602, GSE55945 for prostate cancer. The expression measurements were normalized using the justGCRMA method of the R/Bioconductor package gcrma, versions 2.60.0 (FOLFOX response data) and 2.56.0 (other cancer data sets) [44]. For the cancer data sets assembled from different GEO series, batch correction was carried out using the R/Bioconductor package limma, version 3.40.6, with batches corresponding to source GEO series accessions [45] (sample and batch information for the normal vs. cancer data is given in S8 File). Information to map probe set identifiers to human gene symbols was obtained from Ensembl version 102 [46] using the R/Bioconductor package biomaRt [47], resulting in about 8500 genes after intersecting the microarray genes with the PPI networks described in Section 2.4. PPI networks A broad range of machine learning models have been developed to analyze high-throughput datasets with the aim of predicting gene interactions and identifying prognostic biological processes. Recently, biomedical research has demonstrated the ability of deep learning models to learn arbitrarily complex relationships from heterogeneous data sets with existing integrated biological knowledge. This biological knowledge is often represented by interaction networks. The high data dimensionality and the complexity of biological interaction networks are significant analytical challenges for modeling the underlying systems biology. In this section, we present the PPI networks derived from different sources and used as prior knowledge to structure gene expression data. 2.4.1. Human protein reference database. In a recent study, Chereda et al. [27] employed the Human Protein Reference Database (HPRD) protein-protein interaction (PPI) [48] network to structure gene expression data of breast cancer patients. Genes from gene-expression data were mapped to the vertices of the PPI network, yielding an undirected graph with 7168 matched vertices consisting of 207 connected components. The main connected component had 6888 vertices, whereas the other 206 components each contained 1 to 4 vertices. Since the approach of utilizing prior network information in Graph-CNNs requires a connected graph [29], training was carried out on the gene set of the main connected component. In this study, we used the same PPI network with the same approach as Chereda et al. [27].
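Returning to the expression pre-processing described at the start of this section, the per-gene probe selection (keeping the probe with the highest average expression) can be sketched with pandas; the tiny matrix and probe-to-gene map are hypothetical stand-ins for the authors' R-based workflow:

```python
# Sketch of the probe-to-gene collapse: when several probes map to one
# gene, keep the probe with the highest mean expression over samples.
import pandas as pd

expr = pd.DataFrame(
    {"s1": [5.1, 7.3, 2.2], "s2": [4.8, 7.9, 2.5]},
    index=["p1", "p2", "p3"])                      # probes x samples
probe2gene = pd.Series({"p1": "TP53", "p2": "TP53", "p3": "EGFR"})

mean_expr = expr.mean(axis=1)
best = (mean_expr.groupby(probe2gene)  # group probes by gene symbol
        .idxmax())                     # probe with highest average per gene
gene_expr = expr.loc[best.values]
gene_expr.index = best.index           # re-index rows by gene symbol
print(gene_expr)
```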
Word2vec embedding-based networks. We created two gene-gene networks, Embedding_net_v1 and Embedding_net_v2, from the embedding version in which synonyms were substituted by their preferred terms (Embedding_net_v1) and from the version in which synonyms were left in place (Embedding_net_v2). Both networks consisted of gene pairs with edges weighted by their cosine similarity values. The cosine similarity threshold was set to 0.65, yielding the Embedding_net_v1 network with 10730 genes in 4397 connected components with a main component of 6092 vertices, and the Embedding_net_v2 network with 10729 genes in 4399 components with a main component covering 6106 vertices. The main connected components of the Embedding_net_v1 and Embedding_net_v2 networks shared 5750 genes, therefore overlapping in the majority of vertices. STRING-derived network. The STRING database [49] is a collection of protein-protein associations which can be derived from one or more sources such as gene neighborhoods, gene co-occurrence, co-expression, experiments, databases and text-mining, and whose confidence is expressed by an aggregated score computed from the scores of the individual interaction sources. We considered the text-mining score as well as the combined score of all the interaction sources to build weighted protein-protein interaction networks. This way, the classification performance of Graph-CNNs trained on the STRING text-mining network could be compared to Graph-CNNs with prior knowledge from word2vec-embedding-based networks. As with the HPRD PPI, we mapped the genes to the two constructed STRING networks and supplied their main components to the training process. Score thresholds were chosen to obtain a number of vertices comparable to the HPRD PPI. 2.4.4. BERT-embedding-derived network. BERT (Bidirectional Encoder Representations from Transformers) [20] is a recent contextualized word representation model. The main technical innovation of BERT is the use of bidirectional transformers. BERT was pre-trained on English Wikipedia and BooksCorpus as a general language representation model. BioBERT [22] is a language representation model based on BERT and designed for biomedical text mining tasks. It was initialized with the BERT model provided by Devlin et al. in 2019 [20] and pre-trained on PubMed abstracts and PubMed Central full-text articles (PMC). We used the pre-trained BioBERT weights of 'BioBERT-Base v1.0', which was trained using the same vocabulary as BERT base (12-layer, 768-hidden, 12-heads, 110M parameters) on English text plus 200k PubMed abstracts. We converted the pre-trained TensorFlow checkpoint model to PyTorch [50], extracted the numerical vectors of 768 dimensions each, and calculated the cosine similarities between entities to extract a gene-gene network. The number of proteins in the main connected component was again kept comparable to the number of vertices in the HPRD PPI.
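The derivation of a thresholded gene-gene network from an embedding, as used for both the word2vec- and BioBERT-based networks, can be sketched as follows (networkx, with a threshold of 0.65 as in the text; 'kv' is a trained gensim KeyedVectors object and 'genes' a list of gene symbols contained in it):

```python
# Sketch: connect gene pairs whose cosine similarity exceeds a threshold
# and keep the largest connected component, as Graph-CNN training
# requires a connected graph.
from itertools import combinations
import networkx as nx

def embedding_network(kv, genes, threshold=0.65):
    g = nx.Graph()
    # Quadratic over the gene list; acceptable as a sketch for ~10k genes.
    for a, b in combinations(genes, 2):
        sim = float(kv.similarity(a, b))
        if sim >= threshold:
            g.add_edge(a, b, weight=sim)
    main = max(nx.connected_components(g), key=len)
    return g.subgraph(main).copy()
```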
Random network. For further comparison, we created an unweighted random network containing the same 6888 vertices that were mapped to the HPRD PPI. Each vertex was connected to 8 randomly chosen vertices other than itself; repeated draws of the same vertex were possible. As a result, the nodes' degrees form a unimodal distribution lying in the interval [8,30] with a mean value of 15.991, median 16, and standard deviation 2.80. To compare the performance of Graph-CNNs depending on the underlying networks, this version of the random network was used to structure the breast cancer data of Section 2.3.1. Additionally, we created a random network with random weights by modifying the random unweighted network described above and assigning to each edge a random value from the interval [0.65, 1]. This random network with random weights was used for comparison as prior knowledge for the Graph-CNNs trained on the datasets described in Section 2.3.2.
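A sketch of the two randomized baselines just described (8 random neighbors per vertex with possible repeated draws, and optional random edge weights from [0.65, 1]):

```python
# Random baseline networks: duplicate draws collapse to a single edge in
# nx.Graph, so the resulting degrees fall roughly into [8, 30], matching
# the distribution reported in the text.
import random
import networkx as nx

def random_network(vertices, k=8, weighted=False, seed=0):
    rng = random.Random(seed)
    g = nx.Graph()
    g.add_nodes_from(vertices)
    for v in vertices:
        for _ in range(k):
            u = rng.choice(vertices)
            while u == v:                    # exclude self-loops
                u = rng.choice(vertices)
            w = rng.uniform(0.65, 1.0) if weighted else 1.0
            g.add_edge(v, u, weight=w)
    return g

g = random_network(list(range(6888)), weighted=True)
print(g.number_of_nodes(), g.number_of_edges())
```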
Graph-convolutional neural network (CNN) One of the approaches for validating the embedding networks is to analyze how the underlying molecular network influences the performance of a machine learning method utilizing prior knowledge. The Graph-CNN method was applied to the breast cancer dataset introduced in Section 2.3 in a recent study [28]. We also used the other cancer datasets described in Section 2.3.2. For all the datasets, the machine learning task is to predict a binary endpoint for a patient. The schema of the prediction workflow can be found in Fig 1 of [26]. As in [27], we subtracted the minimal value of the data from each cell of the quantile-normalized gene expression matrix to keep the gene expression values non-negative. The classification accuracy of Graph-CNNs was compared for different sources of network prior information: HPRD, Embedding_net_v1, Embedding_net_v2, STRING and the BioBERT-based network. For the embedding networks, we utilized both weighted and unweighted (topology-only) versions. The vertices were mapped to the genes of the gene expression data and weighted edges were filtered according to a threshold value. We considered thresholds higher than 0.5 for the cosine similarity between vertices and arrived at the values of 0.63 and 0.65 to keep the number of vertices mapped to gene symbols comparable. The main connected component of the underlying graph was used to structure the data. The performance was assessed by 10-fold cross-validation. For each data split, the model was trained on nine folds and evaluated on the remaining fold as validation set. For each dataset, the architecture and hyperparameters of the Graph-CNN remained the same for all underlying molecular networks. The Graph-CNNs were trained with the same number of epochs for each data split. For the majority of the prior knowledge networks, the Graph-CNN was trained for 100 epochs, but for some versions of prior knowledge a smaller number of epochs showed better results since convergence of gradient descent happened faster. The most common evaluation metrics were used: area under the curve (AUC), accuracy and F1-weighted score. The metrics were averaged over folds and the standard errors of their means were calculated. We compared performances based on weighted and unweighted networks. Assessment of text corpus size effect We tested how the text corpus size influences the variability of word representations and compared the similarities between given concepts obtained from four resulting embeddings. The embeddings were produced by applying word2vec to four text corpora of different sizes: 4M, 8M, 12M, and ~16M abstracts. We selected 10 terms of different entity types: the genes brca1, psen1 and egf; the medical terms breast neoplasms, eczema, sleep and schizophrenia; and the molecular compounds ranitidine, lactose, and cocaine. The selected terms are among those that appear frequently in the literature, which gives them strong relationships with their neighbors. The similarity variance of these relationships would reveal the effect of the text corpus size and would also demonstrate how biologically meaningful those relationships are. For each entity term, we calculated its first 10 nearest neighbors and selected the ones that are commonly present in the four resulting embeddings (S7 File). Validation results To demonstrate the utility of our word2vec embeddings in data analytical applications, we examined the agreement of cosine similarities between words according to their vector representations with information extracted from biomedical knowledgebases (see Materials and methods). As a result, pairs of genes with known interactions in the Reactome database showed on average higher cosine similarities than gene pairs without known interaction in the same database (Fig 2). Similarly, cosine similarities of drugs with overlapping target gene sets were, on average, higher than similarities between drugs without common target genes. Furthermore, cosine similarities within Reactome and TRANSPATH® pathways, as well as within GO biological processes, were increased compared to median cosine similarities of randomly sampled gene pairs (Fig 2). Regression curves estimated for the medians revealed a correlation between the number of pathway or GO category members and the median similarity, with higher values for smaller gene sets. We think that gene pairs in smaller pathways or biological processes were more likely to correspond to direct molecular interactors that share a close functional context than in pathway or functional categories with a higher number of members. The embedding, in many cases, indeed captured these relations. While disease-disease cosine similarities within Human Disease Ontology (HDO) groups also revealed such a trend for groups with less than 25 members, median similarities within groups were often smaller than for randomly chosen disease pairs (Fig 2). Therefore, disease-disease relations captured by broader HDO groups did not correspond well with vector representations of the embedding. Better correspondence was observed for narrower disease groups but did not exceed similarities of random disease pairs. Full plots are provided as S1-S6 Figs. Moreover, drug-drug similarities were also assessed by the number of genes shared by two drugs. Median cosine similarities of drug pairs increased as the number of shared genes increased (S7 Fig). For comparison with the results of the word2vec-based embeddings, the same data sources were applied to assess the BioBERT-derived embedding (see Discussion). In the metastatic event prediction task, the models with gene-gene networks derived from the embeddings showed the best performance, compared to the HPRD PPI, STRING-derived networks, the BioBERT-derived network, and the random network, in classifying patients into two groups, metastatic and non-metastatic. The networks were compared based on the similarity threshold and the number of vertices included. The architecture of the Graph-CNN consists of two convolutional layers with 32 convolutional filters each; maximum pooling of size 2 is applied after each convolutional layer. Two fully connected layers have 512 and 128 nodes, respectively. We utilized 10-fold cross-validation to assess the performance of the Graph-CNN models, as stated in Section 2.5.
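The Graph-CNN used in this study follows the spectral formulation of [29]; the minimal PyTorch sketch below substitutes a simpler Kipf-Welling-style propagation rule and omits the pooling layers, so it only illustrates how a gene-gene graph structures the expression input rather than reproducing the authors' exact model:

```python
import torch
import torch.nn as nn

class GraphConv(nn.Module):
    """Kipf-Welling-style graph convolution over a fixed dense adjacency."""
    def __init__(self, in_ch, out_ch, adj):
        super().__init__()
        adj = adj + torch.eye(adj.size(0))          # add self-loops
        d = torch.diag(adj.sum(1).pow(-0.5))
        self.register_buffer("a_hat", d @ adj @ d)  # normalized adjacency
        self.lin = nn.Linear(in_ch, out_ch)

    def forward(self, x):                           # x: (batch, nodes, channels)
        return torch.relu(self.lin(torch.einsum("ij,bjc->bic", self.a_hat, x)))

class GraphCNN(nn.Module):
    """2 graph-conv layers with 32 filters, FC layers of 512 and 128 nodes."""
    def __init__(self, adj, n_nodes):
        super().__init__()
        self.conv1 = GraphConv(1, 32, adj)
        self.conv2 = GraphConv(32, 32, adj)
        self.head = nn.Sequential(
            nn.Flatten(), nn.Linear(n_nodes * 32, 512), nn.ReLU(),
            nn.Linear(512, 128), nn.ReLU(), nn.Linear(128, 2))

    def forward(self, expr):                        # expr: (batch, nodes)
        return self.head(self.conv2(self.conv1(expr.unsqueeze(-1))))

adj = (torch.rand(50, 50) > 0.8).float()            # toy symmetric adjacency
adj = ((adj + adj.T) > 0).float()
model = GraphCNN(adj, n_nodes=50)
print(model(torch.rand(4, 50)).shape)               # (4, 2) class logits
```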
Table 1 presents the performance of Graph-CNNs trained with the word2vec-embedding networks (Embedding_net_v1 and Embedding_net_v2) and the STRING-derived network incorporating the edge weights. For STRING, the edge weights are the scores computed based on text-mining techniques (see Materials and methods). We did not consider a weighted BioBERT-derived network since the minimal weight was already 0.938 for the edges forming a network with around 6000 vertices, which is close to 1.0. We can see that Embedding_net_v1 demonstrated a better performance than Embedding_net_v2 for almost the same number of vertices, and better than the text-mining-based STRING network. In Table 2, we compared how unweighted network topologies influence the classifier's performance depending on the similarity threshold and the number of vertices included. The baseline performance corresponds to the HPRD PPI prior knowledge. The STRING (combined) and BioBERT-based networks were considered only as unweighted since the weight thresholds needed to reach a comparable number of vertices were close to 1 (0.938 and 0.952, respectively). The embedding networks have a threshold value allowing one to tune the strength of similarity between vertices. Changing the threshold for Embedding_net_v1 from 0.63 to 0.65 improved the classification results in both the weighted and unweighted cases. We can also observe that for the embedding networks, the incorporation of edge weights slightly, although not substantially, increased the classification performance. Meanwhile, the STRING and BioBERT-based networks did not bring any improvements compared to the HPRD PPI or the random network. Thus, Graph-CNNs showed the best results on our dataset when incorporating the weighted Embedding_net_v1 with a threshold of 0.65. Other cancer data sets. For the purpose of comparison, we also trained Graph-CNNs on the other cancer type datasets described in Section 2.3.2. As in the previous section, we tried several underlying networks as prior knowledge to estimate their influence on the classification performance measured using 10-fold cross-validation. We used the HPRD PPI, Embedding_net_v1, and two versions of randomized networks. The first version had the same topology as Embedding_net_v1 but with permuted vertices; we intended to remove the biological information about the direct interactions of PPIs while preserving the topology. The second version was created as described in the second paragraph of Section 2.4.5. A grid search of Graph-CNN hyperparameters was used to optimize the architecture on each of the datasets. The architecture of the Graph-CNN remained the same within one dataset for the different prior knowledge networks. Tables 3-6 present the metric values as performance estimates. Interestingly, we did not notice any substantial differences between the HPRD PPI, Embedding_net_v1, and Embedding_net_v1 with permuted vertices. It is noticeable that with those prior knowledge networks, Graph-CNNs perform comparably to Random Forest (Tables 4-6), except for the liver cancer dataset, where Random Forest outperformed Graph-CNNs. Only in Tables 5 and 6 does the random network with random weights worsen the classification rates; in those cases, the Graph-CNN had convergence issues during training. Effect of text corpus size We generated four embeddings trained with corpora of different sizes to examine the variability of cosine similarity values depending on the amount of training data.
Fig 3 illustrates the results for selected terms of different types and their nearest neighbors. The cosine similarities vary between the terms. For example, the similarity value between "breast neoplasms" and "ovarian neoplasms" increased slightly: 0.851 (4M), 0.855 (8M), 0.861 (12M), 0.862 (~16M). Breast neoplasm is one of the most frequently diagnosed neoplasms reported in the biomedical literature, and many studies have reported similarities between breast and ovarian cancer since they share similar mutations (tumor suppressors). On the other hand, the similarity between "brca1" and "brca2" is almost the same in the four embeddings (0.898, 0.893, 0.891, 0.898 for 4M, 8M, 12M, and ~16M, respectively) and very high compared to the other nearest neighbors of "brca1". BRCA1 and BRCA2 are the genes most commonly described in the literature whose mutations lead to an increased risk of breast and ovarian neoplasms. Similarly, for "schizophrenia" and "bipolar_disorder", the similarity changed only slightly (0.822, 0.825, 0.828, 0.829 for 4M, 8M, 12M, and ~16M, respectively); an overlap between schizophrenia and bipolar disorder has been commonly reported in the literature. In contrast, the similarity between "eczema" and "atopy" behaved differently: it decreased from 0.713 in the embedding with the corpus of size 4M to 0.669 with 8M, and then increased again to 0.691 (12M) and 0.701 (~16M), while remaining lower than in the embedding trained on the smallest corpus. Overall, we observed that the nearest neighbors of the selected terms were assigned similar similarity values and rankings, varying only slightly across the four embeddings for the majority of the selected terms. However, for common terms such as breast neoplasms, BRCA1, and schizophrenia and their nearest neighbors, with which they tend to appear more frequently in the biomedical literature, the similarity was more robust.
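The per-corpus comparisons above reduce to a few lines with gensim; the model file names below are hypothetical:

```python
# Compare a term's similarity to a fixed neighbor across the four
# embeddings trained on corpora of different sizes.
from gensim.models import KeyedVectors

for tag in ["4M", "8M", "12M", "16M"]:
    kv = KeyedVectors.load(f"embedding_{tag}.kv")   # hypothetical file names
    sim = kv.similarity("breast_neoplasms", "ovarian_neoplasms")
    top = [w for w, _ in kv.most_similar("brca1", topn=10)]
    print(tag, round(float(sim), 3), top[:3])
```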
Discussion In this paper, we focused on demonstrating the utility of word embedding-derived knowledge in uncovering valuable biological relationships and its application in machine learning tasks. While the text corpus used in this study consisted entirely of PubMed abstracts, our approach can be applied to full scientific articles or other types of natural language text. Our word embedding generation differs from the usual applications by incorporating an uncommon pre-processing step in addition to the conventional techniques: the substitution of synonymous terms of biological entities by their preferred terms. The substitution was applied to gene, disease, and drug terms using existing knowledge in biomedical databases. By this procedure, we aimed to capture all contexts of the same concepts present as synonyms and to boost their neighborhood. Thereby, the model treats the different terms as one term and generates only one numerical vector. Such a technique affects the neighborhood of a word by normalizing the different variants of a term, mapping them to a single form. Such normalization therefore helps to reduce variability and, in some cases, ambiguity. We performed a computational analysis to validate similarities between biomedical entities, namely genes, diseases, and drugs, using existing knowledge in biomedical databases. Comparisons showed that relations between known PPIs, common pathways and cellular functions, or narrower disease ontology groups correlated with higher vector cosine similarity. Gene pairs with known PPIs in Reactome generally showed higher cosine similarities; gene embeddings seem to be rich in semantic information about gene function. On the other hand, gene pairs that show high cosine similarity without a known interaction in Reactome could motivate new investigations to uncover hidden functional relationships. Besides, gene pairs sharing common pathways in Reactome and TRANSPATH®, as well as common biological processes in GO, showed increased cosine similarities compared to the median of randomly sampled gene pairs. Similarities also increased with smaller group sizes, which more likely represent direct molecular interactions. Disease pairs also showed increased cosine similarities within smaller Human Disease Ontology (HDO) groups, e.g., ≤ 20 diseases, which probably represent more specific disease classes. However, judged against the median similarity of random disease pairs, disease embeddings did not correspond well to the HDO. It would be interesting to investigate why semantic relations between diseases differ from the HDO; the discrepancy suggests that the embedding could harbor new insights for disease ontologists. Moreover, in comparison with word2vec, we assessed BioBERT performance using the same resources for PPIs, common pathways and disease-disease relations. Similarities between biological entities did not show an evident agreement with biomedical knowledge, even though median similarities were higher than with word2vec. The reason for this might be that these similarities correspond to other types of relationships or to contexts different from those captured with word2vec. The corpus size effect assessment showed that similarities between selected terms were not substantially affected by the corpus size. In general, we noticed that the first nearest neighbor of most terms was not strongly influenced by the corpus size, even though similarities did not change proportionally with the corpus size. The highest and strongest similarities were observed between "breast_neoplasms" and "ovarian_neoplasms" as well as between "brca1" and "brca2". This might be explained by the fact that these terms are very common in the literature and the words in each pair tend to occur more frequently in the same context. This could validate the ability to extract meaningful functional relationships between biomedical terms. Additionally, to demonstrate the utility of the embedding in machine learning tasks, we assumed that similarities between biological entities might help create networks of a specific type. The results of the Graph-CNN on the breast cancer dataset showed that the weighted and unweighted Embedding_net_v1 increased the classification performance for predicting breast cancer metastatic events, achieving the highest F1 score, AUC, and accuracy. This demonstrates the biological utility of the embedding as prior knowledge for prediction of a metastatic event in breast cancer. Moreover, the change of the similarity threshold of edge weights from 0.63 to 0.65 led to an increase in performance; this could be due to the fact that "weak genes" contained in the network were filtered out. Besides, random-network-based models demonstrated lower performance, although it remains to be investigated how simulated networks with different degree distributions would influence the classification error rate. It was also shown that the model trained with the Embedding_net_v1 network performed better than with Embedding_net_v2.
The former was produced from the embedding in which synonymous terms were replaced by their main terms. Such a procedure has clearly influenced the embedding information and, in particular, the semantic relations between terms. For example, considering the gene WNT4 and its nearest neighbor WNT7A, the cosine similarity between them increased from 0.798 in Embedding_v2 to 0.811 in Embedding_v1. Although the similarity increased only slightly, this change moved WNT7A from being the third nearest neighbor of WNT4 to being its first. As another example, the top nearest neighbors of the TP73 (Tumor Protein P73) gene included the following terms in Embedding_v2: 'cdkn2c, cdkn2b, dapk1, rassf1, lzts1, dlec1'. This neighborhood list changed completely in Embedding_v1 to: 'tp53, np73, tap63, mdm2, deltanp73, ing1'. The latter list includes more reasonable neighbors in terms of gene partners: according to the STRING database, tp53 and mdm2 are among the top genes that have functional links with tp73 based on evidence from experiments, curated databases and text-mining. This change was enhanced by replacing p73 with tp73. Moreover, for 'schizophrenia', some of the nearest neighbors were normalized, which affected the similarities computed for the variant and standardized forms between the two embedding versions. The neighbors 'mood disorder' and 'affective disorder' were replaced with 'mood disorders' in Embedding_v1, which increased the similarity from 0.746 between 'schizophrenia' and 'mood disorder' and 0.728 between 'schizophrenia' and 'affective disorder' to 0.776 between 'schizophrenia' and 'mood disorders'. Similarly, the replacement of 'asthma' with 'autistic disorder' increased the similarity from 0.686 between 'schizophrenia' and 'asthma' in Embedding_v2 to 0.713 between 'schizophrenia' and 'autistic disorder' in Embedding_v1. Additionally, we investigated the influence of the Embedding_v1 network on Graph-CNN performance using four data sets, classifying normal vs liver, lung or prostate tumor tissue as well as predicting FOLFOX therapy sensitivity of colorectal cancers. The difference in performance between Embedding_v1, the HPRD PPI, and Embedding_v1 with permuted vertices was not substantial over the aforementioned datasets. We assume that "small world" properties could be a reason for this: the majority of vertex pairs can be connected through a path of no more than 7 hops, and 7-hop neighborhoods were used by the convolutional filters of all the Graph-CNNs we trained. Also, for the lung and liver cancers, the fully random network demonstrated similar performance. We hypothesize that in the case of these datasets, the Graph-CNN was able to pick up the patterns necessary for classification regardless of the vertex connectivity or network information. A biological reason could stand behind this phenomenon: for instance, the lung and liver cancer data sets were more heterogeneous, and expression correlations between genes may not have coincided well with the provided network topologies. We also noticed that the performance with the network with permuted vertices was always comparable. Only in the prostate and colorectal cancer datasets did a random network with random weights worsen the classification performance. This is worth investigating further. Furthermore, predictions of the Graph-CNN applied to the same gene expression data used in this study with the HPRD PPI were explained in a recent study, providing patient-specific subnetworks [28].
An interesting research question brought up by this study is whether patient-specific subnetwork genes predicted using an embedding-based gene-gene network would give different insights into the tumor biology of a patient than those predicted using PPI networks. This might also provide biological insights into the molecular interactions in the network and help to validate the biological utility of the embedding-based network information. Broadly translated, our findings indicate that the performance obtained by the Graph-CNN is sufficiently good to judge the utility of the word2vec embedding in creating gene-gene networks for machine learning tasks. Since the results obtained with word2vec-embedding-based networks are comparable with other networks, this provides a clear proof of our concept. Furthermore, the influence of embedding-based networks can also be examined by considering text-mining-based networks other than STRING and BioBERT. One could also derive networks from BioBERT using different hidden layers. For our BioBERT-derived network, the vectors of words were extracted from the last hidden layer. The BERT authors extracted vector combinations from different layers and tested word-embedding strategies by feeding those vector combinations to a BiLSTM (bidirectional long short-term memory) network used on a named entity recognition task, observing the resulting F1 scores [20]. The concatenation of the last four layers produced the best results on this specific task. It is generally advisable to test different combinations in a particular application since results may vary.
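For reference, extracting BioBERT token vectors and the last-four-layer concatenation discussed above can be sketched with the HuggingFace transformers library; the checkpoint identifier is an assumption (any BioBERT checkpoint would serve), and this is not the authors' original TensorFlow-to-PyTorch conversion path:

```python
# Extract contextual token vectors from BioBERT, comparing the last
# hidden layer with the concatenation of the last four hidden layers.
import torch
from transformers import AutoModel, AutoTokenizer

name = "dmis-lab/biobert-base-cased-v1.1"   # assumed checkpoint id
tok = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name, output_hidden_states=True)

inputs = tok("brca1 mutations increase breast cancer risk", return_tensors="pt")
with torch.no_grad():
    out = model(**inputs)

last_layer = out.hidden_states[-1]                   # (1, tokens, 768)
concat4 = torch.cat(out.hidden_states[-4:], dim=-1)  # (1, tokens, 3072)
print(last_layer.shape, concat4.shape)
```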
Using word2vec-based embeddings to create biological networks is advantageous compared to other network resources due to its straightforward application. Databases that maintain manually curated PPI data need to be kept up-to-date, which is an expensive task. As the biomedical literature is a primary resource for extracting PPI data, it is useful that text-mining-based methods can easily be applied to this task. BioBERT and other similar methods require extensive computational power and resources to achieve good performance, whereas word2vec is easy to handle and computationally inexpensive. It is able to achieve good performance without necessarily using large input data. As we showed in our assessment in Section 3.3, similarities between common biomedical terms were not much affected by a smaller input corpus, demonstrating that similarities of real biological relationships are robust even with smaller input corpora. Although our examination was based only on gene-gene relations, it can, however, be extended to cover other types of relations. This utility of the biomedical embedding can be an advantage for creating networks of other entity types, such as disease or drug networks. We have also developed a web service based on this work to explore the biomedical concepts present in our generated embeddings. It can be accessed at https://ebiomecon.genexplain.com/. The service facilitates access to the embedding information and provides functions to explore similarities of biomedical concepts, including the possibility to extract network vertices; our embedding networks were extracted in this form. The implementation and the functionality of the web service will be described in more depth in a separate publication (manuscript in preparation). Conclusion In this study, we leveraged a state-of-the-art text-mining tool to learn the semantics of biomedical concepts present in the biomedical literature and to demonstrate the utility of the learned embeddings for biomedical research and analysis. Our learned embeddings were validated in computational analyses addressing protein-protein interactions, signaling pathways and biological processes, drug targets, and human diseases, and showed agreement with existing biological knowledge. The results demonstrated that vector representations of biomedical concepts as produced by word2vec can capture meaningful biological relationships between biomedical entities. We also showed that semantic relations extracted from the vast literature can be applied as prior knowledge in machine learning tasks.
Interface instability modes in freezing colloidal suspensions: revealed from onset of planar instability The freezing of colloidal suspensions occurs widely in nature and industry. Interface instability has attracted much attention for understanding pattern formation in freezing colloidal suspensions. However, the interface instability modes, the origin of ice banding or ice lamellae, are still unclear. In-situ experimental observation of the onset of interface instability has remained absent up to now. Here, by directly imaging the initial transient stage of planar interface instability in directional freezing of colloidal suspensions, we propose three interface instability modes: Mullins-Sekerka instability, global split instability and local split instability. The intrinsic mechanism of the instability modes comes from the competition between the solute boundary layer and the particle boundary layer, which can only be revealed from the initial transient stage of planar instability in directional freezing. It is very important and urgent to figure out the interface instability mechanisms of freezing colloidal suspensions, as the freezing of colloidal suspensions attracts increasing attention in interdisciplinary research on porous ceramics 1-3, solidification 4-6, geocryology 7,8, etc. For example, directional freezing of aqueous suspensions, also called the ice-templating method, has been used to produce a variety of aligned porous structure materials with widespread applications, such as filtration, biomedical implants, catalytic carriers, fuel cells and microfluidics 9,10. The pore formation in the ice-templating method is determined by the ice growth during the freezing process, while the morphology and size of ice crystals greatly depend on the solid/liquid interface instability. As to the interface instability, a consensus based on constitutional undercooling and Mullins-Sekerka (MS) instability has been established 11. However, with a condensed particle layer in front of the interface of the freezing colloidal suspensions, the interface instability mechanism is encountering challenges. Ten years ago, by virtue of some fundamental knowledge of the constitutional supercooling of accumulated particles, morphological stability analysis of a planar interface, developed for the solidification of alloy systems, was also employed to understand the directional freezing of colloidal suspensions 12,13. However, we recently demonstrated that the interface undercooling in the freezing of colloidal suspensions mainly comes from solute constitutional supercooling rather than particle constitutional supercooling 14. This raises the question of what contribution the accumulated particles make to interface instability in the absence of particle-induced constitutional supercooling. The emerging research frontier of freezing colloidal suspensions thus provides a new challenge for the topic of interface instability. As the beginning of pattern formation, interface instability is a common phenomenon in various natural and industrial processes of self-organized patterning 15. In solidification, interface instability has been well analyzed based on the linear stability analysis of MS instability. However, in colloidal suspension systems, plenty of nano-particles accumulate in front of the solid/liquid interface as the solvent of the suspension transforms from liquid to solid.
It is still not clear when and how interface instability occurs in such a phase transformation with complex interactions between the particles and the solid/liquid interface. Over the past decade, there have been arguments and conjectures on the solute effects and particle effects on the pattern formation of freezing colloidal suspensions 12,16. However, in spite of these efforts, the origin of the interface instability, one of the most important issues, was ignored. Investigation of the initial interface instability 17-20 provides abundant information and can hence solve the puzzles of interface instability of colloidal suspensions. It should also be noted that the ice banding phenomenon 21,22 has seldom been mentioned in previous investigations of interface instability during the freezing of colloidal suspensions. Ice banding is a common phenomenon in the freezing of soil and has been reproduced in the laboratory. However, the forming mechanisms of ice banding have been investigated without considering the interface instability. The relationship between ice banding and interface instability also needs to be clarified. In this letter, we reveal the mechanisms of interface instability in freezing colloidal suspensions by focusing on the onset of interface instability through a well-designed directional freezing experimental apparatus 23. Different interface instability morphologies were observed in situ. Three interface instability modes are proposed based on analyses of the establishing boundary layers of solutes and particles ahead of the interface. The intrinsic mechanism of the instability modes is also proved by well-designed experiments. The experiments were carried out in a high-precision directional solidification apparatus, which has been used to quantitatively measure the interface undercooling in the freezing of colloidal suspensions 23. The innovation of the apparatus is the in-situ comparison of the solid/liquid interfaces of colloidal suspensions and their supernatant. Here, we compared the dynamic evolution of the interfaces of the colloidal suspensions and their supernatant during the planar interface instability of directional solidification. Colloidal suspensions of α-alumina powder with mean diameter d = 50 nm (Wanjing New Material, Hangzhou, China, ≥99.95% purity, monodisperse) were prepared using HCl (hydrogen chloride) and deionized water. The stable dispersion of the alumina suspensions has been confirmed 21. The particles had a density of 3.97 g cm−3. Three systems with different initial volume fractions of particles, φ0 = 1.31%, 3.63%, 7.75% (wt = 5%, 13%, 25%), were designed to reveal the particle accumulation effects at different volume fractions. Before pulling, the system was homogenized for one hour. The interface morphologies and positions were recorded at one frame per second. The pulling speed V = 16 μm/s and the thermal gradient G = 7.23 K/cm were held constant only in the experiments revealing the instability modes. Onset of Interface Instability and Instability Modes With a pulling velocity larger than a critical instability velocity, the planar interface undergoes an instability process 20. We focus on the critical interface instability morphologies in different systems to reveal the interface instability modes in the freezing of colloidal suspensions. Figure 1 shows three typical onset morphologies of the instability of the planar interface in different systems.
The adjacent cells contain the colloidal suspension and its supernatant, respectively. The black dotted lines represent the initial solid/liquid interfaces after homogenization, moving with the samples. The entire processes from the very beginning of pulling to the onset of interface instability are shown in the supplemental videos (Movies S1-S3) for the different systems; Movies S1, S2 and S3 correspond to φ0 = 1.31%, 3.63% and 7.75%, respectively. For the supernatant, a fluctuation of small amplitude appears on the planar interface after an incubation time, as shown in Fig. 1. The amplitude grew rapidly to a finite level to form a cellular structure after the instability, as shown in Fig. S1 (Supplemental Material). The interface instability of the supernatant obeys the classical MS instability dynamics, which has been well predicted by time-dependent instability analysis 17,20. However, the instability processes of the colloidal suspensions differ greatly from the supernatant system and depend strongly on the particle volume fraction. The directional freezing of the colloidal suspension with small particle volume fraction undergoes a similar process to that of the supernatant, as shown in Fig. 1(a). The interface instability also starts from the fluctuation and then develops into a cellular structure. The visible fluctuation occurs almost at the same time as that in the supernatant cell, only a little earlier. In this system, the accumulated particle layer has only a slight impact on the incubation time of planar instability. However, as the volume fraction increases, the instability mode changes completely. As shown in Fig. 1(b), the cellular instability disappears. Instead, the accumulated particle layer is split and trapped in the ice at the onset of planar instability of the colloidal suspension. Moreover, this kind of instability happens much earlier than the cellular instability mode. The split comes from the local insertion of ice spears, indicated by the bright spots and the protrusion marked in Fig. 1(b). We call this "local split instability". The definition is more easily understood from the morphology of steady growth, as expounded in the supplemental material. As the volume fraction further increases, the accumulated particle layer is split into stripe bands at the beginning of the planar instability, as shown in Fig. 1(c). The instability mode is similar to the local split instability mode shown in Fig. 1(b), but here the split block is a stripe band joined with an ice lens. The spears penetrate the accumulated particle layer and then grow laterally to form the ice lens. We call this "global split instability". This kind of instability mode is almost independent of the MS instability of the supernatant: the accumulated particle boundary layer will split even when the planar interface of the supernatant is stable at a pulling velocity smaller than the critical one. The Origin of the Interface Instability The interface instability modes exhibit distinct characteristics, indicating different instability mechanisms. The intrinsic factors determining the interface instability need to be identified. As shown in the MS instability analysis, the interface instability is related to the solute boundary layer ahead of the planar interface 17. However, there are two types of boundary layers ahead of the planar interface in the freezing of colloidal suspensions, i.e., the solute boundary layer and the particle boundary layer.
Linear stability analysis has also been used to analyze the morphological stability with a particle boundary layer 24,25. Therefore, the establishment processes of these two kinds of boundary layers in the initial transient stage are very interesting and helpful for illustrating the interface instability behaviors. It is very difficult to directly observe the establishment of the solute and particle boundary layers. However, the dynamic establishment of the two boundary layers can be schematically presented based on the time-dependent analysis of interface migration and the precise preliminary experimental results of solute boundary layer establishment in directional solidification 26. For the one-dimensional free-boundary diffusion problem, the profiles of solute concentration and particle volume fraction along the system are schematically shown in Fig. 2(a,b), respectively. Figure 2(a) illustrates a well-known process in solidification 27. As the planar interface propagates, the rejected solutes and particles accumulate in front of the interface and diffuse into the liquid away from the interface. Accordingly, the solute and particle profiles across the system can be solved from the time-dependent diffusion equations and the boundary conditions at the solid/liquid interface. The time-dependent solute concentration and particle volume fraction in front of the solid/liquid interface are shown by connecting the interface concentrations at the liquid side, as shown by the black thick lines in Fig. 2(a,b). The solute concentration in front of the interface increases gradually from C0 to C0/k in the initial transient stage, where C0 is the initial solute concentration in the solvent of the suspension and k is the partition coefficient of the solutes. For the particles, the Stokes-Einstein diffusion constant Dp = kBT/(6πηr) is around 10−11 m2/s for the d = 50 nm particles used here at ambient temperature, where kB is the Boltzmann constant, r is the radius of the particles, η is the viscosity and T is the temperature. Compared with the solute diffusion constant, the particle diffusion constant is thus smaller by about two orders of magnitude. The particle concentration in front of the interface rapidly increases to the maximum volume fraction of particles φmax within a very short time ts, as shown in Fig. 2(b). Furthermore, the length scale of the equivalent particle diffusion layer can be ignored compared with that of the solute diffusion layer, indicating that the diffusion of particles is negligible 22. The boundary layers of the solute and the particles ahead of the interface are thus totally different. The solute boundary layer profile decays exponentially in the colloidal suspension, while the particle boundary layer profile shows a plateau at φmax and then rapidly decreases to the initial volume fraction φ0, as shown in Fig. 2. The discrepancies between these two profiles mainly come from the differences in diffusion constants. The establishing solute boundary layer has been well described in the time-dependent MS instability analysis 17, with a diffusion length l = Ds/VI, where Ds is the diffusivity of the solute and VI is the instantaneous interface velocity. The diffusion length is one of the main factors determining the MS instability. For the particle boundary layer, the width of the accumulated particle layer w(t) should also be responsible for the interface instability. The width of the particle layer depends on the instantaneous interface velocity and the initial volume fraction of the particles.
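A quick numeric check of these two scales (assuming the viscosity of water and a typical small-solute diffusivity, which the text does not specify) reproduces the roughly two-orders-of-magnitude gap and the solute diffusion length at the pulling speed used here:

```python
# Numeric check of the boundary-layer scales; particle size and pulling
# speed are taken from the text, viscosity and solute diffusivity assumed.
import math

k_B = 1.380649e-23          # Boltzmann constant, J/K
T = 298.0                   # ambient temperature, K
eta = 1.0e-3                # viscosity of water, Pa*s (assumed)
r = 25e-9                   # radius of the d = 50 nm particles, m

D_p = k_B * T / (6 * math.pi * eta * r)   # Stokes-Einstein
print(f"particle diffusion constant: {D_p:.1e} m^2/s")   # ~9e-12, i.e. ~1e-11

D_s = 1.0e-9                # assumed solute diffusivity, m^2/s
V = 16e-6                   # pulling speed, m/s
print(f"solute diffusion length D_s/V: {D_s / V * 1e6:.0f} um")  # ~62 um
```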
The solute boundary layer will induce MS instability if there is a constitutional undercooling region in front of the solid/liquid interface. For the particle boundary layer, the thermodynamic constitutional undercooling is much smaller compared with the positive thermal gradient. However, the steadily increasing particle layer width ahead of the planar interface will prevent the migration of the planar interface by holding back the interstitial flow of water, and a new ice lens may form 28, indicating instability of the interface. This case corresponds to the split instability. Considering the accumulated boundary layers of solute and particles, the three interface instability modes observed in Fig. 1 can be well understood. 1) The cellular instability is determined by the MS instability from the accumulated solute boundary layer. After interface instability, particles are submerged into the intercellular spaces and the particle layer then keeps a limited width. The onset of the initial interface instability of the colloidal suspension synchronizes with the interface instability of the supernatant, where the solute effect is dominant compared with the effect of particles. 2) The global split instability mode is mainly determined by the accumulated particle boundary layer of the large-volume-fraction system and is related to the formation mechanism of ice lenses. The whole interface is entrapped before the MS instability of the solute boundary layer. In the global split instability, the particle boundary layer is dominant. 3) When the effects of the particle boundary layer and the solute boundary layer are at the same level, the local split instability mode occurs. The variation of interface position in the transient stage is the key quantity reflecting the establishment of the diffusion boundary layer 17,20 and can be used to reveal the interface instability modes in freezing colloidal suspensions. The evolution of interface position with time during the transient stage is shown in Fig. 3, in which the interface instability moments of the supernatant and the colloidal suspensions are marked on the curves. Before interface instability, the movements of the interfaces in the two adjacent cells are the same for all three systems with different volume fractions of particles, as shown in Fig. 3. Although there is a dense particle layer in front of the interface, the migration of the interface position is almost the same as that of the supernatant, indicating that the dense particle layer with a finite width in front of the interface has little impact on the interface evolution. Instead, the accumulated solute boundary layer, in both the supernatant and the colloidal suspensions, determines the interface migration behavior before interface instability. The interface undercooling from solute constitutional redistribution mainly determines the interface migration before interface instability. The comparison of the interface positions in the colloidal suspensions and the supernatant also proves the absence of particle-induced interface undercooling 14. After interface instability, the interface evolves differently in the different instability modes. For the MS instability and local split modes, the undercooling of the cellular tip is constant. However, for the global split mode, the solid/liquid interface undercooling oscillates.
Figure 2. Sketches of the time-dependent solute concentration (a) and particle volume fraction (b) in front of the solid/liquid interface (black thick lines). The red thin lines are the profiles of the solute (a) and particle (b) boundary layers for a given time. ts is the time to reach the steady state. C0 is the initial solute concentration, Ci is the liquid solute concentration in front of the solid/liquid interface, zi is the interface position, and k is the partition coefficient. φ0 is the initial particle volume fraction, φi is the particle volume fraction in front of the solid/liquid interface, and φmax is the maximum particle volume fraction in front of the solid/liquid interface. w(t) is the width of the particle layer. The Parameters Determining the Interface Instability Modes The selection of the instability modes depends greatly on the competition between the effects of the solute in the supernatant and the condensed particle layer in front of the interface. More details of the intrinsic mechanism of the instability modes are presented in this section based on experimental evidence: the absence of particle effects on the cellular instability, the pulling-velocity-dependent interface instability, and the variation of interface instability with additive. Although the particle effects on the interface instability in freezing colloidal suspensions are controversial, the absence of particle effects on cellular instability can be clarified. In the pattern formation of directional solidification, it is accepted that the MS instability causes the cellular morphology. The linear stability analysis of particle-induced constitutional undercooling is a replication of the MS instability. However, the particle-induced constitutional undercooling is smaller than 10−5 K, and there is almost no particle diffusion boundary layer in front of the solid/liquid planar interface. Moreover, previous experimental results 29 indicated the absence of a cellular structure in cases with minor additive content, while the cellular structure appears as the additive content increases. Therefore, the particles in the suspensions alone cannot induce the cellular instability, but cause ice lenses. The MS instability only happens at a velocity larger than the critical value in directional solidification 27. Based on this, we designed experiments with different pulling velocities. The steady interface morphologies at different pulling velocities are shown in Fig. 4. At a smaller velocity of 3.5 μm/s, the planar interface is stable in the supernatant system, while the local split instability occurs in the colloidal suspension system, as shown in Fig. 4(a). When the pulling velocity is larger than the critical one, the interface propagates with cellular morphology in the supernatant system, and the locally split particle clusters become smaller and smaller in the colloidal suspension system, as shown in Fig. 4(b,c). The experimental results indicate that the increasing solute effects gradually change the local split morphologies and may induce cellular structures as the pulling velocity further increases. As the solute effects can greatly enhance the planar instability, increasing the content of additives will change the interface morphologies. We found recent experiments regarding the effects of additives 29: Delattre et al. confirmed the transition from the local split mode to the cellular instability mode with increasing additive content using in situ X-ray imaging.
They attributed the transition to the variation of viscosity and other factors related to absorption effects at the particle interface. However, the transition of interface morphologies may instead come from the increased solute effects on the interface instability. Following this understanding of the competition between solute effects and particle effects, we added NaCl to our system. The addition of NaCl barely changes the viscosity of the system. Moreover, NaCl is a small inorganic molecular compound, neither a binder nor a dispersant. The experimental results are shown in Fig. 5: the global split instability mode in Fig. 5(a) is completely changed into the cellular instability mode by only a small amount of NaCl (1 wt% of the particle amount). Accordingly, we are convinced that the transition from ice banding to cellular structure comes from the increase of solute effects on the interface instability. Although we have identified some important factors determining the interface instability modes based on preliminary experimental results, there are many more factors affecting the MS instability and the global split instability, including intrinsic physical parameters and control parameters of freezing. In general, for the MS instability, these include the solute diffusion constant, solute partition coefficient, slope of the liquidus line, surface tension, initial solute concentration, thermal gradient, pulling velocity, etc. 11. For the global split instability, they include the particle diffusion constant, particle partition coefficient, particle size distribution, particle shape, initial particle volume fraction, viscosity, thermal gradient, pulling velocity, etc. The interface instability mode can be controlled by adjusting these control factors or by selecting materials. Although the problem is very complex, some clues can be used to guide the design process. For example, the MS instability will win by reducing the initial particle concentration or increasing the solute concentration. On the other hand, the global split mode can win by reducing the solute concentration in the solvent of the suspension or increasing the particle volume fraction. Conclusions The planar interface instability in the directional solidification of colloidal suspensions has been investigated through in situ observation of the transient stage of the initial planar instability for the first time. The novel in situ comparison of the supernatant system and the colloidal suspensions reveals the mechanism of planar instability in freezing colloidal suspensions. Three different instability modes were reported: the cellular MS instability, the local split mode and the global split mode. The cellular instability forms the cellular or lamellar structure. The local split mode presents the cellular structure plus trapped clusters. The global split mode forms the band structure. During the instability process, the interface position evolution is determined by the solute redistribution ahead of the interface, and the planar interface loses its stability earlier in the local and global split modes than in the cellular instability mode. The designed experiments proved that the selection of the instability modes depends greatly on the competition between the effects of the solute in the supernatant and the condensed particle layer in front of the interface.
Comprehensive modeling of THz microscope with a subwavelength source

The sub-wavelength THz emission point on a nonlinear electro-optical crystal, used in broadband THz near-field emission microscopy, is computationally modeled as a radiating aperture of Gaussian intensity profile. This paper comprehensively studies the Gaussian aperture model in the THz near-field regime and validates the findings with dual-axis knife-edge experiments. Based on realistic parameter values, the model allows for THz beam characterisation in the near-field region for potential microscopy applications. An application example is demonstrated by scanning over a cyclic-olefin copolymer sample containing grooves placed sub-wavelengths apart. The nature of THz microscopy in the near-field is highly complex and traditionally based on experiments. The proposed validated numerical model therefore aids in the quantitative understanding of the performance parameters. Whilst in this paper we demonstrate the model on broadband electro-optical THz near-field emission microscopy, the model may apply without loss of generality to other types of THz near-field focused beam techniques. © 2011 Optical Society of America

OCIS codes: (110.6795) Terahertz imaging; (300.6495) Spectroscopy, terahertz; (110.0180) Microscopy; (180.4243) Near-field microscopy.

References and links
1. P. H. Siegel, “Terahertz technology in biology and medicine,” IEEE Trans. Microwave Theory Tech. 52, 2438 – 2447 (2004).
2. M. Tonouchi, “Cutting edge terahertz technology,” Nat. Photonics 1, 97 – 105 (2007).
3. T. Yuan, J. Xu, and X.-C. Zhang, “Development of terahertz wave microscopes,” Infrared Phys. Technol. 45, 417 – 425 (2004).
4. W. Withayachumnankul, G. M. Png, X. Yin, S. Atakaramians, I. Jones, H. Lin, B. S. Y. Ung, J. Balakrishnan, B. W.-H. Ng, B. Ferguson, S. P. Mickan, B. M. Fischer, and D. Abbott, “T-ray sensing and imaging,” Proceedings of the IEEE 95, 1528 – 1558 (2007).
5. S. Hunsche, M. Koch, I. Brener, and M. Nuss, “THz near-field imaging,” Opt. Commun. 150, 22 – 26 (1998).
6. O. Mitrofanov, I. Brener, R. Harel, J. Wynn, L. Pfeiffer, K. West, and J. Federici, “Terahertz near-field microscopy based on a collection mode detector,” Appl. Phys. Lett. 77, 3496 – 3498 (2000).
7. O. Mitrofanov, I. Brener, M. Wanke, R. Ruel, J. Wynn, A. Bruce, and J. Federici, “Near-field microscope probe for far infrared time domain measurements,” Appl. Phys. Lett. 77, 591 – 593 (2000).
8. S. Mair, B. Gompf, and M. Dressel, “Spatial and spectral behavior of the optical near field studied by a terahertz near-field spectrometer,” Appl. Phys. Lett. 84, 1219 – 1221 (2004).
9. Y. Kawano and K. Ishibashi, “An on-chip near-field terahertz probe and detector,” Nat. Photonics 2, 618 – 621 (2008).
10. N. van der Valk and P. Planken, “Electro-optic detection of subwavelength terahertz spot sizes in the near field of a metal tip,” Appl. Phys. Lett. 81, 1558 – 1560 (2002).
11. K. Wang, A. Barkan, and D. Mittleman, “Sub-wavelength resolution using apertureless terahertz near-field microscopy,” CLEO, CMP5 (2003).
12. H. T. Chen, R. Kersting, and G. C. Cho, “Terahertz imaging with nanometer resolution,” Appl. Phys. Lett. 83, 3009 – 3011 (2003).
13. T. Yuan, H. Park, J. Xu, H. Han, and X.-C. Zhang, “Field induced THz wave emission with nanometer resolution,” Proc. SPIE 5649, 1 – 8 (2005).
14. A. J. Huber, F.
Keilmann, J. Wittborn, J. Aizpurua, and R. Hillenbrand, “Terahertz near-field nanoscopy of mobile carriers in single semiconductor nanodevices,” Nano Letters 8, 3766 – 3770 (2008).
15. R. Kersting, F. Buersgens, G. Acuna, and G. Cho, “Terahertz near-field microscopy,” in Advances in Solid State Physics (Springer, Berlin/Heidelberg, 2008).
16. H. G. von Ribbeck, M. Brehm, D. W. van der Weide, S. Winnerl, O. Drachenko, M. Helm, and F. Keilmann, “Spectroscopic THz near-field microscope,” Opt. Express 16, 3430 – 3438 (2008).
17. M. Wächter, M. Nagel, and H. Kurz, “Tapered photoconductive terahertz field probe tip with subwavelength spatial resolution,” Appl. Phys. Lett. 95, 041112 (2009).
18. K. Wynne and D. Jaroszynski, “Superluminal terahertz pulses,” Opt. Lett. 24, 25 – 27 (1999).
19. Q. Chen, Z. Jiang, G. Xu, and X.-C. Zhang, “Near-field terahertz imaging with a dynamic aperture,” Opt. Lett. 25, 1122 – 1124 (2000).
20. T. Yuan, S. P. Mickan, J. Xu, D. Abbott, and X.-C. Zhang, “Towards an apertureless electro-optic T-ray microscope,” CLEO, 637 – 638 (2002).
21. T. Kiwa, M. Tonouchi, M. Yamashita, and K. Kawase, “Laser terahertz-emission microscope for inspecting electrical faults in integrated circuits,” Opt. Lett. 28, 2058 – 2060 (2003).
22. A. J. L. Adam, J. M. Brok, M. A. Seo, K. J. Ahn, D. S. Kim, J. H. Kang, Q. H. Park, M. Nagel, and P. C. Planken, “Advanced terahertz electric near-field measurements at sub-wavelength diameter metallic apertures,” Opt. Express 16, 7407 – 7417 (2008).
23. A. Bitzer and M. Walther, “Terahertz near-field imaging of metallic subwavelength holes and hole arrays,” Appl. Phys. Lett. 92, 231101 (2008).
24. R. Lecaque, S. Gresillon, and C. Boccara, “THz emission microscopy with sub-wavelength broadband source,” Opt. Express 16, 4731 – 4738 (2008).
25. T. Kiwa, Y. Kondo, Y. Minami, I. Kawayama, M. Tonouchi, and K. Tsukada, “Terahertz chemical microscope for label-free detection of protein complex,” Appl. Phys. Lett. 96, 211114 (2010).
26. H. Lin, C. Fumeaux, B. M. Fischer, and D. Abbott, “Modelling of sub-wavelength THz sources as gaussian apertures,” Opt. Express 18, 17672 – 17683 (2010).
27. J. Xu and X.-C. Zhang, “Optical rectification in an area with a diameter comparable to or smaller than the center wavelength of terahertz radiation,” Opt. Lett. 27, 1067 – 1069 (2002).
28. G. Dakovski, B. Kubera, and J. Shan, “Localized terahertz generation via optical rectification in ZnTe,” J. Opt. Soc. Am. B 22, 1667 – 1670 (2005).
29. Y. S. Lee, Principles of Terahertz Science and Technology (Springer, New York, USA, 2008).
30. B. E. A. Saleh and M. C. Teich, Fundamentals of Photonics (John Wiley & Sons, 1991).
31. M. I. Bakunov, S. B. Bodrov, and A. V. Maslov, “Temporal dynamics of optical-to-terahertz conversion in electro-optic crystal,” CLEO, JWA93 (2007).
32. P. Bonnet, X. Ferrieres, B. L. Michielsen, P. Klotz, and J. L. Roumiguières, “Finite-volume time domain method,” in Time Domain Electromagnetics, S. M. Rao, ed. (Academic Press, San Diego, CA, 1999).
33. D. Baumann, C. Fumeaux, C. Hafner, and E. P. Li, “A modular implementation of dispersive materials for time-domain simulations with application to gold nanospheres at optical frequencies,” Opt. Express 17, 15186 – 15200 (2009).
34. C. Fumeaux, D. Baumann, S. Atakaramians, and E. Li, “Considerations on paraxial Gaussian beam source conditions for time-domain full-wave simulations,” 25th Annual Review of Progress in Applied Computational Electromagnetics, 401 – 406 (2009).
35. A. Taflove and S. C.
Hagness, Computational Electrodynamics: The Finite-Difference Time-Domain Method (Artech House, 2005).
36. C. Fumeaux, D. Baumann, P. Leuchtmann, and R. Vahldieck, “A generalized local time-step scheme for efficient FVTD simulations in strongly inhomogeneous meshes,” IEEE Trans. Microwave Theory Tech. 52, 1067 – 1076 (2004).
37. C. Fumeaux, K. Sankaran, and R. Vahldieck, “Spherical perfectly matched absorber for finite-volume 3-D domain truncation,” IEEE Trans. Microwave Theory Tech. 55, 2773 – 2781 (2007).
38. B. M. Fischer, “Broadband THz Time-Domain Spectroscopy of Biomolecules,” Ph.D. Thesis, University of Freiburg (2005).
39. H. Lin, B. M. Fischer, and D. Abbott, “Comparative simulation study of ZnTe heating effects in focused THz radiation generation,” 35th International Conference on Infrared, Millimeter, and Terahertz Waves, 63 – 64 (2010).

Introduction

Terahertz imaging offers many attractive advantages over existing imaging modalities, especially in its ability to obtain spectroscopic information [1,2]. However, one of the major limitations of THz imaging is low spatial resolution, as determined by Rayleigh's criterion with a wavelength of 300 μm (at 1 THz). The general motivation behind an increased resolution is to distinguish objects in close proximity and to cater for smaller sample sizes. The resolution of THz far-field imaging is in the sub-millimeter range and therefore insufficient for potential future imaging applications such as biological cells (micron to sub-micron range) and microstructures in semiconductor devices (sub-micron to nanometer range). Extensive reviews on pulsed THz near-field imaging techniques have been published [3,4]. The techniques can in general be categorised as aperture, tip, and highly focused-beam approaches. Aperture-based approaches [5–9] require physical sub-wavelength apertures where the spatial resolution depends on the aperture size, but no longer on the wavelength. However, the sub-wavelength nature of the aperture places a limitation on the radiation throughput, leading to a deterioration in the Signal-to-Noise Ratio (SNR). The finite thickness of the aperture also exhibits waveguide effects that attenuate evanescent low-frequency components and make detection difficult. Metallic tip-based approaches [10–17] have to date demonstrated the highest resolution, down to the nanometer range. The approach, however, is complicated by the introduction of a near-field tip, and suffers from low output power. Furthermore, some of the presented techniques are only valid for semiconductor samples. Focused-beam techniques exploit the micron-sized far-infrared pump or probe beam spot for generating or detecting THz radiation, respectively, to achieve sub-wavelength resolution [18–25]. The approach is comparatively simpler and does not heavily rely on micro-fabrication technologies. These sub-wavelength THz sources have been investigated in the far-field regime as a radiating aperture of Gaussian profile [26], using semi-analytical techniques commonly applied at microwave frequencies. The results of that investigation helped in explaining the far-field experimental observations reported in [27,28]. The techniques presented there are, however, only applicable for far-field interactions and far-field detection, and are therefore unsuitable for interpreting
the effect in the more complicated near-field regime. Motivated by an increasing interest in the focused-beam THz near-field microscopy technique, the present paper provides a numerical approach to accurately characterize electro-optically generated THz radiation in the near-field. The numerical results based on a radiating Gaussian aperture source are validated experimentally, and a practical application of the model to extrapolate the THz beam spot and infer system resolution is also demonstrated.

Experimental near-field beam characterization

A THz hybrid setup that generates THz radiation by optical rectification and detects it via a Photoconductive Antenna (PCA) is used in our experiment, as shown in the setup in [26]. An average infrared (IR) pump power of 30 mW is used for THz radiation generation with a 1 mm thick ⟨110⟩ ZnTe crystal from Zomega Terahertz Corporation. An optical lens of 200 mm focal length is used to focus down the IR pump beam to a waist of 51 μm with a Rayleigh range of 10 mm. Taking into consideration the beam divergence in the crystal, the effective IR pump beam waist is expected to be slightly larger than the theoretical value. An approximate maximal SNR of 40 dB and a bandwidth of more than 2.5 THz are achieved [26]. A dual-axis (i.e. x- and y-axis) knife-edge profile measurement is conducted on the generated THz beam at a distance of 150 μm, as determined from the tip of the knife-edge to the crystal surface. The minimal possible near-field distance is constrained in this case by the razor blades' thickness. This distance is measured with a CCD camera in the actual experiment. A dual-axis knife-edge profile is necessary for a thorough THz beam characterization in the near-field because the radial symmetry of the polarized THz beam is broken at the air-crystal interface. Even though the appropriate crystal to knife-edge distance cannot be achieved simultaneously for both axes, a two-knife system as shown in Fig. 1 is realized to minimize experimental uncertainties. Sheffield steel razor blades are used as knife-edges.

Fig. 1. The pump laser beam is focused into a 1 mm thick ZnTe crystal by means of an optical lens. The emitted THz radiation, polarized parallel to the x-axis, is sliced along the x-axis and y-axis respectively by translating two sharp razor blades in the near-field region parallel to the crystal back surface.

Modeling of the THz knife-edge experiment

The following sections describe the electromagnetic modeling of the full knife-edge experiment, from electro-optical beam generation to detection. The model combines several techniques and requires simplifications to become tractable in terms of complexity and computational effort. Modeling the knife-edge experiment and comparing the results to experimental data provides a validation of the general simulation approach.

Sub-wavelength THz source

The generation of THz radiation by optical rectification in a nonlinear crystal is a square-law process [29]. The pump beam is typically a Gaussian IR beam, i.e. it
can be approximated as a solution to the paraxial wave equation, characterised by a Gaussian intensity distribution in any plane transverse to the direction of propagation [30]. The coherent generation of THz radiation in the ZnTe electro-optical crystal is then a source with a Gaussian profile distributed along the optical beam axis. The square-law relationship between the generated THz electric field and the optical electric field therefore implies a reduction factor of √2 in the radius of the THz source. In the present model, the THz source is approximated as the radiation from an aperture with a Gaussian intensity profile, located inside the crystal. This type of Gaussian aperture is similar to the one introduced in [26] for the investigation of the far-field THz radiation. This simplified approach could in principle be refined by using as source an aperture field distribution based on analytical modelling of the nonlinear effects, as presented in [31]. Alternatively, for thick crystals, a model comprising a series of aperture sources distributed along the z-axis could be applied to model distributed THz generation along the crystal.
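As a quick consistency check of the Gaussian beam parameters quoted in the experimental section, the following Python sketch evaluates the Rayleigh range of the IR pump beam and the √2-reduced waist of the THz source. The 800 nm pump wavelength is an assumption typical of Ti:sapphire-driven ZnTe rectification and is not stated explicitly above.

```python
import math

# Gaussian-beam bookkeeping for the pump and the derived THz source.
lam_pump = 800e-9          # IR pump wavelength [m] (assumption, not given above)
w0_pump  = 51e-6           # measured IR pump beam waist [m]

# Rayleigh range z_R = pi * w0^2 / lambda
z_R = math.pi * w0_pump**2 / lam_pump
print(f"Rayleigh range: {z_R*1e3:.1f} mm")      # ~10.2 mm, matching the quoted 10 mm

# Square-law (intensity-driven) THz generation narrows the source radius by sqrt(2)
w0_thz = w0_pump / math.sqrt(2)
print(f"THz source waist: {w0_thz*1e6:.0f} um") # ~36 um, as used in the model validation
```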
Full-wave near-field electromagnetic simulation

In the present study, the Gaussian aperture source is placed inside the crystal surface and the near-field wave propagation is simulated with an electromagnetic simulation tool based on the Finite-Volume Time-Domain (FVTD) method [32,33]. The application of in-house developed code is motivated by the possibility of implementing a sub-wavelength Gaussian aperture source as demonstrated in [34]. This flexibility in implementation is not readily available in most commercial software, even if other full-wave simulation methods such as the Finite-Difference Time-Domain method [35] would in principle be appropriate for the simulation. The FVTD model is shown in Fig. 2. The Gaussian aperture source plane is displaced inside the crystal, 300 μm away from the output surface. A metallic knife-edge is scanned through the beam in a series of simulations, mimicking the experimental beam characterization. The knife-edge has a thickness of 150 μm and is scanned parallel to the crystal surface at a fixed distance of 150 μm between its tip and the crystal surface. The blade tip is tapered at an approximate angle of 21°, as observed from CCD images, and the tip end is flattened to mimic a realistic finite tip width of 4 μm. A symmetry plane is introduced to halve the computational domain size. The nature of this symmetry plane depends on the knife-edge slice considered, i.e. scanning along the x- or y-axis in Fig. 1, which corresponds to the two orthogonal polarizations in the computational model. For the y-axis scan, the electric field is perpendicular to the symmetry plane, leading to a perfect electric conductor (PEC). Analogously, the symmetry plane is a perfect magnetic conductor (PMC) in the x-axis scan. The volume is discretized with an inhomogeneous tetrahedral mesh [36] to resolve the geometry near the fine tip, and to refine the discretization in the crystal, which has a relative permittivity ε_r of 9. The computational domain is subsequently truncated with a perfectly matched absorber [37]. The Gaussian sub-wavelength source is excited with a sine-modulated Gaussian pulse, which covers the frequency spectrum from 300 GHz to 2.5 THz. The computational cost is relatively heavy, considering the size of the computational domain in terms of wavelength and the fact that a simulation has to be performed sequentially for each position of the blade. Also, at every blade position, discrete Fourier transformations are performed on the fly during the time iteration of the FVTD simulation to obtain the equivalent frequency representations. Figures 3 and 4 illustrate the THz electric field amplitude for the x-axis and y-axis knife-edge with the tip positioned at the center of the beam, for selected low and high frequencies respectively. The images show a standing-wave pattern inside the crystal because of the reflection at the output surface. The dielectric interface breaks the radial symmetry of the beam, as the refraction is polarization-dependent. The figures further illustrate the polarization-dependent diffraction from the knife-edge tip.

Near-field to far-field transformation

The full-field simulation can only describe wave propagation and material interactions in a very limited volume close to the source. Therefore, a discrete implementation of the frequency-domain near-to-far-field transformation is performed in accordance with [35] to simulate the radiation pattern relevant for far-field detection. The surface where the equivalent currents are sampled (Huygens' surface) is chosen as a flat plane located after the knife-edge. Figure 5 shows the obtained amplitude radiation patterns at 0.8 and 2.4 THz respectively. The nature of the THz Time Domain Spectroscopy (TDS) detection setup constrains the angle of acceptance at the first parabolic mirror for the power guided to the detector. The green dashed lines on the radiation patterns therefore delimit this acceptance angle of 28°. Effects of aberration, diffraction and propagation losses through the rest of the optical detection path are assumed to be negligible.

Detection modeling

In our modeled knife-edge experiment, the power available for detection is not the total radiated power that passes the knife-edge, but only the power integrated in the mentioned acceptance angle from the crystal to the parabolic mirror about the axis of optical propagation. To mimic the THz detection by the antenna located in a typical TDS detection setup, the far-field power has to be integrated coherently in the acceptance angle, i.e. taking into account both amplitude and phase patterns.

Experimental results

Fourier transforms of the experimentally acquired THz time-domain waveforms are computed and the amplitude at each extracted frequency is plotted against the position of the knife-edge for the respective axes.
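Such knife-edge curves are line integrals of the transverse beam profile: for an ideal Gaussian beam of 1/e² radius w, the transmitted power follows a complementary-error-function edge, from whose slope a waist can be read off. A minimal sketch follows; the closed form is standard Gaussian-optics bookkeeping, not code from this paper, and the synthetic scan parameters are illustrative.

```python
import numpy as np
from scipy.special import erfc
from scipy.optimize import curve_fit

def knife_edge(x, P0, x0, w):
    """Transmitted power of a Gaussian beam (1/e^2 radius w) past a knife at x.
    Integrating the Gaussian intensity over the unblocked half-plane gives an
    erfc-shaped edge."""
    return 0.5 * P0 * erfc(np.sqrt(2.0) * (x - x0) / w)

# Synthetic demonstration: generate a noisy edge scan and recover the waist.
rng = np.random.default_rng(0)
x = np.linspace(-0.3e-3, 0.3e-3, 61)                  # knife positions [m]
data = knife_edge(x, 1.0, 0.0, 80e-6) + 0.01 * rng.standard_normal(x.size)

popt, _ = curve_fit(knife_edge, x, data, p0=(1.0, 0.0, 50e-6))
print(f"fitted waist: {popt[2]*1e6:.1f} um")          # ~80 um for this synthetic scan
```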
For the sake of simplicity, only the power spectrum at a given knife-edge location and the knife-edge profiles of selected frequency components are shown in Figs. 6 and 7 for the x- and y-axis respectively. As the experiment is conducted in a low-humidity atmosphere, the frequency components selected are unaffected by the water vapor absorption and noise that occur at high THz frequencies. A one-step running average over two points has been used to preserve data integrity. The y-axis knife-edge profile shows a stronger diffraction than the x-axis one, as imposed by the boundary conditions from having the knife-edge parallel to the THz electric field.

Model validation

In order to validate the developed Gaussian aperture numerical model, the experimental setup is simulated with the estimated and measured values for the experimental parameters, namely (i) the IR pump beam waist, (ii) the acceptance angle as determined from the crystal-parabolic distance at the parabolic mirror focal point, (iii) the knife to crystal surface distance, and (iv) the THz frequency of interest. The exact IR pump beam waist inside the crystal is estimated to be 51 μm from the optical setup. The Gaussian profile of the generated THz beam is proportional to the IR power, i.e. the waist will be reduced by a factor of √2, to 36 μm. In reality, THz radiation generation is distributed along the bulk crystal thickness, but in our model the distributed THz radiation generation is approximated as an aperture located at an effective distance (300 μm) within the crystal. The sensitivity of the system modeling to this parameter is probed by altering this distance to 100 μm, and negligible qualitative changes in the final results are observed. Figures 8 to 13 show the simulated and experimental x-axis and y-axis knife-edge profiles at selected frequencies. The simulation is normalized with respect to the full power, while the measurement is normalized for best visual fit, because the measurement does not clearly yield the full power. The simulated knife-edges closely match all the experimental results in terms of the integral function shape and slope. Qualitatively, diffraction effects in the simulated knife-edge profiles appear to coincide well with the measured results. Differences are attributed to the discrepancy between the modeled and physical shape of the blade tip. With the beam waist or aperture size held constant, it is noteworthy to observe the changes in the THz beam waist as the frequency increases. At low frequencies (0.35 THz), the knife-edge profiles have a small slope, implying a relatively large THz beam waist. This frequency range corresponds to the regime where the far-field radiation pattern resembles that of an obliquity factor in the far-field [26]. At higher frequencies (0.615 THz and 1.04 THz), the slopes begin to increase, consistently with the increase of the aperture size relative to the wavelength. The overall agreement between the modeled and the experimental knife-edge characterization validates the modeling procedure. The decrease in beam waist with increasing frequency is well described quantitatively by the simulations, while the diffraction effects, which are strongly dependent on the actual dimensions of the blade, are reproduced qualitatively.

THz microscopy application

The intent of near-field studies is to resolve small samples that are a sub-wavelength distance apart. To demonstrate an application of the computational model, the validated model is used to predict the THz beam profile of selected frequencies at a near-field distance of 50 μm from the crystal emitting surface, as shown in Fig. 14.
Vertical structures with decreasing sub-wavelength separation distances are simulated by a groove function, as illustrated in the left column of Fig. 15(a). Convolving the simulated THz beam profile (electric field distribution) with the groove function of decreasing separation distances yields the imaging capability. Note, however, that the convolution does not take into account the diffraction present in real scans. The system response to the convolution predicts the resolving of the grooves at 1.26 THz and above, as can be seen in the left column of Fig. 15(b)-(d). For experimental verification, a cyclic-olefin copolymer (TOPAS) sample comprising identical vertical grooves is fabricated and scanned with the THz near-field microscope at an identical distance of 50 μm from the crystal surface. The vertical structure of the grooves ensures that only scans along the x-axis are required. This minimizes the typically long scanning time associated with two-dimensional THz pixel-by-pixel imaging. Optical constants of TOPAS in the THz region as a function of frequency, as measured by THz-TDS, can be found in [38]. The grooves have a depth of 50 μm.

Conclusion

In this paper we have computationally modeled the sub-wavelength THz emission point on a nonlinear electro-optical crystal, used in broadband THz near-field emission microscopy, as a radiating aperture with a Gaussian intensity distribution. We have validated the model of the Gaussian aperture by inserting dual-axis experimental knife-edges in the near-field region, while taking into account the limitations of the THz radiation detection setup. An application example is demonstrated by scanning over a fabricated cyclic-olefin copolymer sample, embedded with grooves sub-wavelengths apart, and a resolution smaller than λ/2.3 is achieved, in accordance with the model. As a whole, the model aids in the quantitative understanding of the many trade-offs between parameters, such as optical beam waist, crystal-sample distance and frequency components, critical to this type of THz near-field microscope design. A micron-sized resolution for biological microscopy is achievable by means of a smaller aperture, realized by tighter focusing of the IR pump beam and a shorter detection setup distance. This in turn implies a smaller Rayleigh range and hence requires a crystal tens of microns thick. As the generated THz electric field is proportional to the crystal length, and prolonged exposure leads to heating effects [39], future work aims to incorporate the crystal length, and hence the generated THz power, into the numerical model. The work has introduced and validated numerical techniques for the 3D modeling of a focused-beam THz near-field setup with an electro-optical crystal, and can be extended without loss of generality to other focused-beam THz techniques.
Fig. 2. Schematic of the numerical FVTD model. The crystal surface is opened up in its center to reveal the source plane inside the EO crystal. The inset shows the surface skin triangulation and illustrates the refinement of the mesh near the blade tip and on the crystal surface.

Fig. 5. Normalized THz amplitude radiation pattern with the x-axis and y-axis knife-edge at the center of the beam for frequencies (a) 0.8 THz and (b) 2.4 THz. The green dashed lines highlight the acceptance angle of 28° from the crystal to the parabolic mirror, within which the THz radiation is measured.

Fig. 6. (a) The power spectrum of the THz waveforms acquired with an x-axis knife-edge scanned at a distance of 150 μm from the crystal. With each movement of the knife, the THz electric field becomes weaker until the THz radiation is entirely blocked by the knife. This can be seen at x = 0 mm, where the knife does not obstruct the THz beam, as opposed to x = 2 mm, where the THz radiation is totally blocked. (b) Selected frequency components are shown at different knife positions.

Fig. 8. (a) x-axis and (b) y-axis experimental and simulated knife-edge profiles of the THz radiation beam at 150 μm from the crystal backside at 0.35 THz.

Fig. 9. (a) x-axis and (b) y-axis experimental and simulated knife-edge profiles of the THz radiation beam at 150 μm from the crystal backside at 0.615 THz.

Fig. 10. (a) x-axis and (b) y-axis experimental and simulated knife-edge profiles of the THz radiation beam at 150 μm from the crystal backside at 1.04 THz.

Fig. 11. (a) x-axis and (b) y-axis experimental and simulated knife-edge profiles of the THz radiation beam at 150 μm from the crystal backside at 1.46 THz.

Fig. 12. (a) x-axis and (b) y-axis experimental and simulated knife-edge profiles of the THz radiation beam at 150 μm from the crystal backside at 2.1 THz.

Fig. 13. (a) x-axis and (b) y-axis experimental and simulated knife-edge profiles of the THz radiation beam at 150 μm from the crystal backside at 2.5 THz.

Fig. 14. Contour plot along the x- and y-axis of the THz beam profile at 50 μm away from the crystal surface at (a) 2 THz, (b) 1.26 THz and (c) 1 THz. The normalized z-component of the Poynting vector is represented.

Fig. 15. (a) (Left) Simulated sample structure comprising grooves separated by decreasing sub-wavelength distances. (Right) The TOPAS sample comprising vertical grooves (in white) separated by sub-wavelength distances of 300 μm, 200 μm, 150 μm and 100 μm. (b) (Left) Response from convolving the THz beam waist at 2 THz with the simulated sample. (Right) Experimental grayscale image of the magnitude at 2 THz; resolves all distances. (c) (Left) Response from convolving the THz beam waist at 1.26 THz with the simulated sample. (Right) Experimental grayscale image of the magnitude at 1.26 THz; resolves all distances. (d) (Left) Response from convolving the THz beam waist at 1 THz with the simulated sample. (Right) Experimental grayscale image of the magnitude at 1 THz; resolves only the 300 μm distance.
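A minimal sketch of the convolution-based resolution estimate used in the application section above: a Gaussian beam profile is convolved with a binary groove pattern, and a pair of grooves counts as resolved when a dip remains between them. The beam waists, groove width and contrast threshold below are placeholders chosen for illustration, not the simulated FVTD profiles.

```python
import numpy as np

def resolvable(separation, waist, depth_thresh=0.05):
    """Convolve a Gaussian beam (1/e^2 intensity radius `waist`) with two
    grooves a distance `separation` apart and test whether the mid-point
    response dips below the on-groove response by `depth_thresh` (5%).
    Diffraction is ignored, as in the paper's convolution estimate."""
    x = np.linspace(-1e-3, 1e-3, 4001)
    dx = x[1] - x[0]
    beam = np.exp(-2.0 * x**2 / waist**2)          # intensity profile
    grooves = ((np.abs(x - separation / 2) < 25e-6) |
               (np.abs(x + separation / 2) < 25e-6)).astype(float)
    resp = np.convolve(grooves, beam, mode="same") * dx
    mid = resp[np.argmin(np.abs(x))]               # response between grooves
    peak = resp[np.argmin(np.abs(x - separation / 2))]
    return (peak - mid) / peak > depth_thresh

# Illustrative waists (assumed) for the beam at 1 THz and 2 THz:
for f, w in [(1.0, 150e-6), (2.0, 80e-6)]:
    for sep in [300e-6, 200e-6, 100e-6]:
        print(f"{f} THz, {sep*1e6:.0f} um grooves: resolved={resolvable(sep, w)}")
```

With these placeholder waists, the 100 μm spacing is only resolved at the higher frequency, mirroring the qualitative trend of Fig. 15.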
A new species of chimaeriform (Chondrichthyes, Holocephali) from the uppermost Cretaceous of the López de Bertodano Formation, Isla Marambio (Seymour Island), Antarctica

Abstract We describe a new chimaeriform fish, Callorhinchus torresi sp. nov., from the uppermost Cretaceous (late Maastrichtian) of the López de Bertodano Formation, Isla Marambio (Seymour Island), Antarctica. The material is distinct from currently known fossil and extant species of the genus, whereas the outline of the tritors (abrasive surfaces of each dental plate) shows a morphology intermediate between earlier records from the Cenomanian of New Zealand and those from the Eocene of Isla Marambio. This suggests an evolutionary trend in tritor morphology in the lineage leading to modern callorhynchids during the Late Cretaceous-Palaeogene interval.

Introduction

Chondrichthyans from the Late Cretaceous of Isla Marambio have been known since the early 20th century. Woodward (1906) indicated the presence of large vertebral centra, doubtfully assigned to Ptychodus Agassiz, 1835, but Welton & Zinsmeister (1980) expressed doubt that they belonged to this taxon. Other reports of cartilaginous fishes from the Late Cretaceous of Isla Marambio indicate the presence of the genus Isurus Rafinesque, 1810 (Grande & Eastman 1986), later reassigned to Sphenodus Agassiz, 1843 by Richter & Ward (1991), and Notidanodon dentatus Woodward, 1886 (Cione & Medina 1987, Grande & Chatterjee 1987). In addition, sand-tiger sharks of the genus Odontaspis Agassiz, 1838 and cf. Odontaspis sp. were reported from the Maastrichtian of Isla Marambio (Martin & Crame 2006). Further records in Campanian beds of the James Ross Island indicate the presence of the genera Scapanorhynchus Woodward, 1889 and Paraorthacodus Glückman, 1957, as well as a chronostratigraphic extension for Chlamydoselachus thompsoni Richter & Ward, 1991, along with endemic synechodontiform sharks. Cretaceous callorhynchid remains recovered from Isla Marambio include a new species, Chimaera zangerli Stahl & Chatterjee, 1999, from the Maastrichtian López de Bertodano Formation, later extended to the Campanian of the James Ross Island (Kriwet et al. 2006). Later, Stahl & Chatterjee (2002) also recognized the presence of I. dolloi in the López de Bertodano Formation. The first occurrence of the genus Callorhinchus Lacépède, 1798 in Antarctica is known from a new species, Callorhinchus stahli Kriwet & Gaździcki, 2003, from the late Ypresian (Telm 2 stratigraphic unit sensu Sadler 1988) of the La Meseta Formation. Finally, Martin & Crame (2006) reported the first occurrence of the genus Callorhinchus in Maastrichtian beds of Isla Marambio. The present paper describes a new callorhynchid fish from the uppermost Cretaceous of Isla Marambio. The material was collected in January 2011 during fieldwork of the Chilean expedition supported by the Antarctic Ring Project (Anillo de Ciencia Antártica ACT-105, 2010-11, Conicyt - Chile). The particular configuration of the preserved tritors allows identification of a new species among callorhynchids, adding to the known diversity of holocephalans in the higher latitudes of the Weddellian Biogeographic Province (sensu Zinsmeister 1979) at the end of the Cretaceous.

Locality and geological setting

The samples were collected on Isla Marambio (Fig. 1), in the north-eastern part of the Antarctic Peninsula.
This island, together with Vega, James Ross and Snow islands, contains the most representative outcrops of sedimentary rocks of the James Ross Basin, deposited during the Late Cretaceous-Palaeogene. All the material collected from Isla Marambio was found in the upper levels of the López de Bertodano Formation. Based on Macellari (1988), the studied locality (64°16'11.4''S, 56°44'30.6''W) is included in the middle part of the Klb9 unit. The recovered specimens were found associated and had been slightly transported over the recent soil by snow and mud slides, and were found with additional scattered samples. The fossil-bearing levels comprise fine-to-medium sandstones intercalated with sandy siltstones. Erosion has exposed abundant concretionary nodules containing vertebrate and invertebrate remains. The hosting cross section reaches c. 30 m and is formed by a succession of sandy marls with thin intercalations of fine-to-medium carbonate-cemented sandstone, and a thin glauconitic marl bed near the base of the section (namely, the 11LB1 section, following our field notation, Fig. 2). Our stratigraphic section is equivalent to the middle part of the Klb9 unit of Macellari (1988). The succession includes frequent, associated remains of elasmosaurid plesiosaurs and mosasaurs, together with scarce and fragmentary neoselachian teeth and osteichthyan vertebrae. Fossil invertebrates are mostly represented by lytoceratid, kossmaticeratid and pachydiscid ammonoids (e.g. Anagaudryceras seymourense Macellari, 1986, Maorites densicostatus Kilian & Reboul, 1909, Pachydiscus riccardi Macellari, 1986), gastropods, serpulids, and bryozoans. In addition, two rock samples in the stratigraphic section contain some palynomorphs. The biostratigraphic framework is provided by the mentioned ammonoids, especially P. riccardi, which constrains the age of Callorhinchus torresi sp. nov. to the P. riccardi Zone of Macellari (1988), in the late Maastrichtian. Additional scattered dental plates of the studied taxon were collected in the same area, but their respective stratigraphic provenance could not be rigorously determined due to transport by erosion.

Materials and methods

The nomenclature used follows Kriwet & Gaździcki (2003). The material was collected in a small valley filled by recent mud at the bottom, with fresh outcrops of sedimentary rocks exposed on the flanks. Two plates (a left mandibular and an incomplete right palatine) were found directly on top of the sandstone outcrop and near the uppermost part of this valley, indicating minimal transport from the original fossil-bearing level. A third plate (a right mandibular) was recovered about 1 m downwards. Additionally, this plate is anatomically complementary and similar in size to the other two plates. Despite this, it cannot be confidently determined that all the material belongs to the same individual, because of slight differences in the wear patterns and in the position of the tritors. Additional scattered plates were collected in the same area. For taxonomic determination, three main criteria were considered: 1) Synapomorphies. Following Didier (1995), important synapomorphies can be recognized in the dental plates of holocephalans, related to the shape and size of the plates and, particularly, the morphology of the tritors, their relative position, and their number. These criteria are useful for determining genera and species in well-preserved samples and are valid for extant and fossil specimens. 2) Ontogenetic stage.
The continuous growth of dental plates in chimaeriforms can cause differences in tritor shape and position. In mandibular plates, the growth of the basal surface can cause a thickening of the portion between the symphysis and the anterior inner tritor, while the median and outer tritors become progressively separated. The collected samples are very similar in size and have very conservative shapes and distributions of the tritors, especially on the mandibular plates. The recovered palatines are slightly different in size, but also similar in the shape of the median tritor, which is markedly bifid in the anterior portion. The similar sizes and shapes of all the recovered plates suggest that they belonged to individuals of similar ontogenetic stages. 3) Wear pattern. The apical surface is the most worn, followed by the occlusal surfaces. Because of this, the anterior margin of mandibulars and palatines could appear variable in shape. The occlusal surfaces can display slight variations in the shape of each tritor as a consequence of the abrasive contact between mandibular, palatine and vomerine plates. Despite these considerations, the apical surfaces and median tritors of all the recovered mandibular plates are very similar in shape. The same seems to apply to the palatines but, since they are incomplete, these cannot be fully compared.

Diagnosis: As for the genus (Kriwet & Gaździcki 2003), considering only dental plates; mandibular tooth plate with a single central hypermineralized pad restricted to the distal part of the coronal surface, flanked by narrow tritors on the symphyseal and/or labial edges. Middle tritor of the palatine tooth bifid towards the labial margin, with the symphysial branch being the longest. Vomerine tooth plate quadrilateral, with a single middle tritorial pad.

Distribution: Albian and Cenomanian of Russia (Nessov & Averianov 1996); Cenomanian of New Zealand (Newton 1876); Santonian of Russia (Averianov 1997); Maastrichtian of Isla Marambio (Fig. 2), López de Bertodano Formation. (Figure caption fragment: modified from Kriwet & Gaździcki (2003, fig. 8) and Suárez et al. (2004, fig. 2); scale bar equals 10 mm.)

Derivation of name: The specific name honours Dr Teresa Torres (Universidad de Chile), director of the present research project, for more than twenty years of contribution to Antarctic palaeobotany, and her continuous support of new generations of Chilean palaeontologists, including some of the authors.

Diagnosis: Callorhinchus torresi can be distinguished from other species of the genus by the following unique characters: estimated adult size with large plates, having mandibulars with a rhomboidal outline; slender median tritors extended in the axial direction, slightly thickened at their posterior inner margin; anterior inner tritor reduced and located in an apical position; slender posterior inner tritor, covering about one third of the symphysial margin and extended anteriorly to immediately posterior to the vomerine facet; slender external tritor, broader in its anterior portion and covering about half of the post-occlusal margin; no accessory tritors present; palatine plates with a prominent, bifid anterior tritor, with a clearly larger symphysial branch and a deep embayment between the two branches.
Discussion

In summary, there are five known taxa of callorhynchids from Isla Marambio: a) Chimaera zangerli, Maastrichtian of the López de Bertodano Formation (Stahl & Chatterjee 1999); b) Callorhinchus stahli, early Eocene (late Ypresian) of the La Meseta Formation (Kriwet & Gaździcki 2003); c) Ischyodus dolloi, Maastrichtian of the López de Bertodano and late Eocene of the La Meseta formations (Ward & Grande 1991); d) Chimaera seymourensis, from the late Eocene of the La Meseta Formation (Ward & Grande 1991); and e) Callorhinchus sp., from the Maastrichtian of the López de Bertodano Formation (Martin & Crame 2006). Considering the similar size and shape of all the plates studied here (Table I), they probably come from adult or subadult individuals. The external tritor can be observed only in the holotype, which is the only collected mandibular preserving the outer margin, where this structure occurs. The posterior margin of all the recovered palatines is poorly preserved; it is best observed in the right palatine SGO.PV.22012d, where it is relatively straight and diagonally disposed. Despite the lack of better-preserved plates, the observed outlines of the median tritors in the mandibular plates and those of the anterior inner tritors in the palatines are different from those of all known callorhynchids. When comparing the specimens of C. torresi sp. nov. with known fossil and extant species of the genus (Fig. 4), the palatines of Cretaceous species tend to show a deep embayment between the two branches of the anterior inner tritor, which became progressively shallower and smaller from the Cenomanian to the Maastrichtian representatives and on to the condition observed in recent species. Apart from C. torresi, there are no records of mandibular plates of any Cretaceous species of the genus. Thus, C. torresi provides the only Cretaceous reference for discussing the evolution of this anatomical element. Palaeogene records include two species from the Eocene with preserved mandibulars, C. regulbiensis Gurr, 1962, and C. stahli. Both have a posterior portion of the medial tritor that is broader than in C. torresi, while the apical projection of the anterior portion is reduced compared to C. torresi. The same situation is observed in the Miocene species C. crassus Woodward & White, 1930, and it is particularly evident in the extant species C. milii Bory de Saint-Vincent, 1823 and C. callorhynchus, both with very reduced tritors and relatively small dental plates.

Records in other localities of the Weddellian Biogeographic Province

Late Cretaceous holocephalans have been reported in south-western South America since the 19th century. Philippi (1887, plate 55, fig. 5) described a dental plate from the Late Cretaceous of the Quiriquina Island, central Chile, tentatively referred by this author to the genus Chimaera L. 1758. The material is a mandibular plate figured with its anterior portion downwards. Nevertheless, it is possible to see a descending lamina in the symphysial margin and the absence of hypermineralized tritors, suggesting a closer affinity with the genus Edaphodon Buckland, 1838. Several other taxa were later mentioned from Late Cretaceous units of central Chile. Suárez et al. (2003) figured a vomerine plate referred to the genus Edaphodon, from early Maastrichtian beds exposed at Algarrobo (120 km west of Santiago), and mentioned the presence of the genus Chimaera in the same levels. Additionally, these authors indicated the presence of the genus Callorhinchus in late Maastrichtian beds of the Quiriquina Island.
The genus Ischyodus was reported from late Eocene-early Oligocene beds of southernmost Chile (Le Roux et al. 2010), and remains unreported in older units along the south-western margin of South America. The genus Callorhinchus was also reported from three different Chilean units of Eocene-early Pliocene age (Suárez et al. 2004, Le Roux et al. 2010) and also in the Miocene of Argentinian Patagonia (Woodward & White 1930, Kriwet & Gaździcki 2003). This genus is extant in the waters of southern South America. The genus Edaphodon was reported from Campanian-Danian levels of the Chatham Islands, New Zealand, with the endemic species Edaphodon kawai Consoli, 2006. All these reports indicate that chimaeriform fishes were widespread and abundant in the Weddellian Biogeographic Province during the Late Cretaceous-Palaeogene. Chimaeriform fishes have proven to be a persistent group subsequent to the Cretaceous-Paleogene event, having common genera with widespread distribution in the Weddellian Biogeographic Province and several endemic species from Antarctica and New Zealand. Like other marine vertebrates, chimaeriforms were later affected by major tectonic and oceanographic changes, such as the opening of the South Tasman Rise and the deepening of the Drake Passage, with the subsequent establishment of deep seaways (Lawver & Gahagan 2003), along with important climatic changes leading to gradual Antarctic cooling, which reduced the diversity of chondrichthyans in the higher latitudes of the Southern Hemisphere at the Eocene-Oligocene boundary (Cione et al. 2007). Callorhynchids were constrained to lower latitudes during the Neogene, being especially abundant in the Miocene-Pliocene of northern Chile (Suárez et al. 2004). Since the Miocene, the establishment of the Humboldt Current has influenced the distribution of callorhynchids, which today is exclusively restricted to shallow waters of the Southern Hemisphere (Stahl & Chatterjee 2002).

Conclusions

Callorhinchus torresi sp. nov. is the third fossil record of this genus from Isla Marambio, the second occurrence from Late Cretaceous levels of this locality, and the first species of chimaeriform fish identified in the late Maastrichtian of the López de Bertodano Formation. The studied material allows us to discount morphological variation due to different ontogenetic stages, while the unique outlines of the mandibular median tritors and the palatine anterior inner tritors allow us to distinguish it from all known fossil and extant species of the genus. The studied material adds to the record of Callorhinchus during the Cretaceous, suggesting an evolutionary trend in the lineage leading to modern representatives, from slender median tritors extended anteriorly in the mandibular plates to a broader posterior portion of the medial tritor with a reduced anterior apical projection in more recent species. Callorhinchus torresi also shows that, as in other Cretaceous species, there is a deep embayment between the branches of the anterior inner tritors in the palatines, confirming the notion that the smaller and shallower embayment of more recent species has evolved from this condition.
The quantum 1/2 BPS Wilson loop in ${\cal N}=4$ Chern-Simons-matter theories

In three dimensional ${\cal N}=4$ Chern-Simons-matter theories two independent fermionic Wilson loop operators can be defined, which preserve half of the supersymmetry charges and are cohomologically equivalent at the classical level. We compute their three-loop expectation value in a convenient color sector and prove that the degeneracy is uplifted by quantum corrections. We expand the matrix model prediction in the same regime and, by comparison, we conclude that the quantum 1/2 BPS Wilson loop is the average of the two operators. We provide an all-loop argument to support this claim at any order. As a by-product, we identify the localization result at three loops as a correction to the framing factor induced by matter interactions. Finally, we comment on the quantum properties of the non-1/2 BPS Wilson loop operator defined as the difference of the two fermionic ones.

Introduction

In this paper we continue the study of 1/4 and 1/2 BPS Wilson loops in ${\cal N}=4$ Chern-Simons (CS) theories with matter, initiated in [1]. These operators were defined in [2-6] and we review their construction in Section 2, along with a quick glimpse at the structure of the ${\cal N}=4$ CS models [7,8]. The interest in supersymmetric Wilson operators arises since they are amenable to an exact computation via localization, thus providing observables that interpolate from weak to strong coupling [9]. Their determination is usually highly constrained by supersymmetry invariance. For the class of theories under investigation, though, a classical analysis allows one to define two seemingly independent 1/2 BPS circular loops, and any arbitrary combination thereof naively provides a supersymmetric observable [3]. Such operators possess a coupling to fermions, encapsulated in a supermatrix structure, and are cohomologically equivalent to a combination of bosonic 1/4 BPS Wilson loops, in a fashion similar to the one that links 1/2 and 1/6 BPS operators [10] in the ABJ(M) models [11,12]. The expectation value of the 1/4 BPS operators can be computed via a matrix model average, which in turn allows for the exact computation of the 1/2 BPS circular Wilson loops if the aforementioned cohomological relation survives at the quantum level.

At strong coupling the dual string theory description differs from the weak-coupling picture outlined above. In particular, the brane configuration corresponding to the 1/2 BPS operator is expected to be unique, in contrast with the existence of a whole family of observables predicted by the field-theoretical analysis. In [3] a solution to this tension was proposed by suggesting that only one combination of operators should be exactly 1/2 BPS at the quantum level, that is, the classical degeneracy of Wilson loops should be uplifted by quantum corrections. If this is the case, the localization prediction turns out to be relevant only for such an exactly BPS operator. However, since it is based on the cohomological relations derived at the classical level, it does not shed any light on what the correct BPS combination should be. The question of Wilson loop degeneracy and the determination of the quantum 1/2 BPS operator can instead be answered through a perturbative evaluation of the expectation values of these operators. Such a study was initiated in [1], where a full-blown two-loop computation was performed, which did not find any uplift of the degeneracy, thus leaving the question open.
Providing a definite answer to this problem is the main purpose of this paper.

• In Section 3, using Feynman rules and power-counting arguments together with the definition of the two seemingly independent 1/2 BPS operators, we first prove that, as a consequence of the contour planarity, their perturbative expectation values coincide at any even loop order, while they are opposite at odd loops. As a consequence, a quantum uplift of the operators, if any, has to appear at odd orders. This explains why no degeneracy has been found so far: the operators are vanishing at one loop, therefore not allowing for any uplift, while their expectation values coincide at two loops on general grounds.

• We are then forced to perform a calculation at three loops, it being the first possible order where a non-vanishing and opposite contribution to the two operators may occur. A complete three-loop computation is of course daunting but, since we are just looking for a smoking gun of the quantum uplift of the degeneracy, it is sufficient to focus on a particular color sector where a limited number of non-vanishing diagrams appears. Precisely, we restrict to the sector including contributions proportional to the product of three different colors, $N_{A-1} N_A N_{A+1}$. We stress that this simplification has been made possible by the fact that we work with quiver theories with a different gauge group at each node.

• In Section 4 we first expand the matrix model at the desired perturbative order and in the selected color sector, in order to be able to compare it with the Feynman diagram computation. We find that at third order a non-vanishing, purely imaginary correction appears. Comparing it with a perturbative calculation done at non-vanishing framing, we prove that this contribution corresponds to a loop correction to the framing factor of the Wilson loop due to interacting matter [13]. Therefore, we expect no three-loop corrections to the expectation value of the actual 1/2 BPS operator when computed in ordinary perturbation theory at framing zero.

• In Section 5 we finally perform the three-loop perturbative evaluation of the Wilson loops in the aforementioned regime. We find that a non-vanishing correction indeed appears, which is opposite in sign for the two operators. This proves that the degeneracy of the operators is uplifted quantum mechanically at this order. Moreover, since from the matrix model expansion for the 1/2 BPS operator we expect a vanishing result, we conclude that the quantum supersymmetric Wilson loop is given by the average of the two operators, where the odd orders cancel out. We argue that this relation holds at all orders in perturbation theory.

Finally, it is interesting to note that the Wilson loop operator defined by the difference $(W_{\psi_1} - W_{\psi_2})$, although non-1/2 BPS, exhibits interesting quantum properties. In fact, thanks to the relations that hold at even and odd orders in the expansion of the two original Wilson loops, this operator has a real non-vanishing expectation value given by a purely odd perturbative series. Moreover, as comes out from our explicit calculation at three loops, it seems to feature lower transcendentality.

BPS Wilson loops in ${\cal N}=4$ CS-matter theories

We begin by reviewing the BPS Wilson loop (WL) operators for ${\cal N}=4$ CS-matter theories introduced in [2,3]. In analogy with the more famous example of the ABJ(M) models, one can introduce bosonic BPS WL that contain only couplings to scalars, and fermionic BPS WL that contain couplings to fermions as well.
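For orientation, it is worth recalling the general shape such operators take before turning to the node-by-node construction. In the ABJM-type conventions of [10-12], a bosonic BPS loop is, schematically, the holonomy of a gauge connection deformed by scalar bilinears,
$$
W_{\rm bos} \sim \frac{1}{N}\,{\rm Tr}\,{\cal P}\exp \oint_\Gamma d\tau \left( i A_\mu \dot x^\mu + \frac{2\pi}{k}\,|\dot x|\, M_J^{\ I} C_I \bar C^J \right),
$$
with $M_J^{\ I}$ a constant matrix fixed by the preserved supercharges. This is only a schematic reminder of the general structure; the specific ${\cal N}=4$ scalar couplings, involving $q^{(0)}$, $q^{(1)}$, $q^{(2)}$, are described below.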
The building blocks of these operators are defined "locally" for each quiver node A and contain matter fields that are at most linked to nodes A − 1 and A + 1. In order to simplify equations that would otherwise be cumbersome, without losing generality we will restrict to the specific case A = 1.

The bosonic 1/4 BPS WL

Following [2,3] we introduce the bosonic WL. Its matter couplings involve the scalars $q^{(1)}$ from the hypermultiplet connecting nodes 1 and 2 (solid line in Fig. 1), and the scalars $q^{(0)}$, $q^{(2)}$ from the adjacent twisted hypermultiplets (dashed lines in Fig. 1). The operator can be conveniently expressed in terms of the WL associated to nodes 1 and 2. When Γ is a maximal circle in $S^2$, the operator (2.2) preserves 1/4 of the supersymmetry charges. We will work in this case, with the path parametrized as in (2.6).

The fermionic 1/2 BPS WL

The addition of fermions leads to two inequivalent WL, depending on which $SU(2)$ component we consider [3]. The first operator, called the $\psi_1$-loop in [3], is defined in terms of the $\psi^{(1)}_1$ and $\bar\psi^{1}_{(1)}$ fermionic components. It is given as the generalized holonomy of a superconnection whose commuting spinors $c, \bar c$ are defined in (B.7). We will consider the case of Γ being the maximal circle (2.6), for which the operator is 1/2 BPS. An independent WL operator can be introduced that contains the $\psi^{(1)}_2$ and $\bar\psi^{2}_{(1)}$ fermionic $SU(2)$ components [3]. BPS invariance requires a slight modification of the bosonic couplings as well, so that the $\psi_2$-loop involves the commuting spinors $d, \bar d$ given in (B.14). Precisely, in addition to the replacement $\bar\psi^{1}_{(1)} \to \bar\psi^{2}_{(1)}$, this loop differs from the previous one by $\delta^J_I \to -\delta^J_I$ in the scalar couplings and by different fermion couplings (eq. (B.7) vs. (B.14)). Again, when Γ is a maximal circle this operator is 1/2 BPS.

Cohomological equivalence

As proved in [2,3], the classical fermionic 1/2 BPS loops are both cohomologically equivalent to the 1/4 BPS bosonic operator given in eq. (2.4): each differs from it by a Q-exact term, and the Q-terms are both proportional to the same supercharge. Therefore, more generally, any linear combination of the form (2.12), $\bar a_1 W_{\psi_1} + \bar a_2 W_{\psi_2}$, gives a 1/2 BPS WL that is cohomologically equivalent to the bosonic one. If the classical equivalence survives at the quantum level, one can use Q as the supercharge to localize the path integral that computes $W^+_{1/4}$ on $S^3$. As a consequence, the corresponding matrix model provides an all-order prediction not only for the bosonic $W^+_{1/4}$ but also for fermionic operators of the form (2.12), provided that they survive quantization as BPS operators. From the dual string description we know that at the quantum level only one 1/2 BPS WL should survive, the corresponding 1/2 BPS M2-brane configuration being unique. Therefore, we expect that the degeneracy (2.12) gets uplifted by quantum effects and only one particular combination with fixed $\bar a_1, \bar a_2$ will correspond to the exact quantum 1/2 BPS operator. For this operator the expectation value is given by the matrix model average (2.13), where the subscript f = 1 indicates that this is the matrix model result, therefore at framing one¹. The uplift mechanism that breaks the degeneracy at the quantum level is expected to be generated by field interactions that do not occur at the classical level. However, since localization actually provides the quantum-exact result for the bosonic 1/4 BPS operator, this mechanism cannot be understood for the fermionic ones within this approach.
The only possibility to disclose the degeneracy-breaking mechanism is to perform a perturbative calculation of the two fermionic WL and look for potential contributions that turn out to give a different result at some loop order. In fact, if at a given order in perturbation theory we find $\langle W_{\psi_1} \rangle \neq \langle W_{\psi_2} \rangle$, then comparison with the localization prediction (2.13) will provide a non-trivial equation that uniquely fixes the relative coefficient between $W_{\psi_1}$ and $W_{\psi_2}$, so leading to the correct quantum BPS fermionic operator. With this motivation in mind, we will go through the perturbative evaluation of $W_{\psi_1}$ and $W_{\psi_2}$ searching for potential differences, and match it with the weak coupling expansion of the matrix model result for $W^+_{1/4}$.

3 All-loop relation between $W_{\psi_1}$ and $W_{\psi_2}$

We approach the perturbative analysis by first deriving an all-loop identity between the $W_{\psi_1}$ and $W_{\psi_2}$ expectation values. In particular, we prove that, as a consequence of the planarity of the contour Γ in (2.6), at a given order L the two WL are related by
$$\langle W_{\psi_2} \rangle^{(L)} = (-1)^L \, \langle W_{\psi_1} \rangle^{(L)} \qquad (3.1)$$
Here L counts the power of the coupling 1/k. To prove this relation, as an intermediate step we introduce a third fermionic operator that is defined from $W_{\psi_1}$ by applying a $SU(2)_L \times SU(2)_R$ transformation that exchanges the R-symmetry indices $1 \leftrightarrow 2$, $\bar 1 \leftrightarrow \bar 2$. From the $W_{\psi_1}$ defining equations (2.8), we then obtain a new operator, which we denote $\widetilde W_{\psi_2}$, given by the holonomy of the correspondingly transformed superconnection, where the commuting spinors $c, \bar c$ are still given in (B.7). Since the action of the theory is invariant under the R-symmetry group, it is a matter of fact that computing perturbatively the expectation value of $\widetilde W_{\psi_2}$ we find $\langle \widetilde W_{\psi_2} \rangle^{(L)} = \langle W_{\psi_1} \rangle^{(L)}$ at any given order. The interesting observation is that $\widetilde W_{\psi_2}$ differs from $W_{\psi_2}$ simply by an overall sign change in the scalar couplings and the replacement of the spinors $c \to d$. Therefore, for a diagram containing $n_S$ scalar couplings from the WL expansion (see Fig. 2), the contribution to $W_{\psi_2}$ is obtained from that to $W_{\psi_1}$ by a factor $(-1)^{n_S}$ together with the replacement $c \to d$ in the spinor bilinears (eq. (3.4)). We now discuss what the effect of replacing c spinors with d ones is.

Figure 2. Sketchy structure of loop diagrams contributing to the term in the WL expansion with $n_A$ gauge fields, $n_F$ $(\psi, \bar\psi)$ pairs and $n_S$ scalar bilinears. The arguments of this Section are not sensitive to the order of the contour points.

Multiplying all the bilinears associated to a given diagram once reduced in this way, we end up with a linear combination of structures that contain powers of $(\bar c c)$ times powers of $(\bar c \gamma c)$. Let us call $n_\gamma$ the total number of $(\bar c \gamma c)$ bilinears. According to the identities in Appendix A, these bilinears may differ at most by an overall sign when we replace c with d spinors. Precisely, $(\bar c c) = (\bar d d)$, $(\bar c \gamma^{1,2} c) = -(\bar d \gamma^{1,2} d)$ and $(\bar c \gamma^3 c) = (\bar d \gamma^3 d)$. Therefore, the effect of the replacement $c \to d$ in (3.4) will be at most an overall sign, but it is important to count how many signs we get in a given diagram. If we perform all the Feynman integrals associated to internal vertices, before solving the contour integrals we obtain a function of the bilinears and of the external coordinates $x^\mu(\tau)$ and/or $\dot x^\mu(\tau)$. Moreover, the planarity of the contour (2.6) requires having an even number of epsilon tensors, which can then be traded for products of Kronecker deltas². It follows that the $n_\gamma$ $(\bar c \gamma c)$ structures end up being necessarily contracted either among themselves or with external points.

² An odd number would leave products of Kronecker deltas times one epsilon tensor that would eventually be contracted with external indices, so leading to a vanishing result at framing 0.
However, since structures of the form $(c\bar c)$ and $(c\gamma^{\nu}\bar c)(c\gamma_{\nu}\bar c)$ do not contribute with any sign, we can restrict the discussion to the set of $(c\gamma\bar c)$ bilinears contracted with external points. Once again, the planarity of the contour (2.6) implies that the final expression will contain only bilinears of the form $(c\gamma^{1,2}\bar c)$ which, according to the identities in Appendix A, contribute with a sign change under the replacement c → d. From this preliminary analysis we can conclude that a given diagram containing $n_S$ scalar couplings and proportional to $n_\gamma$ bilinears $(c\gamma\bar c)$ provides contributions to the expectation values of the two fermionic WLs that are related by the overall factor $(-1)^{n_S+n_\gamma}$, as in (3.7). Now, combining power-counting arguments with constraints coming from planarity, it can be proven that $(n_S + n_\gamma)$ has the same parity as the loop order L, or equivalently that $n_\gamma$ has the same parity as $L + n_S$. We leave the details of the proof of this statement to Appendix C. Using this result in (3.7) we finally obtain the initial claim (3.1), namely $\langle W_{\psi_2}\rangle\big|_L = (-1)^L\, \langle W_{\psi_1}\rangle\big|_L$. Using similar arguments, in Appendix C we also prove that all results derived perturbatively at trivial framing are real. The loop identity (3.7) implies that the expectation values of the two fermionic WLs are exactly the same at any even order L, while they are opposite in sign at odd orders. Therefore, if a quantum uplift occurs, it has to be searched for at odd orders. In Section 5 we perform a systematic investigation up to L = 3 and provide an explicit computation showing that this is the first odd order where non-vanishing (and then non-trivially opposite in sign) contributions arise.

4 The matrix model result for the 1/4 BPS Wilson loop

The evaluation of both the partition function and the 1/4 BPS Wilson loop for the necklace quiver theories described in Section 2 can be reduced to a matrix integral through localization techniques [14]. An integral representation for the former can be easily obtained by combining the basic building blocks given in [14]. We easily find the expression (4.1) [15], where we recognize the contribution of the classical action, $\prod_{B,i} e^{2ik_B\lambda_{Bi}^2}$, the one-loop fluctuations of the vector multiplets, $\prod_{i<j}\sinh^2(\lambda_{Bi}-\lambda_{Bj})$, and those of the hypermultiplets, $\prod_{i,j}\cosh(\lambda_{Bi}-\lambda_{B+1,j})$. The constant $\mathcal N$ is an overall normalization, whose explicit form is irrelevant in our analysis. To be consistent with the perturbative calculation we set $l_B = (-1)^B$. In this context the 1/4 BPS Wilson loop is given by the vacuum expectation value of the matrix observable in (4.2), where we have introduced the diagonal matrix $\Lambda_A \equiv \mathrm{diag}(\lambda_{A1}, \cdots, \lambda_{AN_A})$ for future convenience. In the r.h.s. of (4.2) we can actually neglect all the odd powers of $\Lambda_A$, since their expectation values vanish at all orders in 1/k due to the symmetry of the integrand in (4.1) under the parity transformation $\lambda_{Ai} \to -\lambda_{Ai}$. The first step in constructing the perturbative series of $W^{(A)}$ is to rescale all the eigenvalues $\lambda_{Ai}$ by $1/\sqrt{k}$ and expand the integrand in (4.1) for large k; the large-k expansion of the measure factor is given in (4.3). Since we shall write the final result as a combination of vacuum expectation values in the Gaussian matrix model, we have chosen to use the usual Vandermonde determinant as the reference measure. Order 1/k in the expansion (4.3) is governed by the combination $P_A$ defined in (4.4).
The next order is instead controlled by $Q_A$, whose expression can be naturally written as the sum of four different terms, eq. (4.5). In (4.5), $B_4(\Lambda_A)$ is a shorthand notation for the coefficient of $1/k^2$ in the expansion of the measure factor due to the vector multiplet living at node A. Instead, $C_4(\Lambda_A, \Lambda_{A+1})$ arises when we expand, at the same order, the contribution to the measure of the hypermultiplet connecting node A with node A + 1. Their explicit expressions are quite cumbersome, so we report them in Appendix D. The last two terms, containing $P_A$ and $(B_2, C_2)$ respectively, originate from lower-order terms when we take the product over different nodes. Finally, the explicit form of the $1/k^3$ term $S_A$ in (4.3) is irrelevant, since it does not affect the evaluation of the Wilson loop: its contribution cancels against the normalization provided by the partition function. With the help of the expansions (4.2) and (4.3), it is straightforward to write down the expectation value of the Wilson loop $W^{(B)}_{1/4}$ in terms of $P_A$ and $\Lambda_A$ up to order $1/k^3$; we find expression (4.6), where the subscript 0 in the expectation values indicates that the average is taken in the Gaussian matrix model. The evaluation of orders 1/k and $1/k^2$ was discussed in ref. [1] and we shall not repeat the analysis here. We simply recall that the final result coincides with the perturbative result for the 1/4 BPS Wilson loops dressed with a phase corresponding to framing one [1]. At this order, the combination (2.4) reads as in (4.8).

Range-three result at three loops

The next step is to analyze the structure of the $1/k^3$ contribution. An exhaustive evaluation of all the relevant contributions in (4.6) is quite tedious and cumbersome. However, as already mentioned, in order to investigate the uplift of the cohomological equivalence it is sufficient to focus our attention on terms proportional to a particular color structure. A convenient choice is to look at contributions which depend on three neighboring sites (A − 1, A, A + 1) (the range-three sector). They can arise only from the part not depending on $Q_A$ in the last sum in (4.6). In fact, the other terms in (4.6) vanish unless A = B − 1 or A = B, and thus they depend only on two nodes. Actually, most of the contributions present in the last sum in (4.6) face a similar fate, and we remain with a single putative three-node term, since the connected correlator can be different from zero only if either A = B − 1 or A = B. If we use the explicit expressions for $P_B$ and $P_{B-1}$, we can easily single out the only non-vanishing term which depends on three gauge groups. Specializing the result to sites A = 1, 2 and inserting it in the definition (2.4), we finally obtain (4.11). We note the appearance of imaginary contributions at odd orders. As we are going to discuss in the next subsection, they can be recognized as framing contributions.

Removing framing

In three-dimensional CS theories, expectation values of supersymmetric WLs computed via localization acquire imaginary contributions that have the interpretation of framing effects. This concept was originally introduced in pure CS theory in order to define a topologically invariant regularization for WLs [16]. Precisely, it consists in a point-splitting regularization procedure based on the requirement that in correlation functions of gauge connections different gauge vectors run on auxiliary contours $\Gamma_f$, infinitesimally displaced from the original one.
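The linking number entering this point-splitting prescription is computed by the standard Gauss integral (a textbook formula, quoted here for orientation; overall sign conventions may differ from the ones used in the paper):

\[
\chi(\Gamma, \Gamma_f) \;=\; \frac{1}{4\pi} \oint_{\Gamma} dx^{\mu} \oint_{\Gamma_f} dy^{\nu}\; \epsilon_{\mu\nu\rho}\, \frac{(x-y)^{\rho}}{|x-y|^{3}}\,.
\]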
As a consequence, WL expectation values depend on the linking number $\chi(\Gamma, \Gamma_f)$ between the framing path and the WL contour only through an overall phase factor that exponentiates a one-loop contribution [16], as in (4.12), where ρ is a framing-independent function of the coupling λ = N/k. The result above can be reproduced by localization for circular Wilson loops in N = 2 supersymmetric CS theory [14], where in order to preserve supersymmetry the framing contours are Hopf fibers and hence have linking number one. For CS theories coupled to matter, the identification of framing contributions in WL expectation values computed with localization, and of their perturbative origin, is less clear. This issue has been recently analyzed in [13] for the 1/6 BPS WL in the ABJ(M) model. There, it has been shown that starting from three loops matter interactions induce non-trivial perturbative corrections to the one-loop framing factor in (4.12), reproducing the localization prediction at third order. We now apply the procedure of [13] to the N = 4 CS-matter theory under investigation, to provide a perturbative explanation of the imaginary terms in the localization results (4.8) and (4.11) as coming from framing. In order to do so, we focus on the bosonic 1/4 BPS WL $W^+_{1/4}$, whose framing contributions are easier to understand perturbatively. The cohomological equivalence (2.11) then guarantees that the 1/2 BPS WL has the same expression at framing one. At one loop, framing originates from a gluon-exchange diagram (as in pure CS). Using the explicit expressions in Landau gauge (see eq. (A.13)) and taking into account that the $A^{(1)}$ and $A^{(2)}$ propagators differ by an overall sign, we obtain contributions in which the Gauss integral is indeed proportional to the linking number between the deformed contour $\Gamma_f$ and the original WL path Γ. Combining these results for A = 1, 2 according to (2.4) and setting $\chi(\Gamma, \Gamma_f) = -1$ (framing 1 in our conventions), we reproduce exactly the one-loop framing contribution in the result (4.8). At two loops, the framing dependence of the individual 1/4 BPS bosonic WLs arises from the pure gauge sector and exponentiates the one-loop contribution. Adding this to the framing-independent pieces and combining the WLs as in (2.4) reproduces the two-loop result from localization (4.8). At three loops, focusing on contributions in the range-three color sector, the only non-vanishing diagram is the one in Fig. 3. It is associated to the exchange of one effective gauge propagator at two loops, where only the one-particle reducible (1PR) corrections can contribute with the right color structure for A = 0, 1, respectively. The mechanism is then the same as in the one-loop computation. Combining the resulting contributions in $W^+_{1/4}$ and setting $\chi(\Gamma, \Gamma_f) = -1$, we reproduce exactly the third-order contribution (4.11). We have then proved that also the imaginary term (4.11) at three loops in the matrix model result has a framing origin. More generally, from the expansion of the matrix model (4.1) one can argue that the expectation value of the WL is purely imaginary at odd loop orders. On the other hand, we show in Appendix C that the perturbative computation performed at trivial framing produces only real terms. Comparing the two results, we infer that all the imaginary odd-order terms of the localization expression originate from framing. The framing factor pointed out above constitutes a new kind of contribution, arising from the matter sector, in contradistinction to the pure CS phase.
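As a quick numerical illustration of the Gauss integral quoted above (our own sketch, independent of the paper's computation), one can discretize two singly-linked circles and check that the double contour integral returns ±1:

import numpy as np

n = 400
s = np.linspace(0, 2 * np.pi, n, endpoint=False)
# Circle 1 in the xy-plane; circle 2 in the xz-plane through (1, 0, 0): a Hopf link.
r1 = np.stack([np.cos(s), np.sin(s), np.zeros(n)], axis=1)
r2 = np.stack([1 + np.cos(s), np.zeros(n), np.sin(s)], axis=1)
dr1 = np.roll(r1, -1, axis=0) - r1
dr2 = np.roll(r2, -1, axis=0) - r2
diff = r1[:, None, :] - r2[None, :, :]              # x - y for every pair of segments
cross = np.cross(dr1[:, None, :], dr2[None, :, :])  # dx × dy for every pair of segments
chi = np.einsum('ijk,ijk->',
                diff,
                cross / np.linalg.norm(diff, axis=2, keepdims=True) ** 3) / (4 * np.pi)
print(chi)  # ≈ ±1 (the sign depends on the chosen orientations)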
We stress that this matter-induced framing contribution is of the same kind as the one recently uncovered at three loops for the 1/6 BPS WL in the ABJM model in [13], mentioned at the beginning of this Section. In that situation an analogous 1PR diagram contributes, along with other diagrams, to reproduce the three-loop imaginary term of the localization weak-coupling expansion. For the quiver theories under investigation in this paper, the possibility of distinguishing different color factors allows us to single out a unique contribution from this diagram in the range-three sector, thus providing an even sharper signature of matter-triggered framing phenomena. We now turn to the fermionic 1/2 BPS operator, whose framing factor we want to isolate and remove, in order to be able to compare the localization result with the field-theory computation. In this case the role played by framing in fermionic diagrams is less clear. In the context of the 1/2 BPS WL in the ABJM model, it is believed that fermionic diagrams contribute to framing in such a way that its total effect exponentiates into the phase $\exp\frac{i}{2}(\lambda_1 - \lambda_2)$, in agreement with the localization result [10, 17, 18]. By analogy with that picture, and by comparison between the two-loop results as carried out in [1], we expect that the contribution of framing still exponentiates in the 1/2 BPS operator for N = 4 CS-matter theories. Therefore we remove the framing dependence from the localization result by taking its modulus, as in (4.16). This expression can be checked against a three-loop perturbative calculation done in ordinary perturbation theory at framing zero. In particular, it does not contain any third-order, range-three term once the framing phase has been stripped off.

5 Quantum uplift of the cohomological equivalence

According to the cohomological arguments of Section 2 that lead to identity (2.13), and properly removing the framing factor, the localization result (4.16) should provide the weak-coupling expectation value of the actual quantum 1/2 BPS fermionic WL. In particular, this implies that while at two loops the BPS combination receives a non-trivial contribution, at one and three loops in the range-three color sector it should not receive any non-vanishing contribution, as long as the calculation is performed at framing zero. On the other hand, from a perturbative perspective the general identity (3.1) tells us that, computing $W_{\psi_1}$ and $W_{\psi_2}$ separately, at two loops they turn out to be identical, while at one and three loops non-vanishing contributions differ by an overall sign. Therefore, while no information about the actual BPS combination can be extracted at two loops, if there are non-vanishing contributions at one or three loops, matching the localization and perturbative results will fix $\bar a_2 = \bar a_1$ in (2.12). This is what we are going to discuss in this Section by performing an explicit calculation at three loops. In [1] a preliminary analysis at two loops for $W_{\psi_1}$ and $W_{\psi_2}$ was performed using ordinary perturbation theory at framing zero. At one loop the result is zero for both WLs due to the planarity of the contour, moving the possible uplift of the classical degeneracy to three loops. At two loops the result, identical for the two operators, was computed there. At three loops, there is evidence that some diagrams are non-vanishing, so they could give rise to a different result for the two WLs.
In [1], a particular triangle diagram with three scalar vertices was computed, and the result turns out to be non-vanishing and opposite in sign for the two WLs, in agreement with the all-loop identity (3.1). Here, we perform a systematic investigation at three loops in the range-three color sector. From a careful analysis it turns out that in this sector the only non-trivial contributions are the ones drawn in Fig. 4. Moreover, thanks to identity (3.1) we can focus only on the evaluation of $W_{\psi_1}$. At one loop the gauge propagator (A.14) contains a total-derivative term that could be removed by a gauge transformation. Therefore, the WL being a gauge-invariant observable, we expect that the contributions of this kind coming from diagrams (a), (c) and (e) sum up to zero. In the main body of the calculation we are going to neglect these terms, while we prove their actual cancellation in Appendix E; this is in fact a non-trivial check of the calculation. From the experience gained at two loops, it is convenient to pair diagrams containing a one-loop gauge propagator with the ones where the gauge propagator is substituted by a scalar loop. Therefore, we are going to discuss them in pairs. We concentrate on contributions proportional to $N_0 N_1^2 N_2$, since terms proportional to the other color structure, $N_1 N_2^2 N_3$, can be easily inferred from the first ones.

Diagrams (a) and (b). We start by considering the first two diagrams in Fig. 4, for which we need the third-order expansion of the Wilson loops. The terms involving $A^{(1)}$ and $A^{(2)}$ give rise to contributions to the range-three color structures $N_0 N_1^2 N_2$ and $N_1 N_2^2 N_3$, respectively. Focusing only on the first color class and summing the two contributions, relevant simplifications occur and the remaining integrals can be computed in a completely analytical way. We refer the reader to Appendix F for details on the resolution of the integrals; here we only quote the final result after expanding at small ε.

Diagrams (c) and (d). These diagrams contain two-loop corrections to the fermion propagator. In momentum space the correction is the same for both flavors and splits into a gauge piece and a scalar piece, both expanded at small ε; here the Yukawa vertices in (A.23) have been used. We can now insert these results into the WL expression and, after integrating over the contour parameters, the sum of the two integrals gives a finite contribution.

Diagrams (e) and (f). To compute diagrams (e) and (f) we need the fourth-order expansion of the WL operators (we consider only the terms contributing to the color structure under consideration). To evaluate diagram (e) it is sufficient to substitute the one-loop gauge propagator into this expansion. Performing the contractions and omitting the gauge-dependent part, for the ψ₁-loop we obtain an expression summed over cyclic permutations, where "+ cyclic" means +(1 → 2 → 3 → 4 → 1) + (1 ↔ 3, 2 ↔ 4) + (1 → 4 → 3 → 2 → 1). Combining the two diagrams, we obtain their total contribution.

The final result. We are now ready to sum the contributions from (a) to (f) and obtain the final result for the fermionic ψ₁-loop. We note that the divergent contributions from diagrams (a) + (b) and (e) + (f) exactly cancel, leading to a finite, non-vanishing result. Including also the contributions coming from the lower triangle in the WL (the $A^{(2)}$ part), we obtain a real result, in agreement with the general arguments of Appendix C that ensure the reality of WL expectation values at any perturbative order. Moreover, the result does not exhibit maximal transcendentality.
According to identity (3.1), the result for the ψ₂-loop differs simply by an overall minus sign. Therefore, considering the linear combination (2.12) at range-three, the comparison with the matrix model result cleansed of the framing contributions at three loops, eq. (4.16), necessarily implies $\bar a_1 = \bar a_2$. We have then proved that the classical degeneracy of the fermionic WLs gets uplifted at three loops, and the quantum 1/2 BPS WL in N = 4 CS-matter theories is given by the democratic combination $W_{1/2} = \frac12 (W_{\psi_1} + W_{\psi_2})$ in (5.19).

Discussion

In this paper we have identified the correct linear combination of fermionic Wilson loops that corresponds to the quantum 1/2 BPS operator in N = 4 CS-matter theories associated to necklace quivers. Working on the first nodes of the quiver, we have found the result in eq. (5.19). The analysis can be straightforwardly generalized to any site, and we obtain 2r 1/2 BPS WLs with similar structure. Corresponding string solutions exist [3] and can be compared to localization predictions. Our result solves the puzzle raised in [3]. The expectation value of 1/2 BPS Wilson loops in N = 4 CS-matter theories can be exactly evaluated through the localization procedure and reduced to a matrix integral. The configurations relevant for the holographic description of 1/2 BPS Wilson loops are well understood (see [3] and references therein) and amenable, in principle, to concrete calculations. On the field theory side the story is instead more convoluted, due to a classical degeneracy in the 1/2 BPS sector that calls for a quantum resolution. More precisely, for circular quivers, two apparently independent 1/2 BPS Wilson loops can be constructed at the field theory level that are indistinguishable at the localization level, due to their classical cohomological equivalence. On the other hand, at the holographic level there is no evidence of this classical degeneracy, suggesting its uplift by honest quantum mechanical corrections [3]. The uplift is indeed detected at three loops, where the explicit perturbative computation distinguishes the two 1/2 BPS Wilson loops, and only the combination (5.19) coincides with the matrix integral result. A general analysis of the perturbative series for the two fermionic WLs has revealed two important properties. First, there is a simple relation between the expectation values of the two operators: they always coincide at even orders and are opposite at odd orders. Second, the result obtained at framing zero is always real, at any perturbative order. These properties have important consequences when we match the perturbative result with the localization prediction. In fact:

• At any odd order the matrix model expansion exhibits purely imaginary contributions. On the other hand, as we have mentioned, whatever the 1/2 BPS linear combination is, the perturbative result at framing zero is always real at any order. Matching the two results then allows us to conclude that the odd-order terms in the localization calculation have a framing origin, induced by the consistency of the procedure, which necessarily requires working at framing one. We have supported this prediction with a direct three-loop calculation done at non-vanishing framing. Our analysis thus clarifies the role of framing in the localization procedure, extending the results of [13] to the N = 4 CS-matter case. In analogy with the ABJ(M) case, we expect the framing contributions to exponentiate, so that the expectation values of WLs at framing zero should be obtained by taking the modulus of the matrix model expansion.
In particular, this implies that the correct quantum BPS operators have vanishing contributions at odd orders when computed in ordinary perturbation theory with no framing.

• The all-loop relation between the expectation values of the two WLs, eq. (3.1), implies that potential uplifts can arise only at odd orders, if non-vanishing contributions appear there. As we have discussed in this paper, three loops is indeed the first odd order where this happens. There, the requirement of a vanishing three-loop contribution to $W_{1/2}$ at framing zero, as suggested by the localization prediction, necessarily leads to the conclusion that the average (5.19) is the correct combination, in which the unwanted terms cancel.

More generally, the arguments above allow us to conclude that (5.19) is the exact 1/2 BPS operator at all loop orders. In fact, whatever non-vanishing contributions appear at higher odd orders for the two WLs, they will always be real and opposite in sign; the linear combination (5.19) is then the only one with vanishing odd-order terms. We have taken advantage of working with different gauge groups at each site. This has allowed us to focus on one specific color sector where the number of non-vanishing diagrams is reasonably small. We cannot easily conclude anything in the orbifold case ($N_0 = N_1 = \ldots$) [24], since contributions from all the other sectors should be included. In particular, we cannot conclude that at three loops we obtain a non-vanishing result, although it seems quite natural. We remark that in this case an elegant formulation of the theory also exists in terms of a Fermi-gas description [25], which allows for efficient Wilson loop average computations. It would be nice to identify suitable limits that admit all-order comparisons with perturbation theory. Our results indicate that the straightforward localization procedure sometimes hides delicate questions regarding the quantum nature of (composite) field operators and the choice of a regularization scheme. In the present case, while the combination $\frac12 (W_{\psi_1} + W_{\psi_2})$ is enhanced to a true 1/2 BPS operator with a well-defined holographic dual, the other independent combination, $(W_{\psi_1} - W_{\psi_2})$, deserves closer inspection. This operator appears not to be 1/2 BPS and is not detectable by localization. Although it is cohomologically trivial at the classical level, its expectation value is non-vanishing at three loops; it is real and, quite unexpectedly, of lower transcendentality (see eq. (5.18)). Moreover, it is reasonable to expect that it will be non-trivially corrected also at higher orders, and from our general power-counting arguments the complete result at framing zero should be a real function of the couplings given by an odd-order expansion. We do not have a priori arguments to exclude the appearance of divergent contributions. However, our three-loop calculation suggests that divergences might be absent, given that at this order the two fermionic WLs turn out to be separately finite. This might be an indication that some supersymmetry survives. It would be interesting to further investigate the physical meaning of this operator and to find its dual brane configuration.

A Conventions and Feynman rules

We work in Euclidean three-dimensional space with coordinates $x^\mu = (x^1, x^2, x^3)$. We choose a set of gamma matrices satisfying $\{\gamma^\mu, \gamma^\nu\} = 2\delta^{\mu\nu}$, together with a number of useful identities. Spinorial indices are lowered and raised with the ε tensors, $(\gamma^\mu)_\alpha{}^\beta = \varepsilon_{\alpha\gamma}\,(\gamma^\mu)^\gamma{}_\delta\,\varepsilon^{\beta\delta}$. In addition, the matrices $(\gamma^\mu)^{\alpha\beta}$ are symmetric.
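A common concrete realization of this Clifford algebra, consistent with all the relations used in the text although the authors' specific conventions may differ, is in terms of Pauli matrices:

\[
\gamma^{\mu} = \sigma^{\mu} \;\; (\mu = 1, 2, 3)\,, \qquad
\gamma^{\mu}\gamma^{\nu} = \delta^{\mu\nu}\,\mathbb{1} + i\,\epsilon^{\mu\nu\rho}\,\gamma^{\rho}\,,
\]

from which the trace identities $\mathrm{tr}(\gamma^{\mu}\gamma^{\nu}) = 2\delta^{\mu\nu}$ and $\mathrm{tr}(\gamma^{\mu}\gamma^{\nu}\gamma^{\rho}) = 2i\,\epsilon^{\mu\nu\rho}$ follow; the latter is the source of the extra epsilon tensors, each accompanied by a factor of i, counted in Appendix C.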
We conventionally choose the spinorial indices of chiral fermions to be always up, while those of antichirals are always down. In order to study BPS WLs in N = 4 supersymmetric Chern-Simons-matter theories associated to linear quivers, it is sufficient to concentrate "locally" on three quiver nodes, $U(N_0) \times U(N_1) \times U(N_2)$. We will then consider the gauge-matter theory for this group. The action relevant for two-loop calculations enters the path integral with weight $\Gamma = e^{-S}$; from it we read the tree-level fermion propagator, the one-loop fermion propagator, and the interaction vertices: (1) the gauge cubic vertices (from −S); (2) the gauge-fermion cubic vertex from −S (we only need the $\psi^{(1)}$ vertex); (3) the Yukawa couplings, read from the action in [26] suitably rotated to Euclidean space (from −S, and only the terms relevant for our calculation). Finally, we recall our color conventions: we work with hermitian generators for the $U(N_A)$ gauge groups (A = 0, 1, 2).

B Useful identities on the unit circle

We parametrize a point on the unit circle Γ as in (2.6); simple identities that turn out to be useful along the calculation follow directly from this parametrization. We now consider the bilinears constructed in terms of the c spinors of [3]; these are different for the two kinds of fermionic WL. For the ψ₁-loop the bilinears are given in (B.8)-(B.11), and more general expressions can be written. For the ψ₂-loop, with $d\bar d = \frac{i}{k}$, the corresponding bilinears are given in (B.16), (B.17), and again more general expressions can be written. We note a sign difference in the μ = 1, 2 bilinears of the two WLs (formulae (B.9), (B.10) vs. (B.16), (B.17)).

C Parity and reality of a generic WL diagram

Here we prove that for any loop diagram at order $(1/k)^L$ with $n_S$ contour insertions of the scalar bilinears, the number $n_\gamma$ of fermion bilinears $(c\gamma\bar c)$ produced after the γ-algebra reduction has the same parity as $L + n_S$. This result is crucial for proving identity (3.1) in the main text. To this end, we consider a diagram containing $n_S$ scalar, $2n_F$ fermion and $n_A$ gauge couplings from the WL expansion (see Fig. 2). Moreover, we assume that the bulk of the diagram is built up with $i_A$ cubic gauge vertices, $i_S$ sextic scalar vertices, $i_Y$ Yukawa couplings, $i_{AF}$ gauge-fermion vertices, $i_{AS}$ cubic and $j_{AS}$ quartic gauge-scalar vertices, $i_{AG}$ cubic gauge-ghost vertices, and $I_A$ gauge, $I_G$ ghost, $I_S$ scalar and $I_F$ fermion propagators, respectively. These assignments are summarized in Table 1. From the structure of the vertices we obtain the constraints (C.1). We begin by proving the statement (C.2), where n is the total number of initial gamma matrices (coming from fermion propagators and $i_{AF}$ vertices) distributed among the $n_F$ bilinears, and $n_\varepsilon$ is the total number of initial epsilon tensors (coming from gauge propagators and cubic gauge vertices). Taking into account the Feynman rules in Appendix A, the power L of the coupling constant 1/k is given by (C.3), where the last identity in (C.1) has been used. Moreover, the numbers n and $n_\varepsilon$ are given in (C.4), where the second identity in (C.1) has been used. Merging results (C.3) and (C.4), we finally obtain identity (C.2), which allows us to trade the parity of $L + n_S$ for that of $n + n_\varepsilon$.
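In formulas, the identity referred to as (C.2), as reconstructed from the statement just made, reads

\[
L + n_S \;\equiv\; n + n_{\varepsilon} \pmod 2\,.
\]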
We then study the two cases, $L + n_S$ even or odd, by separately discussing the four possible configurations:

$(L + n_S)$ even ⇒ (1a) $(n, n_\varepsilon)$ = (even, even); (1b) $(n, n_\varepsilon)$ = (odd, odd)
$(L + n_S)$ odd ⇒ (2a) $(n, n_\varepsilon)$ = (even, odd); (2b) $(n, n_\varepsilon)$ = (odd, even)

and prove that in the first two configurations $n_\gamma$ turns out to be even, whereas in the last two it is odd.

In case (1a), the condition that the total number of gamma matrices n must be even implies that the matrices can be distributed among an arbitrary (but ≤ $n_F$) number of bilinears containing an even number of matrices, times an even number of bilinears containing an odd number of matrices. Therefore, taking into account the reductions (3.5), (3.6) that follow from gamma-matrix identities, the initial structure of the contribution from this diagram can be sketched as in (C.5). After performing all the products, the planarity of the contour implies that non-vanishing contributions arise only from terms containing an even total number of epsilon tensors. In fact, any string with an odd number of tensors can always be reduced to a linear combination of products of Kronecker deltas times one epsilon tensor, which would necessarily be contracted with external indices. Therefore, in the product of the square brackets in (C.5) we can have an even number of $\varepsilon(c\gamma\bar c)$ from the first set of brackets times an even number of $\varepsilon(c\bar c)$ from the second set. But since the total number of brackets of the second type is even, this implies having an even number of $(c\gamma\bar c)$ as well; the only non-vanishing products then contain a total number $n_\gamma$ of $(c\gamma\bar c)$ bilinears which is even. Otherwise, we can have an odd number of $\varepsilon(c\gamma\bar c)$ from the first set of brackets times an odd number of $\varepsilon(c\bar c)$ from the second set. But since the total number of brackets of the second type is even, this implies an odd number of $(c\gamma\bar c)$ from the second set, which still leads to a total number $n_\gamma$ that is (odd + odd) = even.

Let us consider case (1b). Since the number n of gamma matrices is odd, this time we have an odd number of bilinears containing an odd number of matrices. The sketchy structure of the result is

(odd # of ε) × $[(c\bar c) + \varepsilon(c\gamma\bar c)] \cdots [(c\bar c) + \varepsilon(c\gamma\bar c)]$ (any # ≤ $n_F$) × $[\varepsilon(c\bar c) + (c\gamma\bar c)] \cdots [\varepsilon(c\bar c) + (c\gamma\bar c)]$ (odd #)   (C.6)

Again, performing all the products, the only non-vanishing contributions come from strings containing an even total number of epsilon tensors. This requires having an even number of $\varepsilon(c\gamma\bar c)$ from the first set of brackets times an odd number of $\varepsilon(c\bar c)$ from the second set. But since the total number of brackets of the second type is odd, this also implies having an even number of $(c\gamma\bar c)$. In conclusion, the only non-vanishing products contain a total number $n_\gamma$ of $(c\gamma\bar c)$ bilinears which is even. Alternatively, we can have an odd number of $\varepsilon(c\gamma\bar c)$ from the first set of brackets times an even number of $\varepsilon(c\bar c)$ from the second one, which implies having an odd number of $(c\gamma\bar c)$; in total, we still end up with an even $n_\gamma$. Therefore we have proved that for $L + n_S$ even, planarity implies $n_\gamma$ even.

A similar analysis applies when $L + n_S$ is odd. For instance, in case (2a) the general structure of the contribution is analogous to (C.6), with the counting of brackets adjusted accordingly. In order to produce a string containing an overall even number of epsilon tensors, we can take an even number of $\varepsilon(c\gamma\bar c)$ from the first set of brackets times an odd number of $\varepsilon(c\bar c)$ from the second one. But since the number of brackets in the second set is even, this implies having an odd number of $(c\gamma\bar c)$ as well.
In total we have an (even + odd) number of $(c\gamma\bar c)$ bilinears, leading to $n_\gamma$ odd. The same conclusion is reached if we instead take an odd number of $\varepsilon(c\gamma\bar c)$ from the first set of brackets times an even number of $\varepsilon(c\bar c)$ from the second one, which comes together with an even number of $(c\gamma\bar c)$. The analysis of case (2b) goes similarly, and we are led to the conclusion that for $L + n_S$ odd, planarity implies $n_\gamma$ odd. We have then proved that $n_\gamma$ always has the same parity as $L + n_S$.

We conclude this Appendix with an analysis of the reality of the perturbative expansion of the fermionic WLs. We will prove that the result at any order is always real, as a consequence of the planarity of the contour and of the fact that we work at framing zero. In order to prove it, we apply counting arguments similar to the ones used above, this time keeping track of the different sources of the imaginary unit i. Focusing on $W_{\psi_1}$ in (2.8), we first notice that the expansion of the Wilson loop produces a factor $i^{n_A + 2n_F}$. Moreover, as explained in Section 3, each fermionic bilinear can always be reduced to a linear combination of the expressions (B.8)-(B.11). However, the planarity of the contour eventually rules out the appearance of the $\gamma^3$ bilinear. Since all the other ones contain a factor of i, we can count one additional imaginary unit for each of the $n_F$ structures. We are thus left with an overall power $i^{(n_A + n_F) \bmod 2}$. Next we count the i factors coming from internal vertices and propagators, getting a further power $i^{I_F + I_A + i_{AS} + i_Y + i_{AG}}$. Putting everything together, we are left with a total power $i^p$ with $p = n_A + n_F + I_F + I_A + i_{AS} + i_Y + i_{AG}$. Making repeated use of the identities (C.1), the parity of p can be traded for that of $I_A + i_A$. But, as discussed above, $I_A + i_A = n_\varepsilon$, which is the number of initial epsilon tensors. Therefore we have an overall $i^{n_\varepsilon}$. Any other epsilon tensor coming from the γ-algebra reduction always enters with an additional i (see the identities in Appendix A). We thus have a total factor $i^{n_\varepsilon + m}$ and, from planarity and at framing zero, $n_\varepsilon + m$ must be even. Therefore, we end up with an even number of i factors, and the result is always real, independently of the perturbative order. Thanks to identity (3.1), this result extends trivially to $W_{\psi_2}$.

D Useful formulae for the matrix model analysis

The expressions for $B_4(\Lambda_A)$ and $C_4(\Lambda_A, \Lambda_{A+1})$ appearing in the expansion of $Q_A$ are given in (D.1), (D.2). Consider now the Gaussian model defined by the matrix integral with measure $d\Lambda\, e^{-\alpha \mathrm{Tr}(\Lambda^2)}$ (D.3); the expectation values that we have used in our analysis are collected in (D.4) and (D.5).

One possible way to get rid of the derivatives is to first Feynman-parametrize I(2,1,1) and integrate over the internal point w. After expanding in ε, the contour integrations can be performed and we obtain the final result. We evaluate the two different trigonometric structures in the first line of (F.20) separately. The first term, after Mellin-Barnes parametrization, turns out to yield the same trigonometric integral as the one found in (F.14) and can be elaborated exactly as before.
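For reference, the lowest moments of the Gaussian model (D.3) introduced above, with the Vandermonde-squared eigenvalue measure understood, are standard results (they coincide with GUE moments):

\[
\big\langle \mathrm{Tr}\,\Lambda^{2} \big\rangle_{0} = \frac{N^{2}}{2\alpha}\,, \qquad
\big\langle \mathrm{Tr}\,\Lambda^{2k+1} \big\rangle_{0} = 0\,.
\]

These values are easy to cross-check numerically; the sketch below is our own illustration (N, α and the sample size are arbitrary), using the fact that GUE eigenvalues automatically carry the Gaussian weight times the Vandermonde squared:

import numpy as np

rng = np.random.default_rng(0)
N, alpha, trials = 6, 1.0, 20000
acc2 = acc3 = 0.0
for _ in range(trials):
    A = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2)
    H = (A + A.conj().T) / 2           # GUE draw with density ~ exp(-alpha * Tr H^2), alpha = 1
    lam = np.linalg.eigvalsh(H)        # eigenvalues: Gaussian weight times Vandermonde^2
    acc2 += np.sum(lam ** 2)
    acc3 += np.sum(lam ** 3)
print(acc2 / trials, "vs exact", N ** 2 / (2 * alpha))  # <Tr Λ^2>_0 = N^2/(2α)
print(acc3 / trials, "vs exact 0")                      # odd moments vanish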
11,753
2016-06-22T00:00:00.000
[ "Physics" ]
Magnetic Resonance Imaging-Guided Treatment of Equine Distal Interphalangeal Joint Collateral Ligaments: 2009–2014

Objectives: To determine the outcome of treating distal interphalangeal joint collateral ligament (DIJCL) desmopathy using magnetic resonance imaging (MRI)-guided ligament injection.

Methods: Medical records of 13 adult horses diagnosed with DIJCL desmopathy using low-field MRI and treated by MRI-guided ligament injection of mesenchymal stem cells and/or platelet-rich plasma (PRP) were reviewed. Information collected included signalment, MRI diagnosis, treatment type, time to resolution of lameness, and level of exercise after treatment.

Results: Collateral ligament inflammation was diagnosed as a cause of lameness in 13 horses. MRI was used to guide the injection of the injured DIJCL. All lameness attributed to DIJCL desmopathy resolved, with the resulting level of performance at expected (10) or less than expected (3).

Conclusion and clinical relevance: Injection of the DIJCL can be safely completed in horses standing in a low-field magnet guided by MRI, as previously demonstrated in cadaver specimens. The positive response in all horses suggests that administration of stem cells or PRP along with rest and appropriate shoeing may be a safe and useful treatment for DIJCL desmopathy.

Keywords: magnetic resonance imaging, lameness-equine, desmitis, regenerative medicine

Introduction

Treatment of tendon and ligament injuries by injection with either mesenchymal stem cells (MSC) or platelet-rich plasma (PRP) is reported to improve healing and eventual outcome (1, 2). These techniques have been used in tendons and ligaments of the metacarpus and metatarsus, where ultrasound can be used to direct the injection (1). Resolution of tendon and ligament fiber disruption is often monitored using sequential ultrasonography, which indicates improved fiber structure following natural disease and experimentally created lesions (3, 4). Magnetic resonance imaging (MRI) has recently identified and characterized tendon and ligament injuries in the horse's foot as a cause of lameness (5-9). Characteristic MRI findings of desmopathy in the distal interphalangeal joint collateral ligament (DIJCL) include ligament enlargement, changes in border definition, and increased signal intensity within the ligament (10).
These changes correspond to degenerative changes observed histologically, including collagen degeneration, fissure formation, and fibrocartilaginous metaplasia, and are often accompanied by osseous change at the ligament insertion (10, 11). Treatments for tendon and ligament injury in the foot include rest, anti-inflammatory treatment (including intra-synovial injection of corticosteroids, hyaluronic acid, or interleukin receptor antagonist (IRAP II, Arthrex Vet Systems, Naples, FL, USA)), supportive shoeing, and shockwave treatment (12, 13). Newer approaches to treat tendons and ligaments using regenerative medicine therapies include PRP and MSC therapy (2, 14, 15). PRP injection improved the ultrasonographic appearance and organization of the linear fiber pattern in a surgical model of superficial digital flexor tendon injury (16). Bone marrow-derived MSC treatment improved histological signs of healing and structural organization in a collagenase model of superficial digital flexor tendon injury (4). A large retrospective study looking at MSC therapy for superficial digital flexor tendon injury in National Hunt racehorses in the United Kingdom reported an 82% long-term success rate (2). Application of regenerative therapies to DIJCL lesions is limited due to limited access in the foot. Although ultrasonography, radiography, or computed tomography may assist in localizing these therapies to lesions within the foot, each technique has distinct disadvantages. Ultrasonography gives a limited view of the DIJCL, and lesions diagnosed by MRI have been seen at the distal insertion, where the ligament cannot be seen ultrasonographically (10, 11). Radiography can assist placement of a needle in the expected area of the lesion; however, the ligament and lesion are not visible, making this a blind technique relying on anatomic understanding rather than direct visualization (17). Computed tomography with contrast enhancement can be used to specifically locate the site for injection; however, this must be performed under general anesthesia and does not offer detection of new fluid at the injection site postinjection (18). Magnetic resonance imaging-guided techniques have been used successfully in human medicine for biopsies and treatment of tumors (19, 20). Recently, a technique of using MRI to guide injection of the DIJCL was developed and validated in cadaver horse feet (21). We hypothesized that naturally occurring lesions in the DIJCL within the hoof can be accurately injected via MRI guidance in the standing horse. Additionally, we hypothesized that the injection technique would not compromise healing of the DIJCL.

Materials and Methods

Animals

Client-owned horses with unilateral lameness localized to the hoof via diagnostic nerve blocks were admitted to the Marion duPont Scott Equine Medical Center at the Virginia-Maryland College of Veterinary Medicine. All horses included in the study had moderate-to-severe uniaxial DIJCL desmopathy. When affected ligaments had evidence of severe injury, or when previous treatments such as rest, anti-inflammatory treatments, and corrective shoeing were not successful, ligament injection using MRI guidance was offered as a treatment. Either MSC combined with PRP or PRP alone was selected for injection based on the clinician's preference and cost to the client.

Diagnostic Magnetic Resonance Imaging

The diagnosis of DIJCL lesions associated with foot lameness was made using a 0.27-T magnet with horses in a standing position using a hoof coil (see text footnote 2).
The MRI examination of the feet was completed, as previously described, using proton density-weighted spin echo (PD), T1-weighted gradient echo (T1W-GE), T2-weighted fast spin echo, and short-tau inversion-recovery fast spin echo (STIR FSE) sequences (22). Proton density-weighted spin echo scans were completed in a transverse plane; T1W-GE scans in transverse, frontal, and sagittal planes; T2-weighted fast spin echo scans in dorsal and transverse planes; and STIR FSE scans in transverse, dorsal, and sagittal planes.

Preparation of Platelet-Rich Plasma

Blood (450 ml) was aseptically collected in citrate-phosphate-dextrose-adenine anticoagulant from each patient for PRP processing. Blood was then processed via centrifugation to produce 12 ml of PRP (5- to 7-fold increase in platelets with 0.02- to 0.05-fold white blood cells).

Preparation of Mesenchymal Stem Cells

Bone marrow aspirate (n = 7) was collected aseptically from the patient's left tuber coxae or mid sternum, as previously described (23). In brief, heparinized bone marrow was collected from the patient; the aspirate was centrifuged and cultured in low-glucose Dulbecco's Modified Eagle Medium supplemented with 10% fetal bovine serum, 300 μg of L-glutamine/ml, 100 U of sodium penicillin/ml, and 100 μg of streptomycin sulfate/ml at 37°C in a 5% carbon dioxide atmosphere with 90% humidity, with media supplementation every 48 h. Cells were passaged at 80% confluence, and passage 1 or 2 was used for injection. Once adequate cells were prepared, the injection was scheduled; on the day of injection, cells were trypsinized, washed with phosphate buffer solution four times, and suspended in autologous PRP for injection (5 million cells/ml).

MRI-Guided Injection

Local anesthesia of the digital nerves was completed at the level of the proximal sesamoid bones. Sedation for the procedure was predominantly titration of intravenous detomidine and butorphanol, as used during routine MRI examinations of the feet, and as needed to prevent movement of the horse. Aseptic preparation and wrapping of the injection site were completed prior to putting the limb in the magnet. T1W-GE and STIR FSE sequences were completed prior to injection. Injection of each DIJCL was completed using a 16-gauge intravenous catheter over a needle (3 cases) or a 16-gauge non-ferromagnetic (titanium) needle designed for MRI-guided biopsy (Puncture needle, Invivo, Gainesville, FL, USA; 10 cases). Needle placement utilized the previously described technique (21). Briefly, needle insertion was at a point between the common digital extensor tendon and the dorsal edge of the collateral cartilage and directed perpendicular to the solar surface (Figure 1). Injections were well tolerated by the horses, and there was no evidence that the horse had any sensation during the injection. Firm resistance was present as the needle penetrated the ligament. The needle was inserted until it was in contact with the coffin bone; there was resistance to injection until the needle was withdrawn a few millimeters. T1-weighted gradient echo sequences were repeated as the needle was advanced, identifying the position of the needle and allowing for any needed correction of its position. In the first three cases, a 16-gauge catheter was used with its needle inserted, and the needle was withdrawn once the catheter was in position. Catheter removal was sometimes necessary, with a new catheter redirected into the target.
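As a quick arithmetic aside on the preparation figures quoted above (the volumes, fold-increases, and cell concentration are those reported in this section; the computation itself is ours):

# Figures quoted above: 450 ml of whole blood yields 12 ml of PRP at a
# 5- to 7-fold platelet increase; MSC are resuspended at 5 million cells/ml
# and 1-2 ml are injected.
blood_ml, prp_ml = 450.0, 12.0
for fold in (5.0, 7.0):
    recovery = prp_ml * fold / blood_ml  # fraction of the drawn platelets recovered in the PRP
    print(f"{fold:.0f}-fold concentration -> platelet recovery ~ {100 * recovery:.0f}%")

cells_per_ml = 5e6
for volume_ml in (1.0, 2.0):
    print(f"{volume_ml:.0f} ml injected -> total MSC dose = {volume_ml * cells_per_ml:.1e} cells")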
Subsequently, a series of progressive scans was devised to slowly advance a titanium needle while correcting its angle using the sequential scans, so that the needle was placed in real time (see text footnote 2). Once in place, the stylet was withdrawn from the needle, and 1-2 ml of PRP, or of MSC suspended in PRP, was injected. Postinjection transverse STIR FSE sequences were completed in an attempt to document increased fluid signal in the DIJCL.

Rehabilitation

Postinjection recommendations included absolute stall rest for 1 month, with subsequently increased exercise by walking in hand for 2 months and bar shoe application to both front feet. Turnout was not recommended until walking and trotting under saddle was completed without evidence of lameness.

Outcome Measures

Follow-up was completed by contacting owners by telephone, email, or by examination 1 year or longer after the injection. Information requested at the time of follow-up included whether the horse was sound and whether the level of exercise was as expected or less than expected (Table 1). A recheck MRI was completed in four horses to evaluate any change in the MRI abnormalities.

Results

Signalment, duration and grade of lameness, type of treatment, and outcome are presented in Table 1. All horses were referred, and all had a history of lameness localized to the foot. Lameness resolved with a palmar digital nerve block in three horses and with an abaxial sesamoid block in four horses; one horse required up to a low four-point block, and five horses had a history of lameness localized to the foot without the nerve block being specified. One horse had evidence of injury to the collateral ligament on a previous ultrasound examination. Ten horses had no resolution of the lameness after previous treatments, including variable periods of rest, intra-articular treatment of the affected distal interphalangeal joint with triamcinolone and/or hyaluronic acid (seven), oral phenylbutazone administration (one), and intravenous disodium tiludronate (two). Based on the history of localization of the lameness to the foot and the subsequent identification of injury to the DIJCL, desmopathy of the DIJCL was considered the primary cause of lameness in all 13 horses. Abnormal MRI signal was found on T1W-GE (9), STIR (7), T2 FSE (12), and PD (11) sequences; images from one horse were not available to evaluate all sequences. Five horses had DIJCL insertion fossa enlargement, one had cyst formation at the insertion, and one had focal bone inflammation at the insertion. Desmopathy in the horse with hind limb involvement was associated with a previous distal phalanx fracture, which affected the collateral ligament at its insertion. Two horses received two 500 mg doses of flunixin meglumine postinjection, while the remaining horses had no postinjection treatment. No complications after the DIJCL injections were reported. All lameness due to injuries of the DIJCLs resolved; the time for resolution of the lameness ranged from 6 to 18 months (Table 1). Four horses had follow-up MRI examinations, which revealed partial (two) or complete (two) resolution of the abnormal signal (Figure 2). The presence of improved but still abnormal signal in the follow-up MRI examinations of two horses could not be specifically identified as a response to the needle insertion. Lameness recurred in one horse with DIJCL desmopathy, navicular bone edema, and navicular bursitis.
Repeat MRI 3 months after the initial diagnosis showed marked improvement in the injected collateral ligament, but the navicular bone edema and bursitis were unchanged. Lameness resolved with injection of the navicular bursa, but the horse was subsequently retired due to recurring front limb lameness. Lameness in the affected limb of one other horse resolved, but that horse was subsequently retired due to rear limb lameness.

Discussion

Surgical and medical intervention for tendon and ligament injuries has been used for many decades. The goal is to stimulate healing in an environment that heals with scar tissue, or that does not heal at all due to vascular compromise (24). Lesions found in an injured DIJCL and deep digital flexor tendon consist of disorganized matrix and poor collagen maturation (6, 10, 25). The inability of some of these lesions to resolve may be due to a lack of inflammatory response and/or lack of blood supply, with no stimulus for fiber regeneration or remodeling. Surgical splitting of tendon and ligament lesions is an attempt to release acute core lesion edema and is used to stimulate healing in avascular scar tissue in chronic injuries (24, 26). Because the needle can be accurately placed in the DIJCL using MRI guidance, this disruption of the abnormal tissue may itself stimulate a cellular response and increased vascularity in the ligament, as observed with tendon and ligament splitting, even without the use of MSC or PRP. Although prolonged stall rest with appropriate shoeing has been successful in resolving lameness in up to 60% of horses with or without additional medical therapy (9), all horses in our study had prior rest, which had not been successful in resolving the lameness prior to injection. Needle placement in the DIJCL is improved using MRI guidance compared with the use of radiographs or ultrasound (21). The needle position is identified as a focal decreased signal intensity in the tissue (Figure 1). The technique has been improved by the use of overlapping two-slice T1W-GE sequences, which can be completed sequentially in real time as the needle is advanced (see text footnote 2). This allows visualization and redirection of the needle placement as it is being advanced into the ligament. Additionally, this decreases the time required for the injection and helps complete needle placement with minimal redirection. Use of navigational ultrasound imaging, which can correlate previous MRI images with ultrasound for real-time guidance during interventional therapies, may provide an additional method for injection of the DIJCL (27). This appears to be an advantage for lesions identified by high-field MRI that can subsequently be treated without general anesthesia. However, it is not clear if this technique will allow accurate injection in the distal DIJCL, where ultrasound cannot penetrate the hoof. Response to rest and corrective shoeing for DIJCL injuries is reported at 5 of 17 horses (29%) returning to full work, with an additional 2 horses improved enough for light work and breeding (8). Rest, medical treatment, and supportive shoeing resulted in 60% resolution of lameness in horses with DIJCL desmopathy (9). Horses with DIJCL desmopathy commonly have concurrent changes in the navicular bone, deep digital flexor tendon, and impar ligament, making it difficult to determine the source of pain (13, 28). Horses with deep digital flexor lesions detected by MRI have a worse prognosis than horses with other types of injury, including DIJCL desmopathy.
Seven of the 13 horses in this study had concurrent lesions in the navicular bone (6) or distal phalanx (1). None of the horses in this group had deep digital flexor tendon lesions, likely improving the chance of successful treatment (13). Although the concurrent lesions in seven horses could have contributed to the lameness, the DIJCL changes seen on MRI were considered the primary problem requiring treatment, based on the history and the character of the lesions. This study confirms the ability to accurately inject the DIJCL in standing horses using sedation and local anesthesia with low-field MRI. Although the treatment appears beneficial, this study did not confirm efficacy, as there was no direct comparison with cases treated with rest, as previously reported (13, 28). Furthermore, there was no control for the potential stimulation created by needle placement alone.

Conclusion

Magnetic resonance imaging-guided injection of stem cells or PRP into the DIJCL is a safe and repeatable technique in standing horses. Further studies of regenerative medical therapy for the DIJCL are needed to confirm the benefit of this therapy.

Author Contributions

Both authors, NW and JB, were involved with all aspects of the study, including diagnosis, treatment, data collection, review of the data, and authoring the manuscript.

Acknowledgments

Funding for open access from the Virginia Tech Open Access Subvention Fund is acknowledged.
3,945.2
2016-09-05T00:00:00.000
[ "Medicine", "Biology" ]
Random Beam-Based Non-Orthogonal Multiple Access for Low Latency K-User MISO Broadcast Channels

In this paper, we propose random beam-based non-orthogonal multiple access (NOMA) for low-latency multiple-input single-output (MISO) broadcast channels, where there is a target signal-to-interference-plus-noise power ratio (SINR) for each user. In our system model, there is a multi-antenna transmitter with its own single-antenna users, and the transmitter selects and serves some of them. For low latency, the transmitter exploits random beams, which can reduce the feedback overhead for channel acquisition, and each beam can support more than a single user with NOMA. In our proposed random beam-based NOMA, each user feeds back a selected beam index, the corresponding SINR, and the channel gain, so it feeds back one more scalar value than in the conventional random beamforming. By allocating the same power across the beams, the transmitter independently selects NOMA users for each beam, so it can also reduce the computational complexity. We optimize our proposed scheme by finding the optimal user grouping and the optimal power allocation. The numerical results show that our proposed scheme outperforms the conventional random beamforming by supporting more users on each beam.

Introduction

In recent years, various forms of wireless applications have emerged as the performance of wireless communication systems has significantly improved. These include real-time remote control, inter-vehicle communication, autonomous driving, and augmented reality, most of which require high reliability and low latency. In the cases of wireless factory system control, inter-vehicle communication, and autonomous driving, it is very important to satisfy high-reliability and low-latency requirements, because transmission errors or delays can cause great damage or risk. To support these wireless applications, the upcoming beyond-5G communication system defines a variety of target performances, including an end-to-end communication delay of 1 ms, a 10 Gbps transmission rate, and a 90% reduction in energy usage [1]. Furthermore, a significant increase in the number of wireless devices brings challenges to environments where many devices communicate directly with each other, which is sensitive to latency. Thus, it becomes very important to investigate wireless transmission technologies that serve large numbers of devices with low latency.
According to the CISCO report [2], the number of device-to-device (D2D) connections is expected to reach 14.7 billion by 2023. As the number of devices participating in wireless networks rapidly increases, various technologies are being studied to support the communication of a large number of devices, which triggers various problems such as latency and increased signaling complexity. The latency of wireless communication systems can be divided into (1) end-to-end latency, (2) user-plane latency, and (3) control-plane latency [3, 4]. End-to-end latency comprises wireless propagation delay, processing delay, queuing delay, retransmission delay, and computational delay. User-plane latency is defined as the time spent transmitting a single message from the transmitter's application layer to the receiver's application layer, and control-plane latency is defined as the amount of time the terminal takes to activate. Meanwhile, reliability is usually defined as the probability of successfully transmitting a message of a certain size within a given time [5].

One way to support a large number of devices is non-orthogonal multiple access (NOMA) [6-8], which allows multiple users to share the same radio resources, unlike conventional orthogonal multiple access (OMA), which uses the radio resources exclusively. NOMA can be classified into two categories: power-domain NOMA and code-domain NOMA. In downlink power-domain NOMA, the transmitter uses different powers to serve multiple users: it simply transmits the superposition of the users' signals. A user with a better channel can then decode the other users' signals and subtract them from the received signal, i.e., perform successive interference cancelation (SIC). In this case, the transmitter allocates smaller power to the user with the better channel.

NOMA has been widely studied in many scenarios. The authors of [9] proposed intelligent reflecting surface (IRS)-assisted NOMA to support cell-edge users in cellular systems. The authors of [10, 11] exploit machine learning techniques to optimize NOMA. Furthermore, the authors of [12] considered uplink cellular communication scenarios and analyzed the ergodic sum-rate gain of NOMA compared to orthogonal multiple access (OMA). The authors of [13] proposed an uplink network NOMA scheme for uplink coordinated multi-point transmission (CoMP), where a CoMP user and multiple NOMA users are served simultaneously. Meanwhile, the authors of [14] proposed a resource allocation scheme for NOMA to guarantee quality of service (QoS) in the multibeam satellite industrial Internet of Things, and the authors of [15] adopted NOMA for multiple-input multiple-output (MIMO) multi-user visible light communication systems.
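To make the power-domain NOMA mechanism described above concrete, consider the textbook two-user downlink case with channel gains $|h_2|^2 > |h_1|^2$, powers $P_1 > P_2$, and unit noise power (standard expressions, not specific to any of the cited works). User 2 decodes and subtracts user 1's signal via SIC, while user 1 treats user 2's signal as noise, so the achievable rates are

\[
R_1 = \log_2\!\left(1 + \frac{P_1 |h_1|^2}{P_2 |h_1|^2 + 1}\right), \qquad
R_2 = \log_2\!\left(1 + P_2 |h_2|^2\right).
\]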
In this paper, we propose random beam-based non-orthogonal multiple access for low-latency multiple-input single-output (MISO) broadcast channels, where there is a target signal-to-interference-plus-noise power ratio (SINR) for each user. In our system model, there is a multi-antenna transmitter with its own single-antenna users, and the transmitter selects and serves some of them. For low latency, the transmitter exploits random beams [16], which reduce the feedback overhead for channel acquisition, and each beam can support more than a single user with NOMA. The basic idea of random beam-based NOMA is presented in [17], but in [17], we mainly consider a simple case in which each beam can support at most two users with equal powers, and the exact power allocation for each beam is not revealed. Our contributions can be summarized as follows: • We propose random beam-based NOMA generalizing the basic idea of [17], where each beam can support multiple users with the optimal power allocation. We identify the feedback information each user needs to implement random beam-based NOMA; each user should feed back (1) a selected beam index, (2) the corresponding SINR, and (3) the channel gain, while the conventional random beamforming requires (1) a selected beam index and (2) the corresponding SINR feedback for each user. • We formulate a joint user selection and power optimization problem for random beam-based NOMA and show that equal power allocation across the beams reduces both the computational complexity and the feedback overhead. • With the equal power allocation across the beams, we show that our optimization problem can be divided into sub-optimization problems, one per beam. We solve each sub-optimization problem and find the optimal user selection and power allocation. • In the simulation part, we evaluate our random beam-based NOMA and show that the proposed scheme exploits the multiuser diversity provided by the multiple users well and improves upon the performance of the conventional random beamforming.

The remainder of this paper is organized as follows. In Section 2, we explain our system model and summarize the conventional random beamforming with a QoS constraint. In Section 3, we propose random beam-based NOMA, and in Section 4, we optimize our proposed scheme. In Section 5, we evaluate our proposed scheme, and in Section 6, we conclude our paper.

System Model Figure 1 illustrates our system model. There is a single transmitter equipped with M antennas with its own K single-antenna users, among which the transmitter selects and serves some of them. Let G ⊂ [K] be the user group selected at the transmitter, where [K] is the set of positive integers less than or equal to K, i.e., [K] = {1, . . . , K}. Then, the received signal at an arbitrary selected user k ∈ G becomes y_k = h_k†x + n_k, (1) where h_k ∈ C^{M×1} is the channel between the transmitter and the user k, whose elements are independent and identically distributed (i.i.d.)
circularly symmetric complex Gaussian random variables with zero mean and unit variance, i.e., h_k ∼ CN(0, I_M). Furthermore, x ∈ C^{M×1} is the transmitted signal at the transmitter, and n_k is an additive white complex Gaussian noise at the user k with zero mean and unit variance, i.e., n_k ∼ CN(0, 1). Meanwhile, we assume that the transmitter exploits linear beamforming vectors to serve the selected users, so the transmitted signal is constructed as x = ∑_{i∈G} v_i x_i, (2) where v_i ∈ C^{M×1} is the beamforming vector for the user i such that ‖v_i‖ = 1, and x_i is a data symbol for the user i such that E|x_i|² = P_i, with P_i the power allocated to user i. Denoting by P the transmitter's total power budget, it should be satisfied that E[tr(xx†)] = ∑_{i∈G} P_i ≤ P.

Conventional Random Beamforming with a QoS Constraint To enjoy the multiplexing gain, the transmitter should exploit the channel state information (CSI). The CSI acquisition at the transmitter is not easy in practice, however, and imperfect CSI causes severe performance degradation. One way to circumvent this difficulty is to use random beams with user diversity in an opportunistic manner. The authors of [16] proposed random beamforming and showed that it can achieve the optimal multiplexing gain when the number of users increases with the transmit power. The procedure of the conventional random beamforming can be summarized as follows: • The transmitter broadcasts M random beams to the users. • Each user chooses the best beam and feeds the beam index and one scalar value, which represents the performance of the selected beam, back to the transmitter. In this case, it is assumed that perfect CSI is available at the users. • From the collected feedback information, the transmitter selects multiple users. • The transmitter serves the selected users with the random beams. In this case, the transmitter selects a single user for each beam and allocates equal power to the selected users at all beams.

For the conventional random beamforming, the transmitter exploits M orthogonal random beams v_1, . . . , v_M ∈ C^M such that v_i ⊥ v_j whenever i ≠ j. In this case, we assume that each random beam is a unit vector, i.e., ‖v_m‖ = 1 for all m ∈ [M]. After the transmitter's random beam broadcasting, each user finds the closest beam to its own channel; the user k returns I(k), where I(k) is the selected beam indicator for the user k given by I(k) = arg max_{m∈[M]} |h_k†v_m|². (3) When the user k is served by the mth random beam, i.e., v_m, and the transmitter allocates equal power to the selected users, the signal-to-interference-plus-noise power ratio (SINR) of the user k becomes SINR_{k,m} = (P/M)|h_k†v_m|² / (1 + (P/M)∑_{i∈[M]\{m}} |h_k†v_i|²). (4) In this case, the user k's signal-to-noise power ratio (SNR) becomes SNR_{k,m} = (P/M)|h_k†v_m|², (5) while the user k's interference-to-noise power ratio (INR) becomes INR_{k,m} = (P/M)∑_{i∈[M]\{m}} |h_k†v_i|². (6) Thus, the achievable rate when the user k is served by the mth beam is given by R_{k,m} = log₂(1 + SINR_{k,m}), (7) and the sum achievable rate from the selected users becomes R = ∑_{m∈[M]} log₂(1 + SINR_{k_m,m}), where k_m denotes the user served by beam m. (8) To maximize (8), the user k feeds back the selected beam and the corresponding SINR as follows: {I(k), SINR_k}, where SINR_k = max_{m∈[M]} SINR_{k,m}. (9) After collecting the feedback values from the users, i.e., {I(k), SINR_k}, k = 1, . . . , K, the transmitter selects, for each beam, the user that can achieve the highest SINR with that beam.
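The following is a minimal simulation sketch of the conventional scheme just described: orthonormal random beams, per-user beam selection and SINR feedback, and per-beam selection of the best user. The parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
M, K, P = 4, 50, 10.0   # beams/antennas, users, transmit power

# M orthonormal random beams: columns of a random unitary matrix.
A = rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M))
V = np.linalg.qr(A)[0]

# K i.i.d. CN(0, I_M) channels, one row per user.
H = (rng.standard_normal((K, M)) + 1j * rng.standard_normal((K, M))) / np.sqrt(2)

# Effective beam gains |h_k^† v_m|^2 for all users and beams (K x M).
G = np.abs(H.conj() @ V) ** 2

# Equal power P/M per beam; SINR of user k on beam m, Eq. (4).
sig = (P / M) * G
inr = sig.sum(axis=1, keepdims=True) - sig
sinr = sig / (1.0 + inr)

best_beam = sinr.argmax(axis=1)   # each user's feedback: beam index I(k) ...
best_sinr = sinr.max(axis=1)      # ... and the corresponding SINR, Eq. (9)

for m in range(M):
    users_m = np.flatnonzero(best_beam == m)
    if users_m.size:
        s_m = users_m[best_sinr[users_m].argmax()]
        print(f"beam {m}: serve user {s_m}, SINR {best_sinr[s_m]:.2f}")
```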
Let s_m be the selected user index for the beam m. Then, s_m can be obtained as s_m = arg max_{k: I(k)=m} SINR_k, (10) so the SINR at the mth beam becomes SINR_{s_m}. (11) In this paper, we consider a quality of service (QoS) constraint, where each user's SINR should exceed a target SINR denoted by γ. Then, the sum achievable rate at the transmitter becomes R = ∑_{m∈[M]} log₂(1 + γ) · 1(SINR_{s_m} ≥ γ), (12) where 1(·) is the indicator function that returns one when the event happens and zero otherwise.

Non-Orthogonal Multiple Access with Random Beamforming In this paper, we implement non-orthogonal multiple access with random beamforming, where each beam can serve more than a single user. When the transmitter serves multiple users with the same beam, it simply superposes the transmitted signals, and the multiple users adopt successive interference cancelation (SIC). As each random beam can serve multiple users, we denote by G_m ⊆ [K] the set of users served by the mth random beam, i.e., v_m, and assume that the sub-user groups G_1, . . . , G_M are pairwise disjoint, i.e., G_i ∩ G_j = ∅ for all i ≠ j. (13) Then, the transmitted signal x given in (2) changes to x = ∑_{m∈[M]} ∑_{i∈G_m} v_m x_i, (14) where the power constraint becomes ∑_{m∈[M]} ∑_{i∈G_m} P_i ≤ P. (15) In this case, the received signal at the user s served by the mth beam, i.e., s ∈ G_m, becomes y_s = h_s†v_m x_s + h_s†v_m ∑_{i∈G_m\{s}} x_i + h_s† ∑_{i∈[M]\{m}} ∑_{j∈G_i} v_i x_j + n_s, (16) where h_s†v_m x_s is the user s's desired signal, and h_s†v_m ∑_{i∈G_m\{s}} x_i is the interference within the same beam, i.e., the intra-beam interference. Furthermore, the term h_s† ∑_{i∈[M]\{m}} ∑_{j∈G_i} v_i x_j is the interference from the other beams, i.e., the inter-beam interference.

With NOMA, the interference within the same beam is managed with the SIC at the users. To denote the decoding order at the user group G_m, we define a sequence (π_m^(1), . . . , π_m^(|G_m|)) that is a permutation of the sequence of all user indexes belonging to G_m, (17) where |·| is the cardinality of a set. Meanwhile, without loss of generality, we assume that ‖h_{π_m^(1)}‖² ≥ ‖h_{π_m^(2)}‖² ≥ · · · ≥ ‖h_{π_m^(|G_m|)}‖². (18) Then, for any (i, j) such that 1 ≤ i < j ≤ |G_m|, the user π_m^(i) can decode the user π_m^(j)'s signal, so it can subtract it from the received signal. Now, we consider the user π_m^(l) in the user group G_m. From (16), we can rewrite the user π_m^(l)'s received signal as y = h†v_m x_{π_m^(l)} + h†v_m ∑_{i=1}^{l−1} x_{π_m^(i)} + h†v_m ∑_{j=l+1}^{|G_m|} x_{π_m^(j)} + (inter-beam interference) + n, (19) where h denotes h_{π_m^(l)}. Then, the user π_m^(l) can decode the signals of the users π_m^(l+1), . . . , π_m^(|G_m|) from the received signal, so after SIC, the received signal (19) becomes y = h†v_m x_{π_m^(l)} + h†v_m ∑_{i=1}^{l−1} x_{π_m^(i)} + (inter-beam interference) + n. (20) From (20), we obtain the user π_m^(l)'s SINR as SINR_{π_m^(l)} = P_{π_m^(l)} Γ_{π_m^(l)} / (Γ_{π_m^(l)} ∑_{i=1}^{l−1} P_{π_m^(i)} + I_inter-beam + 1), (21) where Γ_k = |h_k†v_{I(k)}|² (22) is the effective channel gain of the user k with its selected beam. Meanwhile, when l = 1, Equation (21) simply becomes SINR_{π_m^(1)} = P_{π_m^(1)} Γ_{π_m^(1)} / (I_inter-beam + 1). (23) For notational simplicity, in the denominator of (21), we define the inter-beam interference denoted by I_inter-beam as I_inter-beam = ∑_{i∈[M]\{m}} ∑_{j∈G_i} |h_{π_m^(l)}†v_i|² P_j. Then, for low-latency NOMA, our proposed random beam-based NOMA allocates the same power to each beam, i.e., ∑_{i∈G_m} P_i = P/M for all m ∈ [M], (24) so that the transmitter can omit the power allocation across the beams, which would otherwise require more feedback overhead and heavy computational complexity.

With the equal beam power allocation (24), and since the M orthonormal random beams span C^M, the inter-beam interference at the user π_m^(l) becomes I_inter-beam = (P/M)(‖h_{π_m^(l)}‖² − Γ_{π_m^(l)}). (25) Thus, the SINR given in (21) becomes SINR_{π_m^(l)} = P_{π_m^(l)} Γ_{π_m^(l)} / (Γ_{π_m^(l)} ∑_{i=1}^{l−1} P_{π_m^(i)} + (P/M)(‖h_{π_m^(l)}‖² − Γ_{π_m^(l)}) + 1). (26) Note that, with a given power allocation, the SINR of the user π_m^(l) in (26) is represented only by the user's channel gain (i.e., ‖h_k‖²) and the effective channel gain with the selected beam (i.e., Γ_k). This fact means that each user's feedback information should be the selected beam index, the corresponding effective channel gain, and the channel gain, i.e., for user k, the feedback information becomes {I(k), Γ_k, ‖h_k‖²}. (27) Thus, given the sub-user grouping {G_1, . . . , G_M} and the power allocation {P_i}, the transmitter's achievable sum rate with the target SINR γ becomes R = ∑_{m∈[M]} ∑_{k∈G_m} log₂(1 + γ) · 1(SINR_k ≥ γ). (29)
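As a sanity check on Eq. (26), the following sketch computes the per-user SINRs on one beam from exactly the quantities fed back in (27). The decoding order follows (18), strongest user first; all numeric values in the demo are illustrative assumptions.

```python
def noma_sinrs(gamma_eff, h_norm2, powers, P, M):
    """SINRs of the users sharing one beam under SIC, following Eq. (26).

    gamma_eff: effective beam gains Γ = |h† v_m|² in decoding order
               (strongest user first, per the ordering in Eq. (18)).
    h_norm2:   channel gains ‖h‖² in the same order.
    powers:    per-user powers, summing to the beam power P/M.
    """
    sinrs = []
    for l, (g, hn, p) in enumerate(zip(gamma_eff, h_norm2, powers)):
        intra = g * sum(powers[:l])          # stronger users' signals remain
        inter = (P / M) * (hn - g)           # Eq. (25), equal beam power
        sinrs.append(p * g / (intra + inter + 1.0))
    return sinrs

# Two users on one beam with budget P/M = 2.5; the weaker user
# (decoded first by the stronger one) gets the larger power share.
print(noma_sinrs(gamma_eff=[3.0, 1.0], h_norm2=[4.0, 2.0],
                 powers=[0.8, 1.7], P=10.0, M=4))
```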
Optimization of the Proposed Random Beam-Based NOMA In this section, we optimize our proposed random beam-based NOMA. To maximize the sum achievable rate in (29), we need to find the optimal sub-user groups and the optimal power allocation: (P1) maximize over {G_1, . . . , G_M} and {P_i} the sum rate ∑_{m∈[M]} ∑_{k∈G_m} log₂(1 + γ) · 1(SINR_k ≥ γ), subject to ∑_{i∈G_m} P_i = P/M for all m ∈ [M]. Then, as we can observe in (26), the power allocation at each beam is independent of the power allocated to the other beams. Thus, the problem P1 can be divided into M independent sub-problems, each of which corresponds to user selection and power allocation for one beam. The transmitter only selects users and allocates powers for each beam; for the mth beam, it solves the sub-problem restricted to the users that selected beam m, with the power constraint ∑_{i∈G_m} P_i ≤ P/M. (34)

The problem P1 can then be solved by Algorithm 1. First, the transmitter initializes the sub-user groups G_1, . . . , G_M from the users' selected beam indexes in the feedback information; for the mth beam, the initial sub-user group becomes G_m = {k ∈ [K] : I(k) = m}, i.e., the set of all users that selected the mth beam. Algorithm 1 then proceeds, for each beam m, as follows: fix the decoding order π_m according to the relationship (18); for l = 1 to |G_m|, find the minimum power P_{π_m^(l)} that satisfies the target SINR; and keep the largest sub-group G_m that satisfies the power constraint (34). The result is the sub-user grouping G_1, . . . , G_M and the power allocation.
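The sketch below is one plausible reading of Algorithm 1 for a single beam: admit users in decoding order, give each the minimum power meeting the target SINR via Eq. (26), and stop when the beam budget P/M would be exceeded. It is a sketch under these stated assumptions, not a verbatim transcription of the paper's algorithm.

```python
def select_users_for_beam(cands, gamma, P, M):
    """Greedy per-beam NOMA user admission (a sketch of Algorithm 1).

    cands: list of (gamma_eff, h_norm2) feedback pairs for users that
           chose this beam, sorted by channel gain, strongest first.
    Returns the admitted users' minimum powers meeting the target SINR.
    """
    budget = P / M                   # equal power across beams, Eq. (24)
    powers, used = [], 0.0
    for g, hn in cands:
        inter = (P / M) * (hn - g)   # inter-beam interference, Eq. (25)
        # Minimum p such that  p*g / (g*used + inter + 1) >= gamma.
        p = gamma * (g * used + inter + 1.0) / g
        if used + p > budget:
            break                    # largest admissible prefix found
        powers.append(p)
        used += p
    return powers

# Example: three candidates on one beam, target SINR 1, P = 10, M = 4.
print(select_users_for_beam([(3.0, 4.0), (2.0, 3.0), (1.0, 2.0)], 1.0, 10.0, 4))
```

Admitting users strongest-first is natural here because the user decoded last (index 1 in the order) sees no intra-beam interference, so each added user only needs to overcome the power already spent on earlier admissions.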
Numerical Result In this section, we evaluate our proposed random beam-based NOMA. In Figure 2, we compare the achievable sum rate of the conventional random beamforming and our proposed random beam-based NOMA with respect to the number of users when the number of transmit antennas is four and the transmit SNR is 10 dB (i.e., P = 10). The target SINR is fixed to one, i.e., γ = 1. As we can see in Figure 2, the performance of the conventional random beamforming saturates as the number of users increases because the target SINR is achieved for all beams. Since the transmitter can support at most four users, the maximum achievable rate with the conventional random beamforming becomes M log₂(1 + γ) = 4. In our proposed random beam-based NOMA, however, the achievable rate increases with the number of users because our proposed scheme can support more users on each beam with NOMA.

In Figure 3, we compare the achievable sum rate of the conventional random beamforming and our proposed random beam-based NOMA with respect to the transmit SNR when there are fifty users in total (i.e., K = 50) and the target SINR is fixed to one, i.e., γ = 1. As we can see in Figure 3, the performance of the conventional random beamforming saturates once the target SINR is achieved as the transmit SNR increases. In our proposed random beam-based NOMA, however, the achievable rate increases with the transmit power because each beam can support more users with more transmit power.

In Figure 4, we compare the achievable sum rate of the conventional random beamforming and our proposed random beam-based NOMA with respect to the number of users when the number of transmit antennas is six and the transmit SNR is 10 dB (i.e., P = 10). The target SINR is fixed to one, i.e., γ = 1. As we can see in Figure 4, the performance of the conventional random beamforming saturates as the number of users increases because the target SINR is achieved for all beams. In this case, the transmitter can support at most six users, so the maximum achievable rate with the conventional random beamforming becomes M log₂(1 + γ) = 6. In our proposed random beam-based NOMA, however, the achievable rate increases with the number of users because our proposed scheme can support more users on each beam with NOMA.

In Figure 5, we compare the achievable sum rate of the conventional random beamforming and our proposed random beam-based NOMA with respect to the transmit SNR when the number of transmit antennas is six (i.e., M = 6) and there are 200 users in total (i.e., K = 200). In this case, the target SINR is fixed to one, i.e., γ = 1. As we can see in Figure 5, the performance of the conventional random beamforming saturates once the target SINR is achieved as the transmit SNR increases. In our proposed random beam-based NOMA, however, the achievable rate increases with the transmit power because each beam can support more users with more transmit power.

In Figure 6, we show the achievable sum rate of our proposed random beam-based NOMA with respect to the number of users for various target SINRs (i.e., γ) when the transmitter has four antennas and the SNR is 10 dB. As shown in Figure 6, our proposed random beam-based NOMA improves upon the conventional random beamforming for a fixed target SINR by supporting multiple users on each beam. Furthermore, we can observe that the effect of the optimal power allocation becomes larger when the target SINR for each user is small. This is because, with a smaller target SINR, each beam can support more users with NOMA. Figure 6 compares the conventional random beamforming (OMA), random beam-based NOMA with optimal power, and random beam-based NOMA with equal power [16], in two panels: (a) γ = 0.5 and (b) γ = 0.7.
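To tie the pieces together, the following rough Monte-Carlo driver reuses the greedy admission sketch above to reproduce the qualitative trend in Figures 2-5: the OMA baseline saturates near M log₂(1 + γ), while the NOMA variant keeps growing with K. The OMA branch here simply stops after the first admitted user per beam, which is an approximation of the conventional scheme, not its exact rule; all of this is an illustrative sketch rather than the paper's simulation code.

```python
import numpy as np

rng = np.random.default_rng(7)

def sum_rate(K, M=4, P=10.0, gamma=1.0, noma=True):
    """One channel realization of R = sum log2(1+gamma) * 1(SINR >= gamma)."""
    A = rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M))
    V = np.linalg.qr(A)[0]                         # orthonormal random beams
    H = (rng.standard_normal((K, M)) + 1j * rng.standard_normal((K, M))) / np.sqrt(2)
    G = np.abs(H.conj() @ V) ** 2                  # effective gains, K x M
    hn = (np.abs(H) ** 2).sum(axis=1)              # channel gains ||h||^2
    beam = G.argmax(axis=1)
    rate = 0.0
    for m in range(M):
        idx = np.flatnonzero(beam == m)
        idx = idx[np.argsort(-hn[idx])]            # decoding order: strongest first
        used = 0.0
        for k in idx:
            inter = (P / M) * (hn[k] - G[k, m])
            p = gamma * (G[k, m] * used + inter + 1.0) / G[k, m]
            if used + p > P / M:
                break
            used += p
            rate += np.log2(1.0 + gamma)
            if not noma:
                break                              # OMA: one user per beam
    return rate

for K in (10, 50, 200):
    print(K, sum_rate(K, noma=False), sum_rate(K, noma=True))
```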
Conclusions In this paper, we proposed random beam-based NOMA for low-latency MISO broadcast channels, where there exists a target SINR for each user. For low latency, the transmitter exploits random beams, so each user can reduce the channel feedback overhead, while each beam can support more than a single user with NOMA. We established the joint optimization problem of user scheduling and power allocation for random beam-based NOMA. By allocating equal powers across the beams, we reduced the feedback overhead from each user and showed that the joint optimization can be divided into sub-optimization problems, one per beam. We found the optimal power allocation and user scheduling for each sub-optimization problem, and we showed that our proposed random beam-based NOMA improves upon the conventional random beamforming by supporting more than a single user on each beam.

Figure 2. The performance of our proposed random beam-based NOMA with respect to the number of users when the transmitter has four antennas, and SNR is 10 dB.
Figure 3. The performance of our proposed random beam-based NOMA with respect to the transmit SNR when the transmitter has four antennas, and the number of users is 50.
Figure 4. The performance of our proposed random beam-based NOMA with respect to the number of users when the transmitter has six antennas, and SNR is 10 dB.
Figure 5. The performance of our proposed random beam-based NOMA with respect to the transmit SNR when the transmitter has six antennas, and the number of users is 200.
SAR ATR of Ground Vehicles Based on ESENet: In recent studies, synthetic aperture radar (SAR) automatic target recognition (ATR) algorithms based on the convolutional neural network (CNN) have achieved high recognition rates on the moving and stationary target acquisition and recognition (MSTAR) dataset. However, in a SAR ATR task, the feature maps with little information automatically learned by CNN will disturb the classifier. We design a new enhanced squeeze and excitation (enhanced-SE) module to solve this problem, and then propose a new SAR ATR network, i.e., the enhanced squeeze and excitation network (ESENet). Compared to the available CNN structures designed for SAR ATR, the ESENet can extract more effective features from SAR images and obtain better generalization performance. On the MSTAR dataset containing pure targets, the proposed method achieves a recognition rate of 97.32% and exceeds the available CNN-based SAR ATR algorithms. Additionally, it has shown robustness to large depression angle variation, configuration variants, and version variants.

Introduction Synthetic aperture radar (SAR) has played a significant role in surveillance and battlefield reconnaissance thanks to its all-day, all-weather, and high-resolution capability. In recent years, SAR automatic target recognition (ATR) of ground military vehicles has received intensive attention in the radar ATR community. However, SAR images usually have low resolution and only contain the amplitude information of scattering centers. Thus, it is challenging to identify the targets in SAR images.

The MIT Lincoln Laboratory proposed the standard SAR ATR architecture, which consists of three stages: detection, discrimination, and classification [1]. In the detection stage, simple decision rules are used to find the bright pixels in SAR images and indicate the presence of targets. The output of this stage might include not only targets of interest but also clutter, because the detection stage is far from perfect. In the following discrimination stage, a discriminator is designed to solve a two-class (target and clutter) classification problem, and the probability of false alarm can be significantly reduced [2]. In the final classification stage, a classifier is designed to categorize each output image of the discrimination stage as a specific target type.

In the classification stage, there are three mainstream methods: template matching methods, model-based methods, and machine learning methods. For the template matching methods [3,4], the template database is generated from training samples according to some matching rules, and the best match is then found by comparing each test sample to the template database. The common matching rules are the minimum mean square error, the minimum Euclidean distance, the maximum correlation coefficient, etc. In these template matching methods, the initial SAR images or sub-images cut from the initial SAR images serve as templates. However, SAR images are sensitive to azimuth angle, depression angle, and target structure. When there is a large difference between the training and test samples, the recognition performance severely decreases. Additionally, such methods suffer from severe overfitting [5]. Model-based methods were proposed to solve the above problem [6,7]. In the model-based methods, SAR images are predicted by a computer-aided design model, and the modeling procedure is usually complicated.
SAR ATR algorithms based on machine learning methods can be further divided into two types, i.e., feature-based methods and deep learning methods. Feature-based methods [8,9] require features to be manually extracted from SAR images, while deep learning methods automatically extract features from SAR images. Thus, deep learning methods avoid the designing of feature extractors. As a typical deep learning structure, the convolutional neural network (CNN) has been successfully applied in various fields, e.g., SAR image classification [10] and satellite image classification [11]. Particularly, CNN-based methods outperform others in SAR ATR tasks due to their unique characteristics suitable for two-dimensional image classification [12].

The MSTAR dataset serves as a benchmark for SAR ATR algorithm evaluation and comparison [13]. However, there is a high correlation between the target type and the clutter in the MSTAR dataset, i.e., the SAR images of a specific target type may correspond to the same background clutter. It was demonstrated that, even if the target and shadow regions are removed, a traditional classifier still achieves high recognition accuracy (above 99%) on the remaining clutter [14]. In real-world situations, however, the target location may change, and various background clutters, rather than a fixed type, should accompany the corresponding SAR images. Therefore, we exclude such correlation by target region segmentation [15] and generate the MSTAR pure target dataset for fair comparison and evaluation of SAR ATR algorithms.

The key factors in improving the recognition performance of CNN-based SAR ATR algorithms include: (i) SAR image preprocessing to extract features more effectively and easily; and (ii) designing effective network structures that make full use of the features extracted from SAR images.

For SAR image preprocessing, Ding et al. [16] augmented the training set by image rotation and shifting to alleviate over-fitting. Chen et al. generated the augmented training set by cropping the initial 128 × 128 MSTAR images to 88 × 88 patches randomly [12]. Wagner enlarged the training set by directly adding distorted SAR images to improve the robustness [17]. Lin et al. cropped the initial MSTAR images to 68 × 68 patches in order to reduce the computational burden of CNN [18], and Shang et al. cropped the initial MSTAR images to 70 × 70 patches [19]. Wang et al. used a despeckling subnetwork to suppress speckle noise before inputting SAR images into a classification network [20].
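As a small illustration of the random-patch cropping cited above (e.g., 128 × 128 to 88 × 88 in [12]), the following is a minimal sketch; the function name and default patch size are our own choices, not from the cited works.

```python
import numpy as np

def random_crop(img, size=88, rng=None):
    """Randomly crop a square patch from a 2-D SAR image array,
    e.g., an 88x88 patch from a 128x128 MSTAR chip for augmentation."""
    rng = rng or np.random.default_rng()
    h, w = img.shape
    top = rng.integers(0, h - size + 1)
    left = rng.integers(0, w - size + 1)
    return img[top:top + size, left:left + size]

patch = random_crop(np.zeros((128, 128)), rng=np.random.default_rng(0))
print(patch.shape)   # (88, 88)
```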
For the designing of CNN structures for SAR ATR, a traditional CNN structure consisting of convolutional layers, pooling layers, and a softmax classifier was proposed [16,21-23]. Later, Chen et al. designed the A-convnet, where the number of unknown parameters is greatly reduced by removing the fully-connected layer [12]. Wagner replaced the softmax classifier in the traditional CNN structure with an SVM classifier and achieved high recognition accuracy [17,24]. Lin et al. proposed CHU-Nets, where a convolutional highway unit is inserted into the traditional CNN structure and the classification performance is improved on a limited-labeled training dataset [18]. Shang et al. added an information recorder to CNN to remember and store the spatial features of the samples, and then used the spatial similarity information of the recorded features to predict the unknown sample labels [19]. Kechagias-Stamatis et al. fused a convolutional neural network module with a sparse coding module under a decision-level scheme, which can adaptively alter the fusion weights based on the SAR images [25]. Pei et al. proposed a multiple-view DCNN (m-VDCNN) to extract features from target images with different azimuth angles [26].

Generally, CNN is a data-driven model, and each pixel of the training and test samples directly participates in feature extraction. The correlation between the clutter in the training and test sets cannot be ignored, since input SAR images consist of both target and clutter regions. Additionally, in the available CNN-based SAR ATR algorithms, the softmax classifier directly applies the features extracted by the convolutional layers. However, CNN may automatically learn useless feature maps, which prevent the classifier from effectively utilizing significant features [27,28]. Therefore, the available CNN-based SAR ATR algorithms ignore the negative effects of the feature maps with little information, and the recognition performance may degrade.

We propose a novel CNN-based SAR ATR algorithm to tackle the above-mentioned problems. The main contributions include: (i) an enhanced Squeeze and Excitation (SE) module is proposed to suppress feature maps with little information in CNN by allocating different weights to feature maps according to the amount of information they contain; and (ii) a modified CNN structure, i.e., the Enhanced Squeeze and Excitation Net (ESENet), incorporating the enhanced-SE module is proposed. The experimental results on the MSTAR dataset without clutter have shown that the proposed network outperforms the available CNN structures designed for SAR ATR.

The remainder of this paper is organized as follows. Section 2 introduces the Squeeze and Excitation module. Section 3 introduces a novel SAR ATR method based on the ESENet and discusses the mechanism of the enhanced-SE module together with the structure of the ESENet in detail. Section 4 presents the experimental results to validate the effectiveness of the proposed network, and Section 5 concludes the paper.

Squeeze and Excitation Module A typical CNN structure consists of a feature extractor and a classifier. The feature extractor is a multilayer structure formed by stacking convolutional layers and pooling layers. The feature maps of different hierarchies are extracted layer by layer, and the feature maps of the last layer are then applied by the classifier for target recognition. In a typical feature extractor, the feature maps in the same layer are regarded as having the same importance to the next layer. However, such an assumption is usually violated in practice [29]. Figure 1 shows 16 feature maps extracted by the first convolutional layer of a typical CNN structure applied in a SAR ATR experiment. It is observed that some of the feature maps, e.g., the second feature map in the first row, only have several bright pixels and contain less target structural information than others.

In a typical CNN structure, all of the feature maps in the same layer pass through the network equally despite their different importance. Thus, they make equal contributions to recognition, and such an equal mechanism disturbs the utilization of the important feature maps that contain more information. We could apply the SE module, which allocates different weights to different feature maps in the same layer, to enhance significant feature maps and suppress others with less information [29].

Figure 2 illustrates the structure of a SE module. For an arbitrary input feature map tensor U ∈ R^{W×H×C}, where W × H represents the size of each input feature map and C represents the number of input feature maps, the SE module transforms U into a new feature map tensor X of the same size, i.e., X ∈ R^{W×H×C}. r is a fixed hyperparameter in a SE module. The computation of a SE module includes two steps, i.e., the squeeze operation F_sq and the excitation operation F_ex. The squeeze operation obtains the global information of each feature map, while the excitation operation automatically learns the weight of each feature map. A simple implementation of the squeeze operation is global average pooling. For the feature map tensor U ∈ R^{W×H×C}, such a squeeze operation outputs a description tensor z ∈ R^C, where the cth element of z is given by z_c = (1/(W × H)) ∑_{i=1}^{W} ∑_{j=1}^{H} u_c(i, j), (1) where u_c represents the cth feature map of U. The excitation operation is denoted by the nonlinear function s = F_ex(z) = σ(W_2 δ(W_1 z)), (2) where δ is the rectified linear unit (ReLU) function, σ is the sigmoid activation function, W_1 ∈ R^{(C/r)×C} and W_2 ∈ R^{C×(C/r)}, r is a fixed hyperparameter, and s is the automatically learned weight vector, which represents the importance of the feature maps. It can be seen from Equations (1) and (2) that the combination of the squeeze operation and the excitation operation learns the importance of each feature map independently from the network. Finally, the cth feature map produced by the SE module is given by x_c = F_scale(u_c, s_c) = s_c · u_c, (3) where s_c represents the weight of u_c and F_scale(u_c, s_c) represents their product.

As discussed above, the SE module computes and allocates weights to the corresponding feature maps. The feature maps with little information will be suppressed after being multiplied by weights much less than 1, while the others will remain almost unchanged after being multiplied by weights near 1.
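The following NumPy sketch is a forward pass of Equations (1)-(3), assuming a channel-last tensor of shape (W, H, C); in practice W_1 and W_2 are trained end to end together with the rest of the network, so the random weights here are purely illustrative.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def se_module(U, W1, W2):
    """Squeeze-and-Excitation on a feature tensor U of shape (W, H, C).

    W1: (C//r, C) and W2: (C, C//r) are the excitation weights.
    """
    # Squeeze: global average pooling, z_c in Eq. (1).
    z = U.mean(axis=(0, 1))                    # shape (C,)
    # Excitation: s = sigmoid(W2 @ relu(W1 @ z)), Eq. (2).
    s = sigmoid(W2 @ np.maximum(W1 @ z, 0.0))  # shape (C,)
    # Scale: reweight each feature map, Eq. (3).
    return U * s                               # broadcast over (W, H, C)

# Toy usage with C = 16 channels and reduction ratio r = 4.
rng = np.random.default_rng(0)
U = rng.random((8, 8, 16))
W1, W2 = rng.standard_normal((4, 16)), rng.standard_normal((16, 4))
print(se_module(U, W1, W2).shape)              # (8, 8, 16)
```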
SAR ATR Based on ESENet In this section, we will propose the enhanced-SE module according to the characteristics of the SAR data, and then design a new CNN structure for SAR ATR, namely the ESENet. Figure 3 shows the main steps of the training and test stages to give a brief view of the proposed method. Firstly, image segmentation is utilized to remove the background clutter [15,30]. Subsequently, the segmented training images are input into the ESENet to learn weights, and all of the weights in the ESENet are fixed when the training stage ends. After that, the ESENet is used for classification. During the test stage, the segmented test images are input into the ESENet to obtain the classification results. The correlation between the clutter in the training and test sets is excluded, because the clutter irrelevant to the target does not join the training and test stages of the ESENet. In what follows, we will explain the mechanisms of the ESENet in detail.

Overall Structure of the ESENet In this part, we will discuss the characteristics and general layout of the proposed ESENet. As shown in Figure 4, the ESENet consists of four convolutional layers, three max pooling layers, a fully-connected layer, a SE module, an enhanced-SE module, and an LM-softmax classifier [31]. There are 16 5 × 5 convolutional kernels in the first convolutional layer, 32 3 × 3 convolutional kernels in the second convolutional layer, 64 4 × 4 convolutional kernels in the third convolutional layer, and 64 5 × 5 convolutional kernels in the last convolutional layer. Batch normalization [32] is used in the first convolutional layer to accelerate the convergence. A max pooling layer with pooling size 2 × 2 and stride size 2 is added after the first convolutional layer, the SE module, and the enhanced-SE module, respectively. The SE module is inserted in the middle of the network to preliminarily enhance the important feature maps. An enhanced-SE module is inserted before the last pooling layer to further suppress higher-level feature maps with little information. Subsequently, dropout is added to the third convolutional layer and the last convolutional layer. The fully-connected layer has 10 nodes. Finally, we apply the LM-softmax classifier for classification. Below, we will introduce the key components of the proposed network in detail.
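A rough PyTorch sketch of the layout just described is given below, for a 60 × 60 input. The SE and enhanced-SE blocks are passed in as stand-ins (identity here), and padding/stride details beyond those stated in the text are our assumptions, so this is an approximation of the architecture rather than the authors' exact network.

```python
import torch
import torch.nn as nn

class ESENetSketch(nn.Module):
    """Approximate ESENet layout: 4 conv layers, 3 max-pool layers,
    SE after conv2, enhanced-SE after conv4, 10-way classifier."""
    def __init__(self, se_block, ese_block, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 5), nn.BatchNorm2d(16), nn.ReLU(),
            nn.MaxPool2d(2, 2),                  # pool after conv1
            nn.Conv2d(16, 32, 3), nn.ReLU(),
            se_block,                            # SE module, then pool
            nn.MaxPool2d(2, 2),
            nn.Conv2d(32, 64, 4), nn.ReLU(), nn.Dropout(0.5),
            nn.Conv2d(64, 64, 5), nn.ReLU(), nn.Dropout(0.25),
            ese_block,                           # enhanced-SE, then pool
            nn.MaxPool2d(2, 2),
        )
        # 60 -> 56 -> 28 -> 26 -> 13 -> 10 -> 6 -> 3 spatial size.
        self.classifier = nn.Linear(64 * 3 * 3, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = ESENetSketch(nn.Identity(), nn.Identity())
print(model(torch.randn(2, 1, 60, 60)).shape)    # torch.Size([2, 10])
```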
Enhanced Squeeze and Excitation Module We discovered that, if the original SE module is inserted directly into a CNN designed for SAR ATR, most of the weights output by the sigmoid function become 1 (or almost 1); thus, the feature maps remain almost unchanged after being multiplied by the corresponding weights. Accordingly, the original SE module cannot effectively suppress the feature maps with little information.

To solve this problem, a modified SE module is proposed, i.e., the enhanced-SE module. Firstly, although global average pooling can compute the global information of the current feature map, its ability to apperceive it accurately is limited. Thus, we design a new layer with learnable parameters to apperceive the global information of the current feature map, realized by replacing the global average pooling layer with a convolutional layer whose kernel size is the same as the size of the current feature map. Additionally, the first fully-connected layer is deleted, so the apperceived global information directly joins the computation of the final output weights.

In the original SE module, the sigmoid function s(x) = 1/(1 + e^{−x}) [33] is utilized to avoid numerical explosion by transforming all the learned weights into (0, 1). Although the sigmoid function is monotonically increasing, all of the large weights are transformed to almost 1 (e.g., the weight 2.5 becomes 0.9241 after the sigmoid transformation). Such a transformation does not help the network distinguish the importance of different feature maps. To solve this problem, we design a new function, i.e., the enhanced-sigmoid function p(x) = [s(bx + a)]^q, where a is the shift parameter, b is the scale parameter, q is the power parameter, and s(x) is the original sigmoid function. If a = 0, b = 1, and q = 1, then p(x) is the same as s(x). For a = 0, b = 1, and q = 2, Figure 5 shows the comparison between the sigmoid function and the enhanced-sigmoid function. If the input value falls in (−5, 5), the output of the enhanced-sigmoid function is smaller than the output of the sigmoid function (e.g., the weight 2.5 becomes 0.8540 after the enhanced-sigmoid transformation, which is obviously smaller than 0.9241).

Figure 6 shows the structure of the enhanced-SE module with the above modifications. Figure 7 shows an illustrative comparison between the feature maps output by the SE module and the enhanced-SE module in a SAR ATR task. Obviously, many feature maps become blank in Figure 7b, indicating that the enhanced-SE module suppresses feature maps with little information more effectively than the original SE module.
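The short sketch below implements the enhanced-sigmoid as reconstructed above, p(x) = s(bx + a)^q; the closed form is our reading of the text, chosen because it reduces to the plain sigmoid at q = 1 and reproduces the quoted example 2.5 → 0.8540 at q = 2.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def enhanced_sigmoid(x, a=0.0, b=1.0, q=2.0):
    """Enhanced-sigmoid p(x) = sigmoid(b*x + a) ** q.

    With a=0, b=1, q=1 it reduces to the plain sigmoid; q > 1 pushes
    large weights further below 1, separating feature-map importance.
    """
    return sigmoid(b * x + a) ** q

print(round(sigmoid(2.5), 4))            # 0.9241, as quoted in the text
print(round(enhanced_sigmoid(2.5), 4))   # 0.854, matching the example
```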
Other Components in the ESENet The convolutional layer and the pooling layer are the basic components of a typical CNN structure [34]. The convolutional layer often acts as a feature extractor, which convolves the input with a convolutional kernel to generate a new feature map. The pooling layer is a subsampling layer that reduces the number of trainable parameters of the network. By subsampling, the structural features of the current layer are maintained and the impact of deformed training samples on feature extraction is reduced.

Neural networks are essentially utilized to fit the data distribution. If the training and test sets have different distributions, the convergence speed will decrease and the generalization performance will degrade. To tackle this problem, batch normalization is added behind the first convolutional layer of the ESENet to accelerate network training and improve the generalization performance.

Dropout is a common regularization method utilized in deep neural networks [35]. This technique randomly samples the weights of the current layer with probability p and prunes them out, similar to an ensemble of sub-networks. Usually, it is adopted in layers with a large number of parameters to alleviate overfitting. In the proposed ESENet, the fully-connected layer has a small number of parameters, while the third convolutional layer and the fourth convolutional layer contain most of the trainable weights. Thus, we apply dropout in these two layers with p = 0.5 and p = 0.25, respectively.

Additionally, we replace the common softmax classifier with the LM-softmax classifier, which can improve the classification performance by adjusting the decision boundary of the features extracted by CNN.

Parameter Settings and Training Method We apply the gradient descent technique with weight decay and momentum in the training process [36], defined by ∆θ_{i+1} = α∆θ_i − ε(∂L/∂θ + βθ), where ∆θ_{i+1} is the variation of θ in the (i + 1)th iteration, ε is the learning rate, α is the momentum coefficient, β is the weight decay coefficient, and ∂L/∂θ is the derivative of the loss function L with respect to θ. In this paper, the base learning rate is set to 0.02, α is set to 0.9, and β is set to 0.004, respectively. Subsequently, we adopt a multi-step iteration strategy, which updates the learning rate to ε = ε/10 when the iteration number reaches 1000, 2000, and 4000, etc. Additionally, we adopt a common training method that subtracts the mean of the training samples from both the training and test samples to accelerate the convergence of CNN. In the enhanced-SE module, a is set to 0, b is set to 1, and q is set to 2. In the SE module, r is set to 16.
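For concreteness, a scalar sketch of the update rule and the multi-step schedule described above follows; the momentum-plus-weight-decay form is our reconstruction of the referenced rule [36], applied per weight in practice.

```python
def sgd_step(theta, v, grad, lr=0.02, momentum=0.9, weight_decay=0.004):
    """One update of the momentum + weight-decay rule described above.

    v is the previous variation (delta-theta); returns (theta, v)."""
    v = momentum * v - lr * (grad + weight_decay * theta)
    return theta + v, v

def lr_at(iteration, base_lr=0.02, milestones=(1000, 2000, 4000)):
    """Multi-step schedule: divide the learning rate by 10 at each milestone."""
    return base_lr / (10 ** sum(iteration >= m for m in milestones))

print(lr_at(500), lr_at(1500), lr_at(2500))   # 0.02 0.002 0.0002
```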
Dataset Description The training and test datasets are generated from the MSTAR dataset provided by DARPA/AFRL [13]. The dataset was collected by the Sandia National Laboratory SAR sensor platform in 1995 and 1996 using an X-band SAR sensor. It provides a nominal spatial resolution of 0.3 × 0.3 m in both range and azimuth, and the image size is 128 × 128. The publicly released datasets include ten categories of ground military vehicles, i.e., armored personnel carriers: BMP-2, BRDM-2, BTR-60, and BTR-70; tanks: T62 and T72; rocket launcher: 2S1; air defense unit: ZSU-234; truck: ZIL-131; and bulldozer: D7.

The MSTAR dataset consists of two sub-datasets for the sake of performance evaluation in various scenarios, i.e., the standard operating conditions (SOC) dataset and the extended operating conditions (EOC) dataset. The SOC dataset consists of ten target categories at 17° and 15° depression angles, respectively, as shown in Table 1. As a matter of routine [12,21], images at the 17° depression angle serve as training samples and images at the 15° depression angle serve as test samples. The EOC dataset includes EOC1 and EOC2, i.e., a large depression variation dataset and a variants dataset. There are four target categories in EOC1, including 2S1, BRDM-2, T-72, and ZSU-234. Images at the 17° depression angle serve as training samples and images at the 30° depression angle serve as test samples, as shown in Table 2. There are two variant categories in EOC2, i.e., configuration variants and version variants. For the configuration variants, the training samples include BMP2, BRDM-2, BTR-70, and T-72, and the test samples only include variants of T72. For the version variants, the training samples include BMP-2, BRDM-2, BTR-70, and T-72, and the test samples include variants of T72 and BMP-2. Detailed information is listed in Tables 3 and 4, respectively.

Network Structures for Comparison Traditional CNN and A-convnet structures are designed according to the size of the input image by referring to the structures given in references [12,21] for the convenience of comparison. Subsequently, the structures yielding the highest classification accuracy are selected as the optimal ones, as shown in Figure 8a,b, respectively. Data augmentation methods, such as translation and rotation, are not applied in this paper.

Effect of Clutter and Data Generation Reference [14] shows that, although the target region has been removed from the original MSTAR images, the nearest neighbor classifier still achieves high classification accuracy, proving that the clutter in the training and test images of the MSTAR dataset is highly correlated. Reference [15] also proves that background clutter in the MSTAR dataset will disturb the recognition results of CNN. The target region is segmented out from the original SAR images according to references [15,30] to mitigate the impact of background clutter on network training and testing, as shown in Figure 9. The original 128 × 128 image is cropped to 60 × 60 to reduce the computational cost, because the target only occupies a small region at the center of the original image. By this means, the pure target dataset utilized in this paper is generated.

Results of SOC Table 5 shows the recognition results of the ESENet and the other CNN structures for comparison under SOC. To validate the effectiveness of the proposed enhanced-SE module, we replace the enhanced-SE module in the ESENet with an original SE module and obtain the SENet for comparison in Table 5. Table 6 provides the confusion matrix of the ESENet.
As shown in Table 5, the recognition accuracy of the traditional CNN, A-convnet, SENet, and ESENet under SOC is 94.79%, 95.04%, 96.63%, and 97.32%, respectively. Although the background clutter has been removed from the SAR images, the ESENet still obtains good recognition performance, as the recognition rate for all target types exceeds 90%. Table 5 shows that the SENet outperforms the traditional CNN structures for SAR ATR by inserting the SE module into a common CNN structure. Moreover, comparisons between the SENet and the ESENet show that the enhanced-SE module outperforms the SE module in facilitating the feature extraction of CNN in a SAR ATR task. For a typical test sample, the feature maps of the ESENet before and after transformation by the SE and enhanced-SE modules are shown in Figure 10. It is observed that the feature maps of the second convolutional layer pass through the SE module almost unchanged. However, in the third convolutional layer, the feature maps with little information are effectively suppressed when they pass through the enhanced-SE module.
Results of EOC1

Subsequently, the EOC1 dataset is utilized to evaluate the performance of the ESENet under large depression angle variation. As shown in Table 7, the recognition accuracy of the traditional CNN, the A-convnet, the SENet, and the ESENet is 88.44%, 89.05%, 90.27%, and 93.40%, respectively, which shows that the ESENet outperforms the others under EOC1. However, the accuracy of the EOC1 experiment is lower than that of the SOC experiment. As expected, the large difference between the training and the test samples decreases the recognition accuracy, because SAR images are sensitive to variations in viewing angle. Table 8 shows the confusion matrix of the ESENet under EOC1. It is observed that the recognition accuracy of T-72 decreases rapidly, which might be caused by the similarity between T-72 and ZSU-234 under large depression angle variation. As shown in Figure 12, the SAR image of T-72 at a 30° depression angle exhibits a configuration similarity to ZSU-234 at a 17° depression angle.

Results of EOC2

Variant recognition plays a significant role in SAR ATR. We test the network's ability to distinguish objects with similar appearance in the experiments under EOC2. For the configuration variants dataset introduced in Table 3, Table 9 shows the recognition accuracy of the above-mentioned four networks and Table 10 shows the confusion matrix of the ESENet. The ESENet clearly outperforms the others, and its recognition accuracy is increased by 3% compared with the traditional CNN. For the version variants dataset introduced in Table 4, Table 11 lists the recognition accuracy of the four networks, and Table 12 lists the confusion matrix of the ESENet. It is observed that the ESENet has the best recognition performance among the four CNN structures.
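Tables 6, 8, 10, and 12 are confusion matrices. A minimal sketch of how such a matrix and the per-class and overall accuracies can be computed from predicted and true label arrays follows (plain NumPy; the label arrays are hypothetical inputs).

```python
import numpy as np

CLASSES = ["2S1", "BMP-2", "BRDM-2", "BTR-60", "BTR-70",
           "D7", "T-62", "T-72", "ZIL-131", "ZSU-234"]

def confusion_matrix(y_true, y_pred, n_classes=10):
    # rows index the true class, columns the predicted class
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

def report(cm):
    overall = np.trace(cm) / cm.sum()
    for i, name in enumerate(CLASSES[: len(cm)]):
        n = cm[i].sum()
        rate = cm[i, i] / n if n else float("nan")
        print(f"{name:8s} {rate:7.2%}")   # per-class recognition rate
    print(f"overall  {overall:7.2%}")
```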
Conclusions

Feature extraction plays an important role in the task of SAR ATR. This paper proposed the ESENet to address the problem that feature maps with little information, automatically learned by a CNN, decrease SAR ATR performance. In this framework, we designed a new enhanced-SE module. The enhanced-SE module enhances the ability of a CNN to suppress feature maps with little information by computing and allocating different weights to the corresponding feature maps. For the preprocessed MSTAR dataset, experiments have shown that the ESENet achieves higher recognition accuracy than the traditional CNN structure and the A-convnet, and that it exhibits robustness to large depression angle variation, configuration variants, and version variants.

Future work will focus on network optimization, multi-channel CNN structure design for multi-dimensional feature extraction, and improving the network robustness to distorted datasets.

Figure 1. Illustration of feature maps learned by a convolutional neural network (CNN) in a synthetic aperture radar automatic target recognition (SAR ATR) experiment.

Figure 2. The structure of the Squeeze and Excitation (SE) module.

The segmented training images are input into the ESENet to learn weights, and all of the weights in the ESENet are fixed when the training stage ends. After that, the ESENet is used for classification. During the test stage, the segmented test images are input into the ESENet to obtain the classification results. The correlation between the clutter in the training and test images is excluded, because clutter irrelevant to the target does not take part in the training and test stages of the ESENet.

Figure 3. Overview of the proposed SAR ATR method.

Figure 4. Structure of the Enhanced Squeeze and Excitation Net (ESENet).

Figure 5. Comparison between the sigmoid function and the enhanced-sigmoid function.

Figure 6. Structure of the enhanced-SE module.

Figure 7. Visualization of feature maps output by the SE module (a) and the enhanced-SE module (b).

Figure 8. Optimal structures of traditional CNN (a) and A-convnet (b) for the 60 × 60 input image.
Figure 10. Visualization of feature maps in the ESENet. (a) input image; (b) mean of training samples; (c) input image with the mean of the training samples removed; (d) feature maps of conv2; (e) feature maps of conv2 after passing through the SE module; (f) feature maps of conv3; and (g) feature maps of conv3 after passing through the enhanced-SE module.

For the purpose of illustration, we present a sample of BTR-70 in Figure 11a, which is misclassified as BTR-60 by the SENet, while the ESENet classifies it correctly. The feature maps of the third convolutional layer in both networks are shown in Figure 11.

Figure 11. Feature maps of conv3 in SENet and ESENet. (a) input image; (b) input image with the mean of training samples removed; (c) feature maps of conv3 in the SENet; (d) feature maps of conv3 in the SENet after passing through the SE module; (e) feature maps of conv3 in the ESENet; and (f) feature maps of conv3 in the ESENet after passing through the enhanced-SE module.

Table 1. Training and test samples for the standard operating conditions (SOC) experimental setup.

Table 2. Number of training and test samples for extended operating conditions (EOC)-1 (large depression variation).

Table 3. Number of training and test samples for EOC-2 (configuration variants).

Table 4. Number of training and test samples for EOC-2 (version variants).

Table 5. Recognition accuracy comparison under SOC.

Table 6. Confusion matrix of ESENet under SOC.
10,404.8
2019-06-01T00:00:00.000
[ "Engineering", "Computer Science" ]
Analysis of Food Security in Regional Context: West Java Provincial Government as a Case Study

Food security in Indonesia is still experiencing problems, especially in the West Java region, ranging from the problem of food availability to the ability of the community to meet food needs. The goal of food security is access for every household or individual to obtain food for the purposes of a healthy life, with requirements for receiving food in accordance with prevailing values or culture and taking into account socio-economic conditions, access and availability of food. This study describes the condition of food security in the West Java region as an implementation of food security policies issued by the Regional Government of West Java. This study uses a qualitative method. The data collected are sourced from primary and secondary data. The primary data come from interviews with informants, including the heads of farmer groups, millers, rice and grain marketers and related agencies at the provincial level of West Java. Secondary data are sourced from the West Java Province Agriculture and Food Crops Office, the West Java Provincial Food Security Agency, and the West Java Regional Division (Divre) of Bulog. The results of the study show that the food security policy of the West Java regional government has not made any significant changes to the yields of food crops obtained in West Java, even though the Regional Regulation of West Java Province No. 4 of 2012 concerning Regional Food Independence and other food policies have been enacted.

Introduction

World food needs are increasing along with the increase in world population (Rosenthal, 2009). The increasing population is not the only problem hindering national food security (Khalil Ahmad and Amjad Ali, 2016). The reduction in agricultural land converted into settlements and industrial land has become a distinct threat and challenge for the Indonesian people in becoming a nation that is independent in the food sector. Food is a very basic need for every human being to be able to carry out daily activities in order to sustain life (Ma, 2015). Food is also a basic right for every citizen (Ma, 2015). As a basic need, food has a very important meaning and role in the life of a nation. The availability of food that does not meet people's needs can create economic instability (Abdul Manap & Ismail, 2019). Various social and political upheavals can also occur if food availability is disrupted (Fauzin, 2021; Ma, 2015). Critical food conditions can even endanger national economic stability (Purwaningsih, 2008). Adequate food supply is a complex issue related to the interests of many people with diverse backgrounds and socio-cultures (Abdul Manap & Ismail, 2019). Given this, the government's role is needed to bridge these various interests, from production to consumption. The food sector is the main leading sector that must be developed by the Indonesian government (Khairiyakh et al., 2016). This is based on a number of considerations. First, Indonesia has natural potential that can be developed as agricultural land. Second, most of the population lives in rural areas whose livelihoods are in the agricultural sector. Third, there is a need for high technology and scientific induction designed to develop agriculture without causing damage. Fourth, the availability of labor in the agricultural sector is quite abundant. Fifth, the threat of food shortages can be met independently from domestic products, so that they do
not have to depend on foreign agricultural products, whose prices will one day become expensive. The food problem is not only the central government's main concern but also that of the Regional Government of West Java, which is known as one of the provinces with agrarian potential (Purwaningsih, 2008). Based on data from the Regional Food Security Agency (BKPD) for West Java Province, several districts/cities in this province are still food vulnerable, even very food insecure. As opposed to food insecurity, food security is a condition defined as the availability of food in sufficient quantities, distributed at affordable prices, and safe for consumption for every citizen to support their daily activities at all times. From this definition, it can be concluded that food security is an integrative concept covering food availability, distribution and consumption. Meanwhile, food insecurity arises when the food conditions experienced by a region, community, or household at a certain time are not sufficient to meet physiological needs for growth and public health. Food insecurity can occur repeatedly at any time (chronic) and can also occur due to emergencies, such as natural disasters or social disasters (Wan, 2017).

West Java is the province with the largest population in Indonesia, holding more than 18 percent of Indonesia's total population. West Java's population growth rate is around 1.54% (2021) and tends to decrease every year. However, in terms of quantity, the population continues to increase from year to year. One of the important things related to the population is food availability. The problem of food availability covers the production, distribution and consumption of food logistics. The government, both at the central and regional levels, is making various efforts to fulfill food security programs and the availability of food logistics (Purwaningsih, 2008). Food security is the access of every household or individual to obtain food for a healthy life, with the requirements of receiving food in accordance with prevailing values or culture and taking into account socio-economic conditions, access and availability of food. Thus, food security must be controlled by stabilizing price fluctuations so people can buy food (Jamaludin, 2022). Dimensions of food security include availability, utilization, access to socio-economic culture, and access to infrastructure. If the concept of food security is examined, it does not only concern the quantity aspect but also food quality, safety and nutrition (Lestari, 2019). There are eight fundamental matters related, both directly and indirectly, to food security that must be considered, namely: 1. The household is the most important unit of attention in meeting national, community and individual food needs; 2. The state has an obligation to guarantee the right to food for every citizen, gathered in the smallest unit of society, to obtain food for survival; 3. Availability of food includes aspects of a sufficient quantity of food and guaranteed quality (food quality); 4. Food production greatly determines the amount of food, as an activity or process of producing, processing, making, preserving, repackaging and/or changing the form of food; 5.
Food quality is determined on the basis of food safety criteria, nutritional content and trade standards for food and beverage ingredients; 6. Food safety comprises the conditions and efforts needed to protect food from possible biological, chemical and other contaminants that can disturb and harm human health; 7. Even distribution of food is an important dimension of food justice for the people, whose measure is largely determined by the degree of the state's ability to guarantee citizens' food rights through the food production and distribution system it has developed. The principle of even distribution of food ensures that the national food system must be able to guarantee the right to food for every household without exception; 8. Affordability of food shows the equal degree of freedom of access and control that every household has in fulfilling their right to food. This principle is one of the dimensions of food justice that is important to pay attention to (Moyo & Thow, 2020).

This study focuses on food security policies in West Java. In general, this research aims to describe the condition of food security in West Java as the implementation of food security policies that the Regional Government of West Java has issued. In particular, the results of this study explore the food policies that the regional government of West Java has issued, with data from the West Java Central Bureau of Statistics (BPS) regarding the food needs and their fulfillment for the people of West Java.

Research Methods

This research uses qualitative methods. The data collected are sourced from primary and secondary data. The primary data come from interviews with informants, including the heads of farmer groups, millers, rice and grain marketers and related agencies at the provincial level of West Java. Secondary data are sourced from the West Java Province Agriculture and Food Crops Office, the West Java Provincial Food Security Agency, and the West Java Regional Division (Divre) of Bulog.
Results and Discussion

The population of West Java in 2021, from the projected population, is 48,220,094 people. The largest population is in Bogor Regency, namely 5.73 million people (12.07 percent), followed by Bandung Regency with 3.69 million people (7.75 percent) and Bekasi Regency with 3.58 million people (7.42 percent). West Java's population growth rate in 2021 is 1.57 percent. It shows an increase in the population of 0.68 million people compared to 2020. The districts/cities with the highest population growth rates are Bekasi Regency (3.86 percent), Depok City (3.45 percent) and Bekasi City (2.64 percent), while the lowest is in Cianjur Regency (0.27 percent) (BPS, West Java 2021). West Java's population growth rate for the 2017-2021 period shows a yearly slowdown. However, in terms of quantity, the population of West Java is still increasing every year. Changes in the structure and size of the population will greatly affect consumption patterns and the population's food needs. One thing that needs to be anticipated in realizing food security is the balance between population growth and growth in food production. If population growth is faster than production growth, there will be food scarcity. Avoiding food scarcity and achieving food security and the availability of logistics cannot be separated from the government's obligation to regulate through policies that support food security, so that every food need is met, especially for the people of West Java, whether through policies in the form of Regional Regulations (PERDA) or other binding regulations regarding food policy. Food policy guarantees food security, including food supply, diversification, security, institutions and organization. Therefore, this policy is needed to increase food self-sufficiency. Development that ignores self-sufficiency in the population's basic needs will create dependence on other countries. It means that the country will become a country that is not sovereign in terms of food (Suryana, 2014). Fulfillment of food needs is important and strategic in order to maintain state sovereignty and not depend on food imports from other countries. A country's dependence on food imports, especially when they come from developed countries, will result in decision-making on all aspects of life being no longer free and independent and, therefore, not sovereign. Conceptually, food security based on Law No. 18 of 2012 is a condition of fulfilling food needs from the state down to individuals, reflected in the availability of sufficient food, both in quantity and quality; safe, diverse, nutritious, equitable and affordable; and not in conflict with the religion, beliefs and culture of the community, in order to live a healthy, active and productive life in a sustainable manner.

The Regional Government of West Java has attempted to make food policies to regulate and fulfill the food availability of the people of the West Java region, including by making the food policy regulations listed below. Based on the food policy that has been made by the Regional Government of West Java Province, according to the researchers, the Regional Government of West Java has realized that food policy must be prioritized in order to support all the needs of the community in terms of food availability in West Java, which basically emphasizes increasing food production and expanding agricultural land.
National food security is, of course, inseparable from domestic/local food security. In this regard, regional autonomy is expected to maximize the role of local governments in improving the agribusiness sector to realize national food security. It is possible for the West Java government in particular, or Indonesia in general, to achieve food sovereignty first, because most of Indonesia's population makes their living in the agricultural sector. The climate in Indonesia knows only two seasons, namely rainy and dry. This allows paddy fields to be planted all year round. Sixty percent of the food reserves on the Equator are in Indonesia. It indicates that Indonesia is indeed a strategic place in the food sector. Crops can be planted throughout the seasons as long as there is soil, sun and rain. We have very fertile land. These conditions are a great opportunity for the Indonesian state to achieve food sovereignty. Law No. 18 of 2012 mandates that the essence of food development is to fulfill basic human needs and that the government and society are responsible for realizing food security. This law also explains the concept of food security, its components and the parties that participate in realizing food security. This law has been spelled out in several regulations in the Strategic Plan (RENSTRA) of the Regional Food Security Agency for West Java Province for 2013-2018, referring to Government Regulations (PP), among others: a. PP No. 68 of 2002 concerning food security, which regulates and includes aspects of food availability, food reserves, food diversification, and the prevention and overcoming of food problems; b. PP No. 69 of 1999 concerning food labels and advertisements, which regulates guidance and supervision in the field of food labels and advertisements in the context of creating honest and responsible food trade; and c. PP No. 28 of 2004, which regulates food safety, quality and nutrition, import and export to the territory of Indonesia, supervision and development, as well as community participation regarding food quality and nutrition matters.

Development in West Java in the second phase of the Regional RPJP or Regional RPJM 2013-2018 demands more attention, not only to deal with unresolved problems but also to anticipate changes that will arise in the future. The development policy in the field of food security in West Java, based on the 2013-2018 Medium Term Development Plan (RPJM) Strategic Plan (RENSTRA) of the Regional Food Security Agency for West Java Province, is to increase food availability, access and security. This policy is carried out through the Food Security Improvement program with the following targets: a. increased production and productivity of the staple foods rice, corn and soybeans; b. a decreased rate of crop loss; c. reduction of community food insecurity; d. orderly distribution and trading of rice; e. increasing the diversity of consumption and food quality and decreasing dependence on the staple food rice; f. improving food safety controls. The Regional Government of West Java has also issued various other food policies for achieving self-sufficiency and food availability, including the following: a. Regional Regulation of West Java Province No. 27 of 2010 concerning the Protection of Sustainable Food Agricultural Land; b. Regional Regulation of West Java Province No. 4 of 2012 concerning Regional Food Self-Reliance; c. Regulation of the Governor of West Java No. 67 of 2013 concerning Guidelines for the Implementation of Regional Regulation of West Java Province No. 4 of 2012.
The field data, which partly reflect the influence of the implementation of the West Java regional government's food policies, are as follows (the researchers took sample data regarding rice/rice plants as the most basic of food ingredients). The area of paddy fields in West Java is the third largest in Indonesia after East Java and Central Java, around 12 percent of the national paddy field area. The harvested area of West Java rice plants planted with rice in 2021 is 1,604,109 hectares. The widest is in Indramayu Regency (227,051 hectares), followed by Karawang Regency (197,916 hectares) and then Subang Regency (163,947 hectares). In the 2016-2020 period, the standard area of land planted with rice in irrigated rice fields decreased yearly. In contrast, the standard area of non-irrigated paddy fields planted with rice increased, except in 2021. In total, the area of non-irrigated paddy fields has not changed significantly from year to year. In economics, inflation is a general and continuous increase in prices related to market mechanisms. Inflation is measured by calculating the percentage change in a price index. Calculating inflation also involves a process that is quite long and complicated. In addition to collecting market price data for various commodities, calculating inflation also involves calculating the weighting chart from the Cost of Living Survey (SBH). The results of the SBH take the form of the average expenditure of respondents for each commodity, turned into a weighting chart that measures how much each commodity affects inflation. For example, rice is a commodity with a large number on the weighting chart because it is consumed by almost the entire population, so even a slight change in the price of rice will greatly affect the inflation rate. The Regional Government of West Java Province always strives to achieve self-sufficiency and food security through food policies. The Provincial Government of West Java, in early 2023, will carry out the Food Security Revolution Plan initiated directly by the Governor of West Java. Based on the data that have been described regarding the food policy of the Regional Government of West Java and the field data, which are part of the results of implementing the food policy, the researchers are of the opinion that: a. Overall, there was no significant change in the yields of food crops obtained in West Java, even though the Regional Regulation of West Java Province No. 4 of 2012 concerning Regional Food Independence was enacted. b. Based on field data, even though the Regional Regulation of West Java Province No. 27 of 2010 concerning the Protection of Sustainable Food Agricultural Land has been enacted, every year there is still a decline/narrowing of agricultural land, especially in the food sector. c. Based on field data, after the enactment of West Java Province Regional Regulation No. 4 of 2012 concerning Regional Food Self-Reliance and West Java Governor Regulation No. 67 of 2013 concerning Instructions for the Implementation of West Java Provincial Regulation No. 4 of 2012 concerning Regional Food Independence, there has been an increase in food production and an expanding harvest area, even though it experienced a decline in 2021. d. After the adoption of the Decree of the Governor of West Java No.
501/Kep.566-Rek/2016 concerning the Working Group of the West Java Provincial Food Security Council for 2016-2020, in the 2016-2020 period there has been a significant increase in food productivity. e. Based on field data, the food inflation rate generally continues to creep up, even though it declined in 2021. No West Java Regional Government policy supports or focuses on the problem of food inflation, even though food inflation can occur every month, especially before holidays such as Eid al-Fitr and others.

Conclusion

The regional government of West Java Province has attempted to deal with food problems so that food availability in West Java is fulfilled by issuing food policies, one of which is the enactment of West Java Provincial Regulation No. 4 of 2012 concerning Food Independence. As a whole, there is no significant change in the food yields obtained in West Java, even though the Regional Regulation of West Java Province No. 4 of 2012 concerning Regional Food Independence and other food policies have been enacted. The Decree of the Governor of West Java No. 501/Kep.566-Rek/2016 concerning the 2016-2020 West Java Provincial Food Security Council Working Group should be realized and optimized as well as possible, with additional innovations, so that West Java food security can be achieved. The West Java Provincial government should pay more attention to the inflation of food ingredients that occurs every year in West Java, especially ahead of religious holidays, including through policies that focus on and support these problems.

a. Regional Regulation of West Java Province No. 27 of 2010 concerning the Protection of Sustainable Food Agricultural Land (West Java Provincial Gazette of 2010 No. 27 Series E, Supplement to West Java Provincial Gazette No. 90); b. Regional Regulation of West Java Province No. 2 of 2009 concerning the Mid-Term Development Plan (RPJM) of the Province of West Java for 2008-2013 (Regional Gazette of West Java Province of 2009 No. 2 Series E, Supplement to Regional Gazette of West Java Province No. 60), as amended by West Java Province Regional Regulation No. 25 of 2010 concerning Amendments to West Java Province Regional Regulation No. 2 of 2009 concerning the West Java Province Regional Medium Term Development Plan 2008-2013 (West Java Provincial Gazette of 2010 No. 25 Series E, Supplement to Regional Gazette of West Java Province No. 88); c. Regional Regulation of West Java Province No. 3 of 2012 concerning the Formation of Regional Regulations (West Java Provincial Gazette of 2012 No. 3 Series E, Supplement to West Java Provincial Gazette No. 117); d. Regional Regulation of West Java Province No. 4 of 2012 concerning Regional Food Self-Reliance (West Java Provincial Gazette of 2012 No. 4 Series E, Supplement to West Java Provincial Gazette No. 118); e. Regulation of the Governor of West Java No. 67 of 2013 concerning Guidelines for the Implementation of Regional Regulation of West Java Province No. 4 of 2012 concerning Regional Food Independence (West Java Provincial Gazette of 2013 No. 67 Series E); f. Governor of West Java Decree No. 501/Kep.566-Rek/2016 concerning the Working Group of the West Java Provincial Food Security Council for 2016-2020.
4,795
2023-01-27T00:00:00.000
[ "Agricultural and Food Sciences", "Economics", "Environmental Science", "Political Science" ]
Air Quality Measurement Based on Double-Channel Convolutional Neural Network Ensemble Learning

Environmental air quality affects people's lives; obtaining real-time and accurate environmental air quality has profound guiding significance for the development of social activities. At present, environmental air quality measurement mainly adopts the method of setting air quality detectors at specific monitoring points in cities and performing timed sampling analysis, which is easily restricted by time and space factors. Existing air quality measurement algorithms related to deep learning mostly adopt a single convolutional neural network trained on the whole image, which ignores the differences between different parts of the image. In this paper, we propose a method for air quality measurement based on double-channel convolutional neural network ensemble learning to solve the problem of feature extraction for different parts of environmental images. Our method mainly includes two aspects: ensemble learning of a double-channel convolutional neural network and self-learning weighted feature fusion. We construct a double-channel convolutional neural network and use each channel to train on a different part of the environment images for feature extraction. We propose a feature weight self-learning method, which weights and concatenates the extracted feature vectors and uses the fused feature vectors to measure air quality. Our method can be applied to the two tasks of air quality grade measurement and air quality index (AQI) measurement. Moreover, we build an environmental image dataset under random time and location conditions. The experiments show that our method can achieve nearly 82% accuracy and a small mean absolute error (MAE) on our test dataset. At the same time, through comparative experiments, we prove that our proposed method gains considerable improvement in performance compared with single-channel convolutional neural network air quality measurements.

Introduction

Environmental air quality is closely related to human production and life. The decline of air quality is likely to cause ecological damage and induce human diseases. At present, air quality monitoring mainly adopts the method of setting up monitoring stations at several specific locations in the city, using air quality detectors to regularly sample and measure air pollutants, and finally obtaining the air quality index through calculation and analysis. This method is easily limited by time and space: it can only obtain air quality at specific monitoring points at specific times. It is difficult to obtain the air quality information of a random location in real time, and the measurement cost is high. How to obtain the air quality index in real time and accurately is a subject worth studying.
Image-based air quality measurement is a method that uses image processing algorithms to extract environmental image features and estimate the air quality index based on those features. In recent years, with the rapid development of deep learning technology, using deep learning to complete identification, detection and other tasks has become efficient. Environmental images under different air quality grades often differ to some extent; therefore, it is feasible and valuable to use a deep convolutional neural network to extract features of environmental images and complete the measurement of the real-time air quality index at a random site. Compared with traditional air quality measurement methods, air quality measurement based on images and deep learning can obtain the air quality at any time and any place, which has the advantages of real-time operation and low cost, and it has received wide attention in academic circles in recent years.

At present, the existing air quality measurement methods related to images or deep learning are mainly divided into two types: methods based on traditional image processing and methods based on deep learning. The methods based on traditional image processing [1,7] use traditional machine learning algorithms for feature extraction, such as edge detection and histograms of oriented gradients; the extracted features are analyzed and used to compute air quality measurement values. The image-based deep learning methods [2,3,4,5] train deep convolutional neural networks on environmental images to extract features and predict air quality.

The main contributions of this paper:

1. We construct a double-channel convolutional neural network structure to perform feature extraction for different parts of the environmental image.

2. We propose a weighted feature fusion method and a feature weight self-learning method to select excellent features.

3. We apply the double-channel convolutional neural network and weighted feature fusion to classification and regression tasks, completing the tasks of air quality grade measurement and air quality index measurement.

4. Through experiments, we prove the effectiveness of the proposed method and demonstrate the influence of different weights and different network structures on system performance.

The rest of this paper is organized as follows. Section 2 is related work, which mainly introduces the development of deep learning and related research on air quality measurement. Section 3 introduces the air quality measurement algorithm based on double-channel convolutional neural network ensemble learning, mainly covering the structure of the double-channel convolutional neural network, the weighted feature fusion and the self-learning method for feature weights. Section 4 is the experimental part, which introduces our training and testing methods, shows our experimental results, compares our system with different network structures and different feature weight ratios, and analyzes the remaining problems. Section 5 is the conclusion, which concludes the paper and describes the main directions of future work.
The research on deep learning can be traced back to 1989, when LeCun applied the BP algorithm to multi-layer neural networks [8]. With the LeNet-5 model proposed by LeCun in 1998 [9], the basic structure of the deep neural network was formed. In 2006, Geoffrey Hinton, a professor at the University of Toronto, formally proposed the concept of deep learning [10], and deep learning entered a period of rapid development. Alex proposed AlexNet in 2012 [11], building the first big convolutional neural network and adopting the ReLU activation function instead of Sigmoid, which avoided the problem of vanishing gradients in neural network training; its image recognition performance was much better than that of traditional methods. VGG [12], GoogleNet [13], ResNet [14] and other network structures were proposed one after another, which further enhanced the feature extraction ability of deep convolutional neural networks.

Weighted Feature Fusion and Weights Self-Learning

We found in our observations that, considering the image inputs of the two channels, the content of the sky images is relatively simple, generally plain sky, perhaps containing a small number of clouds or trees, so the image complexity is relatively low. The content of the building images is relatively rich in composition, with a variety of different buildings, roads, trees and so on, so the image complexity is relatively high. Due to the different complexity of the two image parts, the complexity of the features extracted by the two channels also differs, and their weights of influence on the final measurement result are different.

Therefore, considering that the image features of the upper and under channels may have different proportions of influence on the measurement results, we propose a method of weighted feature fusion. Before the output features of the two feature layers are sent into the classification layer, weighted feature fusion is carried out: the output feature vectors of the upper and under channels are multiplied by two constant weight values, and then the two vectors are concatenated. The formula of feature fusion is as in equation (1):

$F = \left[\, w_1 F_u,\; w_2 F_d \,\right]$   (1)

where $w_1$ and $w_2$ are the weight values of the upper and under channels respectively, $F_u$ and $F_d$ are the feature vectors extracted by the upper and under channels respectively, and $F$ is the global feature after feature fusion.

When training the feature weights, the objective loss and the weight loss constraint function are combined to form the joint loss function, and the joint loss function is optimized to adjust the weight parameter values. Finally, the two weights obtained after training are multiplied by the feature vectors extracted from each channel, and the two weighted features are concatenated.

Air Quality Measurement

For air quality measurement tasks, we start from the two directions of classification and regression, applying our double-channel convolutional neural network to two tasks: air quality grade measurement and air quality index measurement.

Air Quality Grade Measurement

Air quality grade measurement is essentially a classification and recognition task. According to the six grades of air quality, the corresponding environmental images are divided into six categories and classified in the fully connected layer. Softmax is used as the activation function, a one-hot operation is conducted on the labels of all classes to obtain the predicted probability value of each grade, and the grade with the maximum probability is taken as the grade measurement result. At the same time, we put forward a calculation method for the air quality index according to the prediction probability of each grade, given as equation (3), in which the calculated air quality index value is obtained from the upper and lower limits of the predicted grade's air quality index range together with the predicted probability of the predicted grade.
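A minimal sketch of the weighted fusion in equation (1) follows, assuming PyTorch as the framework (the paper itself uses TensorFlow 1.x, so this is an illustrative translation, not the authors' code). Making w1 and w2 ordinary parameters lets them be frozen during the first training stage and unfrozen for the weight self-learning stage.

```python
import torch
import torch.nn as nn

class WeightedFusion(nn.Module):
    """Equation (1): scale each channel's feature vector by a scalar
    weight, then concatenate into the global feature F."""
    def __init__(self, w_init=0.5):
        super().__init__()
        # both weights start at 0.5, the balanced-weight initialization
        self.w1 = nn.Parameter(torch.tensor(w_init))
        self.w2 = nn.Parameter(torch.tensor(w_init))

    def forward(self, f_upper, f_under):
        return torch.cat([self.w1 * f_upper, self.w2 * f_under], dim=1)

fusion = WeightedFusion()
# stage 1: freeze the fusion weights, train the rest of the network
for p in fusion.parameters():
    p.requires_grad_(False)
# stage 2: freeze the network, train only w1 and w2
for p in fusion.parameters():
    p.requires_grad_(True)
```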
Air Quality Index Measurement

Based on the idea of regression, we consider the direct measurement of the air quality index. Therefore, we add a one-node fully connected layer after the above double-channel convolutional neural network, and the AQI value corresponding to the environment image is used as the training label for regression training. By directly measuring the air quality index, we can calculate the air quality grade according to the air quality index value. The loss function adopts the mean square error between the predicted value and the labeled value, as in equation (4):

$L = \frac{1}{n}\sum_{i=1}^{n}\big(f(x_i) - y_i\big)^2$   (4)

where $n$ is the number of training images, $y_i$ is the labeled air quality index value of the $i$th image, and $f(x_i)$ is the predicted value.

Dataset

In order to establish an effective dataset, we used the method of manual collection to randomly collect environmental images at different times of each day in the Beijing area, and established an environmental image dataset. In the study of this task, we screened the dataset. Due to quality problems of the images themselves, we manually removed images with poor image quality, poor weather conditions, or inappropriate shooting times, such as sunset (when the light is poor) and night (affected by street lights). Considering the imbalance of the dataset samples (images with a low air quality index are much more numerous than images with a high air quality index, which easily affects training and makes the final model's recognition rate for high-index images too low), we chose a relatively even number of images for each grade, finally forming a dataset of 567 images; partial sample images are shown in Figure 2. Among them, 465 training images and 102 test images are included. The number of images of each category is shown in Table 3, and the distribution of AQI is shown in Figure 3.

Network Parameter Training

In the first step of training, the two weights of the feature fusion layer are frozen, and negative-feedback stochastic gradient descent is adopted to train the other network parameters of the double-channel neural network. At the same time, we adopted dropout [15] with a probability of 0.5 at the last convolution layer of each channel's convolutional neural network to prevent overfitting.

Feature Fusion Weight Training

When the training of the double-channel convolutional neural network meets the requirements and the loss function value no longer decreases significantly, we stop the first step of training and freeze the network parameters. Next, only the two fusion weights of the feature fusion layer are trained, and negative-feedback stochastic gradient descent is used to update $w_1$ and $w_2$. After a certain number of iterations, the training is completed.

For the training environment configuration, we adopt an Intel Xeon E5-2650 CPU and an NVIDIA Tesla K40c GPU as the hardware environment; in terms of software, we adopt the TensorFlow 1.10.0 deep learning framework and the Python 3.5 programming language. In terms of training settings, the batch size was 128, the learning rate was 1e-4, the training period was about 3300 epochs, and the number of iterations was 11,000. The first 10,000 iterations are used to train the parameters of the double-channel convolutional neural network; the last 1000 iterations are used to adjust the feature weights.

Test Methods and Evaluation Criteria

We used test-time augmentation during testing. That is, random crops were also used at test time, and a voting mechanism and an average mechanism were introduced. In the test, the above image preprocessing is carried out on the test image first, and each image is subjected to 20 random crops without random horizontal transformation. For each image, we get 20 groups of test data to feed into the trained model for prediction, obtaining 20 predicted results per image. For the grade classification task, the voting mechanism is adopted: the majority class among the 20 predicted results is taken as the final predicted class. For the AQI measurement task, the average mechanism is adopted: the mean of the 20 predicted values is taken as the measurement result.

Two evaluation criteria, mean accuracy and mean absolute error (MAE), were used to evaluate classification. The mean accuracy, shown in equation (5), is the ratio of the number of correctly predicted samples to the total number of samples:

$\mathrm{Accuracy} = \frac{1}{N}\sum_{i=1}^{N}\mathbb{1}(p_i = g_i)$   (5)

where $N$ is the total number of test samples, $p_i$ is the $i$th sample's predicted grade, and $g_i$ is the $i$th sample's labeled grade.
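The test-time augmentation just described can be sketched as follows; `model` and `random_crop` are assumed stand-ins for the trained network and the cropping routine, not names from the paper.

```python
import numpy as np

def tta_predict(model, image, random_crop, n_crops=20, task="grade"):
    # `model` maps a batch of crops to per-grade probabilities (grade
    # task) or scalar AQI values (regression task).
    crops = np.stack([random_crop(image) for _ in range(n_crops)])
    out = np.asarray(model(crops))
    if task == "grade":
        votes = out.argmax(axis=1)           # predicted grade per crop
        return np.bincount(votes).argmax()   # majority vote over 20 crops
    return float(out.mean())                 # average of 20 AQI predictions
```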
At the same time, because of the particularity of the problem, the labeling information is collected from the monitoring station nearest in location, at the time closest to the hour of the air quality index value; it is difficult to obtain an accurate air quality index value for the exact location, so the annotation information itself has some error. In addition to the limitations of time and space, the measurement error of the measuring instrument itself means that images of different grades near the boundary between air quality grades are also somewhat inaccurate, and the differences between them are slight. Therefore, we use the mean absolute error (MAE) as the second evaluation criterion, that is, the mean of the absolute value of the difference between the predicted grade and the true grade of each sample, as in equation (6):

$\mathrm{MAE} = \frac{1}{N}\sum_{i=1}^{N}\left|p_i - g_i\right|$   (6)

For the air quality index regression problem, MAE is also used as an evaluation criterion to measure the mean deviation between the measured value and the real AQI value. At the same time, we introduce the mean deviation rate (MDR) as another evaluation criterion for the index, as in equation (7):

$\mathrm{MDR} = \frac{1}{N}\sum_{i=1}^{N}\frac{\left|\hat{y}_i - y_i\right|}{y_i}$   (7)

where $\hat{y}_i$ is the predicted air quality index value of the $i$th sample, and $y_i$ is the true air quality index value of the $i$th sample.

As can be seen from Table 5, system performance increases or decreases for different feature weights. For improper feature weights, the performance of the system clearly decreases compared with DCEW-C. Relatively speaking, some performance criteria are improved with more appropriate feature weights. Therefore, adopting the weight self-learning method helps the system automatically find appropriate feature weights. The double-channel convolutional neural network using the weight self-learning method (DCSLW-C) improves in accuracy and MAE compared with DCEW-C and the assigned-weight feature fusion methods. The overall performance comparison of the algorithms is shown in Figure 6. On the whole, the classification method using the double-channel convolutional neural network shows obvious advantages in the accuracy of grade measurement, and the regression method using the double-channel convolutional neural network obtains a lower MAE. This advantage is further enhanced by the adoption of self-learning weights.

Figure 6. Algorithm performance comparison.

Meanwhile, in the experiments, we also analyzed the samples for which testing failed. Some samples wrongly measured by DCSLW-C and DCSLW-R are shown in Figure 7. We found that most wrong samples were samples of high air quality grades; the differences between these images across grades are small, and they show similarity to the wrongly predicted category. At the same time, due to the label limitations mentioned above (labels were approximately collected from the nearest monitoring point in time and place), there are certain errors in the labels themselves, which is also part of the reason for the measurement errors.
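The evaluation criteria in equations (5)-(7) can be computed directly; the normalization of MDR by the true AQI value below is my reading of "mean deviation rate" as a relative error.

```python
import numpy as np

def grade_metrics(pred_grade, true_grade):
    pred, true = np.asarray(pred_grade), np.asarray(true_grade)
    accuracy = float(np.mean(pred == true))     # equation (5)
    mae = float(np.mean(np.abs(pred - true)))   # equation (6)
    return accuracy, mae

def mean_deviation_rate(pred_aqi, true_aqi):
    pred = np.asarray(pred_aqi, dtype=float)
    true = np.asarray(true_aqi, dtype=float)
    return float(np.mean(np.abs(pred - true) / true))   # equation (7)
```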
Figure 7. Some wrongly measured images.

Conclusion

In this paper, we propose an air quality measurement algorithm based on double-channel convolutional neural network ensemble learning, and design a double-channel convolutional neural network structure to measure the air quality grade and index of environmental images. At the same time, we propose a self-learning method for weighted feature fusion. Experimental results show that our method is feasible, achieving reasonable accuracy and a small mean absolute error. On the basis of the double-channel convolutional neural network, the performance is further improved by using the weighted feature fusion method. Compared with other methods, our method achieves better performance. Meanwhile, it was found in the experiments that it is difficult to distinguish images of adjacent grades and images of different grades with similar contents. How to recognize such samples will be an important direction of future work.

Figure 1. Air quality measurement based on double-channel convolutional neural network ensemble learning: algorithm diagram.

Using deep convolutional neural networks to extract image features and complete classification and recognition has become the primary choice and an important research direction for more and more researchers. In essence, air quality measurement is an image-based classification or regression task. A deep convolutional neural network can be used to effectively extract environmental image features and complete the task of identifying the air quality shown in an image. In recent years, the measurement of air quality using deep learning methods has attracted much attention in academic circles. In studies of air quality measurement related to deep learning, Chao Zhang [2] built a convolutional neural network, improved the convolutional layer and classification layer activation functions, proposed an improved activation function EPAPL, used a Negative Log-Log Ordinal Classifier to replace the softmax classifier in the classification layer, and trained the network model on environment images for classification prediction, completing the measurement of PM2.5 and PM10 in six grades. Avijoy Chakma et al. [3] used a convolutional neural network trained on images for feature extraction, combined with random forest classification, and classified the air quality shown in the images into three grades: good, middle, and bad. Nabin Rijal [4] adopted a method of neural network ensemble learning, using three different convolutional neural networks, VGG16, InceptionV3 and ResNet50, to conduct regression training on image PM2.5 values; the predicted PM2.5 values of the three networks were then input as features into a feedforward network trained to predict the image's PM2.5 value. Jian Ma [5] combined the dark channel prior theory [6]: the dark channel is first extracted from the image, two convolutional neural networks are trained with the original images and the dark channel images respectively, and the good or bad air quality of the image is identified in three grades. Xiaoguang Chen et al. [7] proposed an approach combining traditional image processing algorithms and deep learning. First, they counted the distribution characteristics of image pixel values, computing the proportion of high-brightness points (pixel value > 128) among all pixel points of each image, and used an edge detector to compute the proportion of edge points among all pixels of each image.
The two values of each image are then used as input features to train a BP neural network to predict the air quality index value.

Considering the different composition information of different parts of the environmental image, we constructed a double-channel convolutional neural network based on deep convolutional neural networks and the idea of ensemble learning to extract features from different parts of the image, and proposed a double-channel convolutional neural network ensemble learning algorithm for air quality measurement.

Double-Channel Convolutional Neural Network Ensemble Learning Algorithm for Air Quality Measurement

AlexNet, proposed by Alex [11], has achieved good performance in image recognition tasks. In view of the fact that air quality measurement is in essence an image recognition task, on the basis of AlexNet [11] we constructed a double-channel convolutional neural network for feature extraction of the sky and building parts of the environmental image, weighted and fused the extracted features, and proposed a double-channel convolutional neural network ensemble learning algorithm for air quality measurement. It is composed of two feature-extraction convolutional neural networks, a weighted feature fusion layer and a classification layer, as shown in Table 1.

Table 1. Double-channel convolutional neural network structure (upper channel image 64 × 64 × 3; under channel image 64 × 64 × 3).

Double-Channel Convolutional Neural Network Structure

The structure of the double-channel convolutional neural network is shown in Table 1. It is composed of upper and under channel sub-convolutional neural networks; each channel's convolutional neural network contains five convolution layers, two pooling layers and one fully connected layer. The first three convolution layers adopt 5 × 5 convolution kernels, and the last two convolution layers adopt 3 × 3 convolution kernels for image feature extraction. Maximum pooling is used in each pooling layer to extract important features by down-sampling. The 512-node fully connected layer is used to output the feature vectors extracted by each network for feature fusion and prediction. For the different components of the environment image, the double-channel convolutional neural network adopts the strategy of ensemble learning, receiving different parts of the image simultaneously in the upper and under channels for training. Before inputting the environment image into the double-channel convolutional neural network for training, the environment image is preprocessed first, and the partial sky image and the partial building image are segmented. For each image, the horizontal central axis average segmentation method is adopted to divide the image into an image mainly containing the upper half (sky) and an image mainly containing the under half (buildings). The upper channel convolutional neural network focuses on feature extraction of the sky: in each round of iterative training, the upper-half images with more sky elements after cutting are input into the upper channel convolutional neural network for training. The convolutional neural network of the under channel focuses on feature extraction of the building part: in each round of iterative training, the under-half images with more building elements after cutting are input into the convolutional neural network of the under channel for training.
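A rough sketch of one channel's CNN follows, assuming PyTorch and treating the layer widths as placeholders (Table 1's exact channel counts are not reproduced in this excerpt); each half image is assumed to be resized to the 64 × 64 channel input size before entering its branch.

```python
import torch
import torch.nn as nn

def make_branch(width=32):
    # One channel's CNN, loosely following the description above: five
    # conv layers (first three 5x5, last two 3x3), two max-pooling
    # layers, and a 512-node fully connected output layer.
    return nn.Sequential(
        nn.Conv2d(3, width, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(width, width * 2, 5, padding=2), nn.ReLU(),
        nn.Conv2d(width * 2, width * 2, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(width * 2, width * 4, 3, padding=1), nn.ReLU(),
        nn.Conv2d(width * 4, width * 4, 3, padding=1), nn.ReLU(),
        nn.Flatten(), nn.LazyLinear(512), nn.ReLU(),
    )

upper_net, under_net = make_branch(), make_branch()
upper = torch.randn(8, 3, 64, 64)   # upper-half (sky) inputs, resized
under = torch.randn(8, 3, 64, 64)   # under-half (building) inputs
f_u, f_d = upper_net(upper), under_net(under)        # two 512-d vectors
fused = torch.cat([0.5 * f_u, 0.5 * f_d], dim=1)     # equal-weight fusion
```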
After feature extraction at the fully connected layer, the last layer of each subnetwork, the feature vectors of the upper and under parts are weighted and fused, and the fused vector containing the complete features of the upper and under channels is used for recognition. On the basis of artificially assigned feature weights, we propose a self-learning method for feature weights, in which the weights also participate in training. In the initial stage, the two weights $w_1$ and $w_2$ are set to 0.5, adopting a balanced-weight strategy. During the training of the double-channel convolutional neural network, we train only the other network parameters, with the weight values frozen. After the training of the double-channel convolutional neural network, the other parameters of the network are frozen and the two weights are trained to find the appropriate feature ratio. In feature weight training, considering the proportional relationship between the two weights, we propose a weight loss constraint function, defined as equation (2), to limit the training of the weight values.

Figure 3. Dataset air quality index distribution.

Double-Channel Convolutional Neural Network Air Quality Index Measurement Performance Analysis

In addition, from the perspective of regression, we conducted experiments on the direct measurement of the air quality index (AQI) with the double-channel convolutional neural network, classifying the images into the corresponding grades according to the predicted results. We used the double-channel self-learning weight for regression (DCSLW-R), double-channel equal weight for regression (DCEW-R), and single channel for regression (SC-R) in the AQI measurement experiments. The experimental results of AQI prediction are shown in Figure 4. For the task of AQI measurement, the performance difference between the double-channel convolutional neural networks with equal weights and with self-learning weights is small, and they have respective advantages in MAE and MDR. It can be seen from Figure 4 that, in the AQI measurement, the prediction of lower air quality indices is more accurate, and the accuracy decreases as the index increases. Compared with the single-channel convolutional neural network, the measurement error of the double-channel convolutional neural network is clearly lower. The partial AQI prediction results of DCSLW-R are shown in Figure 5.
Figure 4. AQI measurement results.

Table 3. Dataset air quality grade distribution.

Double-Channel Convolutional Neural Network Air Quality Grade Recognition Performance Analysis

Based on the proposed double-channel convolutional neural network, we first tested model performance on air quality grade recognition under the equal-weight condition (double-channel equal weight for classification, DCEW-C). At the same time, we used the following methods for performance comparison: (1) a single-channel convolutional neural network trained and tested on the whole image (baseline); (2) only the upper channel convolutional neural network, trained and tested on the upper part of the image; (3) only the under channel convolutional neural network, trained and tested on the under part of the image. The test results of the above methods are shown in Table 4. Due to the large similarity between images of adjacent air quality grades, we also introduced the neighbor accuracy as another reference criterion. It can be seen from Table 4 that the method using the double-channel convolutional neural network is much better than the methods applying only a single convolutional neural network in terms of accuracy, neighbor accuracy and MAE. Among the single-channel convolutional neural networks, the under channel network achieved high accuracy, but due to the complexity of the under half of the image, its neighbor accuracy is poor. The single-channel network trained on the whole image performs better in neighbor accuracy because it extracts information from the whole image. The image features of the upper channel convolutional neural network are relatively simple, so it obtained the lowest MAE. The proposed double-channel convolutional neural network (DCEW-C), because it takes into account the information of different parts of the image and adopts the ensemble learning strategy of separate extraction, achieved a great performance improvement compared with the single-channel convolutional neural networks: compared with the optimal performance of each single-channel convolutional neural network, the accuracy was improved by more than 6 percentage points.

Double-Channel Convolutional Neural Networks with Different Feature Fusion Weights Performance Analysis

On the basis of the equal-weight double-channel convolutional neural network for air quality grade recognition, we explore the performance under weighted fusion of the two channels' features. Considering two aspects, we conducted experiments with different assigned weights and with the double-channel self-learning weight
for classification (DCSLW-C). For the weight assignment, we studied the system performance with feature weight ratios of the upper and under channels of 3:7, 4:6, 5:5, 6:4 and 7:3. The experimental results are shown in table 5.
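For the assigned-weight experiments, the ratios can simply be written into the fusion weights of the sketch above and kept fixed; a minimal continuation of the earlier `DoubleChannelNet` sketch is shown below, with the 3:7 ratio as one of the settings listed above.

```python
# Fix an assigned upper:under ratio instead of learning it
# (reuses the DoubleChannelNet sketch from the earlier snippet).
import torch

model = DoubleChannelNet()
with torch.no_grad():
    model.w.copy_(torch.tensor([0.3, 0.7]))  # upper:under = 3:7
model.w.requires_grad_(False)  # keep the assigned ratio fixed throughout training
```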
6,024.2
2019-02-19T00:00:00.000
[ "Environmental Science", "Computer Science" ]
Aspergillus niger boosted heat stress tolerance in sunflower and soybean via regulating their metabolic and antioxidant system ABSTRACT Plants can be severely affected by heat stress due to rapid environmental changes. Use of endophytic fungi is a new tool to protect crops from environmental stresses. Here we report a potent endophyte isolated from Sonchus asper L. The aim was to explore the stress adaptive mechanism of sunflower and soybean mediated by Aspergillus niger (SonchL-7) under high temperature. Inoculation with A. niger boosted plant height, biomass and chlorophyl contents, while significantly curtailing lipid peroxidation and the concentration of reactive oxygen species (ROS) under thermal stress at 40°C. Moreover, the ROS-scavenging activities, like ascorbic acid oxidase (AAO), catalase (CAT), glutathione reductase (GR), superoxide dismutase (SOD) and peroxidase (POD), were augmented. Also, proline and phenolics were enhanced in the tested crops, while the ABA concentration was significantly reduced. These positive results suggest that A. niger can be used as a heat-stress ameliorative tool for crops in the future. Introduction Thermal stress is a persistent abiotic stress in the present age, causing severe losses to agricultural crops (Ismail et al. 2018; Ismail et al. 2019). A rise in average global temperature also results in drought and saline conditions by speeding up evaporation from the soil surface. Being sessile in nature, plants are exposed to stresses (abiotic and biotic) leading to an enlarged accumulation of reactive oxygen species (ROS). The ROS cause oxidative damage to the chlorophylls and macro-molecules, and disturb cell functional integrity, resulting in premature cell death and apoptosis (Ashraf and Harris 2004). These ROS are detoxified by local antioxidants located in biological membranes, except H2O2 (Abd_Allah et al. 2018). To avoid oxidative damage, all plants have self-activated defense mechanisms (SADMs) like compatible osmolytes and antioxidant systems (Mhamdi et al. 2010). Antioxidant systems work in synchronization in order to detoxify reactive oxygen species. Plant hormones can work as signaling molecules during various stresses (Hamayun et al. 2015; Hamayun et al. 2017; Bilal et al. 2018; Khan et al. 2018; Gul Jan et al. 2019; Jan et al. 2019). Abscisic acid (ABA) is known to be the foremost hormone responding to lack of water and heat stress by regulating stomatal opening and closing (Yoon et al. 2009). High atmospheric humidity, low concentration of CO2 and light cause stomatal opening, while dry air, high concentration of CO2, darkness and stress-induced ABA result in closing of stomata (Lorenzo et al. 2004). It also plays a significant role in seed germination, maturation and senescence of vascular plants, including Arabidopsis thaliana (Waqas et al. 2012). Proline and phenolics are accumulated in higher plants during abiotic stresses and function as osmolytes, protein and membrane stabilizers, buffers of cellular redox reactions and ROS foragers (Khushdil et al. 2019; Muhammad et al. 2019; Nusrat et al. 2019). Moreover, proline also helps in balancing the ratio of NADP+/NADPH necessary for cellular metabolism, in the maintenance against cellular acidosis, and as a protein-compatible hydrotrope. Proline breaks down after stress, which generates a huge amount of reducing agents used for ATP synthesis in mitochondrial oxidative phosphorylation. The ATP is then used in stress retrieval and the repair of damage (Ashraf and Foolad 2007).
Phenolics are known to function in herbivore deterrence, as antioxidants, and in attracting plant-pollinating insects (Rodriguez et al. 2009). Endophytes are endosymbiotic partners of the majority of plants, which reside inside plant tissues permanently or temporarily with no visible sign of harm (Mehmood et al. 2019a, 2019b). They are known to have a role in restoring restrained plant growth under various abiotic and biotic stresses by providing resistance, reducing ailment severity, accelerating mineral absorption and improving biomass synthesis of their host plant (Mehmood et al. 2019a, 2019b). Plants not hosting endophytic fungi are more prone to environmental constraints, like variation in temperatures, salinity and drought (Ismail et al. 2019; Kang et al. 2019; Khushdil et al. 2019). Aspergillus species were formerly known to have the capacity to tolerate arsenic toxicity by modulating the activities of SOD, CAT and GR and the contents of malondialdehyde, proline and thiol (Ismail et al. 2018; Ismail et al. 2019). The present work was an attempt to investigate the stress adaptive tools of sunflower and soybean mediated by the endophytic fungus A. niger under heat stress. Materials and methods Rice (Oryza sativa) cultivar Fakhr-e-Malakand was cultivated in pots having 30 ml of 0.8% (w/v) water-agar medium. The pots were placed in a growth chamber with a day/night cycle of 14 h at 28 ± 0.3°C and 10 h at 25 ± 0.3°C. The relative humidity of the growth chamber was set at 70%. We used rice seedlings (a quick responder) for preliminary screening to see the effect of the endophytic fungi (Misra and Dwivedi 2004). The potent species were separated and applied to the sunflower and soybean. Sunflower and soybean were chosen as these are two of the best oil-producing crops grown worldwide, facing heat stress nowadays. Soybean (Swat 84) and sunflower (ICI Hyson 33 Australia) varieties were grown in autoclaved sand for two weeks in growth chambers set at 25°C and 40°C. The setting of the growth chamber was the same as mentioned earlier. All experiments were performed in three biological replicates. Fungal isolation Endophytes were isolated from Sonchus asper in District Nowshera, Khyber Pakhtunkhwa, Pakistan using the standardized protocol of Benizri et al. (1998). S. asper is a wild plant that has the ability to withstand the hot summer of the area; therefore, it was explored to isolate potent endophytic fungi for the amelioration of heat stress. After isolation on Hagem media, the endophytic fungi were purified on potato dextrose agar (PDA) plates. Fungal isolates were inoculated on Czapek media in a shaking incubator and their biomass was collected after 7 days of incubation. Isolation of fungal DNA Isolation of A. niger DNA was carried out according to an established protocol of Khan et al. (2008). The mycelium (200 mg) of A. niger was suspended in a micro-centrifuge tube containing 500 μl of a bead beating solution and 5% sodium dodecyl sulfate. The tube was vortexed and centrifuged for 10 min at 11,000 g and 4°C. The supernatant was transferred to a new tube and an equal volume of phenol:chloroform:isoamyl alcohol (25:24:1) was added to it. The sample was vortexed and centrifuged for 5 min. Again, the collected supernatant was transferred to a new tube and an equal volume of chloroform:isoamyl alcohol (24:1) was added. The tube was then centrifuged for 5 min at 10,000 g. The supernatant was transferred to a new Eppendorf tube, and 2.5 volumes of isopropanol were added to precipitate DNA.
The tube was incubated in a refrigerator for 1 h, and centrifuged for 10 min at 14,000 g. The pellet was washed twice with cold 70% ethanol, air-dried and dissolved in 40 µl TE buffer. The purity and quantity of the extracted DNA were measured with a Thermo Scientific NanoDrop spectrophotometer at 260 nm (Chen and Kuo 1993). The experiment was done in triplicate. Identification of potent species The fungal isolate was identified following the method of Khan et al. (2008) with some modifications. 18S rRNA was amplified using primers ITS1 (5′-TCC GTA GGT GAA CCT GCG G-3′) and ITS4 (5′-TCC TCC GCT TAT TGA TAT GC-3′). The sequence obtained was then subjected to BLASTn for sequence homology estimation. A Neighbor Joining (NJ) tree was constructed for the phylogenetic analysis using MEGA 7 software. Preliminary screening of the isolated fungus Preliminary screening was carried out on rice seedlings at the two-leaf stage. 100 µl of fungal culture filtrate was applied to the tips of rice seedlings grown in water-agar medium (0.8%) in a growth chamber (operated with the settings mentioned earlier) for one week. Growth attributes, including total chlorophyl content, root and shoot length, and fresh and dry weight of root and shoot, were measured and compared with distilled water- and Czapek broth-treated control plants (Warrier et al. 2013). Inoculation of sunflower and soybean with A. niger Fresh biomass of A. niger was grown in a 250 ml conical flask with 50 ml of Czapek broth and transferred to a shaking incubator set at 28°C for 1 week at 120 rpm. Filter paper was used to separate the fungal biomass from the Czapek broth. One mg of fresh endophytic fungal biomass per 100 g of autoclaved sand was applied to pots comprising H. annuus (variety Hysun-33) and G. max (variety Swat-84) seeds (9 seeds/pot), which were then transferred for 14 days into growth chambers fixed at a normal temperature of 25°C and a high temperature of 40°C. Ten milliliters of Hoagland solution (half strength) were given to the plants at two-day intervals. Growth attributes were assessed after 14 days of cultivation in the growth chambers (Misra and Dwivedi 2004). The chlorophyl contents of the leaves were quantified with a chlorophyl meter (SPAD-502 Plus, Japan). The experiment was accomplished in triplicate. Determination of antioxidants The activity of catalase (CAT, EC 1.11.1.6) in the seedlings of H. annuus and G. max was determined using the Luck (1974) protocol. Fresh leaves of sunflower and soybean (2 g each) were ground in 10 ml of phosphate buffer (the phosphate buffer consisted of NaCl = 8 g/L, KCl = 0.2 g/L, Na2HPO4 = 1.44 g/L and KH2PO4 = 0.24 g/L) and then spun at 10,000 rpm for 5 min. About 40 µl of supernatant was mixed with 3 ml of H2O2-phosphate buffer. Optical density was measured at a wavelength of 240 nm. H3PO4 buffer was used as blank. The quantity of enzyme needed to lower the OD at 240 nm by 0.05 per gram of plant biomass was considered one unit. The Kar and Mishra (1976) method was used for the activity of peroxidase (POD, EC 1.11.1.7). The mixture contained H2O2 (50 μmoles), phosphate buffer (125 μmoles, pH 6.8), pyrogallol (50 μmoles) and 1 ml of the 20X diluted enzyme extract in a final volume of 5 ml. The amount of purpurogallin developed was measured at 420 nm and the concentration of POD was calculated as EU mg−1 protein. The Oberbacher and Vines (1963) method was used for the estimation of ascorbate oxidase (AAO, EC 1.10.3.3) activity in soybean and sunflower seedlings.
Fresh leaves (0.1 g) of soybean and sunflower were ground in phosphate buffer (2 ml) and centrifuged for five minutes at 3000 g. Approximately 3 ml of the substrate solution (8.8 mg of ascorbic acid in 300 ml phosphate buffer, pH 5.6) was mixed with 100 µl of supernatant. The OD265 was then observed at intervals of 30 s for 5 min. One unit of AAO was defined as a decrease in OD265 of 0.05 per minute. The Beyer and Fridovich (1987) protocol was adopted for the analysis of superoxide dismutase (SOD, EC 1.15.1.1) activity, by measuring the decrease in the absorbance of nitroblue tetrazolium (NBT). The amount of SOD was determined as enzyme units (EU) mg−1 protein, and 1 unit of enzyme was defined as the amount of protein causing a 50% inhibition of NBT reduction. The Carlberg and Mannervik (1985) method was applied for the activity of glutathione reductase (GR, EC 1.6.4.2). GR activity is assayed as the NADPH-dependent reduction of oxidized glutathione (GSSG), which is followed as the decrease in absorbance at 340 nm for 2 min. The 0.8-ml reaction mixture contains 0.125 mM NADPH, 1 mM GSSG and 1 mM EDTA in 100 mM potassium phosphate buffer, pH 7.0. The amount of GR was calculated by applying the extinction coefficient of NADPH at 340 nm, 6.2 mM−1 cm−1, and was expressed as EU mg−1 protein. Determination of ABA The methodology of Yoon et al. (2009) was used to investigate the ABA concentration in H. annuus and G. max seedlings. Fresh leaves of soybean and sunflower (0.5 g) were crushed in liquid N2 and mixed with 2 ml of a glacial acetic acid (GAA, 28.5 ml) and isopropanol (1.5 ml) mixture. The mixture was then filtered and dried using a rotary evaporator. Determination of phenolics and proline The methodology of Cai et al. (2004) was applied to investigate total phenolics in H. annuus and G. max. Sample (0.2 ml) was mixed with Folin-Ciocalteu reagent (0.5 N) and kept for 4 min at 25°C. Sodium carbonate (75 g/L) was added to the mixture for neutralization and heated at 100°C for 1 min. This mixture was then kept in the dark for 2 h. Optical density was taken at 650 nm. Different concentrations of gallic acid (Sigma Aldrich; 100, 200, 300, 500, 600, 700, and 900 mg/ml) were made to draw a standard curve. The Bates et al. (1973) procedure was used, with minor modifications, to analyze total proline. Fresh leaves (0.1 g) of G. max and H. annuus were ground in 4 ml sulpho-salicylic acid (3%) and incubated at 5°C for 24 h. The mixture was then centrifuged at 3000 rpm for 5 min. Then 2 ml of supernatant was mixed with 2 ml of acid ninhydrin and heated at 100°C for 1 h. Toluene (4 ml) was added to the mixture and the OD was taken at 520 nm. Various concentrations of proline (Sigma Aldrich; 2, 4, 6, 8, and 10 μg/ml) were used to plot a standard curve. Determination of primary metabolites Proteins were determined in H. annuus and G. max seedlings using the methodology of Lowry et al. (1951). Homogenate of filtered leaf sample (0.1 ml) was mixed with 0.1 ml of 2 M NaOH. The mixture was heated at 100°C to hydrolyse. The resulting hydrolysate was cooled and mixed with 1 ml of complex-forming solution [2% (w/v) Na2CO3 in distilled H2O, 1% (w/v) CuSO4·5H2O in distilled H2O, 2% (w/v) sodium potassium tartrate in distilled H2O] and the mixture was left to stand for 10 min at room temperature. About 1 ml of Folin reagent was added and the mixture was vortexed. The mixture was allowed to stand for 30 min at room temperature.
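To make the GR activity calculation quoted above concrete, here is a minimal sketch of the conversion from the measured decrease in A340 to activity via the Beer-Lambert law; the function name, the assumed 1 cm path length, and the example numbers are illustrative assumptions, not values from the paper.

```python
# Minimal sketch: GR activity from the NADPH absorbance decrease at 340 nm.
EPSILON_NADPH = 6.2   # mM^-1 cm^-1, extinction coefficient quoted in the text
PATH_LENGTH = 1.0     # cm, cuvette path length (assumed)

def gr_activity(delta_a340_per_min: float, protein_mg: float) -> float:
    """Return GR activity as micromol NADPH oxidized per min per mg protein."""
    # Beer-Lambert: dC/dt [mM/min] = (dA/dt) / (epsilon * path)
    rate_mm_per_min = delta_a340_per_min / (EPSILON_NADPH * PATH_LENGTH)
    # mM x ml gives micromol; 0.8 ml is the reaction volume quoted in the text
    nadph_umol_per_min = rate_mm_per_min * 0.8
    return nadph_umol_per_min / protein_mg

print(gr_activity(delta_a340_per_min=0.05, protein_mg=0.1))  # illustrative input
```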
The absorbance was finally read at 650 nm using a spectrophotometer. Different concentrations of bovine serum albumin (BSA) (Sigma Aldrich; 20, 40, 60, 80, and 100 µg/ml) were applied to draw a standard curve. The Van Handel (1985) methodology was followed for the analysis of total lipids. Crushed leaf sample was centrifuged and about 1 ml of the supernatant was taken and transferred to a reaction tube containing 1 ml chloroform/methanol (1:2 v/v) solution. The mixture was heated in a water bath at 90°C until complete evaporation. Concentrated sulfuric acid (2 ml) was added to the precipitate formed, mixed with the help of a vortex for 2 min and then heated in a water bath at 90°C. The mixture was cooled and 5 ml vanillin-phosphoric acid reagent was added. The tube containing the mixture was allowed to stand for 30 min at room temperature for color development. Finally the tube was mixed and the absorbance was measured at 525 nm. Various concentrations of pure canola oil (10, 40, 70, 100, 130, and 160 µg/ml) were used to draw a standard curve. The Mohammadkhani and Heidari (2008) protocol was applied to determine soluble sugar in H. annuus and G. max leaves. Homogenate of filtered leaf sample (1 ml) was mixed with 10 µl of 5% phenol and 1 ml of 98% sulfuric acid. The mixture was vortexed and allowed to stand for 1 h. The absorbance was finally measured at 485 nm using a spectrophotometer. Different concentrations of glucose (Sigma Aldrich; 20, 40, 60, 80, and 100 µg/ml) were taken to plot a standard curve. Statistical analysis Experiments were accomplished in triplicate. ANOVA was applied to analyze the data. Means of all values were compared by DMRT (Duncan Multiple Range Test) at p < 0.05, via SPSS-20 (SPSS Inc., Chicago, IL, USA). Fungal isolation and screening on Oryza sativa seedlings A total of ten different endophytic fungi were isolated from S. asper, which were initially screened on rice seedlings for their growth-promoting potential. SonchL-7-treated rice seedlings had higher chlorophyl content (4.9%), shoot (11.7%) and root length (21.9%), fresh shoot (11.9%) and root weight (61.8%), and dry shoot (30.5%) and root (15.7%) weight as compared to Czapek-treated control seedlings (Table 1). Molecular identification of the isolated fungus The nucleotide sequence of the ITS region of isolate SonchL-7 was compared with allied strains by applying the BLAST search program. The 18S rRNA sequence revealed maximum resemblance (49%) with A. niger. A phylogenetic consensus tree was constructed from 12 taxa (11 reference and 1 clone) by the neighbor joining (NJ) method using MEGA 7 software (Figure 1). Our endophyte SonchL-7 formed a clade with A. niger, supported by a 49% bootstrap value in the consensus tree. Sequence homology and phylogenetic analysis identified our isolate as A. niger. The ITS sequence of 18S rRNA was submitted to the GenBank database under accession No. MH577053. Growth features of sunflower and soybean co-cultured with A. niger Growth attributes, including total chlorophyl content, root and shoot length, and fresh and dry weight of root and shoot, of experimental plants (co-cultured with A. niger) grown for two weeks at 25°C and 40°C were measured. A significant increase in most of the growth attributes was found in sunflower and soybean seedlings inoculated with A. niger as compared to A. niger-free seedlings at 25°C and 40°C. Analysis of physiological features We found enhanced levels of H2O2 and lipid peroxidation in endophyte-free sunflower and soybean seedlings subjected to heat stress (i.e. 40°C).
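The protein, lipid and sugar assays in the methods above all quantify against a linear standard curve. A minimal sketch of fitting and inverting such a curve follows; the BSA standard concentrations are taken from the text, but the absorbance readings are placeholders, not the study's data.

```python
# Minimal sketch: fit a standard curve, then convert a sample absorbance
# to a concentration. Absorbance values below are placeholders.
import numpy as np

conc = np.array([20, 40, 60, 80, 100])                  # micrograms/ml (BSA standards)
absorbance = np.array([0.11, 0.21, 0.32, 0.41, 0.52])   # placeholder A650 readings

slope, intercept = np.polyfit(conc, absorbance, 1)      # linear standard curve

def to_concentration(a650: float) -> float:
    """Invert the standard curve: absorbance -> concentration (micrograms/ml)."""
    return (a650 - intercept) / slope

print(to_concentration(0.30))  # illustrative sample reading
```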
A high concentration of H2O2 is known to have a negative impact on membrane integrity. Lipid peroxidation was quantified in terms of malondialdehyde (MDA) production. At normal temperature (25°C), A. niger reduced H2O2 levels in sunflower and soybean by 21% and 67% respectively, while under thermal stress (40°C) this decrease was 45.8% and 37% respectively. A significant decrease was also found in the amount of MDA in A. niger-inoculated sunflower and soybean. At 25°C this decrease in the amount of MDA was equal (61%) in the sunflower and soybean seedlings, while at 40°C the decrease was greater in sunflower (49%) than in soybean (30%) seedlings (Figure 2). Soybean and sunflower both showed enhanced levels of proline and phenolic contents under heat stress when co-cultured with A. niger as compared to control plants. The maximum increase (61%) in proline content was noted in the seedlings of sunflower, whereas the fungal strain was less responsive in soybean seedlings in terms of the increase in proline content, which was 26% as compared to inoculated sunflower plants. Figure 1. Phylogenetic tree construction (using 12 taxa, 11 reference and 1 clone) for the molecular identification of isolate SonchL-7 using the neighbor joining (NJ) method. A 49% bootstrap value identified isolate SonchL-7 as A. niger. In contrast, the total phenolics content was significantly augmented (34%) in the seedlings of soybean as compared to sunflower (14%) (Figure 3). A decrease was also noted in the amount of ABA in the leaves of endophyte-inoculated soybean and sunflower seedlings at normal as well as at high temperature (40°C) as compared to non-inoculated plants. Soybean seedlings co-cultured with A. niger had 15% less ABA as compared to control plants at room temperature, while at 40°C endophyte-inoculated soybean had 18% lower ABA content as compared to control plants. Similarly, A. niger-inoculated sunflower had 7% less ABA content at 25°C and 17% less ABA content at 40°C as compared to endophyte-free plants (Figure 5). Total lipids, proteins and soluble sugars of sunflower and soybean were determined via spectrophotometer. We noticed an increase of 21% and 23% in the total lipid contents of A. niger-inoculated soybean and sunflower respectively at 25°C as compared to uninoculated plants. At 40°C, A. niger-associated soybean (48%) and sunflower (39%) also had more lipids as compared to control seedlings (Figure 6A). Endophyte-inoculated soybean had 51% and sunflower 35% more protein at 40°C, while at 25°C soybean had 36% and sunflower 44% more protein as compared to endophyte-free plants (Figure 6B). At 40°C, A. niger-associated soybean had 11% and sunflower 29% more soluble sugars, while at 25°C soybean had 11% and sunflower 15% more soluble sugars as compared to control plants (Figure 6C). Discussion Higher plants subjected to heat stress are known to mount intricate biosynthetic responses such as antioxidant systems, adjustments in membrane lipids and osmotic potential, and synthesis of heat-shock proteins (Rodriguez et al. 2004). In the present study, we detected that A. niger-associated soybean and sunflower seedlings are more resistant to thermal stress than uninoculated seedlings because of an improved ROS-scavenging antioxidant system (including CAT, POD, SOD, AAO, GR, proline and phenolics). Thermal stress affects plants' biochemical, physiological and transcriptional mechanisms.
A high concentration of MDA in control plants indicates the extent of lipid peroxidation as a result of oxidative stress. In our study, both sunflower and soybean control seedlings exposed to heat stress had elevated levels of MDA as compared to A. niger-associated plants. High concentrations of MDA cause membrane damage, while endophytes, including bacteria and fungi, are known to reduce the impacts of abiotic stresses by reducing the level of MDA in their host plants. In a similar type of study, a 27.5% decrease in the levels of MDA was found in chickpea plants inoculated with Bacillus subtilis BERA 71 facing 200 mM NaCl salt stress (Abd_Allah et al. 2018). Our results confirmed the study of Abd_Allah et al. (2018) on chickpea, namely that endophytes ameliorate abiotic stresses by reducing the levels of MDA and H2O2. Furthermore, thermal stress also boosts H2O2 in plants, which eventually affects membrane structural integrity and drives lipid peroxidation and hence more MDA synthesis, as indicated in the control soybean and sunflower seedlings (Ismail et al. 2019). We also observed higher contents of chlorophyl in A. niger-inoculated sunflower and soybean at 25°C and 40°C as compared to their respective control seedlings, but the increase was not significantly different except for A. niger-inoculated sunflower seedlings kept at 25°C. The increase in chlorophyl contents of A. niger-inoculated sunflower and soybean seedlings might be due to cytokinin production by A. niger, which has been demonstrated previously by Morrison et al. (2015). However, the production of cytokinins by A. niger was not high enough to make a significant difference. Also, A. niger might help the host plants to take up Mg from the surroundings, thus enhancing the chlorophyl contents. Similar observations were made by Abd_Allah et al. (2018). A high rate of photosynthesis is associated with high chlorophyl content as well as increased leaf area in endophyte-colonized plants as compared to endophyte-free plants (Dastogeer and Wylie 2017). The endophytic fungus Piriformospora indica was also found to promote the growth of Arabidopsis by increasing chlorophyl contents and the rate of photosynthesis under abiotic stress (Sherameti et al. 2008). Endophytes help in the maintenance of the photosynthetic rate of the host plant by protecting its thylakoid membrane proteins and chlorophyl molecules from denaturation caused by abiotic stresses (Sun et al. 2010). Phenolics are defensive compounds accumulated in higher plants against biotic and abiotic stresses (Ismail et al. 2018). The accumulation of a high content of phenolics in endophyte-inoculated soybean and sunflower plants under thermal stress helps in mitigating the accretion of ROS and lowering the stress effects on plants (Ismail et al. 2018). Our results are similar to the work of Abd_Allah et al. (2018), namely that endophytes enhance the phenolics content of their host plant under abiotic stress. Proline is known to function as an organic osmolyte that accumulates in various plants in response to different environmental stresses, including high temperature. We found a high content of proline in A. niger-inoculated soybean and sunflower, which has encouraging effects on enzymes, membrane integrity, free radical scavenging and buffering of the cellular redox potential, as well as in mediating osmotic adjustment in plants facing stress conditions.
Proline has a role in relieving cytoplasmic acidosis, acts as a protein-compatible hydrotrope and keeps the NADP+/NADPH ratio at values compatible with metabolism (Hare and Cress 1997). It also helps in providing reducing agents as a result of its rapid breakdown after stress; these stress-relieving agents are used in repairing stress-induced damage and generating ATP (Hare et al. 1998). Our results showed the ameliorative role of the endophytic fungus A. niger against heat stress, which might be due to the up-regulation of nutrient uptake and the antioxidant system, and the decline of ROS (Chang et al. 2016). The enzymatic antioxidant system of plants includes CAT, POD, SOD, AAO and GR. The up-regulation of these antioxidants protects the plant's membranes from the hazardous effects of free radicals generated under various stresses (Abd_Allah et al. 2018). SOD is one of the crucial antioxidant enzymes, having a key role in scavenging superoxide radicals, thereby limiting the synthesis of hydroxyl (OH−) radicals from H2O2 via the Haber-Weiss reaction. In a similar study it was shown that the up-regulation of CAT, POD, SOD and GR reduced the membrane dysfunction induced in chickpea by free radicals. A high content of POD might have a role in reducing stress toxicity by accelerating the biosynthesis of lignin and other protective compounds (Boerjan et al. 2003). Enhanced activity of CAT boosts immunity against abiotic stress in Brassica juncea (Mittal et al. 2012). CAT functions as an eliminator of H2O2, protecting membranes and organelles from damage, because H2O2 is known as a signaling molecule that quickly diffuses through membranes (Bienert and Chaumont 2014). A series of redox reactions occurs in the ascorbate-glutathione cycle, leading to the elimination of H2O2 in the cytosol and chloroplasts and reducing the toxic effects of oxidative stress on plants under abiotic stress. Abscisic acid is a phytohormone that accumulates in higher plants under abiotic stresses, including high temperature. An elevated content of ABA was observed in Oryza sativa under thermal stress (Raghavendra et al. 2010). Thermal stress causes up-regulation of ABA biosynthesis genes while down-regulating the genes responsible for ABA catabolism (Toh et al. 2008). In our study we detected a low amount of ABA in sunflower and soybean inoculated with A. niger under heat stress as compared to endophyte-free seedlings. This reduction in ABA content in the experimental plants might be due to down-regulation of the genes responsible for ABA synthesis or to acceleration of ABA degradation. The low amount of ABA in fungal endophyte-associated soybean and sunflower confirmed the results of Hamayun et al. (2017). In the present study, enhanced concentrations of nutritional components such as total lipids, proteins and sugars were found in A. niger-associated soybean and sunflower under thermal stress, indicating the positive role of endophytic fungi for crops and suggesting their use as bio-fertilizers. A similar study by Rodriguez and Redman (2008) pointed out the protective role of A. flavus for its host plant against heat stress. Conclusion Thermal stress prevailing in the atmosphere is among the crucial constraints on agricultural crops, including soybean and sunflower. The present work reports a thermo-tolerant fungal strain, A. niger (MH577053), that not only enhanced growth in sunflower and soybean, but also facilitated their resistance to heat stress. A. niger helped the host species to ameliorate heat stress by boosting the ROS-scavenging antioxidant system, including CAT, POD, SOD, AAO, GR, proline and phenolics.
It also modulated the endogenous level of ABA in sunflower and soybean, as well as enhancing their nutritional value. The present study suggests the use of A. niger as a heat-stress-alleviating bio-agent and bio-fertilizer for sustainable agriculture in the future. Ethics approval and consent to participate Our study does not involve any human, animal or endangered species. Consent for publication No consent/approval at the national or international level or appropriate permissions and/or licenses were required for the study.
6,420.4
2020-01-01T00:00:00.000
[ "Environmental Science", "Biology" ]
COMPUTER SUPPORT IN NON-VERBAL COMMUNICATION SYSTEMS USING GRAPHIC SIGNS IN THE EDUCATION OF PEOPLE WITH INTELLECTUAL DISABILITIES The article presents the possibility of using a computer with specialized software to support the education of students with communication disorders. It also presents the results of preliminary tests based on an original program developed for students with moderate and significant intellectual disabilities in the field of alternative communication using pictograms and PCS. INTRODUCTION A large part of the population is not able to communicate fully with speech. These are people wholly devoid of the ability to speak, or those whose speech does not fulfill all the features of the communication function. Therefore, they may need non-verbal means of communication, which would be complementary to or a substitute for speech. The problems of people in need of alternative communication are very diverse. For many children whose speech develops over time, the necessity of using alternative communication fades away. In the case of people who use this type of communication throughout their lives, language comprehension and motor skills may vary over their entire existence. Speech disorders are very common among students with intellectual disability. This causes significant difficulties in communication and has a negative impact on development in all spheres of life, including the ability to acquire knowledge in the educational process. The use of alternative communication gives students the opportunity to expand the process of communicating, and facilitates the understanding of their statements by others. Of particular importance in both popular alternative and complementary systems of communication are various graphic systems. These systems consist of more or less stylized drawings, which mostly show pictorial similarity to their counterparts in reality [8]. An important factor in the selection of an appropriate graphic system is primarily the degree of understanding of the language by the child and the child's ability to perceive visual sensations. In practice, however, different systems are brought together; using them is nevertheless closely related to the use of communication supports, starting from ordinary boards and ending with tools based on PC technology. Access to aided communication is particularly important for people with autism, with movement disorders or with intellectual disabilities [4].
Teaching them is most difficult and takes a lot of time. Making a judicious selection can greatly facilitate the subsequent process of education. It is very rare that a disabled person is not communicating in some way at the start of the study. This is why, when choosing a system of signs, we must take into account already existing communication skills. In the world, particularly in the United States, the United Kingdom and the Nordic countries, the most popular systems of graphic signs include: pictograms, PCS symbols, the Rebus system, lexigrams, and Bliss symbols [8]. In Poland, we widely utilize pictograms, PCS and additionally the Makaton system, which is connected to the Rebus system. The Bliss system is the most advanced graphic system accessible to nonverbal persons. However, it is a relatively difficult means of communication and requires special teaching methods. Differences between the symbols are small and difficult to grasp, especially for people with mental disabilities. The system is most fully beneficial for intellectually disabled people with speech and reading difficulties [6]. There is no reason why we could not combine different systems. A limited number of pictograms can sometimes be complemented with signs coming from PCS. When the system is predefined for a given user, the choice is not a big problem. In general, PCS and PIC become too limited for many people, and only then are Bliss symbols introduced [5]. Pictograms Pictograms (Pictogram Ideogram Communication, PIC) consist of schematic drawings of white silhouettes against a black background. Each pictogram contains a verbal description of one word above the figure (Figure 1). Currently, there are 1400 PIC characters [1]. The system comprises several classes of words. The focus is on nouns and verbs, but the system also contains pronouns, adjectives, numerals, conjunctions, prepositions, interjections and adverbs. Pictograms are not a complete language system (such as, for example, Polish), but a support in learning and communication, both verbal and non-verbal. Pictograms are considered a friendly sign system that is easy to learn. They can be used for laying out whole sentences, but due to their limited number this is not always a simple task. When the user needs more meanings than those offered by the system, graphic signs of a more general nature from other systems are often introduced. PCS PCS (Picture Communication Symbols) is a collection of simple drawings identifying basic words necessary for daily communication (Figure 2). The system consists of 3500 characters: simple black-and-white (recently also color) line drawings with labels written above or below the symbols. Symbols are arranged in categories: social, people, verbs, descriptive, food, leisure, and other nouns. Some elements such as articles and prepositions are presented using traditional orthography without a line drawing. A characteristic feature of these characters is that they are easily drawn. PCS is the most common graphic system in the world. Many of the symbols come in two versions, a less and a more abstract one, which makes the system useful for people who are at different levels of understanding [9].
The Polish library of PCS symbols (symbols are labeled with Polish names) is contained in the Boardmaker computer program. The library of the program is enriched with new, typically Polish PCS symbols, namely: Polish cuisine, images of famous people, poems and rhymes, holidays, popular sports games, money, etc. Another advantage of this system compared with PIC is the greater number of characters [3]. Makaton Symbols In Makaton, each concept has its corresponding symbol (graphic sign). The program (Basic) consists of approximately 450 basic symbols and about 7000 additional supplementary symbols. Symbols are always accompanied by grammatically correct speech (depending on the capabilities of the child/adult). The characters were adapted in Poland in the years 2001-2003 by Bogusława Kaczmarek (the adaptation consists in changing the image of certain symbols and creating new ones). Makaton symbols are black-and-white (black figure, white background) drawings conveying the meaning of the concepts they represent (Figure 3). They are characterized by transparency and simplicity, which allows them to be drawn manually (without having to use a printer). In addition, Makaton also uses manual signs. These signs are mainly used by children whose parents wish to communicate with them in the pre-verbal stage, when they do not speak yet and the child's natural communication system is gestures. COMPUTER AIDED PROCESS OF NON-VERBAL COMMUNICATION As communication aids we use all kinds of measures that support the expression of the user. Access to them is particularly important for people with impaired mobility, people with autism who have problems with language, and people with mental disabilities. Most frequently, these are different kinds of plaques, indicators and books with logos or pictures. However, in the era of developing computer technology, modern computer aids are made both as hardware (such as touch screens, switches, keyboards, communicators) and as software (e.g. games, speech synthesizers, etc.) (Figure 4) [7]. In the case of people using graphic signs it is necessary to introduce a computer program with an appropriate system of signs. On the market today there are several programs that use the Bliss, PIC, or PCS systems. An example of this type of program is Boardmaker & Speaking Dynamically Pro, which supports alternative communication by creating interactive charts and educational materials: plaques, work cards, plans for the day, and task boards, with the ability to print them or use them directly on a computer. The application works with a speech synthesizer, so that messages can be read aloud. In turn, the SymWord software is a talking text editor that allows writing with the symbols supplied with the program, whole words, or letters. SymWord is a tool both for people using buttons or other devices of this type (e.g. devices responsive to blowing and sucking) and for people who do not know letters and communicate with symbols, as well as for people with learning difficulties, for example dyslexia (Figure 5) [11].
EXPLORATORY RESEARCH Purpose and organization of the research The aim of the study was to determine the role of specialized classes with the use of computers in teaching alternative communication using pictograms and PCS. The research was conducted at the Special Education Centre in Kozice Dolne. A proprietary curriculum was used, which was completed within 45 hours individually with each of the three students involved in the study (Table 1). When introducing pictograms and PCS symbols we followed a sequence, namely:
• used pictograms and PCS to express "yes" and "no",
• shaped the concept of "I",
• built the child's dictionary by introducing symbols of things from the nearest surroundings, symbols of names of people, and symbols defining characteristics of objects and phenomena,
• used symbols corresponding to actions.
Familiarizing the pupils with the words (symbols) was done by performing the following exercises:
• matching or combining a pictogram or PCS symbol with a concrete object,
• matching or combining a pictogram or PCS symbol with an illustration,
• combining two identical pictograms or PCS symbols,
• joining two identical pictograms or PCS symbols of different sizes,
• choosing, from two or more pictograms or PCS symbols, a specific one named by the teacher,
• exercising understanding of the meanings of the symbols through simple associative sorting of pictograms or PCS symbols (matching objects that go together by function or purpose, sorting objects into groups),
• conducting conversations with children on a specific topic, selecting the "words" from pictograms or PCS symbols on the subject.
The computer aids used included: Tile Puzzle, pictorial fun with Makaton, Altik, Boardmaker with Speaking Dynamically Pro, and SymWord with a database of PCS symbols. The study began with an initial diagnosis whose aim was to assess the students' communication skills to date, establish contact with the students and guarantee them a sense of security. Then we developed the ability to recognize symbols and both receptive (recipient) and expressive (sender) communication. The final stage of the research was a final diagnosis, the purpose of which was to check the progress of the students after the program implementation and the effectiveness of the acquisition of communication skills. For the individual learning outcomes we adopted a 7-step scale, where 0 meant a total lack of non-verbal communication skills and 6 points meant normal development of non-verbal communication. In order to assess the magnitude of the change in the respondents, a K-factor was introduced, being the percentage ratio of the number of points obtained after the implementation of the program of non-verbal communication (final diagnosis, Dk) to the number of points obtained in the initial diagnosis (Dw), as described in equation (1):
K = (Dk / Dw) × 100%  (1)
This factor allows the assessment of the degree of change in the student's communication skills in relation to their competence prior to testing. Analysis of test results The results of the tests are shown in three tables (Tables 2-4) and radar charts (Figures 7-9). For the second test student, Karol, we also found an increase in communication skills (K = 194%). However, this increase is less than half of that of Mateusz. This is due to the fact that these students have different degrees of intellectual disability. The rating scale runs from 0 to 6, wherein 0 denotes the total absence of non-verbal communication skills and 6 the normal development of non-verbal communication.
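As a check of the arithmetic, equation (1) is easy to reproduce in code; the point totals in the example below are assumptions chosen to reproduce the K = 460% reported for Mateusz, not the study's actual scores.

```python
# Equation (1): K = Dk / Dw * 100%. The point totals are placeholders, not study data.
def k_factor(dk: int, dw: int) -> float:
    """Percentage ratio of final-diagnosis points (Dk) to initial-diagnosis points (Dw)."""
    return 100.0 * dk / dw

# e.g. 5 points initially and 23 points finally gives K = 460%,
# matching the increase reported for Mateusz (hypothetical totals).
print(f"K = {k_factor(dk=23, dw=5):.0f}%")
```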
For the first test student (Mateusz) it can be concluded that the activities of non-verbal communication using computer programs significantly influenced his competence in communication. For all the assumed learning outcomes a marked increase in skills can be seen (K factor = 460%). The last of the tested students, Kacper, has the least developed communication skills; in his case there was only a slight increase in skills (K = 188%). The reason may be the fact that his perceptual capabilities are at the lowest level among all the surveyed students (a score of 2 points). Through the use of specialized computer programs it can be noted that Kacper is more motivated to work longer, is able to keep his focus on the task and tries to a greater extent to reach an understanding with the teacher. CONCLUSIONS Communication is the basis of knowledge about oneself and the surrounding world, through naming objects, phenomena and events and determining their characteristics and the relationships between them. The ability to communicate is a source of discovering and trying out one's own ability to influence the environment. It is also one of the main determinants of psychosocial development. When a child does not speak, it does not mean that he does not have anything to say. He sometimes does not know an effective way to engage in dialogue. The need to communicate with the environment is undoubtedly the most important psychological need of every human being. When speech is severely impaired we should help the child to use adequate supportive or alternative ways of communication. The results clearly indicate that the main objective of the implemented curriculum, to provide students with a means of communication enriching their interaction with the environment, was achieved. Learning to use pictograms or PCS proceeds according to rules and recommendations, with content based on the child's preferences. A teacher must create a plan for working with the child. The number of symbols introduced to the students is based on their skills, needs and requirements. The words that best fit the child's needs are included in the dictionary. The most appropriate test of students' understanding of the meanings of the pictograms or PCS is their use in natural situations of everyday life at school and at home, for example for determining the tasks to be performed or for obtaining answers to questions. It is also important that parents are involved in the process. An important place in the education of communication skills is occupied by specialized computer programs. Currently, they are indispensable aids with which the teacher can easily create a variety of communication boards, plates, work cards or plans of the day. The high interactivity of applications of this type means that the student does not feel the hardships of the arduous process of education or rehabilitation and is eager to participate in the classes. These programs implement the ludic principle, which is extremely important for the youngest pupils. With the creative use of computer programs the efficiency of education increases significantly, developing cause-and-effect thinking and communication skills. The results are achieved in less time than when using traditional methods.
Fig. 6. The program interface of the pictorial fun with Makaton.
Fig. 7. Graph showing the number of points gained by the student after the initial and final diagnosis, taking into account the different learning outcomes (Mateusz).
Fig. 8. Graph showing the number of points gained by the student after the initial and final diagnosis, taking into account the different learning outcomes (Karol).
Fig. 9. Graph showing the number of points gained by the student after the initial and final diagnosis, taking into account the different learning outcomes (Kacper).
Table 1. Characteristics of examined pupils.
Table 2. The results of the initial and final diagnosis after implementation of the program of activities of non-verbal communication (Mateusz).
Table 3. The results of the initial and final diagnosis after implementation of the program of activities of non-verbal communication (Karol).
Table 4. The results of the initial and final diagnosis after implementation of the program of activities of non-verbal communication (Kacper).
3,588.2
2014-11-23T00:00:00.000
[ "Education", "Computer Science" ]
Exploring the origin of stars on bound and unbound orbits causing tidal disruption events Tidal disruption events (TDEs) provide a clue to the properties of a central supermassive black hole (SMBH) and an accretion disk around it, and to the stellar density and velocity distributions in the nuclear star cluster surrounding the SMBH. Deviations of TDE light curves from the standard one occurring at a parabolic encounter with the SMBH depend on whether the stellar orbit is hyperbolic or eccentric (Hayasaki et al. 2018) and on the penetration factor ($\beta$, the tidal disruption radius to orbital pericenter ratio). We study the orbital parameters of bound and unbound stars being tidally disrupted by comparison of direct $N$-body simulation data with an analytical model. Starting from the classical steady-state Fokker-Planck model of Cohn & Kulsrud (1978), we develop an analytical model of the number density distribution of those stars as a function of orbital eccentricity ($e$) and $\beta$. To do so, fittings of the density and velocity distribution of the nuclear star cluster and of the energy distribution of tidally disrupted stars are required, and these are obtained from $N$-body data. We confirm that most of the stars causing TDEs in a spherical nuclear star cluster originate from the full loss-cone region of phase space, derive analytical boundaries in eccentricity-$\beta$ space, and find them confirmed by $N$-body data. Since our limiting eccentricities deviate far less from unity than the critical eccentricities for full accretion or full escape of stellar debris, we conclude that those stars are only very marginally eccentric or hyperbolic, close to parabolic. INTRODUCTION Most galaxies harbor supermassive black holes (SMBHs) with millions to billions of solar masses at their center. Tidal disruption events (TDEs) provide a good probe to identify dormant SMBHs in inactive galaxies. A star is tidally disrupted by an SMBH when the star approaches the SMBH closely enough that the black hole's tidal force exceeds the stellar self-gravity (Hills 1975). In classical TDE theory, a star on a parabolic orbit is tidally disrupted by the SMBH at the tidal disruption radius, rt = (MBH/m∗)^{1/3} r∗, where MBH, m∗ and r∗ are the black hole mass, stellar mass and stellar radius, respectively. Subsequently, half of the stellar debris falls back to the SMBH at a rate proportional to t−5/3, so that the bolometric luminosity is proportional to t−5/3 if the mass fallback rate equals the mass accretion rate (Rees 1988; Evans & Kochanek 1989; Phinney 1989). However, recent observations have revealed that some observed TDEs show light curves which deviate from the t−5/3 decay rate (Gezari et al. 2012; Holoien et al. 2014; Gezari et al. 2015; Miller et al. 2015; Holoien et al. 2016; van Velzen et al. 2019). Dozens of X-ray TDEs have light curves shallower than t−5/3 (Auchettl et al. 2017), while many optical/UV TDEs are well fit by t−5/3 (e.g. Hung et al. 2017). Some possible reasons for the deviation of the light curve from the t−5/3 law are discussed in the current literature. The following are the three main ones. First, the fallback debris would cause a self-crossing shock by relativistic apsidal precession (Shiokawa et al. 2015; Piran et al. 2015; Ryu et al.
2020), outflowing a significant fraction of the debris (Lu & Bonnerot 2020). Moreover, the secondary shock due to the subsequently occurring collision forms an accretion disk. The bolometric luminosity at the photosphere clearly deviates from the t−5/3 decay (Bonnerot & Lu 2020). Second, even though the mass fallback rate follows the t−5/3 law, the radiative fluxes emitted from the accretion disk or disk wind modify the light curve variation. Lodato & Rossi (2011) have shown that the luminosity of the accretion disk observed in different bands may decay with diverse power-law indexes. For TDEs with well-evolved optically thick accretion disks, the observed X-ray light curves should decay following the form of a power law multiplied by an exponential, which is caused by the Wien tail of the disk spectrum (Mummery & Balbus 2020). Moreover, when mass falls back to the SMBH at a super-Eddington rate, an outflow could be launched from the accretion disk. The luminosity of the outflow can decay with time more shallowly than t−5/3 (Strubbe & Quataert 2009). Finally, the mass fallback rate can deviate from the standard t−5/3 rate due to external and internal properties of the tidally disrupted star, such as: its orbital eccentricity (Hayasaki et al. 2013, 2018; Park & Hayasaki 2020); its orbital energy and angular momentum, the combination of these two quantities defining how deep its orbit reaches inside the tidal radius (we define a penetration factor β = rt/rp, where rp is the pericenter of the stellar orbit around the black hole); and the detailed stellar internal structure and the possible survival of a stellar core during a partial tidal disruption (Guillochon & Ramirez-Ruiz 2013). In this paper, we focus on the number density distribution of the stars which cause TDEs, as a function of orbital eccentricity and the penetration factor. Since these orbital parameters leave imprints on the observable flux, e and β could be obtained by fitting the light curves of an observed TDE with, e.g., MOSFiT (Guillochon et al. 2018). In the end, the number density could then be inferred from these observed e and β. Currently about a dozen TDEs are known with β measurements (Mockler et al. 2019; Nicholl et al. 2019; Gomez et al. 2020), but they do not constrain the eccentricity independently, because the MOSFiT software used is based on the hydrodynamic simulations of Guillochon & Ramirez-Ruiz (2013), who only simulated the e = 1 case. We note from this analysis that all the measured values of β are close to unity. Stone & Metzger (2016) suggested that the β value of stars originating from the empty loss-cone regime should be very close to unity, while for the full loss-cone regime β could take larger values and the number density is proportional to β−2 (the definitions of empty and full loss-cone are given in Section 2.1). As we will discuss in Section 3.3, also in the full loss-cone regime many orbits may have β ∼ 1; larger β values occur in this case, but with a smaller probability. The eccentricity could provide further constraints on this issue, but as indicated before the eccentricity was set to exactly unity for all measurements so far. In a preceding paper (Hayasaki et al.
2018) we have examined the distribution of tidally disrupted stars in the e-β plane by using N-body experiments of spherical nuclear star clusters. We found two interesting results: first, the eccentricities of the stars that cause TDEs usually take values between the two critical eccentricities proposed by Park & Hayasaki (2020), which depend on β, so there is some correlation between e and β. Second, the distributions of e and β vary with the mass ratio between stars and the central SMBH, as do the critical eccentricities. This is important for future studies using stars of different masses; here and in Hayasaki et al. (2018) we just limit ourselves to stars of equal mass. Light curve characteristics of tidal disruption flares depend on the eccentricity and its critical limits. This raises further interest in the distribution of orbital parameters. In this paper, we analytically derive the number densities of bound and unbound stars that undergo TDEs in a spherical nuclear star cluster and test them by comparison with N-body simulations. We also examine the distribution of stars in the e-β plane by estimating the allowed eccentricity range for a given β. Predicting the relative frequency of TDEs with different eccentricity e and penetration factor β should help identify realistic values of e and β for future hydrodynamic simulations of TDEs and also aid the interpretation of TDE observations (constraining the dynamical processes operating in the host cluster). We construct our analytical models in Section 2, then describe the details of the N-body simulations and compare the analytical number densities to the simulation results in Section 3. We discuss our results in Section 4 and draw our conclusions in Section 5. ORBITAL PARAMETER DEPENDENCE OF THE STELLAR DISTRIBUTION In this section we first briefly review the theory of the steady-state stellar distribution around a central black hole in a star cluster of Cohn & Kulsrud (1978) (hereafter CK78). It originates from a numerical solution of the orbit-averaged Fokker-Planck equation in energy-angular momentum space. The CK78 solution describes the distribution of stars inside the cusp surrounding the central SMBH, assuming the gravitational potential is dominated by the SMBH. It is well suited as a starting point to derive the stellar distribution in a phase space of orbital eccentricity e and penetration factor β (see Section 2.2). These quantities are of interest here because they provide relevant input parameters for hydrodynamic simulations of TDEs, which in turn provide key information about the nature of TDE light curves. For our analysis of star cluster simulations with TDEs it is an advantage to use parameters closely related to the TDE and its observational characteristics, rather than the more conventional integrals of stellar motion. In Section 2.3 we discuss the case of unbound stars experiencing TDEs. Earlier work by Magorrian & Tremaine (1999), Wang & Merritt (2004) and Stone & Metzger (2016) is based on a generalized treatment following CK78, using the Fokker-Planck equation also in the region of a galaxy unbound to the SMBH. For our purposes we choose a simpler but still useful approach in that regime. In what follows, we use the subscripts 'b' and 'u' to mark the quantities corresponding to the bound and unbound cases, respectively.
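Since rt and β drive everything that follows, a minimal numerical sketch of the two definitions from the Introduction may be useful; the function names and example values (a 10^6 solar-mass SMBH, a solar-type star) are illustrative assumptions.

```python
# Minimal sketch: tidal radius r_t = (M_BH / m_*)^(1/3) * r_*, and beta = r_t / r_p.
R_SUN_CM = 6.957e10  # solar radius in cm

def tidal_radius(m_bh_msun: float, m_star_msun: float = 1.0,
                 r_star_rsun: float = 1.0) -> float:
    """Tidal disruption radius in cm for a star of given mass/radius around an SMBH."""
    return (m_bh_msun / m_star_msun) ** (1.0 / 3.0) * r_star_rsun * R_SUN_CM

def penetration_factor(r_t_cm: float, r_p_cm: float) -> float:
    """beta = r_t / r_p; beta >= 1 means the pericenter lies inside the tidal radius."""
    return r_t_cm / r_p_cm

rt = tidal_radius(m_bh_msun=1e6)             # 1e6 Msun SMBH, solar-type star
print(rt, penetration_factor(rt, 0.5 * rt))  # orbit with r_p = r_t / 2 gives beta = 2
```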
Stellar distributions in energy-angular momentum space The influence radius rh of an SMBH in the center of a star cluster is defined as the radius within which the enclosed stellar mass equals the SMBH mass. Inside rh stars are considered gravitationally bound to the black hole and a stellar density cusp forms. Following CK78 we characterize a stellar orbit in this region by both the specific orbital energy of a star, $E = v^2/2 - GM_{\rm BH}/r$, and by its normalized squared angular momentum

$R \equiv J^2/J_c^2(E)$,  (1)

where v, G, J and Jc are the velocity of the star, the gravitational constant, the specific angular momentum of the star and the corresponding circular angular momentum, respectively. In a spherically symmetric cusp with isotropic velocity dispersion, the stellar density distribution depends on the orbital energy only. However, the assumption of an isotropic velocity distribution breaks down because the SMBH removes stars with low angular momentum through TDEs. So, in classical loss-cone theory, the stellar number density n should also depend on R, and thus be a function of both E and R, i.e. n = n(R, E) (Cohn & Kulsrud 1978). In phase space the loss-cone region is encompassed by R ≤ Rlc, where Rlc is the square of the normalized loss-cone angular momentum (see equation A5). For the models described in this paper, stars inside the loss-cone region can survive for no more than one orbital period, unless they find a way out before being disrupted (typically by being scattered out of the loss-cone by two-body relaxation). In reality partial tidal disruptions may occur (Zhong et al. 2022; MacLeod et al. 2013); stars may not be fully disrupted at the first passage near the tidal radius. In the case of only full tidal disruptions, considered in this paper, the loss-cone runs out of stars quickly and n(R, E) vanishes at R ≪ Rlc. Simplified models based on moment equations of the Fokker-Planck equation (Amaro-Seoane & Spurzem 2001; Amaro-Seoane et al. 2004) assumed a sudden drop of n to zero at Rlc, while the original work of CK78 shows the solution around and inside Rlc follows a logarithmic profile and reaches zero at R0 (defined by equation 6). Two-body relaxation encounters replenish the loss-cone by angular momentum diffusion. So, in steady state n(R, E) is determined by an equilibrium between disruption processes near the tidal radius and the replenishment process. By solution of the orbit-averaged Fokker-Planck equation, taking both processes into account, CK78 found the following expression for the stellar density:

$n(R,E) = A(E)\,\ln(R/R_0)$,  (2)

where A(E) is an energy-dependent coefficient and R0 is the square of the normalized angular momentum at the zero-boundary, below which the number density goes to zero. The CK78 solution was limited to the Keplerian potential. Later works, such as Magorrian & Tremaine (1999) and Wang & Merritt (2004), that have taken the stellar potential into account also reported the logarithmic dependence on R.
Our focus is on the bound stars that cause TDEs (i.e., R_0 ≤ R ≤ R_lc). The cumulative number density N_TD,b(R, E) has the same ln R dependence as equation 2 (because these stars originate from the stellar cusp described by the CK78 distribution), but the normalization coefficient is different: the new coefficient is set by the number of bound stars that eventually enter the tidal radius with energy between E and E + dE. The quantities R and E in equation 3 take their values at the disruption, because these values are relevant to our theoretical models of the e and β distributions. The upper cutoff of R comes from the condition for tidal disruption: the separation between a star and the SMBH must be less than or equal to r_t. This condition translates to R ≤ R_lc at the disruption, according to equations 1, A6, and A7. The value of N_TD,b(E) results from the accumulation of TDEs with time; thus it is calculated as N_TD,b(E) = ∫ F(E, t) dt, where F(E, t) is the flux of stars that enter the loss cone at time t. In this work N_TD,b(E) is obtained directly from the N-body simulation (for example, using the orbital energy of the TDEs recorded in the simulation; see Figure 1), so we do not discuss F(E, t) here in detail (but see, e.g., Section 2.2 in Merritt 2013). Note that all number density distributions of tidally disrupted stars presented in this paper are normalized to the total number of disrupted stars over the time of the simulation. From N_TD,b(E) we obtain the normalization coefficient A_TD,b(E) (equation 4). By introducing the quantity Q (equation 5), where ΔR is the cumulative change of R over one orbital period of the star (Magorrian & Tremaine 1999), the zero boundary is written as R_0 = R_lc g(Q) (equation 6), where g(Q) (equation 7) is an approximation of the analytical solution derived by Merritt (2013), whose exact form is expressed in terms of Bessel series [see also equations (44, 45) in Vasiliev & Merritt (2013)]. Since R_0 is very close to R_lc in the case of Q < 1, the number density almost goes to zero in the loss-cone region; Q < 1 is the empty loss-cone regime. In the Q > 1 case, R_0 ≪ R_lc, and we are in the full loss-cone regime. We will explain how to compute Q in Section 3.

Number density of bound stars

In this subsection we transform the number density of bound stars in the loss cone, given by equation 3, from the standard phase-space variables R and E into new variables more suitable for our analysis of TDEs. We consider transformations into the following new pairs of variables: (R, E) into (e, E), (β, E), or (e, β). All Jacobian determinants are nonsingular (see Table 1), so all pairs can be used as new independent phase-space variables. We focus in the following on (e, E) and (β, E): after integration over E, the resulting distributions in e and β can be compared with N-body simulations and also be used to analyze expected TDE characteristics (see Section 3). The details of the variable transformation are presented in Appendix A; they are derived in the Keplerian regime. For marginally bound stars (E ≈ 0), the Keplerian assumption breaks down and the variable transformations given in the Appendix may become inaccurate.
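A piecewise fit that is widely quoted for g(Q) (cf. Vasiliev & Merritt 2013) can be sketched in code as follows; the coefficients below are an assumption of this sketch, not a quotation of equation (7):

```python
import numpy as np

def g(Q):
    """Approximate zero-boundary factor, with R_0 = R_lc * g(Q).

    Piecewise approximation to Merritt's (2013) Bessel-series solution
    (coefficients as commonly quoted; an assumption of this sketch).
    """
    Q = np.asarray(Q, dtype=float)
    return np.where(Q > 1.0,
                    np.exp(-Q),                                # full loss cone
                    np.exp(-0.186 * Q - 0.824 * np.sqrt(Q)))   # empty loss cone

# Sanity checks against the behavior described in the text:
print(g(0.05))   # ~0.8: R_0 stays a sizable fraction of R_lc (empty regime)
print(g(1.0))    # ~0.36, consistent with g(1) = 1/e quoted below
print(g(10.0))   # ~5e-5: R_0 << R_lc (full regime)
```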
Substituting the variable R in the expression for n_TD,b(R, E) (equation 3) with 1 − e² (equation A4) and multiplying by the corresponding Jacobian determinant provides the number density of the bound stars, n_TD,b(e, E) (equation 8), in the range e_ll ≤ e ≤ e_ul, where e_ll and e_ul (equation 9) are the boundaries of eccentricity obtained from the limits R = R_lc and R = R_0, respectively. Note that n_TD,b(e, E) goes to zero outside of this eccentricity range. Substituting the variable R in the expression for N_TD,b(R, E) with 2|E|/(|E_t|β) (equation A8) and multiplying by the corresponding Jacobian determinant, we obtain N_TD,b(β, E) (equation 10) in the range 1 ≤ β ≤ 1/g(Q), where we used equations (3) and (6) for the derivation. Note that the number density vanishes at β = 1/g(Q), which corresponds to R = R_0 in the original number density (see equation 3). For Q ≫ 1, if ln β is negligible compared to −ln g(Q), equation 10 can be approximated as N_TD,b(β, E) ≃ 2Q|E|A_TD,b(E)/(|E_t|β²), verifying the β^−2 dependence in the full loss-cone regime suggested by Stone & Metzger (2016). We also notice that for Q = 1 (g(Q) = 1/e, with e here denoting Euler's number), which corresponds to the energy value at the critical radius (Frank & Rees 1976; Amaro-Seoane et al. 2004), i.e., for the fixed critical energy E = E_crit, we obtain the results of equation (11).

Number density of unbound stars

After the pioneering work by Cohn & Kulsrud (1978) and Shapiro & Marchant (1978) for bound stars, there were more general papers extending the domain of solution of the Fokker-Planck equation to the unbound stars in the galaxy. They focused on the tidal disruption event rate and studied the dependence of the event rate on the geometry of the host cluster (Magorrian & Tremaine 1999), the M_BH−σ relation (Wang & Merritt 2004), and the stellar mass spectrum (Stone & Metzger 2016). All of them used the standard phase-space variables of energy and angular momentum. The stellar number density in that case is written as n(E_tot, R), with E_tot = v²/2 − GM_BH/r + Φ_gal(r); the new term Φ_gal(r) denotes the gravitational potential generated by all stars of the nuclear star cluster and the galaxy inside a radius r (here, for example, in the case of a spherical system). In the following we argue that a direct variable transformation to our variables e and β, as before, is cumbersome and actually not really necessary. Let us first check the function R(β, E_tot), which defines the variable transformation from (R, E_tot) to (β, E_tot). For β we need to find the pericenter distance r_p as a function of E_tot and R. From our definition of R in equation 1 and the expression for E_tot above, we obtain an implicit equation (equation 12) whose smallest root in terms of r is r_p. Since that root depends on the functional form of Φ_gal(r), it is generally impossible to find an analytic solution; it has to be computed numerically for each galaxy.
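A form of equation (10) consistent with the properties just stated (it vanishes at β = 1/g(Q) and reduces to the quoted β^−2 limit when ln β ≪ −ln g(Q) = Q) is the following hedged reconstruction:

```latex
% Hedged reconstruction of equation (10): obtained by substituting
% R = 2|E|/(|E_t| beta) into A_TD,b(E) ln(R/R_0) and multiplying by the
% Jacobian |dR/d(beta)| = 2|E|/(|E_t| beta^2):
\begin{equation*}
  N_{\mathrm{TD,b}}(\beta, E)
    = \frac{2|E|\,A_{\mathrm{TD,b}}(E)}{|E_t|\,\beta^{2}}
      \,\ln\!\left(\frac{1}{g(Q)\,\beta}\right),
  \qquad 1 \le \beta \le \frac{1}{g(Q)} .
\end{equation*}
```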
In the case of eccentricity, another problem occurs: the definition of e for the two-body problem has no straightforward generalization for orbits in a more general star cluster or galactic potential. Typically, orbits in galactic potentials are not closed; generalized eccentricities may be defined using the angular momentum or the pericenter and apocenter (r_a) distances, e = (r_a − r_p)/(r_a + r_p), but this is not always a conserved quantity, except in a spherical potential. For the stars we are interested in, which come close to the SMBH, the two-body eccentricity will differ from a value computed far out in the galaxy. Therefore, we consider the situation of a two-body problem only, for a hyperbolic encounter between a star and the SMBH. We compute e at a place near the tidal radius, and convert the orbital energy of the star to E = E_tot − Φ_gal(r_t), which is positive for an unbound star. Adopting the relation between e and β for the hyperbolic orbit (equation A9), we obtain the eccentricity at disruption (equation 13). There is no simple and universal relation between e, E_tot, and R, due to the complicated β(R, E_tot) term, which results in a complex expression also for the transformation R(e, E_tot). Therefore, we do not use the number density of unbound stars in the form of Magorrian & Tremaine (1999), Wang & Merritt (2004), or Stone & Metzger (2016). Instead, we use a simpler approximation for the number density of unbound stars in terms of e and β near the tidal radius, which is also appropriate for the comparison with our N-body results (see Section 3).

Outside of the SMBH influence radius r_h, the loss cone has a negligible effect on the stellar distribution, because usually r_t ≪ r_h. Therefore the velocity distribution is close to a Gaussian along the principal axes of a velocity ellipsoid, also denoted an anisotropic Schwarzschild distribution; it allows for different velocity dispersions, e.g., in the radial and tangential directions (for star clusters, see Amaro-Seoane et al. 2004, but see also Kazantzidis et al. 2004 for a counterexample in galactic nuclei).
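Combining equation (A9) with the energy conversion above suggests the following hedged form for equation (13); the factor of 2 is an assumption here, chosen to match the β = 1 values quoted in Section 3:

```latex
% Hedged reconstruction of equation (13) for unbound stars near r_t,
% with |E_t| = G M_BH / r_t the energy scale at the tidal radius:
\begin{equation*}
  e = 1 + \frac{2E}{|E_t|\,\beta}
    = 1 + \frac{2\,r_t\,E}{G M_{\mathrm{BH}}\,\beta},
  \qquad E = E_{\mathrm{tot}} - \Phi_{\mathrm{gal}}(r_t) > 0 .
\end{equation*}
```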
In the following, we derive N(β) based on a simple cross-sectional ansatz. The cross section for stars that pass within a distance r from the SMBH (with gravitational focusing) is Σ(r) = πr²[1 + 2GM_BH/(rv²)]. When the star's specific kinetic energy v²/2 at the orbital apocenter is much smaller than the gravitational potential GM_BH/r at the pericenter of its orbit, as is the case discussed here, the cross section approximately reduces to Σ(r) ≈ 2πrGM_BH/v². Then the flow of stars that pass within r_p can be estimated as f(<r_p) = nΣv ≈ 2πr_p nGM_BH/v, where n and v are the stellar number density and stellar velocity at the place from which these stars come. Thus we get the relation f(<r_p) ∝ (r_p/r_t)r_t (see also equation 2 of Rees 1988, but note that we do not need to postulate isotropy here; it is sufficient to use the radial velocity only, because the tangential velocity is very small for loss-cone stars originating at a large distance from the black hole). Substituting β = r_t/r_p into the above relation, we find f(<r_p) ∝ 1/β; hence the number density N(β) is proportional to β^−2, and we write down the expression of equation (14). Then, substituting the variable β in the expression for N_TD,u(β, E) with E/[|E_t|(e − 1)] (equation A9) and multiplying by the corresponding Jacobian determinant, we obtain N_TD,u(e, E) (equation 15).

COMPARISON WITH NUMERICAL EXPERIMENTS

Here we compare the distribution of tidally accreted stars in terms of e and β, which we have analytically derived in the preceding section, with N-body simulations. To obtain better statistical quality, we use in these comparisons the dependence on e and β only, rather than the joint 2D distribution in (e, β) [or a 3D distribution in (E, e, β)], for reasons of statistical noise, by integrating over all energies as in equation (16), where we use the number densities of equations (8) and (10). For the number densities of unbound stars, we obtain N_TD,u(e) and N_TD,u(β) in the same way, but integrate from E = 0 to infinity. To evaluate the number densities from these equations we need an estimate of Q. Equation 5 shows that this requires the computation of the average angular momentum change per orbit, ⟨ΔJ²⟩.

To measure ⟨ΔJ²⟩ from the simulation, one needs to record the positions and velocities of all particles at very high frequency. We did not save such data from our models, but note that Vasiliev & Merritt (2013) have done such measurements and the results generally agree with the theoretical prediction, though with large scatter (see their Figure 8). Hence, we turn to the analytical solution for ⟨ΔJ²⟩ to construct our theoretical model. ⟨ΔJ²⟩ is computed at the apocenter of a stellar orbit, because two-body relaxation affects the orbit most strongly at the apocenter passage. This assumption can be justified because the orbiting star passes its apocenter so slowly that it has more time to interact with the surrounding stars, and the perturbing forces may also exert a non-negligible torque on the passing star (Touma & Tremaine 1997; Zhong et al. 2015).
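A minimal numerical check of this scaling, using illustrative values in Hénon-like units rather than the paper's model parameters:

```python
import numpy as np

# Illustrative values (assumptions, not the paper's data).
G, M_BH, r_t = 1.0, 0.01, 1e-5
n, v = 1.0e6, 0.5     # stellar number density and velocity far from the SMBH

def cross_section(r_p):
    # Gravitationally focused cross section:
    # Sigma = pi r_p^2 [1 + 2 G M_BH / (r_p v^2)].
    return np.pi * r_p**2 * (1.0 + 2.0 * G * M_BH / (r_p * v**2))

beta = np.array([1.0, 2.0, 4.0, 8.0])
f = n * cross_section(r_t / beta) * v   # flow of stars with pericenter < r_p
print(f / f[0])                         # ~[1, 1/2, 1/4, 1/8]: f(<r_p) ∝ 1/beta
# The differential density then scales as N(beta) = -df/dbeta ∝ beta^-2.
```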
From relaxation theory (Frank & Rees 1976; Merritt 2013), we obtain ⟨ΔJ²⟩ (equation 17), where J_c(r_a) = √(GM r_a) is the specific angular momentum of a circular orbit at r_a, t_dyn = r_a/σ(r_a) is the dynamical timescale, t_rel (equation 18) is the local relaxation time (Spitzer 1987), σ(r) is the velocity dispersion of the stars, ρ(r) is the radial density profile of the star cluster, m = M_c/N is the mass of a star, and ln Λ = ln(0.11N) is the Coulomb logarithm (Giersz & Spurzem 1994). Note that equation 17 is correct only qualitatively, as there are some cases where the assumption used in equation 17 is invalid, e.g., in ultrasteep stellar cusps (Fragione & Sari 2018; Stone et al. 2018); such cusps, however, are not present in our simulations.

We introduce a dimensionless parameter Q_boost, of order unity, to provide an approximate evaluation of Q. From equations (5) and (17), Q is then given by equation (19). Here we have used R_lc = J²_lc/J_c(r_a)². One can approximate r_a = a(1 + e) ≈ 2a for bound stars on highly eccentric orbits in the Keplerian potential, with the orbit's semimajor axis a. Since a = −GM_BH/(2E), we conclude that for a given density profile and velocity dispersion, Q and g(Q) become functions of E only, because of equation (7). However, the quantity −GM_BH/(2E) diverges as E approaches 0. In practice, we compute the exact value of r_a from the combined gravitational potential ϕ(r).

Note that for unbound tidally accreted stars, according to equations 14 and 15, the number density does not depend on Q.

Basic model

For comparison of the N-body simulations with the analytical results, we use the data of our previously published study (Hayasaki et al. 2018). We choose from that paper two models, each with particle number N = 512K and r_t = 10^−5. They are the models with the largest particle number and smallest tidal radius in that parameter study, and we consider them the ones closest to a real nuclear star cluster (though still not sufficient in terms of particle number). The two models differ only by their black hole mass: one has M_BH = 0.01, while the other has M_BH = 0.05 (referring to models 5 and 10 of Hayasaki et al. (2018), respectively), in units where the total cluster mass is unity.

Our spherical star cluster with N equal-mass stars and a star-accreting SMBH fixed at the center was initialized in the same way as in our previous papers (Hayasaki et al. 2018; Zhong et al. 2014): initially a Plummer model was used, which has a central flat core that adjusts to the gravity of the central black hole during a few dynamical orbits, producing a cusp-like initial density distribution. More details about the time evolution and the profiles of density and velocity dispersion can be found in Zhong et al. (2014). We use dimensionless Hénon units, in which G = M_c = 1 and the total energy of the system is E = −1/4 (Heggie 2014a,b). In the simulations, we take r_t as a fixed accretion radius, inside which all stars are regarded as tidally disrupted and are removed from the simulation. More details can also be found in Hayasaki et al. (2018).
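A sketch of how Q can be evaluated from these profiles, assuming ⟨ΔJ²⟩ ≈ J_c²(r_a) t_dyn/t_rel so that Q ≈ Q_boost t_dyn/(t_rel R_lc); the per-orbit normalization is an assumption here, absorbed into Q_boost:

```python
import numpy as np

G = 1.0  # Henon units

def t_rel(r, rho, sigma, m_star, N):
    # Local relaxation time (Spitzer 1987): 0.34 sigma^3 / (G^2 m rho lnL),
    # with Coulomb logarithm lnL = ln(0.11 N) (Giersz & Spurzem 1994).
    return 0.34 * sigma(r)**3 / (G**2 * m_star * rho(r) * np.log(0.11 * N))

def Q_of_ra(r_a, rho, sigma, R_lc, m_star, N, Q_boost=5.0):
    # Sketch of equation (19): per-orbit angular momentum diffusion,
    # <dJ^2> ~ J_c(r_a)^2 * t_dyn/t_rel, measured against the loss-cone
    # size R_lc = J_lc^2 / J_c(r_a)^2; normalization absorbed in Q_boost.
    t_dyn = r_a / sigma(r_a)
    return Q_boost * t_dyn / (t_rel(r_a, rho, sigma, m_star, N) * R_lc)

# Example with a hypothetical power-law cusp (purely illustrative):
rho   = lambda r: 0.01 / (4.0 * np.pi) * r**(-1.75)
sigma = lambda r: np.sqrt(G * 0.01 / r)
print(Q_of_ra(0.05, rho, sigma, R_lc=4e-4, m_star=1/524288, N=524288))
```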
Our N-body models adopt an initially isotropic velocity distribution; therefore some stars are placed inside the loss cone at the beginning. These stars cause a burst of TDEs. However, such a burst cannot last long, because the loss cone runs out of stars within one orbital period (at most a few N-body time units), after which the system enters the angular-momentum-diffusion-dominated phase. As a result, this initial surge of TDEs accounts for less than a few percent of all TDEs; hence its impact on the validation of our theoretical models is negligible. After the initial adjustment, a central density cusp is established in the N-body simulation, even though the total simulation covers only about one-third of a half-mass relaxation time. Although the simulation times of these two models were less than one-third of the half-mass relaxation time, Preto et al. (2004) have shown that this is enough for the system to achieve the CK78 distribution.

Before approaching our final goal of comparing the number densities of equations (16) with the N-body simulations, we first check the quantities Q = Q(E), A_TD,b(E), and A_TD,u(E), because they are required for the calculation of the analytical number densities. By definition, A_TD,b(E) depends on N_TD,b(E) and Q (equation 4), while A_TD,u(E) simply equals N_TD,u(E). In order to evaluate these quantities, we measure N_TD,b(E) and N_TD,u(E) and approximate them by a double-power-law function. Figure 1 illustrates the results obtained from the N-body model with M_BH = 0.01, measured at the end of the simulation. In both panels, the red histograms show the N-body data of bound and unbound tidally disrupted stars as a function of their energy. The stars are distributed between E_min (< 0) and E_max (> 0). Note that E_min (roughly −2 in model units) is much larger than E_t (−500 in model units). This is consistent with the loss-cone theory picture in which the stars originate far from the tidal disruption radius. N_TD,b(E) and N_TD,u(E) are used to compute the analytical expressions for A_TD,b(E) and A_TD,u(E).

To obtain an analytic expression for Q, we use a similar method as before to approximate the density profile and velocity dispersion profile of the star cluster from the N-body data. To model the density profile, we fit the N-body data with the double-power-law function of equation (20). The 1D velocity dispersion is then obtained via the Jeans equation (equation 21, with G = 1). An example of the density and velocity dispersion profiles is depicted in Figure 2; they are measured from the simulation data once the density profile has stabilized. We also show the results of the double-power-law fitting of the density profile and the solution of the Jeans equation for the velocity dispersion, which smooth the fluctuations in the data and are used to calculate Q. We observe that in the central part, well inside the influence radius (r ≤ 0.03), the simulated density profile shows a steeper cusp (although very noisy due to the low particle number) than the prediction of the double-power-law fitting. This deviation only mildly affects our modeling, since the stellar mass in this cusp is less than 10^−3 and almost none of the disrupted stars originate from this region. In our N-body model, the influence radius is defined as the radius within which the enclosed stellar mass equals the SMBH mass. The influence radius in the model with M_BH = 0.01 (M_BH = 0.05) roughly equals 0.1 (0.2).
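A sketch of this fitting-plus-Jeans procedure; the specific double-power-law parameterization and the parameter values below are assumptions, not the paper's fit:

```python
import numpy as np
from scipy.integrate import quad

G, M_BH = 1.0, 0.01

def rho(r, rho0=0.05, r0=0.3, s_in=1.0, s_out=4.0):
    # One common double-power-law form (the paper's equation 20 may differ
    # in detail): inner slope s_in, outer slope s_out, break radius r0.
    return rho0 * (r / r0)**(-s_in) * (1.0 + r / r0)**(s_in - s_out)

def sigma2(r, r_max=50.0):
    # Isotropic spherical Jeans equation with a central point mass:
    #   rho sigma^2(r) = int_r^rmax rho(s) G [M_BH + M_*(<s)] / s^2 ds
    M_star = lambda s: quad(lambda x: 4.0*np.pi*x**2 * rho(x), 1e-6, s)[0]
    integrand = lambda s: rho(s) * G * (M_BH + M_star(s)) / s**2
    return quad(integrand, r, r_max)[0] / rho(r)

print(np.sqrt(sigma2(0.05)))   # velocity dispersion at r = 0.05 (illustrative)
```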
From the fitted density profile (equation 20), we also compute the combined gravitational potential in the star cluster, ϕ(r), and the apocenter r_a of the (zero angular momentum) radial orbit with a given orbital energy E. An example of r_a(E) is shown in the right panel of Figure 2. Figure 3 depicts Q as a function of r_a, evaluated with equation 19. Here, the density and velocity dispersion profiles are modeled by equations 20 and 21 (see also Figure 2). For comparison purposes, we adopt two different values, Q_boost = 1 and Q_boost = 5. From the figure, we find that both the empty loss-cone regime (Q < 1) and the full loss-cone regime (Q > 1) are present in our N-body data. The above criterion for the empty and full loss-cone regimes is obtained by comparing the size of the loss cone with the size of the angular momentum diffusion step (see equation 5). Magorrian & Tremaine (1999) have proposed another criterion, in which the loss-cone regimes are separated by Q ≃ −ln R_lc, which comes from a consideration of the loss-cone flux. This alternative criterion does not change our conclusion that both the empty and full loss-cone regimes exist in our N-body models.

Distribution of tidally accreted stars in eccentricity and penetration factor

In the previous subsection, we derived theoretical, analytical expressions for the distribution of tidally accreted stars (bound and unbound) in terms of eccentricity e and penetration factor β; to achieve that, we used double-power-law functions for the stellar density, the Jeans equation for the stellar velocity dispersion, and a double-power-law function for the energy distribution of tidally disrupted stars. Now we check the final results of the previous subsection for N_TD,b,u(e) and N_TD,b,u(β) (see equation 16) directly against the N-body data of particles arriving at the tidal radius. Figure 4 shows the dependence of the number density of bound stars on the orbital eccentricity. The red histogram represents the simulated number densities. The uncertainties of the measurements are computed based on the Poisson error; the 1σ confidence level single-sided upper and lower limits are computed with equations (9) and (12) of Gehrels (1986). The black and cyan curves represent the theoretical number densities obtained with different Q_boost values (note that this specific color coding for Q_boost is used in Figures 3, 4, 5, and 8). We find that Q_boost only mildly affects the theoretical N_TD,b(e). In both panels, the distributions are quite narrow near the parabolic case (e = 1). The simulated number densities are also in good agreement with the theoretical ones, except for some stronger fluctuations around e ∼ 0.996 and e ∼ 0.998, where the particle resolution of our N-body simulations is not sufficient. The number density of the M_BH = 0.01 case is wider in orbital eccentricity than that of the M_BH = 0.05 case. This trend can be interpreted as follows: from equation (9), the lowest eccentricity of the bound stars can be estimated as e_ll(M_BH, E_min) = √(1 − 4r_t|E_min|/(GM_BH)). We find E_min = −2 in the M_BH = 0.01 case, whereas E_min = −4 (ignoring the isolated bins) in the M_BH = 0.05 case. Substituting each quantity into this equation, we find e_ll(0.01, −2) = 0.9960 and e_ll(0.05, −4) = 0.9984. These evaluations are consistent with the number density distributions shown in Figure 4.
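Computing r_a(E) amounts to a one-dimensional root solve of E = ϕ(r_a); a minimal sketch, with a hypothetical Plummer-like stellar potential standing in for the fitted one:

```python
import numpy as np
from scipy.optimize import brentq

G, M_BH = 1.0, 0.01
# Hypothetical combined potential: SMBH plus a Plummer-like stellar component
# (a stand-in for the potential derived from the fitted density profile).
phi = lambda r: -G * M_BH / r - 1.0 / np.sqrt(1.0 + r**2)

def apocenter(E):
    # For a zero-angular-momentum orbit the apocenter satisfies E = phi(r_a);
    # phi is monotonically increasing, so the root is unique.
    return brentq(lambda r: phi(r) - E, 1e-8, 1e4)

print(apocenter(-0.5))   # r_a for a bound radial orbit with E = -0.5
```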
Figure 5 shows the dependence of the number density of bound stars on the penetration factor β. The figure format of the two panels is the same as for Figure 4. We find that Q_boost has a strong effect on the theoretical N_TD,b(β). For bound stars, the maximum value of Q is achieved at r_a = r_h. Figure 3 shows that in the M_BH = 0.01 case, Q(r_h) = (1.89, 9.45) for Q_boost = (1, 5), respectively. Since N_TD,b(β, E) = 0 when β > 1/g(Q), the integrated N_TD,b(β) vanishes beyond β = 7.4 in the Q_boost = 1 case, while for the other Q_boost case the vanishing point extends to much higher β, as shown in the left panel of Figure 5 [the vanishing behavior of the theoretical models in the right panel (M_BH = 0.05) can be understood in the same way]. We find that the analytical number densities obtained with Q_boost = 5 are in good agreement with the simulated ones within the range β ≲ 10. For β ≳ 10, the deviation between the analytical and simulated number densities grows larger, because of the poor numerical resolution of the N-body models.

Figure 3. Dependence of Q on the apocenter distance r_a for bound stars. Hénon units are adopted for the plot. The solid ocher line and the horizontal black dotted line delineate the −ln R_lc curve (see Magorrian & Tremaine 1999 for details) and the Q = 1 line, respectively, which provide the criteria for distinguishing the empty and full loss-cone regimes. The vertical dotted red line represents the influence radius of the SMBH. The solid black and light blue lines indicate the Q curves with Q_boost = 1 and Q_boost = 5, respectively, obtained by applying the fitted simulation results to equation (19).

Figures 6 and 7 compare the analytical number densities N_TD,u(e) and N_TD,u(β) with the simulated number densities. Our theoretical predictions match the simulated number densities well in both figures. As in the bound-star case, the number densities of the M_BH = 0.01 case are more widely distributed over the eccentricity than in the M_BH = 0.05 case. Substituting E_max and |E_t| = GM_BH/r_t into equation (A9) with β = 1, we obtain e_max(M_BH, E_max) = 1 + 2r_t E_max/(GM_BH). We find from the simulated values of E_max that e_max(0.01, 1.259) = 1.0025 and e_max(0.05, 1.585) = 1.0006. This suggests that the number density is more widely distributed over the orbital eccentricity in the star cluster with the less massive black hole.
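These values follow from one-line arithmetic in Hénon units:

```python
G, r_t = 1.0, 1e-5
e_max = lambda M_BH, E_max: 1.0 + 2.0 * r_t * E_max / (G * M_BH)
print(e_max(0.01, 1.259))   # -> 1.0025
print(e_max(0.05, 1.585))   # -> 1.0006
```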
Distribution of stars on the eccentricity-penetration factor plane

At the time of tidal disruption, it is the eccentricity e and the penetration factor β that can be related to the observational characteristics (e.g., the light curve). Also, in our N-body simulations we have a direct handle to determine these two quantities for any tidal disruption locally, without knowing anything about the large-scale distribution of stars and the gravitational potential. Therefore, we check here what we can deduce from our previous analytical results for the distribution of tidally disrupted stars on the e-β plane, and again compare the expectations with the simulation data. For a given energy E = −GM_BH/(2a) at the R = R_0 limit, we can define the minimum orbital eccentricity of a bound star, e_min (equation 22), by using r_t/β = a(1 − e), where β = 1/g(Q) is obtained through equations (6) and (A8). Adopting β = 1, we find that equation (22) corresponds to equation (9) at r_t/a ≪ 1: e_ll = √(1 − 4r_t|E|/(GM_BH)) ≈ 1 − 2r_t|E|/(GM_BH). The unbound stars all have E ≤ E_max. Using again |E_t| = GM_BH/r_t and equation (A9), we obtain e_max (equation 23). For Q = 1, we obtain the orbital eccentricity e_lcb on the boundary between the empty and full loss-cone regimes (equation 24), where the corresponding semimajor axis, a_lcb, can be obtained from equation (19) with Q = 1 (equation 25); here we have adopted r_a = 2a, and r_a is computed from the combined gravitational potential of stars and SMBH (see the right panel of Figure 2 for the result). In this case, we find e_lcb ≈ 1 for β ≥ 1.

Figure 8 shows the distribution of bound and unbound stars that undergo TDEs on the e-β plane. The solid magenta curve denotes e_max. The e_min (solid) and e_lcb (dashed) lines are plotted for two different Q_boost values, indicated by the colors. The value of E_max is obtained from our N-body simulations, and a_lcb ≈ r_crit/2 is read off from Figure 3. We notice that Q_boost = 1 gives a poor estimate of e_min: many data points lie outside the boundary. On the other hand, the Q_boost = 5 models show better agreement with the data. While the space between e_lcb and e_min corresponds to the empty loss-cone regime, the space between e_lcb and e = 1 corresponds to the full loss-cone regime. The stars located above the e = 1 axis are supplied to the black hole from the Maxwellian distribution regime. We find that only a small fraction of the stars originate from the empty loss-cone regime, with the corresponding values of β distributed around unity, whereas most of the stars are supplied from the region between e_lcb and e_max over a much wider range of β. These results are consistent with loss-cone theory: owing to the diffusive nature of the empty loss-cone regime, the loss-cone flux there is much smaller than in the full loss-cone regime, and stars inside the empty loss cone cannot penetrate far below the tidal radius.

We also find some stars outside e_min or e_max. These outliers seem to violate loss-cone theory. There are two possible reasons for this unexpected behavior: first, more energetic close two-body encounters occur in the N-body simulations, leading to an enhancement of the angular momentum exchange, so that stars can reach R < R_0 (Lin & Tremaine 1980). Second, the quantity Q depends on the density and the velocity dispersion of the star cluster, which can fluctuate with radius and evolve with time. The number of outliers is very small, so e_min and e_max are generally useful as limits.
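Hedged reconstructions of the boundary curves consistent with the definitions just given (to be checked against equations 22 and 23):

```latex
% Bound side from r_t/beta = a(1-e); unbound side from equation (A9)
% evaluated at E = E_max. Both are reconstructions, not quotations:
\begin{align*}
  e_{\min}(\beta) &= 1 - \frac{r_t}{\beta\,a}, &
  e_{\max}(\beta) &= 1 + \frac{2\,r_t E_{\max}}{G M_{\mathrm{BH}}\,\beta},
\end{align*}
% with e_lcb given by evaluating e_min at the Q = 1 semimajor axis a_lcb.
```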
The critical eccentricities (equation 26) are given by Park & Hayasaki (2020), where k = 0 is the original case discussed in Hayasaki et al. (2018). They found by N-body experiments that stars on marginally eccentric and marginally hyperbolic orbits are the main source of TDEs in a spherical nuclear star cluster.

In our case, we have m⋆ = 1/N; for the simulations used in Figure 8 we find, for example, e_crit,1 ≈ 0.885 and e_crit,2 ≈ 1.12 for the M_BH = 0.01, β = 1, k = 0 case, and e_crit,1 ≈ 0.933 and e_crit,2 ≈ 1.067 for the M_BH = 0.05, β = 1, k = 0 case. In any case, the maximum and minimum eccentricities predicted and found in the simulation, as shown in the figure, are much smaller than e_crit,2 and much larger than e_crit,1, i.e., very close to the parabolic case. This means, in the terminology of Hayasaki et al. (2018), that all our TDEs are either marginally eccentric or marginally hyperbolic. This is not surprising, because our analysis is based on the same N-body data; however, in this subsection we have derived much narrower limits, e_min and e_max, based on the diffusion theory going back to CK78.

DISCUSSION

We have derived a semi-analytical model for the number distribution in eccentricity e and penetration factor β of tidally disrupted stars in a spherical nuclear star cluster around an SMBH. It has been compared to the results of our previously published direct N-body simulations (Hayasaki et al. 2018) with good agreement. To construct our model, we use double-power-law functions to fit the stellar density of the N-body data and compute the velocity dispersion profile via the Jeans equation, and we likewise use a double-power-law function to model the energy distribution of the bound and unbound tidally disrupted stars. Our method is based on the classical results of loss-cone diffusion for bound stars, using the Fokker-Planck equation (CK78). For unbound stars, we use a simple approximation based on the assumed Maxwellian character of the stellar distribution function. For an improved treatment of unbound stars, we would need to consider the effects of the local self-gravity of the stars and other external factors such as the galactic potential. Our model is useful for discussing the scaling behavior of TDE statistics; current N-body simulations are still far away from realistic particle numbers and sizes of tidal radii (Hayasaki et al. 2018). Nevertheless, simulation data like the ones presented here have been used to extrapolate from our unphysically small particle numbers and large tidal disruption radii to real galactic nuclei; typical results for the TDE rate in N-body simulation models range around or a little above 10^−6 per year per galaxy (Zhong et al. 2014; Panamarev et al. 2019; Li et al. 2023), which is the lower bound of Stone & Metzger (2016); the latter paper gives a range of up to 10^−4 per year per galaxy, in accord with recent work by Bortolas et al. (2023). When comparing such rates, one should bear in mind that the goal of our paper is not to give accurate predictions of observed TDE rates. The cited papers include stars from a much wider range of origins, out into the bulge; our work has not yet properly accounted for a realistic mass spectrum or for tidal disruption properties depending on stellar type and parameters (but this work is in progress). Another parameter not yet carefully checked in our models is the effect of the black hole mass (relative to the cluster mass and relative to the stellar particle mass).
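For k = 0 and β = 1, the quoted values are reproduced by the form e_crit,1/2 = 1 ∓ (2/β)(M_BH/m⋆)^(−1/3) of Hayasaki et al. (2018); we treat this form as an assumption of the following check:

```python
N = 512 * 1024          # particle number, so m_star = 1/N in Henon units
m_star = 1.0 / N
beta = 1.0
for M_BH in (0.01, 0.05):
    d = (2.0 / beta) * (M_BH / m_star)**(-1.0 / 3.0)
    print(M_BH, round(1.0 - d, 3), round(1.0 + d, 3))
# -> 0.01: (0.885, 1.115)   0.05: (0.933, 1.067), matching the quoted values
```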
Let us first qualitatively discuss how the number densities derived in our work would vary with the black hole mass relative to the star cluster. We assume a power-law density profile in the cusp, ρ(r) = ρ_h (r/r_h)^(−s), where ρ_h = (3 − s)M_BH/(4πr_h³) is the density at the influence radius according to the definition of r_h; note that the total stellar mass inside the influence radius is equal to the black hole mass. The velocity dispersion inside the influence radius follows σ²(r) ≈ GM_BH/r. A critical radius, where star consumption is balanced by replenishment through two-body relaxation, is defined by the condition θ_lc = θ_D (in the notation of Frank & Rees (1976) and Amaro-Seoane et al. (2004)) or Q = 1 (in our notation, following CK78). Following Frank & Rees (1976) and Brockamp et al. (2011), we obtain the scaling relation of equation (27). As we have shown in the previous section, a Q_boost factor is important for matching the theoretical model to the N-body results, so we also add the Q_boost factor to equation 27; in the following we adopt Q_boost = 5. Note that the left term yields, for the traditional value s = 7/4, the result r_crit ∝ (r_t M_BH)^(4/9), consistent with Baumgardt et al. (2004). The right-hand side delivers a different scaling, because we have additionally used results of the scaling procedure described in Zhong et al. (2014): r_t ∝ M_BH^(1/3), and we simply adopt 1/2 for the scaling of r_h with M_BH instead of the value 0.54 obtained from the M_BH−σ relation, log(M_BH/M⊙) = 8.18 + 4.32 log[σ/(200 km s^−1)] (Schulze & Gebhardt 2011). For the ratio of critical to influence radius we then obtain equation (28), using our scaling and ignoring the slowly varying logarithmic term. Therefore, for s < 4, the ratio of critical to influence radius increases with black hole mass, as was already observed for the standard case by Frank & Rees (1976). Figure 9 depicts r_crit/r_h as a function of M_BH (see also equation 28) in the range 10³ M⊙ ≤ M_BH ≤ 10⁸ M⊙, where we adopt s = 1.75 for the Bahcall-Wolf cusp (Bahcall & Wolf 1976) and s = 1 for the cusp obtained from the N-body simulations.

We assume that the semimajor axis a_min corresponding to E_min is given by a fixed fraction of r_crit, a_min = f r_crit, where f is a parameter determined by the N-body simulations; we then have E_min = −GM_BH/(2f r_crit) ∝ −M_BH^(7/54) for s = 7/4, so |E_min| decreases as M_BH decreases. Substituting E_min = −GM_BH/(2f r_crit) into equation (22), we obtain e_min as a function of the black hole mass (equation 29). Since the ratio of r_t to r_crit is less than 3 × 10^−4 over the whole range of black hole mass, e_min is always larger than 0.994 for f = 0.05 and β = 1. Because 1 − e_min ∝ r_t/r_crit ∝ M_BH^((s−9)/[6(4−s)]), e_min moves closer to 1 for s < 4 as the black hole mass grows. We also confirm that the black hole mass dependence of the minimum pericenter radius is the same as that of the tidal disruption radius, i.e., r_p,min = a_min(1 − e_min) ∝ M_BH^(1/3).
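The exponent of this scaling is easy to evaluate for the two cusp slopes used in Figure 9:

```python
# Exponent of the scaling 1 - e_min ∝ r_t/r_crit ∝ M_BH^((s-9)/(6(4-s))),
# evaluated for the two density slopes adopted in Figure 9.
for s in (1.75, 1.0):
    print(s, (s - 9.0) / (6.0 * (4.0 - s)))
# -> s = 1.75: -0.537;  s = 1.0: -0.444  (negative: e_min -> 1 as M_BH grows)
```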
Next, let us examine how e_min and e_lcb (see equations 24 and 29) depend on the black hole mass in the e-β plane. The left panel of Figure 10 shows this for the M_BH = 10³ and 10⁴ M⊙ cases. The e_lcb curve of the M_BH = 10³ M⊙ case lies higher than that of the M_BH = 10⁴ M⊙ case (thick curve). In addition, as mentioned in Section 3.3 (see also Figure 8), most of the bound stars are distributed around the e_lcb curve in this plane. This suggests that the stars are distributed closer to e = 1 over the whole range of β as the black hole mass increases. The right panel of Figure 10 depicts how the e_min curve depends on the black hole mass in the e-β plane. It is clear from the panel that e_min gets closer to 1 as the black hole mass increases. This is consistent with 1 − e_min ∝ M_BH^((s−9)/[6(4−s)]) (s < 4), as estimated in the previous paragraph. In summary, Figure 10 suggests that stars are supplied to the black hole on extremely marginally eccentric to parabolic orbits in a spherical cluster with an M_BH > 10⁷ M⊙ black hole.

Our analysis of the e-β distributions could provide a good tool to probe the dynamical status of the stars causing TDEs in a star cluster. Hayasaki et al. (2013, 2016) and Bonnerot et al. (2016) studied accretion disk formation by performing hydrodynamic simulations in which (β, e) = (5, 0.8) was adopted as the initial values, although other combinations of (e, β) were also used. This parameter set is clearly ruled out in our model. However, this does not mean that such a very tightly bound TDE cannot occur. Tightly bound stars are likely supplied to the loss cone not by two-body encounters but by other mechanisms: the tidal separation of stellar or compact binaries approaching the SMBH (Fragione & Sari 2018), accretion-disk-mediated TDEs (Kennedy et al. 2016), and TDEs produced by a recoiling SMBH (Gualandris & Merritt 2008; Li et al. 2012) or by a merging SMBH binary (Hayasaki & Loeb 2016; Li et al. 2017). Finally, we note that star clusters possessing a radially biased velocity distribution, especially at high binding energy, or having a nonspherical gravitational potential may help to increase the rate of TDEs with e < 1 (not too close to 1) and extend the β distribution to β > 1 in the empty loss-cone regime; see also the detailed discussion at the end of Section 5.

CONCLUSION

Understanding TDEs and their light curves provides a clue to the properties of the central SMBH, the accretion disk around it, and the stellar density and velocity distributions in the nuclear star cluster surrounding the SMBH. The link between TDEs and central star clusters in galactic nuclei has been a classical subject of seminal papers, such as Frank & Rees (1976) and Bahcall & Wolf (1976), which found the classical density distribution near central SMBHs. Dokuchaev & Ozernoi (1977a,b) first noted that accretion and the tidal disruption of stars with low angular momentum cause the density profile to flatten out toward the black hole (the energy distribution function f(E) drops toward zero, as they showed; today we would call this the empty loss-cone region; see also Ozernoi & Reinhardt (1978) for a summary of the topic at the time). CK78 put this on a more quantitative footing by using the technique of solving the orbit-averaged Fokker-Planck equation. Among the classical work in this field, Rees's conjecture about the fate of tidal debris (Rees 1988) is also most noteworthy; it has been expanded more recently by Hayasaki et al. (2018) and Park & Hayasaki (2020), who look for critical eccentricities separating partial from full mass loss (hyperbolic) and partial from full mass accretion (eccentric).

Our study generalizes and expands this by computing approximate distributions of bound and unbound stars, predicting analytically (and comparing with N-body data) the number densities as functions of eccentricity e and penetration factor β, both of which are key parameters for predicting the observational appearance of TDEs.
By following the generalized model of CK78 for bound and unbound stars, we predict that tidally disrupted stars, for all penetration factors, occupy only a small range in eccentricities, much smaller than the critical eccentricities cited above. We estimate minimum, maximum, and typical eccentricities for tidally disrupted stars. They are all very close to the parabolic case, either very marginally eccentric or very marginally hyperbolic (see Fig. 8).

Amaro-Seoane & Spurzem (2001) and Amaro-Seoane et al. (2004) proposed another model for the loss cone in a spherical star cluster. It is interesting to note that they also derived the density and velocity dispersion of bound and unbound loss-cone stars by using moment equations of the basic Fokker-Planck equation and very similar principles to CK78 and this paper. Due to the use of moment equations (the so-called gaseous model of star clusters; see, e.g., Giersz & Spurzem 1994), their analysis is based entirely on density and velocity profiles rather than on orbits with energy and angular momentum (or eccentricity and penetration factor). Their model also takes into account an anisotropic velocity distribution for the unbound stars. In the future, a more quantitative comparison of the two models could be done. Our theoretical predictions have all been tested against the data of our previously published direct N-body simulations (Hayasaki et al. 2018); comparison with our analytical model helps us understand the scaling behavior of the N-body simulations, since we still cannot reach realistic particle numbers with them. Our primary conclusions are summarized as follows:

1. Our results provide the number density of bound tidally disrupted stars as a function of orbital eccentricity e and penetration factor β; in practice, we used the cumulative numbers of disrupted stars, N_TD(e) and N_TD(β), over some simulated time, since they can be directly compared with simulation results. To obtain them, we fit the stellar density with a double-power-law profile and solved for the corresponding velocity dispersion via the Jeans equation. We also used double-power-law functions to model the energy distribution of tidally disrupted stars (obtained from the N-body data).

2. From these results, we have analytically derived three characteristic orbital eccentricities in the loss-cone region: e_min, e_max, and e_lcb, where e_min and e_max are the minimum and maximum values for a given β, respectively, and e_lcb is the orbital eccentricity at the boundary between the empty and full loss-cone regimes. These eccentricities are given by equations (22), (23), and (24), respectively. We have confirmed by N-body experiments that the stars causing TDEs are distributed between e_min and e_max on the e-β plane. Moreover, we find that most of the bound stars are concentrated between e_lcb and e = 1, i.e., in the full loss-cone regime, whereas the remaining bound stars originate from the empty loss-cone regime. This result is consistent with loss-cone theory.

3. We conclude from the limiting eccentricity values that they are very close to the parabolic case, and far away from the critical eccentricities for complete debris accretion or complete debris escape from the SMBH. We have shown that this conclusion holds also for larger, more realistic black hole masses.
Our model of angular momentum diffusion at a given energy value E, as given in equation 19, uses a free parameter Q_boost for fitting to our simulation results. Merritt (2013) suggested using the steady-state solution of a Fokker-Planck equation in angular momentum space, for every energy value, in order to obtain the flux across the loss-cone boundary, i.e., a value of Q_boost in our terminology. This concept is based on the assumption that a steady state in angular momentum space is achieved much faster than in energy space; it is used by the PhaseFlow 1D Fokker-Planck code (Vasiliev 2017). In our paper, we prefer not to follow such a two-timescale approach, and rather keep all our fitting procedures in energy space and use the free factor Q_boost. There may be several factors that could affect the clean separation of the angular momentum and energy diffusion timescales. Most notable are rotation and axisymmetric gravitational potentials (see our earlier 2D Fokker-Planck models with a full 2D representation of angular momentum diffusion near the loss cone in Fiestas & Spurzem (2010) and Fiestas et al. (2012), based on the 2D Fokker-Planck code of Einsel & Spurzem (1999)), but strong anisotropy could also have an impact here (Szölgyén et al. 2019). Arguably, the use of PhaseFlow will be a good method to quickly obtain the n(e) and n(β) distributions for TDEs in galaxy models, using energy distributions, directly aimed at real systems (Pfister et al. 2019, 2020; Bortolas et al. 2023). In any case, from the knowledge of n(e) and n(β) one could estimate the distribution of the peak mass fallback rate in different galaxies, since the peak mass fallback rate depends on both β (Guillochon & Ramirez-Ruiz 2013) and e (Hayasaki et al. 2013; Park & Hayasaki 2020).

Some issues remain subjects for further work. In our N-body simulations, we have adopted a fixed position and mass for the central black hole. For a star cluster with an intermediate-mass black hole (IMBH), for example, the ratio of black hole mass to stellar mass will be smaller than in this paper, and the Brownian motion of the IMBH will not be suppressed. The Brownian motion of the black hole modifies the energy distribution of stars, so the number density and the e-β distributions can be significantly affected; this is the subject of our future work.

Our model is a high-resolution direct N-body model of a nuclear star cluster, following individual stellar orbits and TDEs. We have measured the distribution of orbital parameters of tidally disrupted stars and compared the results with a semi-analytical model. Our N-body models are not restricted to spherical symmetry, even though in this paper we study only spherical nuclear star clusters. We do not intend to predict detailed TDE rates for specific galaxies, unlike other models based on 1D Fokker-Planck theory (see, e.g., Stone & Metzger 2016; Pfister et al. 2019, 2020; Bortolas et al. 2023). Rather, we are interested in the analysis of the stellar distribution, relaxation, and accretion processes only in the inner zone of a nuclear star cluster (stellar mass limited to ten times the black hole mass). Models based on 1D Fokker-Planck theory are computationally much faster and can extend much farther out, but they rely on approximations such as spherical symmetry and a steady state in angular momentum diffusion.
Future work in the domain of our N-body simulation model is to include a stellar mass spectrum, stellar populations with different ages, direct stellar collisions, and the relativistic dynamics of stellar-mass black holes in the nuclear star cluster. This goes along with a more realistic treatment of tidal disruptions (partial and full; Zhong et al. 2022; MacLeod et al. 2013) as well as direct plunges and relativistic or dissipative inspirals (see, e.g., our recent paper Li et al. 2023). Furthermore, the assumption of spherical symmetry will be relaxed in favour of rotating, axisymmetric (Fiestas & Spurzem 2010; Zhong et al. 2015) and triaxial models (Norman & Silk 1983; Poon & Merritt 2002, 2004; Merritt & Poon 2004). All of these will disturb the steady-state picture underlying our current paper, have interesting consequences for TDEs, and also produce gravitational-wave events instead of TDEs.

The Jacobian determinants corresponding to the variable transformations used above are summarized in Table 1; note that the Jacobian determinants are taken with a positive sign. In the simulation, we record the position r and velocity v at the time when a star enters r_t. The two-body eccentricity of an unbound star is e = √(2EJ²/(GM_BH)² + 1), where J = |r × v| and E = |v|²/2 − GM_BH/r. Then β is computed with equation A9.

Figure 1. Energy dependence of the cumulative number densities for both the bound (left panel) and unbound (right panel) disrupted stars in the M_BH = 0.01 case. In each panel, the red histogram shows the simulated number density (dN/dE), whereas the gray line represents the double-power-law curve fitted to the simulated number density, using a function of the same form as equation (20) but with the density and radius replaced by the number density and energy, respectively.

Figure 2. Radial profiles of the density (left) and of the square of the stellar velocity dispersion (middle) for the model cluster with M_BH = 0.01. In the left panel, the red dots and gray solid line represent the simulated and numerically fitted density profiles, respectively. In the middle panel, the red dots and gray solid line denote the square of the stellar velocity dispersion evaluated from the simulation and from equation (21), i.e., the Jeans equation, respectively. The right panel depicts the apocenter distance r_a as a function of the specific orbital energy E (= v²/2 − GM_BH/r) for a radial orbit with zero angular momentum. The horizontal blue dotted line indicates the influence radius r_h, whereas the vertical black dotted line corresponds to E = 0.

Figure 4. Dependence of the number density of the bound stars on the orbital eccentricity. The red histogram represents the simulated number density, while the solid black and light blue lines delineate the theoretical number densities with Q_boost = 1 and Q_boost = 5, respectively. Note that the theoretical number density is computed with equation (16). The left panel corresponds to the M_BH = 0.01 case, the right panel to the M_BH = 0.05 case. The error bars indicate the statistical uncertainty corresponding to the standard deviation.

Figure 5. Dependence of the number density of the bound stars on the penetration factor β. The figure format is the same as in Figure 4, but for β.

Figure 6. Same format as Figure 4, but for the unbound stars. Equation 15 is used to evaluate the theoretical number density.

Figure 7. Same format as Figure 5, but for the unbound stars.

Figure 8.
Distribution of the stars that can cause TDEs on the e-β plane (left: M_BH = 0.01; right: M_BH = 0.05). The solid magenta curve denotes e_max. The e_min (solid) and e_lcb (dashed) curves are plotted for two different Q_boost values, indicated by the colors (see also equations 22-24).

Figure 9. Black hole mass dependence of r_crit/r_h. The two slopes for the density profiles, s = 1.75 (Bahcall-Wolf cusp) and s = 1 (N-body simulations), are adopted.

Figure 10. Black hole mass dependence of the two characteristic eccentricities e_min and e_lcb in the e-β plane. We adopt the Bahcall-Wolf density cusp and Q_boost = 5 for all models. The left panel shows the β dependence of e_min and e_lcb for the M_BH = 10³ M⊙ and 10⁴ M⊙ cases. The dashed lines denote the eccentricity e_lcb at the boundary between the empty and full loss-cone regimes (see equation 24), whereas the solid lines denote the eccentricity e_min of the most tightly bound stars (see equation 29). The different colors indicate the different black hole masses. In the right panel, all lines represent e_min; different line styles are used for the different black hole masses in the range 10³ M⊙ ≤ M_BH ≤ 10⁸ M⊙.

S.Z. has been supported by the National Natural Science Foundation of China (NSFC 11603067) and acknowledges support from the Yunnan Astronomical Observatories, Chinese Academy of Sciences. K.H. has been supported by the Korea Astronomy and Space Science Institute (KASI) under the R&D program supervised by the Ministry of Science, ICT and Future Planning, and by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (2016R1A5A1013277, 2017R1D1A1B03028580, and 2020R1A2C1007219 (K.H.)). K.H. has been supported by the National Supercomputing Center with supercomputing resources including technical support (KSC-2019-CRE-0082 (K.H.)) and in part by the National Science Foundation under Grant No. NSF PHY-1748958. K.H. has also been financially supported during the research year of Chungbuk National University in 2021. The authors acknowledge the Yukawa Institute for Theoretical Physics (YITP) at Kyoto University; discussions during the YITP workshop YITP-T-19-07, the International Molecule-type Workshop "Tidal Disruption Events: General Relativistic Transients", were useful to complete this work. The authors also acknowledge support by the Chinese Academy of Sciences (CAS) through the Silk Road Project at NAOC. We are grateful for the support from the Sino-German Center (DFG/NSFC) under grant no. GZ1289. S.L., P.B., and R.S. acknowledge the Strategic Priority Research Program (Pilot B) "Multi-wavelength gravitational wave universe" of the Chinese Academy of Sciences (No. XDB23040100). S.L. and R.S. acknowledge the Yunnan Academician Workstation of Wang Jingxiu (No. 202005AF150025). The work of P.B. was supported by the Volkswagen Foundation under the special stipend No. 9D154. P.B. thanks the support from the special program of the Polish Academy of Sciences and the U.S. National Academy of Sciences under the Long-term program to support Ukrainian research teams, grant No. PAN.BFB.S.BWZ.329.022.2023.

Table 1. Table of the variable transformations and the associated Jacobian determinants. (For example, the transformation between β and e for unbound stars has the Jacobian 2E/[|E_t|(e − 1)²].)
Functional Nanostructured Oligochitosan-Silica/Carboxymethyl Cellulose Hybrid Materials: Synthesis and Investigation of Their Antifungal Abilities

Functional hybrid materials were successfully synthesized from low-cost waste products, namely oligochitosan (OCS) obtained from chitosan (one of the main components of crab shells) and nanosilica (nSiO2) obtained from rice husk, in a 1:1 ratio (w/w); their dispersion in the presence of carboxymethyl cellulose at pH 7 was stable for over one month without aggregation. The molecular weights, chemical structures, morphologies, and crystallinities of the obtained materials were characterized by GPC, FTIR, TEM, and XRD, respectively. The antifungal effects of OCS, nSiO2, and the OCS/nSiO2 hybrid materials were investigated via a disk-diffusion method. The results showed that the nanohybrid materials had better resistance to the Phytophthora infestans fungus than the individual components, and an OCS2/nSiO2 concentration of 800 mg L−1 was the lowest at which the material completely inhibited Phytophthora infestans growth, as measured via an agar dilution method. This study not only creates a novel environmentally friendly material with unique synergistic effects that can replace current toxic agrochemicals but can also be considered a new platform for further research in green agricultural applications.

Introduction

Chitosan is the second most abundant natural polysaccharide; its chemical structure comprises glucosamine and acetylglucosamine monomer units linked via β(1-4) glycosidic bonds. Chitosan has a variety of unique functional characteristics, such as biodegradability, biocompatibility, nontoxicity, and antibacterial and antifungal activity; as such, it can be applied in a number of fields. Nonetheless, its poor solubility limits its utility in several applications, such as in the food, biomedical, and agricultural fields [1]. In contrast to chitosan, oligochitosan (OCS), which is obtained by the degradation of chitosan, has short chain lengths, low viscosity, free amino groups, and good solubility at neutral pH, in addition to the superior properties mentioned above for chitosan; as a result, it has recently received increasing attention from many researchers [2][3][4].

Silica is a mineral consisting of silicon and oxygen, the two most abundant elements in the Earth's crust. Although it is regarded as a necessary nutrient source that plays an important role in stimulating plant growth and in supporting the self-defense response of plants against various diseases, silica mostly occurs in nature in the crystalline state and rarely in the amorphous state. Plants therefore rarely absorb natural silica, while synthesized silica used for agricultural purposes, notably nanosilica (nSiO2), is in the amorphous form. nSiO2 has been found to help improve soil properties and increase the germination rate of seeds [5], as well as to reduce the germination time of tomato seeds, while acting significantly on pest and disease resistance in plants, which has led to increased yields in a number of crops [6,7]. Phytophthora infestans (P. infestans) is a heterothallic oomycete and a near-obligate hemibiotrophic pathogen under natural and agricultural conditions. Its asexual cycle enables incredibly rapid population growth in susceptible host tissue, and sporangia are produced on sporangiophores that grow from infected tissue [8].
Over 150 years after late blight disease was first shown to be caused by P. infestans in Europe and North America [9], it remains a major problem in agriculture and a dangerous pathogen causing serious decreases in crop yield; it can even drive modern farmers out of business, especially in Vietnam, where there is no stable disease suppression and safe, effective treatments have not been available. This has resulted in commercially available toxic agrochemicals, such as metalaxyl and antifungal antibiotics, being widely marketed with little regulation, and the improper use of these fungicides for the early control of late blight disease has led not only to environmental pollution but also to hazardous effects on human health. At the same time, the application of fungicides has brought about the emergence of fungicide-resistant strains of fungi. The development of OCS-nSiO2 hybrid materials dispersed in carboxymethyl cellulose (CMC) as effective, eco-friendly agrochemicals from the abundant waste resources of agriculture and aquaculture is therefore meaningful and valuable for solving the alarming problems mentioned above. It was hypothesized that when the nSiO2 particles are small, more silica particles accumulate at the cell membrane and the fungal cell wall can be broken more easily. Furthermore, OCS products with small molecular weights should show higher antifungal activity than chitosan. It is therefore assumed that the combination of OCS and nSiO2 in the presence of CMC creates new hybrid materials with a synergistic resistance effect against P. infestans; the inhibition zones of the hybrid materials are expected to be much larger than those of the individual components at the same concentrations. To the best of our knowledge, no publication studying the synthesis of nanostructured oligochitosan-silica hybrid materials with a carboxymethyl cellulose stabilizer and investigating their antifungal abilities has been reported until now. Thus, this study, the first investigation of the preparation of an environmentally friendly material with unique synergistic effects between OCS and nSiO2 with a CMC stabilizer that can replace toxic agrochemicals, has applications in the current situation. The synthesis and the investigation of the fungicidal ability of different kinds of OCS-nSiO2 hybrid materials prepared from OCS with different molecular weights are also discussed. Building on the outcomes of this study, the possible antifungal mechanisms are also described in this report.

Materials

Chitosan, derived from chitin with a 90.46% degree of deacetylation (DD) and a molecular weight (Mw) of 94.28 kDa, was supplied by the VINAGAMMA center (Ho Chi Minh City, Vietnam). Raw rice husks (RHs) were bought in southern Vietnam. H2O2 30% (d: 1.11 g mL−1) was purchased from Merck (Darmstadt, Germany). All other chemicals, including carboxymethyl cellulose (CMC) with an Mw of 557.79 kDa, lactic acid, sodium hydroxide, hydrochloric acid, ammonium hydroxide, and ethyl alcohol, were of reagent grade and obtained from JHD Chemical (Xilong, China). Distilled water was used in all preparations. The Phytophthora infestans (P. infestans) fungus was provided by the Institute of Applied Materials Science, Vietnam Academy of Science and Technology (Ho Chi Minh City, Vietnam).

Preparation of Nanosilica (nSiO2) from Rice Husks

Rice husks (RHs) were washed with water to remove dust, soluble substances, and other contaminants before being dried at 60 °C in a forced-air oven.
Then, 50.0 g of dried RHs was treated with 800 mL of a 1% (w w−1) HCl solution at room temperature for 2 h under magnetic stirring, kept overnight, decanted, and washed thoroughly with distilled water. The acid-treated RHs were then incinerated at 700 °C for 2 h in a programmable furnace to obtain nSiO2.

Preparation of Hybrid Materials

The synthesis of hybrid materials based on OCS and nSiO2 with a mass ratio of 1:1 was carried out as previously reported [11,12], with the following modifications. nSiO2 (1.00 g) was dissolved in 6.65 mL of 1 M NaOH in beaker A, while CMC (0.30 g) was dissolved in distilled water in beaker B. The CMC solution was then poured into beaker A and stirred for 2 h. The obtained solutions of OCS with different molecular weights (OCS1, OCS2, and OCS3) were slowly added dropwise into the mixture in beaker A to create OCS1/nSiO2, OCS2/nSiO2, and OCS3/nSiO2, respectively, and the pH values of these mixtures were adjusted to 7 with 1 M HCl. They were stirred for 3 h prior to being stored at room temperature.

Characterization and Measurements

The average molecular weights of the chitosan samples were measured by GPC (LC-20AB; Shimadzu, Kyoto, Japan) with an RI-10A detector and a 250 Ultrahydrogel column from Waters (Milford, MA, USA), using an aqueous solution of 0.25 M CH3COOH/0.25 M CH3COONa as the eluent at a flow rate of 1 mL min−1. The chemical structures of OCS, CMC, nSiO2, and OCS/nSiO2 were analyzed using an FTIR 8400S spectrometer (Shimadzu, Kyoto, Japan) and KBr pellets. The DD of the chitosan samples was calculated from the FTIR spectra using the absorbance ratio A1320/A1420, where A1320 and A1420 are the absorbances of chitosan at 1320 and 1420 cm−1, respectively. The X-ray diffraction (XRD) pattern of nSiO2 was recorded on a D8 Advance A25 diffractometer (Bruker, Karlsruhe, Germany) over the scattering range (2θ) of 0°-80° with a step rate of 0.25°/min. The particle sizes and morphologies of the nSiO2 and OCS/nSiO2 samples were investigated by transmission electron microscopy on a JEM-1400 (JEOL, Tokyo, Japan).

Antifungal Effect Test on Phytophthora infestans

The antifungal activities of chitosan, oligochitosan, silica, and the hybrid materials were investigated via a paper disk-diffusion method. First, carrot glucose agar (CGA) plates (carrot infusion 220 g L−1, glucose 20 g L−1, agar 9 g L−1) were prepared as described in a previous report [13], with several modifications. Second, cell suspensions obtained from fungal colonies were spread directly on the agar surface. Then, paper discs (diameter 6 mm) loaded with 50 µL of the antimicrobial agents at different concentrations (600, 800, 1000, 1200, 1400, 1600, and 1800 mg L−1) in distilled water were dried in air, placed on the inoculated agar plates, and incubated for two days. The inhibition zones were quantified by their diameters. The statistics were analyzed using two-way analysis of variance (ANOVA), with p-values less than 0.05 considered statistically significant.

Investigation of the Minimum Inhibitory Concentration (MIC)

Following the standard agar dilution method [14], appropriate dilutions of an OCS2/nSiO2 solution were added to molten test agars so that the concentrations of OCS2/nSiO2 in the mixtures were 400, 600, 800, 1000, 1200, 1400, 1600, and 1800 mg L−1. The mixtures were then poured into Petri dishes (90 mm × 15 mm). Inoculum samples were spread on the plates and the fungi were grown for 10 days. Antifungal agent-free plates were used as controls, and the tests were repeated three times. The MIC was taken as the lowest concentration of antifungal agent at which no growth of P. infestans was observed.
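To make the statistical treatment concrete, the following is a minimal sketch, not the authors' actual script, of a two-way ANOVA on inhibition-zone diameters with agent and concentration as factors; the replicate values are hypothetical placeholders consistent with the means reported later in the text.

```python
# Minimal sketch (assumed data layout; values are illustrative placeholders):
# two-way ANOVA on inhibition-zone diameters, factors = agent and concentration.
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

records = [
    # (agent, concentration in mg/L, zone diameter in mm), 3 replicates each
    ("CTS1", 1200, 6.83), ("CTS1", 1200, 6.67), ("CTS1", 1200, 7.00),
    ("CTS1", 1400, 6.90), ("CTS1", 1400, 7.17), ("CTS1", 1400, 6.75),
    ("nSiO2", 1200, 7.77), ("nSiO2", 1200, 7.62), ("nSiO2", 1200, 7.92),
    ("nSiO2", 1400, 8.00), ("nSiO2", 1400, 8.00), ("nSiO2", 1400, 8.00),
]
df = pd.DataFrame(records, columns=["agent", "conc", "zone"])

# Fit a linear model with interaction and run the two-way ANOVA (type II).
model = smf.ols("zone ~ C(agent) * C(conc)", data=df).fit()
print(anova_lm(model, typ=2))  # effects with p < 0.05 are deemed significant
```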
Characterization of the Oligochitosan

The FTIR spectra of the original chitosan and of the oligochitosan products obtained by irradiating chitosan with different doses in the presence of 0.5% H2O2 are displayed in Figure 1. The FTIR spectrum of the initial chitosan possesses characteristic absorption peaks at 3200-3500 cm−1 (the stretching vibrations of O-H and N-H), 2888 cm−1 (the stretching vibration of C-H), and in the range 1159-896 cm−1 (the stretching vibrations of C-O and C-O-C in the glucose ring). The peaks at 1648 cm−1 (amide I), 1586 cm−1 (amide II), 1420 cm−1 (the symmetrical deformations of -CH3 and -CH2), and 1320 cm−1 (the absorbance of C-N in CH3CONH-, amide III) were used to determine the DD of chitosan. The FTIR spectra of the irradiated products did not differ from that of the initial chitosan; notably, the main functional groups of these materials are still present, and there are no structural changes in chitosan after irradiation. The DDs of CTS0, CTS1, OCS1, OCS2, and OCS3 were 90.46, 88.2, 87.88, 84.27, and 86.42%, respectively.
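The calibration equation behind these DD values did not survive extraction. A plausible reconstruction, assuming the widely used infrared calibration of Brugnerotto et al. for the A1320/A1420 band ratio (an assumption, not confirmed by the source), is

$$\frac{A_{1320}}{A_{1420}} = 0.3822 + 0.03133\,(100 - \mathrm{DD}), \qquad \text{i.e.,} \qquad \mathrm{DD}\,(\%) = 100 - \frac{A_{1320}/A_{1420} - 0.3822}{0.03133}.$$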
Characterization of the Nanosilica

After incineration of the acid-treated RHs at 700 °C, the obtained nSiO2 was analyzed by FTIR, XRD, and TEM. Figure 2A shows characteristic peaks at 1119 cm−1 (the asymmetrical stretching vibration of O-Si-O), 779 cm−1 (the symmetrical stretching vibration of O-Si-O), and 452 cm−1 (the bending vibration of O-Si-O) [15]. The band at 1179-1200 cm−1 is attributed to the asymmetric stretching mode of the SiO4 coordination units; the broad peak at 3449 cm−1 is assigned to stretching of the -OH groups, while a small, weak peak at 1639 cm−1 is related to the bending vibration of water molecules adsorbed on the surface of the silica particles [11,16]. Figure 2B shows the XRD pattern of the nSiO2 sample derived from the combustion of acid-treated RH powders at 700 °C for 2 h. There is only one broad peak, at 2θ = 22°, indicating that the generated nSiO2 was pure and amorphous, in agreement with previous reports [17]. The particle morphology and size of the obtained nSiO2 were analyzed by TEM (Figure 2C,D). The TEM image and particle size distribution show that the nSiO2 particles are nearly spherical, that their average size is approximately 25-40 nm, and that they tend to form clusters during synthesis.

Characterization of the Hybrid Materials

The three obtained hybrid materials, OCS1/nSiO2, OCS2/nSiO2, and OCS3/nSiO2, were created by interactions between the nSiO2 obtained from RHs and OCS samples with Mw of 5.48, 4.21, and 3.60 kDa, respectively, in a 1:1 ratio. To stabilize the hybrid materials during long-term storage, CMC at a concentration of 0.3% (w/v) was added to the mixture; CMC helps minimize precipitation of the hybrid materials.

The FTIR spectrum of CMC reveals a strong broad peak at 3460 cm−1 (the stretching vibration of -OH) and a peak at 2923 cm−1 (the stretching vibration of C-H) (Figure 3). Two strong peaks at 1608 and 1421 cm−1 are due to the asymmetrical and symmetrical stretching, respectively, of the -COO- groups [18,19]. Moreover, a peak at 1322 cm−1 is attributed to the bending vibrations of the -CH2 and -CH3 groups, and a peak at 1054 cm−1 is assigned to the >CH-O-CH2 group [20]. In comparison with the FTIR spectra of nSiO2 and OCS, the spectrum of OCS1/nSiO2 in Figure 3A shows several new features. A shoulder peak at 971 cm−1 can be assigned to the formation of hydrogen bonds between the silanol groups on the surface of silica and the amide and oxy groups of OCS/CMC [21], while the amide I band of OCS1 at 1642 cm−1 shifts to 1622 cm−1 and the amide II peak at 1589 cm−1 relocates to 1598 cm−1 in the FTIR spectrum of OCS1/nSiO2, which indicates associations between OCS and CMC. Moreover, a weak shoulder peak at 1735 cm−1 reflects the interactions between the -COOH groups of CMC and the -NH2 groups of OCS [19]. For the OCS2/nSiO2 hybrid material, there were only small differences in the characteristic peak positions relative to the FTIR spectrum of OCS1/nSiO2. Figure 3B(d) shows a small shoulder peak at 963 cm−1 (due to hydrogen bonding between nSiO2 and OCS/CMC) in the FTIR spectrum of OCS2/nSiO2, whereas these interactions appear as a peak at 971 cm−1 in the spectrum of OCS1/nSiO2. The appearance of peaks at 1105 and 801 cm−1 indicates the existence of Si-O-C bonds. Additionally, OCS2/nSiO2 shows shifts of the characteristic absorption peaks: notably, the amide I band at 1642 cm−1 of OCS2 shifted to a slightly higher wavenumber of 1643 cm−1 instead of a lower one (1622 cm−1), as in the case of OCS1/nSiO2, and the amide II peak shifted from 1592 to 1598 cm−1 with a higher intensity in the FTIR spectrum of OCS2/nSiO2 than in that of OCS1/nSiO2. Likewise, Figure 3C shows that OCS3/nSiO2, created from the combination of nSiO2 and OCS3 in the presence of CMC, has characteristic peaks shifted toward higher wavenumbers (amide I at 1648 cm−1 and amide II at 1595 cm−1), and the new peaks at 1108 and 781 cm−1 can be attributed to the association between the components.

Figure 4A(1),B(1),C(1) show the hybrid material solutions stored at room temperature for 24 h without any sign of aggregation. The OCS1/nSiO2, OCS2/nSiO2, and OCS3/nSiO2 samples were analyzed by TEM to investigate their morphologies (Figure 4A(2),B(2),C(2)) and particle sizes (Figure 4A(3),B(3),C(3)). The results indicate that the particle sizes of the hybrid materials are far smaller than that of the nSiO2 obtained from RHs. In particular, the particles of the OCS1/nSiO2 sample are 4-8 nm in diameter, while those of the OCS2/nSiO2 and OCS3/nSiO2 samples are approximately 2-8 nm and 3-7 nm, respectively. This finding can be explained by the first phase of the synthesis of the OCS/nSiO2 hybrid materials, in which silica is converted into sodium silicate in the basic medium. Oligochitosan/carboxymethyl cellulose then stabilizes the silica particles that are regenerated during the reaction of the sodium silicate with the OCS/CMC solution, which helps prevent the nSiO2 particles from aggregating.
However, the different particle size ranges of the three hybrid materials may be related to the abilities of the components to form interactions with one another.

Figure 4. Photographs (1), TEM images (2), and particle size distributions (3) of OCS1/nSiO2 (A), OCS2/nSiO2 (B), and OCS3/nSiO2 (C), respectively.

Table 1. The average inhibition zone diameter (mm) versus the concentration of antifungal agent (mg L−1):

Concentration (mg L−1)   CTS1          nSiO2         OCS1          OCS2
600                      0.00 ± 0.00   0.00 ± 0.00   0.00 ± 0.00   0.00 ± 0.00
800                      0.00 ± 0.00   0.00 ± 0.00   0.00 ± 0.00   0.00 ± 0.00
1000                     0.00 ± 0.00   0.00 ± 0.00   0.00 ± 0.00   0.00 ± 0.00
1200                     6.83 ± 0.17   7.77 ± 0.15   0.00 ± 0.00   0.00 ± 0.00

When CTS1, with a high molecular weight (Mw = 48.35 kDa), was used as the antifungal agent at a concentration of 1200 mg L−1, an inhibition zone with a diameter of approximately 6.83 ± 0.17 mm began to appear. However, even when the concentration of CTS1 was increased from 1200 to 1800 mg L−1, this diameter remained nearly unchanged, at approximately 6.67-7.17 mm, indicating that its antifungal activity stayed essentially constant. The OCS products with small molecular weights showed antifungal effects only from 1400 mg L−1. Although fungal inhibition starts to appear at a higher concentration than for CTS1, the inhibition zone diameters at 1400 mg L−1 for the OCS1, OCS2, and OCS3 samples are 1.16, 2.60, and 3.00 mm greater, respectively, than that of CTS1 at the same concentration. Moreover, the diameters of the inhibition zones increased as the concentration of the OCS antifungal agents rose from 1400 to 1800 mg L−1 (Figure 5). OCS3, which has the smallest molecular weight (Mw = 3.60 kDa), showed the highest antifungal ability among the three OCS samples: the diameters of its inhibition zones were larger than those of OCS1 (Mw = 5.48 kDa) and OCS2 (Mw = 4.21 kDa) at all investigated concentrations. These results indicate that the smaller the molecular weight of chitosan, the higher its antifungal ability. Similar to paromomycin and other aminoglycoside antibiotics, the primary mechanism of the cytotoxicity of chitosan and oligochitosan may involve mitochondrial mutations that make 12S rRNA bind the aminoglycosides with higher affinity, causing misreading of the genetic code and mistranslated proteins in the fungi [22]. However, the antifungal activity of OCS is better than that of chitosan. Compared with high-molecular-weight chitosan, low-molecular-weight OCS is more readily absorbed into the microbial cells of fungi [23]; simultaneously, the intramolecular hydrogen bonds and van der Waals forces in the short OCS chains are drastically reduced during the preparation of OCS, as mentioned in our previous report [10], so the hydroxyl and amino groups are more active, which plays a primary role in the antifungal activity of OCS. The growth and normal physiological functions of a fungal pathogen can be directly disrupted by the formation of polyelectrolytic complexes between the positively charged amino groups of OCS and the negatively charged groups on the cell surface, which can lead to disordering of the cell surface [23,24].
In terms of nSiO2, investigation of its antifungal capability over the concentration range 600 to 1800 mg L−1 showed that an inhibition zone against P. infestans appeared once the concentration reached 1200 mg L−1, as in the case of CTS1, and the diameter of the inhibition zone was approximately 7.77 ± 0.15 mm, approximately 0.94 mm larger than that of CTS1 at the same concentration (Table 1). When the concentration of nSiO2 increased from 1400 to 1800 mg L−1, the inhibition zone diameter expanded slightly, from 8.00 ± 0.00 mm to 9.77 ± 0.15 mm. However, the inhibition zone diameters of nSiO2 are smaller than those of the OCS samples at 1400-1800 mg L−1. The different antifungal activities of nSiO2 and OCS may reflect their different fungal inhibition mechanisms. To date, there has been no specific research on the mechanism by which nSiO2 inhibits P. infestans. According to earlier reports [25][26][27][28], several explanations are possible. The antifungal activity of nSiO2 may be related to deactivation of protein molecules, as well as to direct interaction with the DNA of the fungal pathogen, leading to DNA mutations and replication damage. In addition, the fungal cell wall can be broken easily when the nSiO2 particles are small, because the hydroxyl groups on their surface readily form hydrogen bonds with the lipopolysaccharides of the cell. As more silica accumulates in the cell membrane, it can bring about cell lysis by preventing transmembrane energy cycling. Insoluble substances can also disrupt the electron transport chains of the fungal membrane or cause oxidation.

Regarding the hybrid materials, interestingly, whereas the lowest concentrations of nSiO2 and OCS that produced a fungal inhibitory effect against P. infestans were 1200 and 1400 mg L−1, respectively, the new nanostructured oligochitosan-silica hybrid materials showed antifungal activity already at 800 mg L−1, with inhibition zone diameters of 10.17 ± 0.17 mm (OCS1/nSiO2), 9.67 ± 0.17 mm (OCS2/nSiO2), and 8.00 ± 0.50 mm (OCS3/nSiO2). Notably, the inhibition zones of all three hybrid materials were far larger than those of the individual components at the same concentrations. These results confirm the assumption stated above. The lowest concentration preventing the growth of P. infestans decreased by more than 1.5-fold when the OCS/nSiO2 hybrid materials were used as antifungal agents, which proves that the combination of silica and oligochitosan successfully created a new material with much better antifungal activity against P. infestans than either individual substance. The inhibition zones of the three hybrid materials increased significantly as their concentrations rose from 800 to 1800 mg L−1. In this investigation, the antifungal ability of OCS1/nSiO2 is the best, followed by that of OCS2/nSiO2, while OCS3/nSiO2 has the lowest antifungal capability. Although the antifungal activity of OCS3 against P. infestans is the best among the three surveyed oligochitosan samples, the hybrid material combining nSiO2 and OCS3 does not show the best synergistic antifungal effect. This can be explained by the Mw of OCS3 being too small to fully coat the silica particles and create a hybrid material as stable as OCS2/nSiO2 (the most stable mixture) or OCS1/nSiO2 (the second most stable); the ability of the components of the OCS3/nSiO2 mixture to form interactions can be considered lower than in OCS1/nSiO2 and OCS2/nSiO2.
This low interaction-forming ability can make the antifungal effect of OCS3/nSiO2 lower than those of OCS1/nSiO2 and OCS2/nSiO2; importantly, however, it is still better than that of nSiO2 or OCS3 alone. As mentioned above, OCS2/nSiO2 is the most stable mixture of the three hybrid materials, showing no aggregation for more than one month of stability observation (Figure 6), while also having good antifungal capability, in contrast to OCS3/nSiO2. To identify the lowest concentration of the hybrid material that can inhibit P. infestans growth (MIC), OCS2/nSiO2 was chosen for a further bioassay via the agar dilution method over a concentration range of 400 to 1800 mg L−1. In comparison with the growth control plates, i.e., agar plates without antifungal agent, during the 10-day investigation (Figure 7A), no fungal growth was observed on the plates treated with this antifungal agent at concentrations from 800 to 1800 mg L−1, the exceptions being the plates with OCS2/nSiO2 concentrations of 400 and 600 mg L−1 (Figure 7B,C); that is, 800 mg L−1 was the lowest concentration of the OCS2/nSiO2 material that completely prevented fungal development (Figure 7D). Interestingly, although OCS2/nSiO2 concentrations of 400 and 600 mg L−1 did not produce an effective antifungal effect against P. infestans, the colonies on these plates were obviously smaller than those on the growth control plates. This observation requires further study in future reports.

The antifungal activity of penicillic acid isolated from Aspergillus sclerotiorum (a filamentous fungus) against Phytophthora species has been investigated previously [29]. The results indicated that the minimum inhibitory concentrations (MICs) of this compound against Phytophthora species ranged from 5 to 35 µg/disc; notably, the MIC of penicillic acid against P. infestans was 5 µg/disc, about 80 times lower than that of the OCS2/nSiO2 material used to inhibit P. infestans growth in this report. Even though the antifungal activity of the hybrid material in this research is not as strong as that of penicillic acid, it is still a potential eco-friendly agrochemical for developing green agriculture and reducing the harmful impacts on the environment and human health caused by toxic fungicides and pesticides. At the same time, the novel nontoxic hybrid material may act like a vaccine, stimulating the self-defense and growth of plants: previous reports indicated that materials based on OCS and/or nanosilica had positive impacts on plant physiology and defense responses as well as on yields [11,12]. In contrast, penicillic acid has been found to be cytotoxic and hepatotoxic, acutely toxic to animals such as poultry, rats, and rabbits, and is claimed to be carcinogenic and cardiotoxic; it also causes blood vessel dilation, according to a previous report [30].

In this study, the possible antifungal mechanisms of the OCS-nSiO2/CMC hybrid materials are described (Figure 8). These materials may possess the synergistic characteristics of both components, nSiO2 and OCS, as mentioned above. The tiny nSiO2 particles coated by OCS/CMC, with their active functional groups, probably deactivate protein molecules in the cell wall easily and accumulate in the cell membrane, leading to cell lysis.
Moreover, the normal physiological functions of a fungal pathogen can be directly disrupted via DNA damage, misreading of the genetic code, and electron transport chain disorders once the OCS-nSiO2 hybrid material penetrates the fungal pathogen. The successful synthesis of new nanostructured hybrid materials combining the antifungal activities of nSiO2 and OCS synergistically is of great importance in developing a platform for further research into environmentally friendly, biodegradable, natural fungicides. These materials can be considered potential candidates for agricultural application and a substitute for toxic and unhealthy commercial agrochemicals, for the sake of a greener world.

Conclusions

This study shows that novel hybrid materials with a 1:1 (w/w) ratio of OCS to nSiO2 in the presence of 0.3% CMC can be successfully generated. The materials show good solution stability, lasting more than one month without aggregation. OCS2/nSiO2 (4.21 kDa) was the most stable mixture in comparison with OCS3/nSiO2 (3.60 kDa) and OCS1/nSiO2 (5.48 kDa); their particle sizes were 2-8, 3-7, and 4-8 nm, respectively. The possible mechanisms of the antifungal activity of the OCS-nSiO2/CMC hybrid materials were also discussed schematically. The oligochitosan, silica, and hybrid material samples all showed good antifungal activity against P. infestans, the cause of late blight disease, yet the hybrid materials, owing to a synergistic effect, had better antifungal capacities than either individual component. Interestingly, the diameters of the inhibition zones of OCS/nSiO2 were approximately 4-5 and 7-9 mm larger than those of OCS and nSiO2, respectively. Notably, the minimum concentration of OCS2/nSiO2 needed for complete inhibition of P. infestans growth during the 10-day agar dilution investigation was 800 mg L−1, a 1.5-fold lower concentration than that required for OCS or nSiO2. We conclude that these materials have great potential as eco-friendly candidates with excellent antifungal activity for use in agriculture to replace current toxic fungicides.

Conflicts of Interest: The authors declare no conflict of interest.
Probing the transversity spin structure of a nucleon in neutrino-production of a charmed meson

Including O(m_c) terms in the coefficient functions and/or O(m_D) twist-3 contributions in the heavy meson distribution amplitudes leads to a non-zero transverse amplitude for exclusive neutrino production of a D pseudoscalar charmed meson on an unpolarized target. We work in the framework of the collinear QCD approach, where chiral-odd transversity generalized parton distributions (GPDs) factorize from perturbatively calculable coefficient functions.

Introduction

The now well-established framework of collinear QCD factorization [1][2][3] for exclusive reactions mediated by a highly virtual photon in the generalized Bjorken regime describes hadronic amplitudes using generalized parton distributions (GPDs), which give access to a 3-dimensional analysis [4] of the internal structure of hadrons. Neutrino production is another way to access (generalized) parton distributions [5]. Although neutrino-induced cross sections are orders of magnitude smaller than those for electroproduction, and neutrino beams are much more difficult to handle than charged lepton beams, they have been very important for scrutinizing the flavor content of the nucleon, and the advent of new generations of neutrino experiments opens new possibilities. In particular, the flavor-changing character of the electroweak current allows charmed quarks to be produced in processes involving light-quark partonic distributions [6]. This in turn allows helicity-flip hard amplitudes to occur at the O(m_c/Q) level, where Q is the typical large scale allowing QCD collinear factorization. Such a coefficient function has to be attached to a chiral-odd generalized parton distribution, the elusive transversity GPDs [7][8][9]. The transverse character of these GPDs selects the transverse polarization of the W boson, which phenomenologically allows a separation of this interesting amplitude through the azimuthal distribution of the final-state particles.

Kinematics

For definiteness, we consider the exclusive production of a pseudoscalar D meson through the reaction (see Fig. 1):

where N is a proton or a neutron, in the kinematical domain where collinear factorization leads to a description of the scattering amplitude in terms of nucleon GPDs and the D-meson distribution amplitude, with the hard subprocesses:

Our kinematical notations are as follows (m and M_D are the nucleon and D-meson masses; m_c will denote the charmed quark mass):

with p² = n² = 0 and p·n = 1. As in the double deeply virtual Compton scattering case [10], it is meaningful to introduce two distinct momentum fractions:

Neglecting the nucleon mass and Δ_T, the approximate values of ξ and ξ are

To unify the description of the scaling amplitude, we define a modified Bjorken variable which allows ξ and ξ to be expressed in a compact form:

If the meson mass is the relevant large scale (for instance, in the limiting case where Q² vanishes, as in the timelike Compton scattering kinematics [11]):

The transverse amplitude

In the Feynman gauge, the non-vanishing m_c-dependent part of the Dirac trace in the hard scattering part depicted in Fig. 1a reads:

where ε is the polarization vector of the W± boson (we denote p̂ = p_µγ^µ for any vector p). The fermionic trace vanishes for the diagram shown in Fig. 1b thanks to the identity γ^ρ σ^{αβ} γ_ρ = 0. The denominators of the propagators read:

where k_c (k_g) is the heavy quark (gluon) momentum.
The transverse amplitude is then written as (τ = 1 − i2):

with C = 2π³ C_F α_s V_dc, in terms of transverse form factors that we define as:

where F_T^d is any d-quark transversity GPD and α = 2ξM². In the following, we shall set β to 0. The prefactor in Eq. (10) shows the two sources of the transverse amplitude: m_c signals the contribution from the helicity-changing part of the heavy-quark propagator, while M_D signals the contribution from the twist-3 heavy meson distribution amplitude, which we parametrize (omitting the Wilson lines) as:

and for simplicity we identify φ_s with the leading twist-2 pseudoscalar charmed meson DA φ_D defined as:

The azimuthal dependence of neutrino-production

The dependence of a leptoproduction cross section on azimuthal angles is a widely used way to analyze the scattering mechanism. This procedure is helpful as soon as one can define an angle ϕ between a leptonic and a hadronic plane, as for deeply virtual Compton scattering [12] and related processes. In the neutrino case, it reads:

where the "cross sections" σ_lm = ε*_l^µ W_µν ε_m^ν are products of amplitudes for the process W(ε_l) N → D N′, averaged (summed) over the initial (final) hadron polarizations. In the anti-neutrino case, one gets a similar expression with σ_−− → σ_++ and σ_−0 → σ_+0. We use the standard notations of deep exclusive leptoproduction, namely y = p_1·q/p_1·k and ε = 2(1 − y)/[1 + (1 − y)²]. The azimuthal angle ϕ is defined in the initial nucleon rest frame as:

while the final nucleon momentum lies in the xz plane (Δ_y = 0). The quantity σ_−0 is directly related to the observables ⟨cos ϕ⟩ and ⟨sin ϕ⟩ through

Estimating the counting rates and the angular observables defined above is in progress [13]. As a first step, we calculate σ_−−, which is bilinear in the transversity quark GPDs. At zeroth order in Δ_T, σ_−− reads:

Using the model of Ref. [14] for the D⁺ meson distribution amplitude and the parametrization of the dominant transversity GPD H_T(x, ξ, t) from Ref. [15] (and neglecting for the time being other chiral-odd GPD contributions), we compute the contribution to the differential cross section given in Eq. (13), integrated over ϕ. The result is shown in Fig. 2 as a function of Q² for s = 20 GeV², y = 0.7 and t = t_min. Since the process selects the d-quark contribution, the proton and neutron target cases give access to H_T^d and H_T^u, respectively. Although small, the cross sections are of the same order of magnitude as those for the neutrino production of π or D_s mesons estimated in [5]. This shows that these processes should be measurable at intense neutrino beam facilities. Let us remind the reader that we allow Q² to be quite small, since the hard scale governing our process is M_D² + Q².

Conclusion

Collinear QCD factorization has allowed us to calculate neutrino production of D mesons in terms of GPDs. Gluon and both chiral-odd and chiral-even quark GPDs contribute to the amplitude for different polarization states of the W± boson. The azimuthal dependence of the cross section allows one to separate the different contributions. Planned high-energy neutrino facilities [16], whose scientific programs are oriented toward the understanding of neutrino oscillations or elusive sterile neutrinos, may thus allow, without much additional equipment, some important progress in the realm of hadronic physics.
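As an illustration of how such azimuthal moments are extracted in practice, here is a minimal sketch, not taken from Ref. [13], that estimates ⟨cos ϕ⟩ and ⟨sin ϕ⟩ from a toy event sample whose ϕ distribution carries cos ϕ and sin ϕ modulations; the amplitudes a and b below are hypothetical stand-ins for the σ_−0-driven interference terms.

```python
# Minimal sketch: extract <cos(phi)> and <sin(phi)> from toy events whose
# azimuthal distribution is dN/dphi ∝ 1 + a*cos(phi) + b*sin(phi).
import numpy as np

rng = np.random.default_rng(0)
a_true, b_true = 0.3, 0.1          # hypothetical modulation amplitudes

# Accept-reject sampling of phi on [0, 2*pi).
phi = rng.uniform(0.0, 2.0 * np.pi, 200_000)
weight = 1.0 + a_true * np.cos(phi) + b_true * np.sin(phi)
phi = phi[rng.uniform(0.0, 1.0 + abs(a_true) + abs(b_true), phi.size) < weight]

# Moment estimators; for this distribution <cos(phi)> = a/2 and <sin(phi)> = b/2.
cos_m, sin_m = np.cos(phi).mean(), np.sin(phi).mean()
cos_err = np.cos(phi).std(ddof=1) / np.sqrt(phi.size)
print(f"<cos(phi)> = {cos_m:.4f} +/- {cos_err:.4f} (expect {a_true / 2:.3f})")
print(f"<sin(phi)> = {sin_m:.4f} (expect {b_true / 2:.3f})")
```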
Modulation instability in nonlinear media with sine-oscillatory nonlocal response function and pure quartic diffraction

Modulation instability (MI) of a one-dimensional plane wave is demonstrated in nonlinear Kerr media with a sine-oscillatory nonlocal response function and pure quartic diffraction. The growth rate of MI, which depends on the degree of nonlocality, the coefficient of quartic diffraction, the type of nonlinearity, and the power of the plane wave, is obtained analytically by linear-stability analysis; all four parameters have deep impacts on the maximum and bandwidth of the MI spectra. Unlike for other nonlocal response functions, the maximum of the growth rate in media with a sine-oscillatory nonlocal response function always occurs at a particular wave number. The theoretical results are confirmed numerically with the split-step Fourier transform, and MI can be suppressed or promoted flexibly by adjusting the degree of nonlocality and the quartic diffraction.

Method

Model and basic equations

Considering a one-dimensional optical beam propagating in a nonlocal nonlinear medium with pure quartic diffraction, the dynamics of such a beam can be described by the following normalized nonlocal nonlinear Schrödinger equation 50:

i ∂ψ/∂z + β₄ ∂⁴ψ/∂x⁴ + s ψ ∫ R(x − x′) |ψ(x′, z)|² dx′ = 0,

where the variables x and z are dimensionless spatial coordinates. The parameter β₄ is the quartic diffraction coefficient of the beam (β₄ > 0 and β₄ < 0 represent anomalous and normal diffraction 53,54, respectively), and s = 1 (s = −1) represents a focusing (defocusing) nonlocal nonlinearity. R(x) is the nonlocal response function, which has several different representations, such as the Gaussian function 55 and the rectangular function [2][3][4]. In this paper, we assume the response function has the sine-oscillation form 22,23

R(x) = (1/2σ) sin(|x|/σ),

whose Fourier transform is

R̂(k) = 1/(1 − σ²k²).

The sine-oscillatory nonlocal response function and its Fourier transform are shown in Fig. 1a,b, respectively.

Linear-stability analysis

In general, the plane-wave solution of Eq. (1) can be written as [17][18][19]

ψ₀(z) = √P₀ exp(i s P₀ z),

where P₀ is the optical intensity of the uniform plane wave. We then introduce a small random perturbation a(x, z) to the plane-wave solution,

ψ(x, z) = [√P₀ + a(x, z)] exp(i s P₀ z),

with |a|² ≪ P₀. Substituting Eq. (5) into Eq. (1) and linearizing around the unperturbed solution, we obtain

Decomposing the perturbation into the complex form a = u + iv, where u and v are the real and imaginary parts, respectively, we obtain the following two coupled equations:

Taking the derivatives of Eqs. (9) and (10) with respect to the coordinate z, we obtain the following ordinary differential equations in k space:
By solving Eqs. (13) and (14), the solution for the random perturbation is obtained:

with c₁ and c₂ arbitrary constants. The eigenvalue is given by

λ² = −β₄k⁴ [β₄k⁴ + 2sP₀/(1 − σ²k²)].

Obviously, no MI exists when λ² < 0, and the plane wave is stable. On the contrary, for λ² > 0 the perturbation grows exponentially during propagation. The growth rate, defined by g(k) = |Re{λ}|, is

g(k) = √(−β₄k⁴ [β₄k⁴ + 2sP₀/(1 − σ²k²)]) wherever the argument is positive,

which indicates that MI exists only when β₄k⁴ [β₄k⁴ + 2sP₀/(1 − σ²k²)] < 0; for β₄ > 0 this reduces to 2sP₀/(1 − σ²k²) + β₄k⁴ < 0. In the limit of local nonlinearity, i.e., R(x) = δ(x) and σ = 0, the growth rate is g(k) = √(−β₄k⁴ [β₄k⁴ + 2sP₀]).

MI when s = 1

Firstly, we focus on MI in self-focusing nonlocal Kerr media with s = 1. We display in Fig. 2 the MI gain spectra versus the wave number k and the quartic diffraction coefficient β₄. In the limit of local nonlinearity (σ = 0), as shown in Fig. 2a, there are two symmetric sidebands when β₄ < 0, and the bandwidth decreases when β₄ decreases. However, MI disappears when β₄ > 0. When the degree of nonlocality is weak (σ = 1), as shown in Fig. 2b, the sidebands appear regardless of whether the quartic diffraction is normal or anomalous. When β₄ < 0, the maximum of the growth rate increases as β₄ decreases, while the bandwidth remains constant. On the contrary, for β₄ > 0, as β₄ increases the bandwidth decreases while the maximum of the growth rate increases. Thus MI can be suppressed with smaller |β₄|. As shown in Fig. 2c, when the degree of nonlocality is σ = 4, both the maximum and the bandwidth of the growth rate decrease, which indicates that MI can be effectively suppressed by strong nonlocality.

Figure 3 illustrates the influence of P₀ on MI. In the case of β₄ > 0, the bandwidth and the maximum of the growth rate increase with increasing P₀, as shown in Fig. 3a. However, in the case of β₄ < 0, as shown in Fig. 3b, the maximum of the growth rate increases while the bandwidth remains constant when P₀ increases. Thus, increasing the optical intensity P₀ promotes MI regardless of whether the quartic diffraction is normal or anomalous. Furthermore, unlike for other nonlocal response functions 50, we find that the maximum of the growth rate always occurs at the particular wave number |k| = 1/σ, as shown in Figs. 2 and 3.

To verify the MI obtained by linear-stability analysis in self-focusing Kerr media with a sine-oscillatory nonlocal response function, we perform numerical simulations of Eq. (1) using the split-step Fourier method. A plane wave with a small periodic perturbation is used as the initial input, with amplitude ε = 10⁻⁴ and the wave number k of the perturbation chosen at the maximum of the growth rate.

For β₄ > 0, Fig. 4 shows the propagation dynamics of the perturbed plane wave in nonlocal self-focusing media with different parameters. The perturbation grows appreciably by the propagation distance z = 3 for β₄ = 0.01, P₀ = 1 and σ = 1, as displayed in Fig. 4a. When the degree of nonlocality increases (σ = 2), as shown in Fig. 4b, MI is suppressed significantly: almost no MI exists at z = 3, and the perturbation grows visibly only at z = 10. This result conforms to the conclusion from Fig. 2 that MI can be effectively suppressed by strong nonlocality. Figure 4c,d also confirm that MI is promoted by increasing β₄ and P₀, as illustrated in Figs. 2 and 3. Numerical simulations of the propagation of perturbed plane waves in the case of β₄ < 0 are displayed in Fig. 5.
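To make the analytic result reproducible, the following minimal sketch evaluates the growth rate from the dispersion relation written above (parameter values are illustrative, not those of the figures); the pole of R̂(k) at k = 1/σ is what pins the spectral maximum to that wave number.

```python
# Minimal sketch: evaluate g(k) = |Re(lambda)| from
# lambda^2 = -beta4*k^4*(beta4*k^4 + 2*s*P0/(1 - sigma^2*k^2)).
import numpy as np

def mi_gain(k, beta4, s, P0, sigma):
    Rhat = 1.0 / (1.0 - (sigma * k) ** 2)        # Fourier transform of R(x)
    lam2 = -beta4 * k**4 * (beta4 * k**4 + 2.0 * s * P0 * Rhat)
    return np.sqrt(np.maximum(lam2, 0.0))        # growth only where lam2 > 0

k = np.linspace(0.05, 3.0, 2000)                 # grid chosen to miss k = 1/sigma
g = mi_gain(k, beta4=0.01, s=1, P0=1.0, sigma=1.0)
print("gain peaks at k =", k[np.argmax(g)])      # expect k close to 1/sigma = 1
```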
Comparing Fig. 5a with Fig. 5b, as in the case of β₄ > 0, strong nonlocality also suppresses MI. It is also demonstrated that MI is enhanced when β₄ decreases and weakened when P₀ decreases, as shown in Fig. 5c,d. These numerical simulations are in complete agreement with the analytical results obtained by linear-stability analysis.

MI when s = −1

Subsequently, we study MI in nonlocal Kerr media with self-defocusing nonlinearity (s = −1). It is well known that MI in nonlocal self-defocusing media with second-order diffraction depends sensitively on the type of nonlocal response function 3, whereas the introduction of fourth-order diffraction makes it possible for MI to occur in nonlinear media with an arbitrary form of the nonlocal response function. Moreover, standard diffraction is always positive (normal) 18; quartic diffraction, by contrast, can be either positive or negative. We display the gain spectra of MI for different parameters in Fig. 6. In contrast to the case of self-focusing nonlinearity, in the limit of local nonlinearity (σ = 0), as shown in Fig. 6a, the sidebands of MI appear in the region β₄ > 0 and disappear in the region β₄ < 0. In the nonlocal case, as shown in Fig. 6b,c, the sidebands appear for arbitrary quartic diffraction coefficients, and the maximum of the growth rate increases when the absolute value of the quartic diffraction coefficient increases. The bandwidths remain invariant for anomalous diffraction (β₄ > 0) and decrease with decreasing β₄ for normal diffraction (β₄ < 0). Moreover, when the degree of nonlocality increases, both the maximum of the growth rate and the bandwidth of the MI spectra decrease, so the conclusion that MI is eliminated by strong nonlocality is again easily obtained.

Similarly, the impact of the power P₀ on the MI spectra in a self-defocusing medium is displayed in Fig. 7. The maximum of the growth rate always increases with increasing P₀ for both normal and anomalous quartic diffraction. The bandwidth remains constant for β₄ > 0 (Fig. 7a), whereas, as shown in Fig. 7b, in the region β₄ < 0 the bandwidth increases when P₀ increases. These results are opposite to the case s = 1. We also find that the growth rate is maximal at the wave number |k| = 1/σ.

Numerical simulations of the propagation of the perturbed plane wave (Eq. 19) are demonstrated in Figs. 8 and 9. Obviously, as shown in Figs. 8a,b and 9a,b, in both regions β₄ > 0 and β₄ < 0, strong nonlocality still suppresses MI effectively. Moreover, for β₄ > 0, MI is weakened with increasing β₄ and decreasing P₀, as shown in Fig. 8c,d; for β₄ < 0, MI is weakened when β₄ and P₀ decrease, as shown in Fig. 9c,d. These numerical results are also consistent with the analytical results obtained by linear-stability analysis (Figs. 6 and 7).

Conclusions

In conclusion, we have investigated MI of a one-dimensional plane wave in nonlinear Kerr media with a sine-oscillatory nonlocal response function and pure quartic diffraction. The growth rate of MI was obtained analytically by linear-stability analysis and confirmed numerically with the split-step Fourier transform. MI is sensitive to the degree of nonlocality, the coefficient of quartic diffraction, the type of nonlinearity, and the power of the plane wave. The maximum of the growth rate always occurs at the particular wave number |k| = 1/σ. Analytical and numerical results indicate that MI can be suppressed with the help of nonlocality and quartic diffraction.
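As a closing illustration of the numerical scheme, here is a minimal split-step Fourier sketch for the model equation under the normalization written above (i ψ_z + β₄ψ_xxxx + s ψ (R ⊗ |ψ|²) = 0), with the perturbed plane-wave input ψ(x, 0) = √P₀ [1 + ε cos(k_p x)] assumed for Eq. (19); all parameter values are illustrative.

```python
# Minimal split-step Fourier sketch for the quartic-diffraction nonlocal NLS
# (assumed normalization, consistent with the growth rate derived above).
import numpy as np

N, L = 4096, 256.0                       # grid points, box size; L/(2*pi) is not
dx = L / N                               # an integer, so k = 1/sigma is off-grid
x = (np.arange(N) - N // 2) * dx
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)

beta4, s, P0, sigma, eps = 0.01, 1, 1.0, 1.0, 1e-4
dk = 2 * np.pi / L
kp = (int(1.0 / (sigma * dk)) + 1) * dk  # on-grid mode just above k = 1/sigma
psi = np.sqrt(P0) * (1 + eps * np.cos(kp * x))   # perturbed plane wave, Eq. (19)

Rhat = 1.0 / (1.0 - (sigma * k) ** 2)    # spectral nonlocal response

dz, steps = 1e-3, 3000                   # propagate to z = 3
lin = np.exp(1j * beta4 * k**4 * dz)     # exact linear step in Fourier space
for _ in range(steps):
    psi = np.fft.ifft(lin * np.fft.fft(psi))
    conv = np.fft.ifft(Rhat * np.fft.fft(np.abs(psi) ** 2)).real  # R (*) |psi|^2
    psi *= np.exp(1j * s * conv * dz)    # nonlinear phase step

print("max intensity deviation from P0 at z = 3:",
      np.abs(np.abs(psi) ** 2 - P0).max())
```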