Effect of celecoxib in the treatment of burn-induced hypermetabolism

Abstract
Background: Cyclooxygenase-2 (COX-2) catalyzes the rate-limiting step of prostanoid biosynthesis. Under pathologic conditions, COX-2 activity can produce reactive oxygen species and toxic prostaglandin metabolites that exacerbate injury and metabolic disturbance. The present study was performed to investigate the effect of celecoxib (an inhibitor of COX-2) treatment on lipolysis in burned mice. Methods: One hundred male BALB/c mice were randomly divided into a sham group, a burn group, a celecoxib group, and a burn + celecoxib group (25 mice in each group). A full-thickness burn covering 30% of total body surface area (TBSA) was produced to model burn injury. Volume of oxygen uptake (VO2), volume of carbon dioxide output (VCO2), respiratory exchange ratio (RER), energy expenditure (EE), and COX-2 and uncoupling protein-1 (UCP-1) expression in brown adipose tissue (BAT) were measured in each group. Results: BAT activation was associated with increased mitochondrial biogenesis and UCP-1 expression in isolated interscapular BAT (iBAT) mitochondria. In addition, VO2, VCO2, EE, and COX-2 and UCP-1 expression were significantly higher in the burn group than in the burn + celecoxib group (P<0.05). Conclusion: BAT plays an important role in burn injury-induced hypermetabolism through its morphological changes and elevated expression of UCP-1. Celecoxib could attenuate lipolysis after burn injury.

Introduction
Adipocytes can be broadly divided into white and brown fat cells. White fat cells are specialized to store chemical energy, while brown adipocytes produce heat, counteracting hypothermia, obesity, and diabetes [1]. Brown fat relies on its high mitochondrial content and high levels of mitochondrial uncoupling protein 1 (UCP-1) to uncouple respiration and dissipate chemical energy as heat [2][3][4]. Small amounts of brown adipose tissue (BAT) may be found in the neck; in supraclavicular and axillary regions; in paravertebral, perirenal/adrenal, and paraventral regions; and around major vessels (the aorta and its main branches: the carotid, subclavian, intercostal, and renal arteries). BAT can also be found within white adipose tissue (WAT) and skeletal muscle [5]. Notably, histological studies in humans suggest that brown and white adipocytes are mixed together [6][7][8]. Brown fat cells are characterized by multilocular lipid droplets and an increased number of mitochondria that express UCP-1 [9]. UCP-1 is located on the inner mitochondrial membrane and uncouples substrate oxidation from ATP production by favoring the loss of protons and the subsequent release of energy [10]. Cyclooxygenase (COX) catalyzes the rate-limiting step of prostanoid biosynthesis. Two COX isoforms have been identified: COX-1, the constitutive form, and COX-2, the inducible form [11]. COX-2 is implicated in body fat regulation, but the underlying cellular mechanism remains to be elucidated [12]. In the present study, a burn injury model was constructed to explore the influence of celecoxib, an inhibitor of COX-2, on fat catabolism and hypermetabolism after burn, as well as the relevant molecular mechanisms.

Burn injury model and grouping
The present study was approved by the Subcommittee on Research Animal Care of the First Affiliated Hospital of PLA General Hospital.
A total of 100 BALB/c mice (male, 20 ± 3 g) were randomly divided into four groups: a sham-treated group (S), a burned group (B), a sham burn + celecoxib group (C), and a burn + celecoxib group (BC), with 25 mice in each group. The animal experiments were performed in the Animal Experimental Center of the First Affiliated Hospital of PLA General Hospital. Thermal injury was produced on a clean bench according to published protocols [13,14], with minor modifications. Each mouse was anesthetized with pentobarbital sodium (50 mg/kg body weight, i.p.). After the hair on the back of the trunk was clipped, the animal was placed in a mold exposing 30% of the total body surface area (TBSA), and the exposed area, which did not include the region containing BAT, was immersed in 90°C water for 9 s, producing a full-thickness, third-degree thermal injury of 30% TBSA. Sham burn animals were treated identically, except that they were immersed in room-temperature water. After burn or sham burn treatment, all animals immediately received fluid resuscitation with 40 ml/kg saline intraperitoneally. Mice in the sham burn + celecoxib group and the burn + celecoxib group received 1500 ppm celecoxib in normal saline by gavage [15,16]. All mice were caged individually throughout the study.

Measurement of rectal temperatures
Rectal temperatures of the mice in the four groups (S, B, C, and BC) were measured. After the animal was placed on a hard surface, the probe was inserted, and the rectal temperature was recorded once it had stabilized.

Histology
On post-burn day 10, mice were killed under pentobarbital sodium anesthesia. BAT lobules and contiguous or nearby normal WAT were excised from the posterior cervical-upper thoracic region and immersed in 10% formalin. After 24 h of fixation, the excised fat was examined, comparative changes were noted, and lobe sizes were measured. Tissues were then block sectioned, placed in cassettes, processed into paraffin blocks, microtome-sectioned at 6 μm, and stained with Hematoxylin and Eosin (H&E) for microscopic examination. H&E slides were evaluated microscopically for histological changes in BAT and adjacent WAT. Lipid content was estimated as the percentage of "clear areas" relative to the remaining areas of stained cellular components (nucleus and cytoplasm) and supporting connective tissue. A calibrated ocular grid was used over random fields, and percentages were calculated as statistical averages. This method was used in lieu of fat-stained frozen sections, which are cumbersome to evaluate and prone to staining artifacts, and provided comparable or better accuracy.

Transmission electron microscopy protocol
BAT was isolated from BALB/c mice of each group and washed twice with PBS. The tissues were then fixed with 2% glutaraldehyde for 4 weeks. Later, the tissues were sectioned at 6 μm using a microtome, and the ultrastructure, endoplasmic reticulum, and mitochondria of the cells were examined by transmission electron microscopy (TEM).

Measurement of energy expenditure via indirect calorimetry
Indirect calorimetry (TSE Systems) was performed for 24 h on the 7th day after burn treatment. The animals were fasted overnight, and no food was offered in the metabolic chamber during measurements. The metabolic chamber was controlled by a computer system. The rates of oxygen uptake (VO2) and carbon dioxide output (VCO2) were recorded, and the respiratory exchange ratio (RER) and energy expenditure (EE) were computed automatically.
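The instrument reports RER and EE automatically; purely as an illustration of how these quantities relate to the measured gas volumes, the minimal sketch below uses an abbreviated Weir-type equation and invented values. Neither the coefficients nor the numbers come from this study.

```python
# Minimal sketch: deriving RER and energy expenditure from gas-exchange data.
# The VO2/VCO2 values and the Weir-type coefficients are illustrative
# assumptions, not measurements or formulas reported in this study.

def respiratory_exchange_ratio(vco2_ml_h: float, vo2_ml_h: float) -> float:
    """RER is the ratio of CO2 produced to O2 consumed."""
    return vco2_ml_h / vo2_ml_h

def energy_expenditure_kcal_h(vo2_ml_h: float, vco2_ml_h: float) -> float:
    """Abbreviated Weir-type equation; gas volumes converted from mL/h to L/h."""
    vo2_l_h = vo2_ml_h / 1000.0
    vco2_l_h = vco2_ml_h / 1000.0
    return 3.941 * vo2_l_h + 1.106 * vco2_l_h

if __name__ == "__main__":
    vo2, vco2 = 3200.0, 2500.0  # hypothetical mL/h values for a mouse
    print(f"RER = {respiratory_exchange_ratio(vco2, vo2):.2f}")
    print(f"EE  = {energy_expenditure_kcal_h(vo2, vco2):.3f} kcal/h")
```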
Resting values for each parameter were defined as the 10th percentile of the raw data. The animals had free access to water during the measurements.

Isolation of iBAT mitochondria
iBAT was placed in buffer containing EGTA and cut into small pieces. The tissue was then homogenized in extraction buffer containing 5 mg/dl fatty acid-free BSA and centrifuged at 600 g for 5 min. The supernatant was centrifuged at 11000 g, and the pellet was resuspended in extraction buffer and centrifuged at 60 g for 5 min. The supernatant was centrifuged at 11000 g, and the pellet was suspended in a small volume of storage buffer containing 50 mM HEPES, pH 7.5, 1.25 M sucrose, 5 mM ATP, 0.4 mM ADP, 25 mM sodium succinate, 10 mM K2HPO4, and 5 mM DTT. All these procedures were performed in a cold room. The concentration of mitochondrial protein was determined by the bicinchoninic acid method at 4°C. The samples were kept in a -80°C freezer until protein analysis.

Measurement of BAT UCP-1 and COX-2 mRNA with quantitative real-time polymerase chain reaction
RNA was extracted from iBAT tissues using QUAZOL reagent (Gibco-BRL). Briefly, tissues were put into a screw-cap vial half-filled with zirconia beads (BioSpec). Tissue and beads were placed in a BeadBeater for 2 min. The lysate was transferred into a 2-ml tube, and RNA was purified using the Qiagen RNeasy kit after chloroform treatment. Specific primers for mouse UCP-1, COX-2, and β-actin (as a housekeeping gene) were designed by Invitrogen (California, U.S.A.). The primer sequences were as follows: UCP-1 forward: 5'-AGGGTTTGTGGCTTCTTTTC-3', reverse: 5'-TGGTTGGTTTTATTCGTGGT-3'; COX-2 forward: 5'-GTGCCTGGTCTGATGATGTATG-3', reverse: 5'-TGAGTCTGCTGGTTTGGAATAG-3'; β-actin forward: 5'-AGAGGGAAATCGTGCGTGAC-3', reverse: 5'-AGGAGCCAGGCAGTAATC-3'. Real-time RT-PCR quantified UCP-1 and COX-2 mRNA using the iCycler iQ Multicolor Real-Time PCR Detection System (Bio-Rad, Hercules, CA) and SYBR Green PCR Master Mix (Applied Biosystems, Foster City, CA). To normalize for variations in mRNA extraction and cDNA synthesis, the expression of β-actin, a housekeeping gene, was also measured. Thermal cycling conditions were an initial 94°C for 10 min, followed by 50 cycles of 94°C for 30 s, 60°C for 30 s, and 72°C for 30 s. Relative expression of UCP-1 and COX-2 was normalized to β-actin and calculated using the 2^(-ΔΔCt) method.

Statistical analysis
All results are presented as mean ± SEM. Differences in continuous variables between two groups were assessed with the unpaired t test. Nonlinear regression analysis was employed to identify correlations between continuous data. Two-way ANOVA was employed to compare data among three or more groups, and individual means were compared with Bonferroni adjustment. All statistics were performed using SPSS 17.0. Differences with a P value less than 0.05 were considered significant.

Morphological change in BAT induced by burn injury
We conducted tissue analyses to identify burn injury-associated iBAT activation. At the macroscopic level, iBAT was much darker in burned animals than in sham-treated controls. However, in sham burn animals, it was difficult to distinguish iBAT from the interscapular fat pad, which contains both iBAT and interscapular WAT (iWAT), based on gross observation. For further differentiation, we sectioned the tissues, stained them with H&E, and performed histological analysis. At both low- and high-power light microscopic levels (Figure 1A-D), two populations of adipocytes, white and brown, coexisted in iBAT in sham burn animals.
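As a brief aside on the relative-quantification step described in the Methods above, the sketch below illustrates the 2^(-ΔΔCt) calculation. All Ct values are invented for illustration only and are not data from this study.

```python
# Minimal sketch of the 2^(-ΔΔCt) relative-quantification step described in the
# Methods above. All Ct values are invented for illustration only.

def relative_expression(ct_target_sample, ct_ref_sample, ct_target_ctrl, ct_ref_ctrl):
    """Fold change of a target gene vs. a control group, normalized to beta-actin."""
    delta_ct_sample = ct_target_sample - ct_ref_sample    # delta-Ct, treated sample
    delta_ct_control = ct_target_ctrl - ct_ref_ctrl       # delta-Ct, control sample
    delta_delta_ct = delta_ct_sample - delta_ct_control   # delta-delta-Ct
    return 2 ** (-delta_delta_ct)

# Hypothetical Ct values: UCP-1 vs. beta-actin in a burned vs. a sham animal.
fold_change = relative_expression(22.1, 17.0, 25.3, 17.2)
print(f"UCP-1 fold change (burn vs. sham) = {fold_change:.2f}")  # 8.00
```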
Multilocular fat vacuoles were prominent in brown adipocytes of sham burn animals (Figure 1A,C) and occupied the majority of the cell volume. There was little Eosin-stained cytoplasm around the fat droplets, and the nucleus was peripherally located in each cell. In contrast, Eosin-stained cytoplasm was prominent in brown adipocytes from burned mice, and the nucleus was centrally located and surrounded by cytoplasm (Figure 1B,D). The size of the fat droplets in burned animals (Figure 1B,D) was much reduced compared with the typical multilocular fat vacuoles seen in sham burn animals (Figure 1A,C). Isolated interscapular BAT was weighed for each mouse. As shown in Figure 1E, BAT weight was significantly greater in the burned group (P<0.05), and celecoxib treatment significantly reduced BAT weight (P<0.05). Therefore, burn injury was associated with iBAT activation and an increased density of brown adipocytes in the interscapular area. The ultrastructure of brown adipocytes was also evaluated using TEM. In sham burn animals (Figure 2A), fat droplets occupied the majority of the cytoplasm, and small, round mitochondria were scattered in the cytoplasm. In burned animals (Figure 2B), fat droplets were relatively small and the cytoplasm was tightly packed with mitochondria. As illustrated in Figure 2C,D, morphometric analysis indicated that the ratio of fat droplet area to cytoplasmic area was significantly decreased after burn injury (68.41% vs. 14.41%, P<0.001). In addition, the number of mitochondria per brown adipocyte was increased after burn injury.

Effect of celecoxib on burn-induced hypermetabolism
The effect of celecoxib in reducing burn injury-induced hypermetabolism was further investigated in groups of sham burn and burned animals receiving continuous saline or celecoxib infusion via implanted osmotic pumps. We examined the metabolic rates of burned animals receiving 7 days of continuous celecoxib infusion. The results are summarized in Figure 3. Two-way ANOVA demonstrated that in sham burn animals, celecoxib treatment did not cause significant differences in VO2, VCO2, or EE. The analysis also revealed a significantly increased metabolic rate after burn injury.

Celecoxib treatment lowered the expression of UCP-1 and COX-2
The possible mechanism by which celecoxib reduces EE in burned animals was investigated through its effects on UCP-1 and COX-2 expression. Quantitative real-time polymerase chain reaction (qRT-PCR) analysis showed that, compared with the S (sham burn) group, the expression of COX-2 was significantly increased in the B (burn) group (P<0.05). Treatment with celecoxib markedly suppressed COX-2 expression in both the sham and burn groups (P<0.05 for both). Moreover, compared with the S group, the level of COX-2 did not change appreciably in the BC (burn + celecoxib) group (P>0.05), suggesting that celecoxib treatment could inhibit COX-2 expression during burn (Figure 4A). qRT-PCR analysis of UCP-1 showed that burn treatment induced the expression of UCP-1 and that celecoxib treatment suppressed UCP-1 expression (P<0.05 for both). Moreover, UCP-1 levels did not differ significantly between the S and BC groups (P>0.05), indicating that celecoxib treatment might completely suppress the UCP-1 activation induced by burn (Figure 4B). In addition, the protein levels of UCP-1 and COX-2 in mitochondria isolated from iBAT were estimated using Western blot. Representative blot images are shown in Figure 4C.
Quantitative analysis demonstrated that, compared with the S group, burn treatment (B group) significantly induced the expression of COX-2 (P<0.05). Celecoxib markedly inhibited COX-2 expression in both the sham and burn groups (P<0.05 for both). Moreover, the protein levels of COX-2 in the C and BC groups did not differ significantly (P>0.05), indicating that celecoxib treatment inhibited COX-2 expression during burn (Figure 4D). As for UCP-1 protein in mitochondria isolated from iBAT, the B group showed a significant increase compared with the S group (P<0.05). Celecoxib markedly inhibited UCP-1 expression in both the sham and burn groups (P<0.05 for both). Moreover, UCP-1 expression did not differ appreciably between the C and BC groups (P>0.05), suggesting an inhibitory effect of celecoxib treatment on UCP-1 expression during burn (Figure 4E).

Discussion
Two major observations were made in the two studies of this investigation. First, BAT was activated by burn injury, and this activation was associated with increased expression of UCP-1 and an augmented number of mitochondria. Second, the COX-2 inhibitor celecoxib attenuated burn injury-induced hypermetabolism, which was correlated with decreased expression of COX-2 and UCP-1. These observations clearly demonstrated that burn injury significantly increased resting EE on the 7th day after burn. It is worth mentioning that in study 2, animals underwent surgical procedures for the implantation of catheters and celecoxib delivery pumps; however, both burned groups showed similar increments in EE compared with sham burn animals, indicating that these surgical procedures did not exacerbate hypermetabolism on the 7th day following burn injury. Thus, the observed alterations in metabolic rate reflected hypermetabolism induced by burn injury alone. Our previous study [17] revealed morphological changes in BAT after burn injury. The present study further explored those changes through immunohistochemistry and TEM, confirming that burn injury increased BAT mitochondria. All the observed changes were correlated with increased mitochondrial biogenesis and lipolysis. Increased mitochondrial biogenesis and lipolysis in BAT might be a mechanism underlying the development of burn injury-induced hypermetabolism. The mechanisms of increased BAT energetics were further studied by measuring UCP-1 and COX-2 expression in mitochondria isolated from BAT. In the present study, we observed a significant alleviation of hypermetabolism following celecoxib treatment in burned animals. The present study also revealed that the reduction in resting EE was correlated with a reduction in brown adipocytes and reduced UCP-1 expression. These findings provide further evidence that BAT and UCP-1 are associated with the burn injury-induced hypermetabolic state. The mechanism underlying the effect of celecoxib on energy metabolism can be explained by its role in modulating mitochondrial function after thermal injury. A significant increase in superoxide levels following burn injury and oxidative damage to tissues are implicated in inflammation, systemic inflammatory response syndrome, severe injury, infection, sepsis, and multiple organ failure. Recent studies have demonstrated that superoxide induces the uncoupling process in mitochondria and that uncoupling is correlated with UCP-1 expression in different tissues, but not in those that do not express UCPs, such as liver [18][19][20].
The expression of UCP-1 in BAT occurs in mitochondria and is a nucleotide-sensitive process [21]. Mitochondria-targeted antioxidants can abolish superoxide-induced uncoupling by lowering UCP-1 expression [22]. Celecoxib is a scavenger of reactive oxygen species (ROS) that ameliorates lipid peroxidation, reduces mitochondrial ROS levels, inhibits the mitochondrial permeability transition, and prevents the swelling of isolated mitochondria [23,24]. In addition, COX-2 is an essential factor for UCP-1 synthesis and is required to induce the transformation of white adipocytes into brown adipocytes. Celecoxib is an inhibitor of COX-2, and treatment with celecoxib could suppress the expression of COX-2 and UCP-1, thereby inhibiting BAT activation and hypermetabolism. Taken together, celecoxib reduced burn injury-induced hypermetabolism, an effect mediated by inhibition of superoxide-induced UCP-1 expression in BAT. In recent years, BAT has become a target tissue in developing strategies for treating diseases associated with hypometabolic states such as diabetes and obesity [25]. The present study demonstrated that BAT might also play a role in burn injury-induced hypermetabolism. Therefore, BAT might be a potential target for treating burn injury-induced hypermetabolism. Ultrastructural analysis of BAT could serve as an indicator of treatment response in both hypo- and hypermetabolic diseases and injuries. In conclusion, our studies have demonstrated that burn injury-induced hypermetabolism is associated with the activation of BAT, with significant up-regulation of UCP-1 expression and mitochondrial biogenesis. The inhibition of this hypermetabolic state by celecoxib may be related to reduced mitochondrial UCP-1 expression. Therefore, altered mitochondrial function and an increased uncoupling process are likely important contributors to burn injury-induced hypermetabolism. In the future, alteration of BAT could be a therapeutic target for reducing hypermetabolism and the associated protein wasting in the metabolic care of severely burned patients. Limitations of the present study should be noted. First, the sample size was not large. Second, the influence of celecoxib treatment on the ultrastructure of brown adipocytes was not assessed by TEM, which might reduce the reliability of the final results. Third, the mechanism by which celecoxib treatment acts on burn-induced hypermetabolism was not fully explored. In addition, the influence of celecoxib on COX-1, the constitutive form of COX, was not investigated. Furthermore, celecoxib could inhibit UCP-1 expression through multiple pathways, but whether other COX-2 inhibitors can suppress the expression of UCP-1 was not evaluated. Therefore, further studies are needed to verify our findings and investigate the relevant mechanisms.
N-Type Mg3Sb2-xBix Alloys as Promising Thermoelectric Materials

N-type Mg3Sb2-xBix alloys have been extensively studied in recent years due to their significantly enhanced thermoelectric figure of merit (zT), promoting them as potential candidates for waste heat recovery and cooling applications. In this review, the effects resulting from alloying Mg3Bi2 with Mg3Sb2, including a narrowed bandgap, decreased effective mass, and increased carrier mobility, are summarized. Subsequently, defect-controlled electrical properties in n-type Mg3Sb2-xBix are revealed. On one hand, manipulation of intrinsic and extrinsic defects can achieve optimal carrier concentration. On the other hand, Mg vacancies dominate the carrier-scattering mechanisms (ionized impurity scattering and grain boundary scattering). Both aspects are discussed for Mg3Sb2-xBix thermoelectric materials. Finally, we review the present status of, and future outlook for, these materials in power generation and cooling applications.

Introduction
Thermoelectric technology, which can achieve reversible conversion between electricity and heat, holds great potential for alleviating the energy and environmental crises [1,2]. However, large-scale commercialization of thermoelectric technology has yet to be implemented, mainly due to the low energy-conversion efficiency of existing thermoelectric materials. The thermoelectric energy-conversion efficiency is contingent on the material's dimensionless figure of merit zT = S^2σT/(κe + κl), where S is the Seebeck coefficient, σ is the electrical conductivity, T is the absolute temperature, κe is the electronic thermal conductivity, and κl is the lattice thermal conductivity [3][4][5][6]. Currently, advancements have been achieved in many kinds of thermoelectric materials, such as lead chalcogenides [7,8], SnSe [9][10][11], and half-Heuslers [12,13], at medium and high temperatures. However, progress on near-room-temperature materials has been sluggish. The Bi2Te3-based compounds, discovered in the 1950s, have remained the state-of-the-art thermoelectric materials around room temperature for several decades [14,15]. However, these materials are still not widely applied in viable thermoelectric applications due to the high cost of tellurium (Te) and some unresolved engineering issues (e.g., high contact resistance between the contact materials and the thermoelectric legs when nanostructured materials are considered for making the modules). Recently, the n-type Mg3Sb2-xBix alloys have attracted significant attention because of their promising thermoelectric performance and good mechanical properties, the abundance and low cost of their constituent elements, etc. Mg3Sb2 has a CaAl2Si2-type crystal structure, which consists of an octahedrally coordinated cation Mg2+ layer and a tetrahedrally coordinated anionic (Mg2Sb2)2- structure that form a nearly isotropic three-dimensional (3D) chemical bonding network with an interlayer bond that is mostly ionic and partially covalent (Figure 1(a)) [16]. These crystallographic characteristics lead to decent electrical properties, intrinsically low lattice thermal conductivity, and good mechanical properties. Mg3Sb2-xBix alloys have long been regarded as persistent p-type semiconductors, and their n-type counterparts were considered impossible to synthesize, which is attributed to the negatively charged Mg vacancies that pin the Fermi level near the valence band [17][18][19].
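As a brief aside, the figure-of-merit definition given in the Introduction translates directly into a short calculation. The sketch below uses illustrative values broadly typical of a near-room-temperature thermoelectric; they are assumptions, not data from any particular study reviewed here.

```python
# Minimal sketch of the figure of merit zT = S^2 * sigma * T / (kappa_e + kappa_l)
# defined in the Introduction. All input values are illustrative assumptions.

def figure_of_merit(seebeck_v_per_k, sigma_s_per_m, kappa_e, kappa_l, temperature_k):
    """Dimensionless thermoelectric figure of merit zT."""
    power_factor = seebeck_v_per_k ** 2 * sigma_s_per_m        # W m^-1 K^-2
    return power_factor * temperature_k / (kappa_e + kappa_l)  # dimensionless

zt = figure_of_merit(
    seebeck_v_per_k=-200e-6,   # -200 uV/K (n-type)
    sigma_s_per_m=8e4,         # 8 x 10^4 S/m
    kappa_e=0.4,               # W m^-1 K^-1
    kappa_l=0.8,               # W m^-1 K^-1
    temperature_k=300.0,
)
print(f"zT = {zt:.2f}")  # ~0.80 with these assumed values
```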
This was the case until n-type Mg3Sb2-xBix with high thermoelectric performance was reported by Tamaki et al. [17,21,22]. Since the discovery of n-type Mg3Sb2-xBix, notable advancements have been made, and its state-of-the-art average zT has been raised to ~1.1 in the range of 300-500 K, comparable to that of the Bi2Te3-based materials [23][24][25][26][27][28][29]. This review focuses on these n-type Mg3Sb2-xBix alloys with promising thermoelectric performance. We first summarize the effects of alloying Mg3Sb2 with Mg3Bi2 on the band structure (e.g., bandgap, effective mass, and carrier mobility). The defect-controlled electronic transport in Mg3Sb2-xBix thermoelectric materials will then be discussed, including defect-chemistry-inspired dopant exploration and the defect-induced near-room-temperature shift in the carrier-scattering mechanism. Furthermore, promising applications in power generation and cooling are also discussed. The strategies mentioned here are believed to be equally applicable to many other thermoelectric materials. Some ideas for possible further improvement of the thermoelectric performance of n-type Mg3Sb2-xBix materials are also presented.

Electronic Structure
Alloying Mg3Sb2 with Mg3Bi2 has a significant impact on the thermoelectric transport properties and band structures of the alloys. Zhang et al. [30] calculated the band alignments of Mg3Sb2-xBix alloys and found that Mg3Bi2 alloying results in a moderate increase in the energy separation between the conduction band minima K and CB1, decreasing the contribution of the secondary band minimum K to the electrical transport. Since Mg3Bi2 is a semimetal [31] and Mg3Sb2 is a semiconductor, the bandgap of Mg3Sb2-xBix is reduced with increasing Mg3Bi2 content (Figure 1(b)), leading to an enhanced bipolar contribution for the Bi-rich compositions [23,32]. Thus, such compositions are not suitable for applications at higher temperatures. Considering the empirical trend relating the bandgap to the application temperature range, room-temperature thermoelectric materials exhibit similar bandgaps, so the bandgap of Bi2Te3-xSex provides a hint for choosing Mg3Sb2-xBix compositions with the proper Bi/Sb ratios [32]. In addition, the effective mass is reduced with increasing Mg3Bi2 concentration [31]. Theoretically, with increasing Bi content in Mg3Sb2-xBix, the density-of-states effective mass (md*) is reduced from ~1.53 m0 (Mg3Sb2) to ~1.23 m0 (Mg3SbBi) to ~0.87 m0 (Mg3Bi2), based on simulations with the BoltzTraP software package including spin-orbit coupling (SOC) (300 K, carrier concentration ~4 × 10^19 cm^-3), leading to a smaller Seebeck coefficient and higher carrier mobility [31]. Such a trend has been verified experimentally, although the measured values seem to be lower than the theoretical calculation, as shown in Figure 1(c). It is clear that Bi alloying significantly reduces the density-of-states effective mass, indicating that it is an effective strategy to enhance the carrier mobility of Mg3Sb2-xBix alloys. Therefore, the alloying concentration of Mg3Bi2 is crucial for balancing the carrier mobility and the Seebeck coefficient, as well as the bipolar effect. Pan et al.
[33] showed the band evolution from Mg3Bi2 to Mg3Sb2 through angle-resolved photoemission spectroscopy (ARPES) combined with density functional theory (DFT) calculations, which also indicated the effectiveness of adjusting the Bi/Sb ratio in improving thermoelectric performance.

Chemical Doping
Defect chemistry has been widely investigated in thermoelectric Zintl compounds in order to understand their intrinsic defects and to explore effective extrinsic dopants that can optimize their electronic transport properties [36][37][38]. In Mg3Sb2-xBix alloys, native Mg vacancies, caused by the low defect formation energy and the high vapor pressure of Mg, result in p-type conduction and abnormal electronic transport behavior near room temperature. Recent studies have shown that adding excess Mg can suppress the formation of such vacancies, leading to a reduction in hole concentration and, further, to n-type conduction behavior [22]. However, due to the intrinsic doping limit, the electron concentration achieved is only ~10^18 cm^-3, which is significantly lower than the optimal carrier concentration (~10^19 cm^-3) needed to maximize the zT. Thus, further optimization of the electron concentration via extrinsic doping at the Mg or Sb/Bi sites is especially necessary in this case. Gorai et al. [39,40] used first-principles defect calculations to study n-type doping strategies for Mg3Sb2-xBix alloys, including (i) Sb substitution by monovalent (Br, I) or divalent (Se, Te) anions, (ii) Mg substitution by trivalent or higher-valence cations (La, Y, Sc, Nb), and (iii) insertion of cation interstitials (Li, Zn, Cu, Be), which are represented by black spheres and denoted by i(1), i(2), and i(3) in Figure 2(a). The chemical trends of various dopants have been revealed in terms of their solubility and maximum achievable electron concentration, and the discussion here mainly focuses on Sb and Mg substitution. For the Sb substitution strategy, the defect formation energy near the conduction band minimum is lower for TeSb than for SeSb under the Mg-rich condition (Figure 2(b)), indicating that Te may have a higher doping limit and greater efficiency, both of which have been confirmed experimentally [20,35,41]. On the other hand, substitution by La, Y, and Sc at the cation site has also been explored. It has been found that the defect formation energies of LaMg(1), YMg(1), and ScMg(1) are each lower than that of TeSb, indicating that Mg substitution is even more effective than Sb substitution by Se or Te. The predicted carrier concentration in (La, Y, Sc)-doped Mg3Sb2 could exceed 10^20 cm^-3. The relationship between the dopant concentration and the measured electron concentration of Mg3Sb2-xBix for different dopants, i.e., La [42], Y [43], Sc [34], Se [35,44], and Te [45], is illustrated in Figure 2(c). For each dopant, the carrier concentration gradually saturates at a given value with increasing doping level, which is slightly different from the theoretical predictions (dashed lines). This may be closely related to the limited solubility of dopants in Mg3Sb2-xBix alloys. Additionally, the optimal carrier concentration for power generation is in the range of ~3-5 × 10^19 cm^-3 (slightly lower for cooling), and such carrier concentrations can be achieved by doping with Te, Y, Sc, and La.
Actually, most studies reported thus far have focused on how to improve the zT value while ignoring the structural origin: e.g., how the electronic and atomic structures of the alloys, including the chemical bonding and the chemical state, evolve after introducing the dopant; how the band structures vary due to doping; and whether a chemical reaction occurs at high temperature. Such a lack of structural understanding limits further improvement in the thermoelectric performance of the Mg3Sb2-xBix alloys. Additionally, it should be noted that dopants may affect the thermal stability of the n-type Mg3Sb2-xBix alloys, with studies suggesting that performance degrades during long-term operation at high temperatures (≥673 K) and that cation-site doping (Y, La, Yb, etc.) replacing excess Mg may improve thermal stability and delay such decline in the thermoelectric properties [42,46,47]. This can be explained by the changed defect energetics and fewer Mg deficiencies. Considering the difference in vapor pressure between Mg and Bi/Sb, the decreasing thermal stability has been attributed to significant Mg loss (defects) at high temperature [48]. Cation-site doping can effectively eliminate Mg deficiencies and improve the thermal stability. On the other hand, applying a coating (such as boron nitride) to the surfaces of the Mg3Sb2-xBix alloys can also effectively improve their thermal stability, since such a coating prevents Mg loss. Thus, both cation-site doping and coating technology are beneficial for improving thermal stability and promoting practical applications, especially power generation at elevated temperatures.

Manipulating the Carrier-Scattering Mechanism
In addition to tuning the carrier concentration, suppression of Mg vacancies in n-type Mg3Sb2-xBix can also be employed to manipulate the carrier-scattering mechanism, thereby enhancing carrier mobility and improving the zT, which is particularly significant near room temperature. By exploring the Hall carrier mobility (μH)-temperature (T) relation, ionized impurity scattering was found to dominate the electron transport around room temperature, resulting in low carrier mobility [45]. In order to reduce Mg vacancies and suppress ionized impurity scattering in Mg3.2Sb1.5Bi0.49Te0.01, Mao et al. [25] introduced transition-metal elements (Fe, Co, Hf, Ta) into the material matrix, eventually increasing the room-temperature carrier mobility from ~16 cm^2 V^-1 s^-1 to ~81 cm^2 V^-1 s^-1 (Figure 3(a)). Similarly, other transition-metal elements, such as Nb [24] and Mn [5,32,44], have also been shown to have a dominant effect in shifting the scattering mechanism from ionized impurity scattering to a mixture of ionized impurity scattering and acoustic phonon scattering around room temperature. Additionally, since defects are highly sensitive to preparation conditions, Mao et al. [50] reported that manipulating the hot-pressing temperature could also tune the carrier-scattering mechanism and thereby substantially enhance the carrier mobility of Mg3.2Sb1.5Bi0.49Te0.01. On the other hand, grain boundary scattering has also attracted increasing attention as a carrier-scattering mechanism, in addition to ionized impurity scattering, because samples with large grain size have been shown to exhibit higher carrier mobility, which is particularly noticeable around room temperature [51,52].
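As a brief aside on the mobility-temperature analysis mentioned above, the dominant scattering mechanism is commonly inferred from the exponent s in μH ∝ T^s: mobility that rises with temperature (roughly s ≈ +1.5) points to ionized-impurity- or grain-boundary-limited transport, whereas acoustic-phonon-limited mobility falls with temperature (roughly s ≈ -1.5). The data points in the sketch below are invented for illustration.

```python
# Minimal sketch: estimating the exponent s in mu_H ~ T^s from Hall mobility data.
# The data points are invented; only the fitting procedure is illustrated.

import numpy as np

temperature_k = np.array([300.0, 350.0, 400.0, 450.0, 500.0])
mobility_cm2_vs = np.array([20.0, 26.0, 31.0, 36.0, 41.0])  # hypothetical mu_H values

# Fit log(mu) = s*log(T) + const; the slope s is the scattering exponent.
s, _ = np.polyfit(np.log(temperature_k), np.log(mobility_cm2_vs), 1)
print(f"fitted exponent s = {s:.2f}")  # ~ +1.4, consistent with ionized impurity scattering
```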
The Mg3.2Sb1.5Bi0.49Te0.01 samples prepared at a higher sintering temperature show noticeably enlarged grain size as well as higher electrical conductivity (Figure 3(b)). For example, the room-temperature electrical conductivity is ~4 × 10^4 S m^-1 for the sample with an average grain size of ~7.8 μm, and ~1 × 10^4 S m^-1 for the sample with an average grain size of ~1.0 μm [53]. Similarly, the grain size of Mg3Sb2-xBix alloys was increased by annealing [54] or hot deformation [27,34,55], and an improvement in mobility was also observed. It should be noted that, in addition to the increased grain size, defects are also reduced by increasing the sintering temperature or by annealing. Thus, in these cases, ionized impurity scattering was also reduced, eventually leading to the increased electrical conductivity. Kuo et al. explored the defect compositions near the grain boundary of Mg3.05Sb1.99Te0.01 (nominal composition) using 3D atom-probe tomography (APT) (Figure 3(c)), in which a planar defect with a maximum Mg deficiency of 5 at.% is clearly noticeable (as marked by the arrow) [56]. As discussed above, an Mg deficiency could easily induce a high Mg vacancy (VMg^2-) concentration in the vicinity of the boundary and result in the depletion of free n-type carriers, since VMg^2- serves as an effective electron-killing defect (Figure 3(d)). Single-crystal n-type Mg3Sb2 was therefore grown and used to investigate the underlying charge-scattering mechanism [33,57,58]. As indicated in Figure 3(e), acoustic phonon scattering dominates the charge transport in the single-crystal sample, which lacks grain boundary electrical resistance, resulting in the sample's significantly increased weighted mobility near room temperature. This may support the proposition that grain boundary scattering dominates the carrier transport of n-type Mg3Sb2-xBix alloys in the near-room-temperature range, but it does not exclude ionized impurity scattering in samples that contain many defects. Actually, in comparison to polycrystalline Mg3Sb2-xBix, not only grain boundaries but also defects are reduced in the single-crystal sample. Thus, additional work is needed to clarify the carrier-scattering mechanism, which is also crucial for further improving the thermoelectric performance of n-type Mg3Sb2-xBix.

Power Generation and Cooling Applications
Mg3Sb2-xBix alloys have shown promise for applications in power generation and cooling due to their high performance. Generally, the Sb-rich compositions (Mg3Sb2-based alloys) are promising for power generation at medium temperature, although they may lack good stability due to Mg loss at high temperature (≥673 K). For example, Zhu et al. [59] reported that the conversion efficiency of Mg3.1Co0.1Sb1.5Bi0.49Te0.01 could reach ~10.6% at a temperature difference of 400 K in the range from 300 K to 700 K, suggesting good potential for mid-temperature heat conversion. The Bi-rich compositions (Mg3Bi2-based materials), on the other hand, show more potential for cooling applications. In this case, concerns regarding thermal stability can be ignored due to the low temperature range. Mao et al.
[23] reported that optimized Mg3.2Sb0.5Bi1.498Te0.02 exhibits a room-temperature zT of more than 0.7 and that a unicouple of Mg3.2Sb0.5Bi1.498Te0.02 and Bi0.5Sb1.5Te3 achieves a large temperature difference of ~91 K at a hot-side temperature of 350 K, comparable to that of commercial coolers based on the Bi2Te3 alloys. Imasato et al. [26] also fabricated n-type Mg3Sb0.6Bi1.4 with a zT of 1.0-1.2 at 400-500 K, which surpasses that of n-type Bi2Te3. Furthermore, Mg3Sb2-xBix alloys are inexpensive compared with Bi2Te3-based materials because they minimize the need for expensive elemental Te, largely reducing the material cost. In addition, unlike the nanostructured n-type Bi2Te3-based materials that suffer from high contact resistance between the thermoelectric legs and the electrodes, such contact resistance can be greatly reduced for Mg3Sb2-xBix by forming a sandwiched structure of Fe/Mg3Sb2-xBix/Fe. All of these examples show the great potential of Mg3Sb2-xBix alloys to become good candidates to replace traditional Bi2Te3, promoting their application in thermoelectric technology. In particular, the high cooling performance of Mg3Bi2-based alloys inspires researchers to explore these semimetals as potential thermoelectric materials for cooling.

Conclusions
In summary, strategies such as alloying, defect-controlled carrier-concentration optimization, and manipulation of the carrier-scattering mechanism have been successfully used to improve the thermoelectric performance of Mg3Sb2-xBix alloys. Further research efforts are warranted to explore other effective and inexpensive dopants for wider temperature applications such as power generation and solid-state cooling, including the structural variation induced by these dopants, and to develop effective strategies to improve thermal stability. In addition, the carrier-scattering mechanism needs to be clarified in the near future (whether ionized impurity scattering or grain boundary scattering better explains the dramatic increase in mobility around room temperature) in order to further enhance the zT. Even so, Mg3Sb2-xBix alloys show great potential for power generation and cooling applications.
In Vitro Toxic Effect of Biomaterials Coated with Silver Tungstate or Silver Molybdate Microcrystals

Purpose. This study evaluated the cytotoxicity of antimicrobial silver tungstate (Ag2WO4) or silver molybdate (Ag2MoO4) microcrystals coating biomaterials. Materials and Methods. The coating procedure was performed on titanium, zirconia, and acrylic resin specimens. Eluates of the coated specimens were obtained and used for cytotoxicity analyses, including Alamar Blue, MTT, and CytoTox-ONE tests. Data were analyzed using two-way ANOVA, followed by the Tukey test (α = 0.05). The results of each experimental group were also compared to those of the living-cell control, taken as 100% cell viability. Results. In general, the percentage of living cells from all biomaterials coated with either microcrystal was statistically different from that of the uncoated sample groups, except for the MTT results of Ti specimens coated with Ag2MoO4. All uncoated biomaterials were classified as noncytotoxic by the three assays used in the present study. The Alamar Blue assay showed that the microcrystals in solution were strongly cytotoxic, killing almost 100% of cells. Conclusion. Most biomaterials coated with either microcrystal showed some degree of cytotoxicity in the different assays. The results described herein should be seen as an alert regarding the use of these microcrystals, which can expose patients to health risks.

Introduction
Several therapies have been proposed to prevent and treat microbial infections, including those caused by biofilms. In general, microbial biofilm formation is a multistep growth process involving pretreatment of the substrate through the formation of a layer called the conditioning film, cell attachment, cell colonization, and extracellular matrix formation. Moreover, biofilm formation can result in tolerance of microorganisms to high concentrations of various antimicrobials. This fact has important clinical relevance because, even in the presence of treatment, resistant biofilms can cause chronic infections [1]. Therefore, rather than only treating infections, effective therapies to prevent biofilm formation on the surfaces of implanted and restorative materials are considered an essential measure against biofilm-dependent diseases. Surfaces with antimicrobial properties are highly desirable in applications requiring a protective barrier against infection. In this context, coating surfaces with nanoparticles or microcrystals has been adopted [2][3][4][5][6]. In medicine and dentistry, different biomaterials, such as polymethylmethacrylate, ceramics, and titanium, can be coated with nanoparticles to improve their antimicrobial properties, especially in hindering the adhesion and proliferation of microorganisms [7][8][9][10][11][12]. Silver nanoparticles (AgNPs) have been shown to have significant antimicrobial activity against planktonic cells and biofilms of Candida glabrata, Candida tropicalis, Staphylococcus aureus, and methicillin-resistant Staphylococcus aureus (MRSA) [13][14][15][16][17]. Recent studies have shown the antimicrobial effect of Ag in microcrystal form [18][19][20][21]. The ability of Ag2WO4 to combat Candida albicans is related to the imperfect and crystalline patterns of atom arrangements in its orthorhombic structure and to its good photocatalytic capacity under visible light [21].
Also, Ag particles can increase interaction with and penetration into the cell membrane and, consequently, their antimicrobial activity [13,16,17]. The same mechanisms involved in killing microorganisms via Ag particles may cause human cell death, limiting their clinical application. Studies have revealed that the consequences of using Ag nanoparticles include potential changes in cognitive, sensory, and motor functions, which result in brain and liver damage [22,23]. Thus, it is paramount to perform biocompatibility studies to elucidate this issue. In spite of the expanding application of microcrystals within dentistry as antimicrobial agents, the biological responses to these new treatments have been insufficiently evaluated [22][23][24]. Thus, the aim of this study was to evaluate the cytotoxicity of extracts from silver tungstate (Ag2WO4)- or silver molybdate (Ag2MoO4)-coated biomaterials (titanium, zirconia, and acrylic resin) using a human cell culture method.

Materials and Methods
The synthesis and characterization by X-ray diffraction (XRD) of the Ag2WO4 and Ag2MoO4 microcrystals were performed in the Functional Materials Development Center, directed by Professor Elson Longo, and published in a previous study [25]. Likewise, the shapes and sizes of the Ag2WO4 and Ag2MoO4 microcrystals were observed by field emission scanning electron microscopy (FE-SEM), using the same methodology described by Santana et al. [25]. Thirty-six discs each of titanium (Ti), zirconia (Zi), and acrylic resin (AR) were prepared (8 mm in diameter and 2 mm in thickness) and distributed into the experimental groups as described in Figure 1. Pure titanium discs were donated by Conexão Sistemas de Prótese (Arujá, SP, Brazil). Zirconia Lava™ (3M ESPE, Saint Paul, MN, USA) discs were prepared using the Ceramill Motion software system. Acrylic resin specimens were prepared in a metal matrix for discs with the denture base resin Vipi Wave (VIPI Indústria e Comércio Exportação e Importação de Produtos Odontológicos Ltda, Pirassununga, SP, Brazil), according to the manufacturer's instructions. All AR samples were kept in distilled water at 37°C for 48 hours to release residual monomer [26][27][28][29]. The coating of microcrystals onto the discs was performed using a precipitation technique. Suspensions of Ag2WO4 or Ag2MoO4 microcrystals, at a concentration of 1 mg/mL each in isopropyl alcohol, were subjected to ultrasound for 20 minutes. Then, 5 μL of each suspension was dripped onto the upper surface of the discs. After drying for 20 minutes, the dripping was repeated 5 times. After deposition of all layers, the samples were heat treated at 250°C (Ti and Zi) or 100°C (AR) for 2 hours. Scanning electron microscopy (SEM) analysis was carried out to demonstrate the deposition of microcrystals onto the disc surfaces. To analyze the cytotoxic effect, the coated biomaterials were subjected to procedures to obtain eluates of soluble substances. Specimens were ultrasonically cleansed in distilled water for 20 minutes and kept for 20 minutes under ultraviolet light to prevent microbial contamination [26][27][28][29]. Then, three specimens of each biomaterial (coated or not with the different microcrystals) were placed in polypropylene tubes with 2.5 mL of DMEM culture medium and incubated (37°C/24 hours) [26][27][28][29]. Another tube containing only 2.5 mL of culture medium was stored under the same conditions, serving as a negative control group.
Human keratinocytes (HaCaT cell line 0341) were acquired from the Cell Bank of Rio de Janeiro (Rio de Janeiro, RJ, Brazil). HaCaT cells were grown in 75 cm^2 plastic flasks with DMEM culture medium containing 10% fetal bovine serum (FBS), 2.0 mmol/L L-glutamine, 10,000 μg/mL penicillin G, 10,000 μg/mL streptomycin, and 25 μg/mL amphotericin (Sigma-Aldrich, Saint Louis, Missouri, USA), at 5% CO2/37°C. For maintenance, the cells were cultured until they reached confluence (90%), washed with 1X PBS phosphate buffer (140 mmol/L NaCl, 3.0 mmol/L KCl, 4.3 mmol/L Na2HPO4, and 1.4 mmol/L KH2PO4), detached with trypsin solution (0.05%, containing 0.53 mmol/L EDTA), and then subjected to centrifugation (400 × g/5 min). Next, the cells were resuspended in DMEM culture medium and counted in a Neubauer chamber. The cells were plated at 1.5 × 10^4 cells/well in sterile 96-well plates and incubated (5% CO2/37°C). After 24 hours, the cells were exposed to the eluates and incubated for another 24 hours. The exposed cells were subjected to three cytotoxicity assays. In addition, cell morphology was observed with an inverted optical microscope (Model 403, Optiphase, Van Nuys, CA, USA). The MTT (3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide) assay was carried out to assess cellular metabolism. The HaCaT cells exposed to eluates from coated materials were washed with 150 μL of 1X PBS. Next, 150 μL of MTT solution (5.0 mg/mL; Sigma-Aldrich, Saint Louis, Missouri, USA) was added to each well of a 96-well microplate, followed by incubation (37°C/5% CO2/4 hours in the dark). Then, the formazan crystals were solubilized in 75 μL of 2-propanol acidified with 0.04 N HCl. After stirring and checking the homogeneity of the solutions, the absorbance was read at 570 nm in a Microplate Reader EZ 400 (Biochrom, Cambourne, Cambridge, UK). Cell proliferation of the keratinocytes was also assessed using the Alamar Blue® assay. The HaCaT cells exposed to eluates were washed with 150 μL of 1X PBS. Then, an aliquot of 150 μL of diluted Alamar Blue solution (10% Alamar Blue solution plus 90% DMEM with 10% FBS) (Molecular Probes, Invitrogen Corporation, Waltham, Massachusetts, USA) was added to each well of a 96-well microplate, followed by incubation (37°C/5% CO2/4 hours). After this period, the contents of the wells were transferred to a sterile black flat-bottom 96-well plate, and fluorescence was read immediately using a Fluoroskan Ascent FL (Thermo Fisher Scientific, Marietta, Ohio, USA), with excitation at 544 nm and emission at 590 nm. The CytoTox-ONE™ reagent test measures the lactate dehydrogenase (LDH) released from cells with damaged membranes; the production of fluorescent product is proportional to the amount of LDH released. The cell culture plate was removed from the CO2 incubator, and 3 μL of lysis solution was added to each well. The plate was then kept for 30 minutes in an incubator at 22°C. After this period, 150 μL of CytoTox-ONE reagent (Homogeneous Membrane Integrity Assay, Promega, Madison, WI, USA) was added to each well, and the plate was incubated again at 22°C for 10 minutes in the absence of light. Then, 75 μL of stop solution was added to all wells, and the contents were gently homogenized and transferred to a new sterile black flat-bottom 96-well plate.
The fluorescence was read immediately using the Fluoroskan Ascent FL, with excitation at 544 nm and emission at 590 nm. The assays were performed in three separate experiments. Besides the assessment of the cytotoxicity of the biomaterials after coating with both microcrystals, the cytotoxicity of Ag2WO4 and Ag2MoO4 microcrystals in solution was also analyzed. For the preparation of the solutions, each sintered powder, after sterilization, was dispersed in sterile distilled water in Falcon tubes at a final concentration of 1 mg/mL. To perform the cytotoxicity assay, serial dilutions in DMEM culture medium with 10% FBS were made to reach concentrations of 0.25 mg/mL and 0.125 mg/mL for both microcrystals. These concentrations were selected according to the minimum inhibitory concentrations (MIC) determined in previous microbiological tests [18,19,30]. The cells (HaCaT cell line 0341) were plated at 1.5 × 10^4 cells/well in sterile 96-well plates and incubated (5% CO2/37°C). After 24 hours, the cells were exposed to the microcrystals in solution and incubated for another 24 hours. Cells that were not exposed to the microcrystal solutions served as a negative control. All cells were subjected to the Alamar Blue assay, as previously described. For the quantitative analysis, the viable-cell results obtained in the different tests (MTT, Alamar Blue, and CytoTox-ONE) were tabulated and subjected to tests of normality (Shapiro-Wilk) and variance homogeneity (Levene) to verify the distribution of the variables. To assess the cytotoxicity of the three biomaterials coated with Ag2WO4 or Ag2MoO4 microcrystals, a two-factor analysis of variance (two-way ANOVA) was applied, followed by the Tukey multiple comparison test, with a 5% significance level for decision-making. In these analyses, the percentage of living cells in all experimental groups was compared to that of the control. For the qualitative analysis, the results of each experimental group were compared with those of the control group (taken as 100% viability). The biomaterials and microcrystals were ranked according to their cytotoxic effect [31]: noncytotoxic (inhibition less than 25% compared to the control group), slightly cytotoxic (inhibition between 25% and 50%), moderately cytotoxic (inhibition between 50% and 75%), and strongly cytotoxic (inhibition greater than 75%).

Results
Results of the synthesis and characterization by X-ray diffraction (XRD) of the Ag2WO4 and Ag2MoO4 microcrystals can be found in the study reported by Santana et al. [25]. Figure 2 shows Ag2WO4 and Ag2MoO4 microcrystals on the surfaces of the biomaterials: titanium (Ti), zirconia (Zi), and acrylic resin (AR). Table 1 shows the percentages of living cells relative to the control (considered 100% living cells) for all experimental conditions. Note that for the CytoTox-ONE test, the data were subtracted from 100% to determine the number of living cells, since the test determines the number of dead cells by quantification of released LDH. In general, the percentage of living cells from all biomaterials coated with either microcrystal was statistically different from that of the uncoated sample groups, except for the MTT results of Ti specimens coated with Ag2MoO4.
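As a brief aside, the viability normalization and the four-level ranking described in the analysis above can be expressed compactly. In the sketch below, the readings are hypothetical; only the thresholds follow the classification used in this study.

```python
# Minimal sketch of the viability normalization and the four-level cytotoxicity
# ranking described above. Readings are hypothetical; the thresholds follow the
# classification used in this study (inhibition <25%, 25-50%, 50-75%, >75%).

def percent_viability(sample_signal: float, control_signal: float) -> float:
    """Viability relative to the untreated control (taken as 100%)."""
    return 100.0 * sample_signal / control_signal

def viability_from_ldh(percent_dead: float) -> float:
    """CytoTox-ONE reports dead cells; living cells are obtained by subtraction."""
    return 100.0 - percent_dead

def rank_cytotoxicity(viability_percent: float) -> str:
    inhibition = 100.0 - viability_percent
    if inhibition < 25.0:
        return "noncytotoxic"
    if inhibition <= 50.0:
        return "slightly cytotoxic"
    if inhibition <= 75.0:
        return "moderately cytotoxic"
    return "strongly cytotoxic"

# Example: a hypothetical Alamar Blue reading at 70% of the control signal.
v = percent_viability(7000.0, 10000.0)
print(v, rank_cytotoxicity(v))                       # 70.0 slightly cytotoxic
print(rank_cytotoxicity(viability_from_ldh(80.0)))   # strongly cytotoxic
```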
In the Alamar Blue test, for all three materials tested, coating with either microcrystal significantly reduced (p < 0.05) the percentage of living cells in comparison with the uncoated groups. When the coated materials were compared, there were no significant differences among them (p > 0.05), regardless of the type of microcrystal used. Within the groups of uncoated biomaterials, only Ti was statistically similar (p > 0.05) to the living-cell control (considered 100%) and different from the other groups (p < 0.05), which showed no significant differences among themselves (p > 0.05). The MTT assay showed that, except for the Ti specimens coated with Ag2MoO4, the percentage of living cells in the other experimental conditions was significantly lower (p < 0.05) than in the uncoated groups. When the coated materials were compared, the biocompatibility of Zi was generally lower than that of Ti, regardless of the type of microcrystal used. There were no significant differences (p > 0.05) among the groups of uncoated biomaterials. All conditions tested were statistically different from the living-cell control group by this test. For the CytoTox-ONE test, the results showed that the Zi and AR samples coated with Ag2MoO4 had the lowest percentages of living cells, which were significantly different (p < 0.05) from the percentages obtained for the same materials coated with Ag2WO4. In addition, all uncoated materials showed higher values (p < 0.05) of viable cells than the coated ones. When the biomaterials were compared, there were no significant differences (p > 0.05) among the Ti, Zi, and AR samples coated with Ag2WO4. When the specimens coated with Ag2MoO4 were compared, the Zi samples showed the lowest percentage of viable cells. In addition, within the uncoated material group, there were no significant differences (p > 0.05) between the Ti and AR samples, which were statistically different from the Zi samples (p < 0.05). All conditions tested were statistically different from the living-cell control group by this test. All uncoated biomaterials were classified as noncytotoxic by the three assays used in the present study (Alamar Blue, MTT, and CytoTox-ONE). The same rating was given to Ti and AR specimens coated with Ag2WO4 by the Alamar Blue test and to Ti specimens coated with Ag2MoO4 by the MTT assay. The other biomaterials coated with either microcrystal were classified as slightly cytotoxic because their extracts inhibited cell growth by between 25% and 50% compared to the control group. Figure 3 illustrates the cell morphology after contact with the eluates from coated or uncoated biomaterials (inverted optical microscopy). For all groups, the cells maintained their characteristic polygonal epithelial shape, forming a confluent monolayer. Figure 3 also shows that, during the incubation period, Ag microcrystals were released into the culture medium. Figure 4 shows the fluorescence results (Alamar Blue assay) for the solutions of Ag2WO4 and Ag2MoO4 microcrystals. A substantial reduction in cell viability can be observed when the experimental groups are compared with the control group. Because this difference was so pronounced, statistical analysis was not necessary to establish the differences among groups. Figure 5 illustrates the percentage of cell viability for the solutions compared to the control cells (considered 100% viability).
It was observed that the microcrystals in solution, in both concentrations, were strongly cytotoxic, with death of almost 100% of cells. Thus, the other tests (MTT and CytoTox-ONE) were not carried out because they were considered unnecessary. Discussion Surfaces with antimicrobial properties are highly useful in applications requiring a protective barrier against infection, and the use of nanoparticles or microcrystals represents a promising strategy, since they have the ability to inhibit the growth of microorganisms by different mechanisms [13,32]. Despite the promising and effective antimicrobial activity [18,19], the use of nanoparticles or microcrystals should be indicated with caution since the same death mechanisms of the microorganisms can also cause the death of human body cells. Genetic changes, systemic sclerosis, rheumatoid arthritis, lupus erythematosus, and chronic kidney disease can be caused by the exposure to nanoparticles [33][34][35]. Considering this information, this study evaluated the cytotoxicity of silver tungstate (Ag 2 WO 4 ) and silver molybdate (Ag 2 MoO 4 ), both in solution and in biomaterial coatings. For this, the cytotoxicity assays, using cultured cells and eluates, was selected because it is considered to be relatively simple, reproducible, effective, and controlled [36,37]. MTT and Alamar Blue assays showed similar results regarding the effect of coating Ti, Zi, and AR surfaces with Ag 2 WO 4 or Ag 2 MoO 4 . When these results were compared to the data from the CytoTox-ONE test, a divergent situation was observed, since the CytoTox-ONE test detected some differences among the experimental conditions that were not observed before. The CytoTox-ONE test seems to be more sensitive than the other ones, due the fact that, in some groups, cells with preserved mitochondrial activity showed changes in the membrane, which were detected by the release of LDH. This indicates that these methods are complementary to each other, each providing a capability lacked by the other, and thus, the results should be interpreted within the context of all data [38,39]. The results of this study showed that Ag 2 WO 4 and Ag 2 MoO 4 microcrystals, in solution, decreased the cell proliferation by over 75%, when compared to the control group and, thus, were considered extremely cytotoxic. These results agree with other studies [24][25][26][27][28][29][30][31][32][33][34][35][36][37][38][39][40]. The reduction in cell viability could be explained, in part, due to the release of silver ions [41]. Silver nanoparticles have promising antibacterial activity for combating adhesion and biofilm formation, but their small size and high mobility require security concerns due to increased cytotoxic potential [42,43]. Previous studies have shown that silver nanoparticles have detrimental effects on the cellular membrane [44], causing changes in its structure and, consequently, cell death. In addition, these particles can cause DNA damage and can increase reactive oxygen species, which can irreversibly impair cell functioning, also leading to cell death [45,46]. Another way that silver nanoparticles with 20 nm in size or smaller can lead to cell death is by penetrating into the cell without endocytosis, being distributed within the cytoplasm [47]. Moreover, it has been accepted that silver nanoparticles can connect onto the cell membrane surface, causing protein denaturation and, consequently, irreversible damage to cells [48]. 
Nanoparticles can penetrate inside the cell and cause damage by interacting with sulfur and phosphorus compounds, such as proteins and DNA [49]. Additionally, when these nanoparticles are placed in a culture medium, they form complexes of protein and nanoparticles [50]. The formation of these complexes can also have a cytotoxic effect, due to the interaction between the protein complex layer and the cells in culture [51]. The cytotoxic effects of the Ag 2 WO 4 and Ag 2 MoO 4 microcrystals, in solution, could also be explained by molybdate and tungstate singly. Molybdate and tungstate are molybdenum and tungsten oxyanions, respectively, which are metallic chemical elements. Metal oxides are known for their semiconductive properties, allowing electrons to transfer between the nanomaterial and aqueous environments [52,53]. Shape and size play an important role in determining the reactivity and the cytotoxicity of the nanoparticles [52]. The presence of metals can also participate in oneelectron oxidation-reduction reactions and lead to the formation of reactive oxygen species [54], and the highly reactive free radical can interact irreversibly with organic compounds of the cells, causing collapse of the membranes and damaging DNA, RNA, and proteins of the intracellular microorganism system [24]. Journal of Nanomaterials Currently, many studies have been published on the toxic effects of nanoparticles or microcrystals, while data on their toxicity, when used as coatings on biomaterials, are sparse. For most of the biomaterials coated with Ag 2 WO 4 and Ag 2 MoO 4 microcrystals, cell inhibition was observed in the order of 25% to 50%, compared to the control group, and both groups were classified as slightly cytotoxic. Although the precipitation technique has been used for coating the samples (Figure 2), the cytotoxic effect was probably due to the release of the nanoparticles from the coated biomaterial during preparation of the extracts, as shown in Figure 3. Therefore, the same effects above described for microcrystals in solution may have caused the cytotoxicity of coated biomaterials because of the release of particles in aqueous medium (eluates). Despite the inhibition of the cellular metabolism, Figure 3 illustrates that the cells maintained their characteristic polygonal epithelial shape, forming a confluent monolayer for all biomaterials coated with Ag 2 WO 4 and Ag 2 MoO 4 microcrystals. The results of this in vitro study provide valuable information about the cytotoxicity of biomaterials coated with microcrystals. However, future studies are needed to understand the complex toxicity mechanisms of microcrystals, which cause cell death. 
Conclusions According to the results and within the limitations of this study, it can be concluded as follows:
(1) In general, the percentage of living cells from all biomaterials coated with both microcrystals was statistically different from that of the uncoated sample groups.
(2) In the Alamar Blue test, for all three materials tested, coating with both microcrystals significantly reduced the percentage of living cells in comparison to the uncoated groups.
(3) In the MTT assay, for the majority of groups, the percentage of living cells from the coated biomaterials was significantly lower than that from the uncoated groups.
(4) In the CytoTox-ONE test, the samples of Zi and AR coated with Ag 2 MoO 4 had the lowest percentages of viable cells, which were significantly different from the percentages obtained for the same materials coated with Ag 2 WO 4 .
(5) In the CytoTox-ONE test, all uncoated materials demonstrated higher percentages of viable cells than the coated ones.
(6) All uncoated biomaterials were classified as noncytotoxic by the three assays used in the present study (Alamar Blue, MTT, and CytoTox-ONE).
(7) The majority of the biomaterials coated with both microcrystals were classified as slightly cytotoxic.
(8) The solutions of Ag 2 WO 4 and Ag 2 MoO 4 microcrystals were ranked as strongly cytotoxic.
Data Availability The statistical data used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest The authors declare no conflict of interest.
5,476.8
2020-01-28T00:00:00.000
[ "Materials Science", "Medicine" ]
EFFECT OF MECHANICAL TREATMENT ON PROPERTIES OF CELLULOSE NANOFIBRILS PRODUCED FROM BLEACHED HARDWOOD AND SOFTWOOD PULPS Bleached hardwood and softwood South African kraft pulps were passed through a commercially available micro grinder for varying number of passes and the properties of the resultant pulps were assessed periodically using microscopy, Fourier transform infrared spectroscopy (FTIR), X-ray crystallography (XRD) and Thermogravimetric analysis (TGA). The ultrastructural analysis of the pulp fibres revealed that after 120 passes both hardwood and softwood bleached fibres showed the presence of cellulose nanofibres (CNFs). The FTIR analysis showed no modification to the cellulose structure and side groups upon treatment with the supermasscolloider (SMC). Both hardwood and softwood pulp fibres showed a decline in crystallinity after SMC treatment. For the hardwood pulps there were no major differences between the untreated pulps and those passed through the SMC. In the case of the softwood pulps, the SMC treatment resulted in more thermally stable CNFs compared with the untreated bleached pulps. This was observed at several levels of treatment (40, 120 and 200 passes). After 200 passes both the hardwood and softwood kraft pulp fibres produced CNFs with an average width of 11 nm and lengths with several micrometers. INTRODUCTION Cellulose is one of the main constituents of lignocellulosic biomass composed of β-1,4-linked glucose molecules.Using various mechanical and chemical methods cellulose can be disintegrated into cellulose nanofibrils (CNFs).CNFs supposedly consist of alternating crystalline and amorphous domains and are long, flexible entangled cellulose nanofibres that have lateral dimensions in the order of 10 to 100 nm, and lengths generally in the micrometer scale (Kalia et al. 2014).Recently, interest in CNFs has been increasing because the materials exhibit high specific surface area, high strength and stiffness, low weight and biodegradability (Kalia et al. 2014).Great potential has been shown by several researchers for the use of CNFs in the production of optically transparent composites for flexible electronics, improved barrier membranes, biomedical applications, and as additives in paper and paper board products (Nogi et al. 2005, Paralikar et al. 2008, Clemons et al. 2013, Kalia et al. 2014). Current research has focused on finding environmentally friendly, high efficiency and low costs methods to isolate CNFs.However, one of the key remaining challenges is translating current laboratory scale technologies to pilot/industrial scale and being able to produce cellulose nanomaterials with consistent properties (Eichhorn 2011). In this study, the use of micro-grinding was chosen for producing CNFs from South African hardwood (Eucalyptus) and softwood (pine) kraft pulps.Micro-grinding was selected over homogenisation and microfluidisation because of the large energy consumption and clogging of the system associated with those techniques (Spence et al. 2011, Khalil et al. 2014).The micro-grinding method involves passing pulp slurries between two grinding stones, one stone remains static and the other rotates (Abe et al. 2007, Wang et al. 2012, Qing et al. 
2013).The mechanism of fibrillation in grinding is to break down hydrogen bonds and cell wall structure by shear forces and individualisation of pulp to nanoscale fibres (Siró and Plackett 2010).The grinding disks have bursts and grooves that contact the fibres to disintegrate them into the substructural components (Abe et al. 2007).The material used to manufacture the disks is usually non-porous resins containing silicon carbide.The SMC is an example of a commercial micro-grinding system that mechanically fibrillates cellulosic fibres.Even though micro-grinding has been done on Eucalyptus bleached kraft pulp fibres previously (Wang et al. 2012, Qing et al. 2013), a comparison of hardwood and softwood bleached kraft pulps originating from South Africa has not been studied. The aim of this study was to assess the effect of the SMC on cellulose properties during the production of CNF using bleached South African hardwood and softwood kraft pulps.Samples were extracted at different number of passes through the SMC to examine the change in cellulose structure, crystallinity, thermal stability and morphology. Material Fully bleached air-dried Eucalyptus (hardwood) and pine (softwood) kraft pulps obtained from South African commercial sources were used in this study.The pulp sheets were soaked in deionised water overnight prior to processing. Mechanical nanofibrillation A suspension was made of 1 wt.% kraft pulp samples dispersed in water using a mechanical stirrer at 2000 rpm for 30 min.Afterwards, the suspension was ground using an ultra-fine grinder (Supermasscolloider (SMC), Masuko Sangyo Co., Ltd, Japan) at 2507 rpm in the contact mode to obtain nanofibres.The samples were passed 200 times through the SMC, during which, samples were collected after different number of passes (0, 40, 120 and 200) to investigate any changes in morphological and chemical properties. Chemical composition of samples The cellulose, hemicellulose and lignin contents of the samples were analysed using standard methods such as TAPPI-T222 om-88 and TAPPI T19m-54. Fourier transform infrared spectroscopy (FTIR) FTIR of the samples were obtained on a Spectrum 100 FTIR (Perkin Elmer, Waltham, MA, USA) in Attenuated Total Reflection (ATR) mode.The scan of each sample was recorded from 4000 to 400 cm -1 at a resolution of 2 cm -1 in the transmission mode. X-ray crystallography (XRD) analysis The XRD patterns of the bleached softwood and hardwood kraft pulps were measured using a BRUKER AXS (Germany) X-ray Diffractometer D8 Advance equipped with PSD (Position sensitive detector) Vantec-1 detector and Cu-Kα radiation (λKα1=1,5406Å).Scattering radiation was detected at 2θ= 5-90° at a rate of 1 second per step.The crystallinity indices of the materials analysed were calculated using the peak height method (Segal et al. 1959), crystallinity index (CI) was calculated from the height ratio between the intensity of the crystalline peak (I 200 -I non-cr ) and total intensity (I 200 ) after subtraction of the background signal measured without cellulose according to the following equation: Thermogravimetric analysis (TGA) Thermogravimetric analysis was performed using a Perkin Elmer TGA1 from Waltham, Massachusetts, U.S.A.The analyses were done under flowing nitrogen at a constant flow rate of 20 mL min -1 .Samples (5-10 mg) were heated from 25 to 600°C at a constant heating rate of 10°C min -1 . 
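The equation referred to at the end of the XRD paragraph above appears to have been lost during text extraction. Based on the description given (peak-height method of Segal et al. 1959, with I 200 the crystalline peak intensity and I non-cr the non-crystalline intensity), the intended expression is presumably the standard Segal formula, reproduced here as a reconstruction:

```latex
\mathrm{CI}\,(\%) \;=\; \frac{I_{200} - I_{\mathrm{non\text{-}cr}}}{I_{200}} \times 100
```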
Optical microscopy (OM) A 100 µl drop of the dilute suspension was placed on a cleaned glass microscope slide and covered with a glass cover slip.This preparation was then viewed under the Nikon Eclipse i80 compound light microscope.Soft imaging software (SIS) was used to capture images at different magnifications and used for image analysis.Three images were captured for each sample and a total of 15 measurements were performed per image. Transmission electron microscopy (TEM) A 20 µl drop of the dilute suspension of a CNF was placed on a formvar-coated grid and allowed to dry under the fume hood.The grids were then stained with a drop of 2% uranyl acetate solution, for contrast.The stained grids were allowed to dry for 20 min before being imaged on a JEOL 1010 TEM.Image capture was performed at 100 kV at different magnifications (x20000 to x250000).Three micrographs were captured for each sample and a total of 15 measurements were performed per image. Chemical composition The average results of three measurements for cellulose, hemicellulose and lignin are tabulated in Table 1.The softwood bleached pulp showed higher cellulose and lignin contents compared with the hardwood bleached pulp.On the other hand, the bleached hardwood contained higher hemicellulose content.These observations are in line with literature reports which show that softwoods are inherently associated with higher lignin contents while hardwoods are associated with higher percentages of polysaccharides (Betts et al. 1997, Malherbe and Cloete 2002, Kiaei et al. 2014, Mathews et al. 2015).The high hemicellulose content is known for facilitation of disintegration of Eucalyptus pulp into nanofibrils and capable of reducing energy input during mechanical grinding (Iwamoto et al. 2008, Syverud et al. 2011). FTIR Figure 1 represents the FTIR spectra of fully bleached hardwood and softwood kraft pulp fibres.FTIR peaks of cellulose are mainly located at 3500-3200 cm -1 (O-H stretching), 3000-2800 cm -1 (CH stretching), 1476 cm -1 (HCH and HOC bending vibration), 1376 and 1334 cm -1 (HCC, HCO and HOC bending), 1290 cm -1 ( HCC and HCO bending) and 1118 and 1095 cm -1 (CC and CO stretching) (Gibril et al. 2014).The fingerprint regions of both hardwood and softwood pulps are similar.The grinding did not affect the chemical structure of cellulose since the functional groups before and after grinding were similar with a negligible shift in wavenumbers, if any (Figure 1).However, it seems as though the peak intensities of the softwood cellulose were slightly higher than those of the hardwoods, particularly at the highest number of passes through the SMC.If that is the case, the observation could likely account for the obtained higher quantity of cellulose in the softwood from the chemical composition analysis. 
XRD analysis Figure 2 displays XRD patterns of bleached soft and hardwood kraft pulps as well as SMC treated pulps and their respective crystalline index values are shown in Table 2.All the diffractograms of the samples were semi-crystalline displaying an amorphous and crystalline peak.Three peaks of cellulose I were displayed.The peak at 2θ= 16,4° was assigned to [(1-10) and ( 110)] crystallographic planes, 2θ= 22,7° assigned to (200) crystallographic plane, and 2θ= 34,6º was assigned to (400) crystallographic plane.It is worth noting that the treatment did not significantly alter the crystal structure of the material.However, the crystallinity of both hardwood and softwood pulps decreased after SMC treatment.In fact, generally both hardwood and softwood untreated pulps had higher crystallinity values in comparison to SMC treated pulps.The results show that there was negligible percentage difference of crystallinity index values after the various passes of hardwood and softwood pulp.The observations could be due to the fact that SMC randomly broke apart both crystalline and amorphous regions.Similar findings have been reported by others (Qing et al. 2013, Yousefi et al. 2013, Mtibe et al. 2015).The breakage of the crystalline region is believed to play a part in the defibrillation of nanofibres and cellulose bundles (Qing et al. 2013). TGA Thermogravimetric analysis and differential thermograms (DTG) curves of 0, 40, 120 and 200 passes of bleached hard and softwood kraft pulp fibres through the SMC are shown in Figure 3 (ad).Both bleached hardwood and softwood pulps as well as SMC treated samples showed a single degradation step.Another common observation is a decrease in thermal stability at the minimal passes within the limits of the experiment.It is clear that there is a slight increase in thermal stability of hardwood proportional to the number of passes; however, the 200 passes indicated a slight drop compared with 120 passes (Figure 3a and Figure 3b).A similar pattern was observed for the softwood pulps.However, 200 passes in the latter case decreased significantly in thermal stability to about 5°C Intensity / a.u. 2θ lower than the untreated pulp.A decrease in thermal stability of a natural fibre by SMC treatment is well known in literature and typically is related to the degradation of cellulose resulting from friction of two rotating stone disks (Qing et al. 2013, Yousefi et al. 2013, Mtibe et al. 2015).What is more interesting about our results is a decrease in thermal stability at lower pass (120) which is followed by a decrease after more passes (200).Taking into account a catalytic behaviour of lignin and hemicellulose as it is noted in the literature (Qing et al. 2013, Yousefi et al. 2013, Mtibe et al. 2015), it seems as though at 40 passes the SMC mainly reorientated chemical components which favoured catalytic behaviour.The reorientation of the chemical components and/or a high possibility of lignin and hemicellulose degradation during the passing process are likely responsible for the increased thermal stability at 120 passes.However, 200 passes appeared to have resulted into degradation of cellulose, which obviously ensued in low thermal stability. 
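As an aside, the DTG curves discussed above are simply the first derivative of the TGA mass-loss signal. A minimal sketch of how the peak degradation temperature could be extracted from raw TGA data is given below; the function name, the synthetic data, and the assumption of a single-column temperature/mass input are illustrative and are not taken from the instrument software.

```python
import numpy as np

def dtg_peak_temperature(temperature_c, mass_mg):
    """Return the temperature of maximum mass-loss rate from a TGA heating ramp.

    temperature_c, mass_mg: 1-D arrays of equal length.
    """
    mass_pct = 100.0 * mass_mg / mass_mg[0]        # normalise to the initial mass
    dtg = np.gradient(mass_pct, temperature_c)     # d(mass %)/dT, negative on mass loss
    return temperature_c[np.argmin(dtg)]           # steepest loss = DTG peak

# Synthetic example: a single degradation step centred near 350 degC
T = np.linspace(25, 600, 1000)
m = 10.0 * (1 - 0.8 / (1 + np.exp(-(T - 350) / 15)))
print(round(dtg_peak_temperature(T, m)))           # ~350
```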
Morphology of SMC treated samples The untreated fibres analysed by optical microscopy showed that the hardwood fibres measured an average of 16 µm in width whereas the softwood fibres measured 30 µm before processing (Figure 4).A gradual disintegration of the sample was observed as the number of passes through the SMC increased (Figure 5).More fibrillation of the fibres was apparent and the quantity of nanoparticles increased in the suspension.After 120 passes the suspensions analysed by TEM showed that nanofibrils had a Maderas.Ciencia y tecnología 18(3): 457 -466, 2016 Effect of mechanical treatment on: Lekha et al. mean width of approximately 18 nm for hardwood and 13 nm for softwood (Figure 5).However, there were still large fibres present that had to be fractionated using centrifugation as suggested by Wang et al. (2012).Wang et al. (2012) used the SMC on a bleached Eucalyptus pulp originating from Brazil.Those authors fed the pulp fibre suspension (1,5% w/w consistency) into the disk grinder continuously by gravity using a loop consisting of a peristaltic pump and plastic tubing.In that study, even after 11 h of grinding with the SMC large fibre particles were still present.Two main structures were obtained during their experiments, first highly kinked and untwisted fibrils, and second entangled and twisted nanofibres.They found that extended fibrillation could form nanowhiskers with high crystallinity from the untwisted nanofibres. In this study, after 200 passes, cellulose nanofibrils from both hard and softwood bleached kraft pulp fibres were obtained with mean dimensions of approximately 11 nm.No difference between hardwood and softwood was observed after 200 passes.One of the main advantages of using microgrinding systems is that the mechanical fibre shortening pretreatment necessary for homogenisation and microfluidisation may not be required (Spence et al. 2011). Recently, Tonoli et al. (2012) showed that Eucalyptus hardwood pulp undergoes easier disintegration and may require less energy as opposed to softwood during grinding because hardwoods have shorter fibres and higher hemicellulose contents.According to Iwamoto et al. (2008) andSyverud et al. (2011), high contents of hemicelluloses can facilitate the release of nanofibrils during the mechanical treatment of the pulp.Therefore these advantages indicate a favourable situation for producing nanofibrils using Eucalyptus pulp fibres as raw material.However, in this study, both hardwood and softwood bleached kraft pulps resulted in approximately similar dimensions of CNFs after 200 passes through the SMC.In fact, an easier disintegration of softwood was observed compared with the hardwood after 120 passes. 
CONCLUSIONS Bleached softwood and hardwood pulps, as well as SMC treated pulps, were characterised by TAPPI standard methods, FTIR, XRD, TGA and TEM. FTIR showed negligible differences between SMC treated and untreated pulps. XRD analysis showed that the crystallinity of the samples decreased after SMC treatment. TGA showed that the SMC treated pulps did not all follow the same degradation mechanism. Untreated hardwood pulp was more thermally stable than treated hardwood pulp. In the case of softwood pulp, the 120-pass pulp was more thermally stable than the untreated softwood pulp and the 40- and 200-pass treated pulps. The difference between the softwood and hardwood disintegration mechanisms could be due to the difference in cellulose content of the pure pulps, which may be explained by their botanical classification and possibly led to different responses to the treatment. After SMC treatment, the individual fibres of the bleached hardwood and softwood pulps were defibrillated and produced nanofibres with average diameters of 11 nm and lengths estimated to be on the micro-scale.
Equation legend: CI expresses the apparent crystallinity [%]; I 200 gives the maximum intensity of the peak corresponding to the plane with Miller indices 200 at a 2θ angle of between 22-24°, and I non-cr represents the intensity of diffraction of the non-crystalline material, taken at an angle of about 18° 2θ in the valley between the crystalline peaks.
Figure 2. XRD analysis of bleached hardwood and softwood kraft pulp fibres after 0, 40, 120 and 200 passes through the SMC.
Figure 4. Optical microscopy analysis of South African bleached kraft pulp fibres prior to SMC treatment: (a) hardwood and (b) softwood.
Table 1. Percentage chemical compositions of bleached kraft pulp samples.
Table 2. The average crystallinity indices of hardwood and softwood kraft pulp (mean ± standard deviation (SD); number of samples n = 3).
3,689
2016-06-27T00:00:00.000
[ "Materials Science" ]
The Automated System of Unified Templates as an Element of Trainability of Microprocessor Relay Protection Devices The article discusses the possibility of further modernization of the standard microprocessor relay protection of AC overhead system feeders DPA-27.5-TNF, which is operated on the Trans-Baikal Railway by creating an additional automated system of unified templates necessary for the occurrence of “trainability” elements. The templates will be formed via a separate dedicated channel for transmission, processing and storage of the necessary information, not related to the operation of the terminal, with its subsequent visualization at the workplace of the duty personnel of traction substations, together with information from the “GID” software received via another dedicated wired channel. With the help of such a base of unified preset templates, in the fu-ture, it will be possible not only to identify the specific causes of each emergency shutdown but also to reduce their number by dynamically adjusting the existing presets of the standard operation algorithm. Introduction Earlier in their articles [1] the authors already considered the features of the operation of microprocessor relay protection DPA-27.5-TNF on the Trans-Baikal railway and proposed options for modernization and automation to increase the selectivity of its operation [2]. So, the previous work of this team of authors considered an alternative option for possible visualization, retrieval and storage of BC. It is also proposed to use data from the automated system for maintaining and analyzing the train sheet "GID" of the Ural-VNIIZhT [3], received by the duty personnel of traction substations via a dedicated wired channel. All the information obtained after analysis and the necessary transformations would be used to subsequently form a base of dynamic unified templates of presets of microprocessor relay protection. In works with similar problems [4]- [11] foreign authors propose adaptive systems of neuro-fuzzy inference to classify faults in transmission lines and remote relay protection of the transmission line, as well as new approaches to protection transmission lines using wavelet transform and neural networks, etc. However, for the most part, such technical solutions are used in the design of new microprocessor protection systems for general industrial three-phase cable or overhead power lines and cannot be used in our case with a single-phase load (railway overhead system) and a non-standard voltage level U = 27.5 kV. Separately, I would like to note interesting from the point of view of similarity problems of the article [12] [13]. The objective of this study is not to develop new approaches, methods and designs of microprocessor relay protection, but to modernize and automate the terminals that are already in operation at the moment with a minimum level of interference both in their instrumental base and in the standard algorithm of their operation. The DPA-27.5-TNF device is made in the form of two blocks-a protection and automation unit (PAU) and a control unit (CU). 
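To make the idea of a "unified template" from the abstract concrete, the sketch below shows one possible way such a record could be organised and compared against a live measurement. All field names, thresholds, and the matching rule are assumptions made purely for illustration; they are not taken from the DPA-27.5-TNF documentation or the "GID" software.

```python
from dataclasses import dataclass

@dataclass
class TrainSituationTemplate:
    """Hypothetical unified template for one recurring train situation."""
    template_id: str
    train_weight_t: float        # taken from the "GID" train sheet (assumed field)
    feeder_current_a: float      # expected feeder current, A (assumed field)
    catenary_voltage_kv: float   # expected catenary voltage, kV (assumed field)
    tolerance: float = 0.10      # relative tolerance for matching (assumed value)

    def matches(self, current_a: float, voltage_kv: float) -> bool:
        """True if a live measurement falls inside this template's band."""
        return (abs(current_a - self.feeder_current_a) <= self.tolerance * self.feeder_current_a
                and abs(voltage_kv - self.catenary_voltage_kv) <= self.tolerance * self.catenary_voltage_kv)

# A match would indicate a known, normal train situation, so the corresponding
# protection preset could be left unchanged (or adjusted dynamically) for this train.
heavy_freight = TrainSituationTemplate("T-042", 6300.0, 310.0, 27.5)
print(heavy_freight.matches(current_a=295.0, voltage_kv=27.1))  # True
```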
In turn, the PAU, according to the principle of "cassette" formation, contains the following modules, connected one after another ( Figure 1 Further, a method for further modernization and automation of existing microprocessor relay protection terminals will be considered and proposed by creating an automated system for generating and storing unified preset templates and its potential integration into a standard operating algorithm to obtain "trainability" elements. Materials and Methods The object of research is the already partially modernized and automated pro- Setting up the Experiment The section of the Sedlovoy-Buryatskaya-Mogoytui railway was chosen as an experimental one; measurements were made at the Buryatskaya traction substation (TS) (Figure 3). The switchgear 2 × 25 kV at the Buryatskaya TS is a system consisting of two modules of outdoor switchgear (OS), including the cells in which the measuring When organizing the collection of the necessary information in the form of oscillograms of current and voltage in the online mode (according to the diagram shown in Figure 3), they will be broadcast and recorded on the PC of the workplace of the duty personnel of the traction substation. However, this method also has significant drawbacks -it requires a large amount of memory of the PC recorder, and, consequently, a large period of time to search and view the necessary fragment of oscillograms characterizing the pre-emergency or emergency operation of the traction power supply system [15]. for its power supply, as well as an auxiliary microcontroller [18]. Experimental Results The result of the experiment will be the proposed methodology and structural diagram of an automated system for organizing, collecting, processing, comparing and storing parameters necessary for the formation of unified templates typical for a particular train situation in order to potentially reduce the number of emergency actuations of relay protection devices, which is already partially inte- Conclusions For the first time, the authors obtained a unique opportunity to create a base of unified templates for electrical and train parameters, with the help of which it is possible to form presets for adjusting the operation of microprocessor relay protection for each individual train situation in online mode, using the proposed automated systems for this, and the access to the automated system of maintaining and analyzing the "GID" train sheet of the Ural-VNIIZhT, already available on the PC of the duty personnel. In turn, the database created and operating in this way will also be an automated system that not only "trains itself" (constantly accumulating and comparing unique templates corresponding to only one train situation), but also introduces "trainability" elements into the standard algorithm of operation of the microprocessor-based relay protections of the DPA-27.5-OSF (TNF) brand currently operating on the AC railway network, moving away from the rigid "coarse" manually set boundaries of the monitored electrical parameters of the corresponding presets. Most significantly, the necessity should arise to make revisions, repair or other operations for which it is required to physically disconnect a separate removable circuit panel (containing the proposed automated systems); the operation of the microprocessor relay protection will not be interrupted or disrupted in any way but will continue as usual. Confirmation The research was carried out with the financial support of the Trans-Baikal In-
1,402.8
2021-01-01T00:00:00.000
[ "Engineering" ]
Bounds for the ratio of two gamma functions: from Wendel’s asymptotic relation to Elezović-Giordano-Pečarić’s theorem In the expository review and survey paper dealing with bounds for the ratio of two gamma functions, along one of the main lines of bounding the ratio of two gamma functions, the authors look back and analyze some known results, including Wendel’s asymptotic relation, Gurland’s, Kazarinoff’s, Gautschi’s, Watson’s, Chu’s, Kershaw’s, and Elezović-Giordano-Pečarić’s inequalities, Lazarević-Lupaş’s claim, and other monotonic and convex properties. On the other hand, the authors introduce some related advances on the topic of bounding the ratio of two gamma functions in recent years. MSC: 33B15, 26A48, 26A51, 26D07, 26D15, 44A10. Introduction where μ is a nonnegative measure on [, ∞) such that the integral () converges for all x > . This tells us that a completely monotonic function f on [, ∞) is a Laplace transform of the measure μ. It is well known that the classical Euler gamma function may be defined for x >  by The logarithmic derivative of (x), denoted by ψ(x) = (x) (x) , is called the psi or digamma function, and ψ (k) (x) for k ∈ N are called the polygamma functions. It is common knowl-©2013 Qi and Luo; licensee Springer. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. http://www.journalofinequalitiesandapplications.com/content/2013/1/542 edge that the special functions (x), ψ(x), and ψ (k) (x) for k ∈ N are fundamental and important and have many extensive applications in mathematical sciences. The history of bounding the ratio of two gamma functions has been longer than sixty years since the paper [] by Wendel was published in . The motivations for bounding the ratio of two gamma functions are various, including establishment of asymptotic relation, refinements of Wallis' formula, approximation to π , needs in statistics and other mathematical sciences. In this review paper, along one of the main lines of bounding the ratio of two gamma functions, we would like to look back and analyze some known results, including Wendel's asymptotic relation, Gurland's approximation to π , Kazarinoff 's refinement of Wallis' formula, Gautschi's double inequality, Watson's monotonicity, Chu's refinement of Wallis' formula, Lazarević-Lupaş's claim on monotonic and convex properties, Kershaw's first double inequality, Elezović-Giordano-Pečarić's theorem, alternative proofs of Elezović-Giordano-Pečarić's theorem and related consequences. On the other hand, we would also like to describe some new advances in recent years on this topic, including the complete monotonicity of divided differences of the psi and polygamma functions, inequalities for sums and related results. Wendel's asymptotic relation Our starting point is the paper published in  by Wendel, which is the earliest related one we could search out to the best of our ability. In order to establish the classical asymptotic relation for real s and x, by using Hölder's inequality for integrals, Wendel [] proved elegantly the double inequality for  < s <  and x > . Wendel's original proof Let and apply Hölder's inequality for integrals and the recurrence formula Replacing s by s in (), we get from which we obtain by substituting x + s for x. 
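Since the numerals in the displayed formulas of this extracted text have been lost, it is worth recording the standard form of Wendel's result that the above proof establishes. The following restatement, for 0 < s < 1 and x > 0, is a reconstruction from the literature rather than a verbatim quotation of the displays:

```latex
\left(\frac{x}{x+s}\right)^{1-s} \;\le\; \frac{\Gamma(x+s)}{x^{s}\,\Gamma(x)} \;\le\; 1,
\qquad\text{equivalently}\qquad
x^{1-s} \;\le\; \frac{\Gamma(x+1)}{\Gamma(x+s)} \;\le\; (x+s)^{1-s},
```

and letting x tend to infinity yields Wendel's asymptotic relation

```latex
\lim_{x\to\infty}\frac{\Gamma(x+s)}{x^{s}\,\Gamma(x)} \;=\; 1 .
```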
Combining () and (), we get Therefore, inequality () follows. Letting x tend to infinity in () yields () for  < s < . The extension to all real s is immediate on repeated application of (). Remark  Inequality () can be rewritten for  < s <  and x >  as Remark  Using recurrence formula () and double inequality () repeatedly yields for x >  and  < s < , where m and n are positive integers. This implies that basing on recurrence formula () and double inequality (), one can bound the ratio (x+a) (x+b) for any positive numbers x, a and b. Conversely, double inequality () reveals that one can also deduce corresponding bounds of the ratio (x+) (x+s) for x >  and  < s <  from bounds of the ratio (x+a) (x+b) for positive numbers x, a, and b. Remark  In [, p., ..], the following limit was listed: For real numbers a and b, Limits () and () are equivalent to each other since Hence, limit () is called Wendel's asymptotic relation in the literature. http://www.journalofinequalitiesandapplications.com/content/2013/1/542 Remark  Double inequality () or () is more meaningful than limit () or (), since the former implies the latter, but not conversely. Gurland's double inequality By making use of a basic theorem in mathematical statistics concerning unbiased estimators with minimum variance, Gurland [] presented the following inequality: for n ∈ N, and so taking respectively n = k and n = k +  for k ∈ N in () yields a closer approximation to π : () Remark  Taking respectively n = k and n = k - for k ∈ N in () leads to This is better than double inequality () for x = k and s =   . Remark  Double inequality () may be rearranged as It is easy to see that the upper bound in () is better than the corresponding one in (). This phenomenon seemingly hints that sharper bounds for the ratio (k+) (k+/) can be obtained only if letting m ∈ N in n = m - in (). However, this is an illusion, since the http://www.journalofinequalitiesandapplications.com/content/2013/1/542 lower bound of the following double inequality: which is derived from taking respectively n = (k + m -) and n = (k + m -) - for k ∈ N in (), is decreasing and the upper bound of it is increasing with respect to m. Then how can we explain the occurrence that the upper bound in () is stronger than the corresponding one in ()? Remark  The left-hand side inequality in () or () may be rearranged as From this, it is easier to see that inequality () refines double inequality () for x = k and s =   . Kazarinoff's double inequality Starting from one form of the celebrated formula of John Wallis, which had been quoted for more than a century before s by writers of textbooks, Kazarinoff proved in [] that the sequence θ (n) defined by Remark  It was said in [] that it is unquestionable that inequalities similar to () can be improved indefinitely but at a sacrifice of simplicity, which is why inequality () had survived so long. http://www.journalofinequalitiesandapplications.com/content/2013/1/542 Remark  Kazarinoff 's proof of () is based upon the property for - < t < ∞. 
Inequality () was proved by making use of well-known Legendre's formula for x >  and estimating the integrals Since () is equivalent to the statement that the reciprocal of φ(t) has an everywhere negative second derivative, therefore, for any positive t, φ(t) is less than the harmonic mean of φ(t -) and φ(t + ), which implies As a subcase of this result, the right-hand side inequality in () is established. Remark  Using recurrence formula () in () gives for t > , which extends the left-hand side inequality in () and (). Replacing t by t - in () or () produces for t >   , which extends the right-hand side inequality in (). Replacing t by t +  in () or () and rearranging gives for t > -  , which extends the right-hand side inequality in (). Remark  By the well-known Wallis' cosine formula [], the sequence θ (n) defined by () may be rearranged as for n ∈ N. Then inequality () is equivalent to Remark  Inequality () may be rewritten as for t > -. Letting u = t+  in the above inequality yields for u > . This inequality has been generalized in [] to the complete monotonicity of a function involving divided differences of the digamma and trigamma functions as follows. Theorem  [] For real numbers s, t, α = min{s, t}, and λ, let . Then the function s,t;λ (x) has the following complete monotonicity: Remark  Taking in Theorem  λ = st >  produces that the function (x+s) (x+t) on (-t, ∞) is increasingly convex if st >  and increasingly concave if  < st < . Watson's monotonicity In , motivated by the result in [] mentioned in Section , Watson [] observed that for x > -  , which implies that the more general function for x > -  , whose special case is the sequence θ (n) for n ∈ N defined in () or (), is decreasing and This apparently implies the sharp inequalities for x ≥ -  , and, by Wallis' cosine formula [], In [], an alternative proof of double inequality () was also provided as follows. Let for x >   . By using the fairly obvious inequalities and that is to say, Remark  It is easy to see that inequality () extends and improves inequalities (), (), and () if s =   . Remark  The left-hand side inequality in () is better than the corresponding one in () but worse than the corresponding one in () for n ≥ . Gautschi's double inequalities The main aim of the paper [] was to establish the double inequality for x ≥  and p > , where or c p = . By an easy transformation, inequality () was written in terms of the complementary gamma function for x ≥  and p > . In particular, letting p → ∞, the double inequality for the exponential integral E  (x) = (, x) for x >  was derived from (), in which the bounds exhibit the logarithmic singularity of E  (x) at x = . http://www.journalofinequalitiesandapplications.com/content/2013/1/542 As a direct consequence of inequality () for p =  s , x =  and c p = , the following simple inequality for the gamma function was deduced: The second main result of the paper [] was a sharper and more general inequality for  ≤ s ≤  and n ∈ N than () by proving that the function is monotonically decreasing for  ≤ s < . Since ψ(n) < ln n, it was derived from inequality () that which was also rewritten as n!(n + ) s- (s + )(s + ) · · · (s + n -) ≤ ( + s) ≤ (n -)!n s (s + )(s + ) · · · (s + n -) () and so a simple proof of Euler's product formula in the segment  ≤ s ≤  was shown by letting n → ∞ in (). 
This suggests us the following double inequality: Remark  Double inequalities () and () can be further rearranged as for real numbers s, t and x ∈ (-min{s, t}, ∞), where α(x) ∼ x and β(x) ∼ x as x → ∞. For detailed information on the type of inequalities like (), please refer to [] and related references therein. Remark  Inequality () can be rewritten as  ≤ (n + ) (n + s) for n ∈ N and  ≤ s ≤ . Remark  In the reviews on the paper [] by the Mathematical Reviews and the Zentralblatt MATH, there is not a word to comment on inequalities in () and (). However, these two double inequalities later became a major source of a series of studies on bounding the ratio of two gamma functions. Chu's double inequality In , by discussing that Remark  After letting n = k + , inequality () becomes which is the same as (). Taking n = k +  in () leads to inequalities () and ( for n ∈ N. Therefore, Chu discussed equivalently the necessary and sufficient conditions such that the sequence B c (n) for n ∈ N is monotonic. Recently, necessary and sufficient conditions for the general function Lazarević-Lupaş's claim In , among other things, the function on (, ∞) for α ∈ (, ) was claimed in [, Theorem ] to be decreasing and convex, and so Kershaw's first double inequality In , motivated by inequality () obtained in [], among other things, Kershaw presented in [] the following double inequality: for  < s <  and x > . In the literature, it is called Kershaw's first double inequality for the ratio of two gamma functions. Kershaw's proof for () Define the function g β by for x >  and  < s < , where the parameter β is to be determined. It is not difficult to show, with the aid of Wendel's limit (), that To prove double inequality (), define from which it follows that This leads to  , then G strictly decreases, and since G(x) →  as x → ∞, it follows that G(x) >  for x > . However, from (), this implies that g β (x) > g β (x + ) for x > , and so g β (x) > g β (x + n). Take the limit as n → ∞ to give the result that g β (x) > , which can be rewritten as the left-hand side inequality in (). The corresponding upper bound can be verified by a similar argument when β = -  + (s +   ) / , the only difference being that in this case g β strictly increases to unity. Remark  The spirit of Kershaw's proof is similar to Chu's in [, Theorem ], as shown by (). This idea or method was also utilized independently in [-] to construct, for various purposes, a number of inequalities of the type for s >  and real number x ≥ . http://www.journalofinequalitiesandapplications.com/content/2013/1/542 Remark  It is easy to see that inequality () refines and extends inequalities () and (). Remark  Inequality () may be rearranged as for x >  and  < s < . Theorem  The function z s,t (x) is either convex and decreasing for |t -s| <  or concave and increasing for |t -s| > . Remark  Direct computation yields To prove the positivity of function (), the following formula and inequality are used as basic tools in the proof of [, Theorem ]. . For x > -, Remark  As consequences of Theorem , the following useful conclusions are derived. . The function is decreasing and convex from (-t, ∞) onto (t -  , t), where t ∈ R. http://www.journalofinequalitiesandapplications.com/content/2013/1/542 . For all x > , . For all x >  and t > , holds if |t -s| <  and reverses if |t -s| > . 
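Kershaw's first double inequality, on which this part of the survey is built, is commonly stated as follows for 0 < s < 1 and x > 0; the value β = −1/2 + (s + 1/4)^{1/2} appearing in the proof sketch above corresponds to the upper bound. The restatement is reconstructed from the literature:

```latex
\left(x+\frac{s}{2}\right)^{1-s}
\;<\;
\frac{\Gamma(x+1)}{\Gamma(x+s)}
\;<\;
\left(x-\frac{1}{2}+\sqrt{s+\frac{1}{4}}\,\right)^{1-s}.
```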
Remark  It is clear that double inequality () can be deduced directly from the decreasingly monotonic property of (). Furthermore, from the decreasingly monotonic and convex properties of () on (-t, ∞), inequality () and Recent advances Finally, we would like to state some new results related to or originating from Elezović-Giordano-Pečarić's Theorem  above. Alternative proofs of Elezović-Giordano-Pečarić's theorem The key step of verifying Theorem  is to prove the positivity of the right-hand side in (), which involves divided differences of the digamma and trigamma functions. The biggest barrier or difficulty to prove the positivity of () is mainly how to deal with the squared term in (). Chen's proof In [], the barrier mentioned above was overcome by virtue of the well-known convolution theorem [] for Laplace transforms, and so Theorem  for the special case s +  > t > s ≥  was proved. Perhaps this is the first try to provide an alternative of Theorem , although it was partially successful formally. In [, ], by making use of the convolution theorem for the Laplace transform and the logarithmically convex properties of the function q α,β (x) on (, ∞), an alternative proof of Theorem  was supplied. Guo-Qi's first proof In [, ], by considering monotonic properties of the function and still employing the convolution theorem for the Laplace transform, Theorem  was completely verified again. Remark  For more information on the function q α,β (t) and its applications, please refer to [, , -] and related references therein. Guo-Qi's second proof In [-], the complete monotonic properties of the function on the right-hand side of () were established as follows. Since the complete monotonicity of the functions s,t (x) ands,t (x) mean the positivity and negativity of the function s,t (x), an alternative proof of Theorem  was provided once again. One of the key tools or ideas used in the proofs of Theorem  is the following simple but specially successful conclusion: If f (x) is a function defined on an infinite interval I ⊆ R and it satisfies lim x→∞ f (x) = δ and f (x)f (x + ε) >  for x ∈ I and some fixed number ε > , then f (x) > δ on I. It is clear that Theorem  is a generalization of inequality (). Complete monotonicity of divided differences In order to prove Theorem , the following complete monotonic properties of a function related to a divided difference of the psi function were discovered in [], the preprint of []. To the best of our knowledge, the complete monotonicity of functions involving divided differences of the psi and polygamma functions were investigated first in [-]. Inequalities for sums As consequences of proving Theorem  along different approach from [] and its preprint [], the following algebraic inequalities for sums were procured in [, ] accidentally. Theorem  Let k be a nonnegative integer and let θ >  be a constant. If a >  and b > , then holds for ba > -θ and reverses for ba < -θ . If a < -θ and b > , then inequality () holds and inequality () is valid for a + b + θ >  and is reversed for a + b + θ < . If a >  and b < -θ , then inequality () is reversed and inequality () holds for a+b+θ <  and reverses for a + b + θ > . Moreover, the following equivalent relation between inequality () and Theorem  was found in [, ]. Theorem  Inequality () for positive numbers a and b is equivalent to Theorem . 
Recent advances Recently, some applications, extensions, and generalizations of Theorems  to , and related conclusions have been investigated in several recently or immediately published manuscripts such as [-]. For example, Theorem  stated in Remark  was obtained in []. The complete monotonicity of the q-analogue of the function δ , defined by () was researched in [, ]. Remark  This article is a slightly revised version of the preprint [] and a companion paper of the preprint [] and the articles [, ] whose preprints are [, ], respectively.
4,488.4
2013-11-19T00:00:00.000
[ "Mathematics" ]
Cutoff for product replacement on finite groups We analyze a Markov chain, known as the product replacement chain, on the set of generating n-tuples of a fixed finite group G. We show that as n → ∞, the total-variation mixing time of the chain has a cutoff at time (3/2) n log n with window of order n. This generalizes a result of Ben-Hamou and Peres (who established the result for G = Z/2) and confirms a conjecture of Diaconis and Saloff-Coste that for an arbitrary but fixed finite group, the mixing time of the product replacement chain is O(n log n). Introduction Let G be a finite group, and let [n] := {1, 2, . . . , n}. We consider the set G n of all functions σ : [n] → G (or "configurations"). We may define a Markov chain (σ t ) t≥0 on G n as follows: if we have a current state σ , then uniformly at random, choose an ordered pair (i, j) of distinct integers in [n], and change the value of σ (i) to σ (i)σ ( j) ±1 , where the signs are chosen with equal probability. We will restrict the chain (σ t ) t≥0 to the space of generating n-tuples, i.e. the set of σ whose values generate G as a group: S := {σ ∈ G n : ⟨σ (1), . . . , σ (n)⟩ = G}. It is not hard to see that for fixed G and large enough n, the chain on S is irreducible (see [8, Lemma 3.2]). We will always assume n is large enough so that this irreducibility holds. Note that the chain is also symmetric, and it is aperiodic because it has holding on some states. Thus, the chain has a uniform stationary distribution π with π(σ ) = 1/|S|. This Markov chain was first considered in the context of computational group theory: it models the product replacement algorithm for generating random elements of a finite group introduced in [6]. By running the chain for a long enough time t and choosing a uniformly random index k ∈ [n], the element σ t (k) is a (nearly) uniformly random element of G. The product replacement algorithm has been found to perform well in practice [6,10], but the question arises: how large does t need to be in order to ensure near uniformity? One way of answering the question is to estimate the mixing time of the Markov chain. It was shown by Diaconis and Saloff-Coste that for any fixed finite group G, there exists a constant C G such that the ℓ 2 -mixing time is at most C G n 2 log n [8,9] (see also Chung and Graham [3] for a simpler proof of this fact with a different value for C G ).
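A minimal simulation of the chain just defined may help fix ideas. The sketch below performs product replacement steps for a group supplied as a multiplication rule and an inversion rule; it is illustrative only (here the group is the cyclic group Z/5, written additively), and is not code from the paper.

```python
import random

def product_replacement_step(sigma, op, inv):
    """One step of the product replacement chain on the tuple sigma.

    op(a, b): group multiplication;  inv(b): group inverse.
    A uniformly random ordered pair (i, j) with i != j is chosen and
    sigma[i] is replaced by sigma[i] * sigma[j]^{+-1}.
    """
    n = len(sigma)
    i, j = random.sample(range(n), 2)
    b = sigma[j] if random.random() < 0.5 else inv(sigma[j])
    sigma = list(sigma)
    sigma[i] = op(sigma[i], b)
    return tuple(sigma)

# Example: G = Z/5 written additively; start from a generating 3-tuple.
q = 5
op, inv = (lambda a, b: (a + b) % q), (lambda b: (-b) % q)
sigma = (1, 0, 0)
for _ in range(20):
    sigma = product_replacement_step(sigma, op, inv)
print(sigma)
```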
In another line of work, Lubotzky and Pak [12] analyzed the mixing of the product replacement chain in terms of Kazhdan constants (see also subsequent quantitative estimates for Kazhdan constants by Kassabov [11]). We also mention a result of Pak [14] which shows mixing in polylog(|G|) steps when n = (log |G| log log |G|). The reader may consult the survey [15] for further background on the product replacement algorithm. Diaconis and Saloff-Coste conjectured that the mixing time bound can be improved to C G n log n [9, Remark 2, Section 7, p. 290], based on the observation that at least n log n steps are needed by the classical coupon-collector's problem. This was confirmed in the case G = Z/2 by Chung and Graham [4] and recently refined by Ben-Hamou and Peres, who show that when G = Z/2, the chain in fact exhibits a cutoff at time 3 2 n log n in total-variation with window of order n [2]. In this paper, we extend the result of Ben-Hamou and Peres to all finite groups. Note that this also verifies the conjecture of Diaconis and Saloff-Coste for a fixed finite group. To state the result, let us denote the total variation distance between P σ (σ t ∈ · ) and π by d σ (t) := max A⊆S |P σ (σ t ∈ A) − π(A)|. Theorem 1.1 Let G be a finite group. Then, the Markov chain (σ t ) t≥0 on the set of generating n-tuples of G has a total-variation cutoff at time 3 2 n log n with window of order n. More precisely, we have (2) A connection to cryptography We mention another motivation for studying the product replacement chain in the case G = (Z/q) m for a prime q ≥ 2 and integers m ≥ 1. It comes from a public-key authentication protocol proposed by Sotiraki [16], which we now briefly describe. In the protocol, a verifier wants to check the identity of a prover based on the time needed to answer a challenge. First, the prover runs the Markov chain with G = (Z/q) m and n = m, which can be interpreted as performing a random walk on SL n (Z/q), where σ (k) is viewed as the k-th row of a n × n matrix. (In each step, a random row is either added to or subtracted from another random row.) After t steps, the prover records the resulting matrix A ∈ SL n (Z/q) and makes it public. To authenticate, the verifier gives the prover a vector x ∈ (Z/q) n and challenges her to compute y := Ax. The prover can perform this calculation in O(t) operations by retracing the trajectory of the random walk. Without knowing the trajectory, if t is large enough, an adversary will not be able to distinguish A from a random matrix and will be forced to perform the usual matrix-vector multiplication (using n 2 operations) to complete the challenge. Thus, the question is whether t n 2 is large enough for the matrix A to become sufficiently random, so that the prover can answer the challenge much faster than an adversary. Note that when n > m, the product replacement chain on G = (Z/q) m amounts to the projection of the random walk on SL n (Z/q) onto the first m columns. Thus, Theorem 1.1 shows that when m is fixed and n → ∞, the mixing time for the first m columns is around 3 2 n log n. One then hopes that the mixing of several columns is enough to make it computationally intractable to distinguish A from a random matrix; this would justify the authentication protocol, as n log n n 2 . We remark that when t is much larger than the mixing time of the random walk on SL n (Z/q) generated by row and additions and subtractions, it is information theoretically impossible for an adversary to distinguish A from a random matrix. 
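To illustrate the prover's shortcut described above, the sketch below records the trajectory as triples (i, j, s) meaning "row i ← row i + s·row j (mod q)", builds A from the identity, and then computes A·x by replaying the same O(t) operations on the coordinates of x instead of the n² operations of a generic matrix–vector product. The trajectory representation and helper names are assumptions made for the example.

```python
import random
import numpy as np

def random_trajectory(n, t, q):
    """t random row operations (i, j, s): row i <- row i + s*row j (mod q)."""
    steps = []
    for _ in range(t):
        i, j = random.sample(range(n), 2)
        steps.append((i, j, random.choice([1, q - 1])))  # +1 or -1 mod q
    return steps

def matrix_from_trajectory(steps, n, q):
    """Apply the recorded row operations to the identity, giving A = E_t ... E_1."""
    A = np.eye(n, dtype=np.int64)
    for i, j, s in steps:
        A[i] = (A[i] + s * A[j]) % q
    return A

def apply_trajectory(steps, x, q):
    """Compute A @ x in O(t) scalar operations by replaying the walk on x."""
    y = np.array(x, dtype=np.int64) % q
    for i, j, s in steps:
        y[i] = (y[i] + s * y[j]) % q
    return y

n, t, q = 6, 50, 7
steps = random_trajectory(n, t, q)
A = matrix_from_trajectory(steps, n, q)
x = np.arange(n) % q
assert np.array_equal(apply_trajectory(steps, x, q), (A @ x) % q)
print("fast replay matches A @ x")
```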
However, the diameter of the corresponding Cayley graph on SL n (Z/q) is known to be of order n 2 log q n [1,5], so a lower bound of the same order necessarily holds for the mixing time. Diaconis and Saloff-Coste [8,Section 4,p. 420] give an upper bound of O(n 4 ), which was subsequently improved to O(n 3 ) by Kassabov [11]. Closing the gap between n 3 and n 2 log n remains an open problem. Outline of proof The proof of Theorem 1.1 analyzes the mixing behavior in several stages: • an initial "burn-in" period lasting around n log n steps, after which the group elements appearing in the configuration are not mostly confined to any proper subgroup of G; • an averaging period lasting around 1 2 n log n steps, after which the counts of group elements become close to their average value under the stationary distribution; and • a coupling period lasting O(n) steps, after which our chain becomes exactly coupled to the stationary distribution with high probability. The argument is in the spirit of [2], but a more elaborate analysis is required in the second and third stages. To analyze the first stage, for a fixed proper subgroup H , the number of group elements in H appearing in the configuration is a birth-and-death process whose transition probabilities are easy to estimate. The analysis of the resulting chain is the same as in [2], and we can then union bound over all proper subgroups H . In the second stage, for a given starting configuration σ 0 ∈ S, we consider quantities n a,b (σ ) counting the number of sites k where σ 0 (k) = a and σ (k) = b. A key observation (which also appears in [2]) is that by symmetry, projecting the Markov chain onto the values (n a,b (σ t )) a,b∈G does not affect the mixing behavior. Thus, it is enough to understand the mixing behavior of the counts n a,b . One expects these counts to evolve towards their expected value E σ ∼π n a,b (σ ) as the chain mixes. To carry out the analysis rigorously, we write down a stochastic difference equation for the n a,b and analyze it via the Fourier transform. Intuitively, as n → ∞, the process approaches a "hydrodynamic limit" so that it becomes approximately deterministic. It turns out that after about 1 2 n log n steps, the n a,b are likely to be within O( √ n) of their expected value. Our analysis requires a sufficiently "generic" initial configuration, which is why the first stage is necessary. Finally, in the last stage, we show that if the (n a,b (σ )) a,b∈G and (n a,b (σ )) a,b∈G for two configurations are within O( √ n) in 1 distance, they can be coupled to be exactly the same with high probability after O(n) steps of the Markov chain. A standard argument involving coupling to the stationary distribution then implies a bound on the mixing time. The main idea to prove the coupling bound is that even if the 1 distance evolves like an unbiased random walk, there is a good chance that it will hit 0 due to random fluctuations. A similar argument is used to prove cutoff for lazy random walk on the hypercube [13,Chapter 18]. However, some careful accounting is necessary in our setting to ensure that in fact the 1 distance does not increase in expectation and to ensure sufficient fluctuations. Organization of the paper The rest of the paper is organized as follows. In Sect. 2, we state (without proof) the key lemmas describing the behavior in each of the three stages and use these to prove the upper bound (1) in Theorem 1.1. Sections 3 and 4 contain the proofs of these lemmas. Finally, in Sect. 
5, we prove the lower bound (2) in Theorem 1.1; this is mostly a matter of verifying that the estimates used in the upper bound were tight. Notation Throughout this paper, we use c, C, C , . . ., to denote absolute constants whose exact values may change from line to line, and also use them with subscripts, for instance, C G to specify its dependency only on G. We also use subscripts with big-O notation, e.g. we write O G ( · ) when the implied constant depends only on G. Proof of Theorem 1.1 (1) Let us fix a finite group G and denote its cardinality by Q := |G|. For a configuration σ ∈ S, let n a (σ ) denote the number of sites having group element a, i.e., n a (σ ) := |{i ∈ [n] : σ (i) = a}|. Thus, S non (c) is the set of states σ where the group elements appearing in σ are not mostly confined to any particular proper subgroup of G. The next lemma shows that we reach S non (1/3) in about n log n steps, and once we reach S non (1/3), we remain in S non (1/6) for n 2 steps with high probability. Note that n 2 is much larger than the overall mixing time, so we may essentially assume that we are in S non (1/6) for all of the later stages. The burn-in period Moreover, there exists a constant C G depending only on G such that Let (N t ) t≥0 be the birth-and-death chain with the following transition probabilities for 1 ≤ k ≤ n: We start this chain at N 0 = n H non (σ 0 ); note that because the elements appearing in σ 0 generate G, we are guaranteed to have n H non (σ 0 ) > 0. The above birth-and-death chain corresponds to the behavior of (n H non The chain (N t ) is precisely what is analyzed in [2] for the case G = Z/2. Let [2, (2) in the proof of Lemma 1] and thus and The proof of Lemma 1]. Hence by Chebyshev's inequality for all large enough β > 0, Moreover, we have P n/3 T n/6 ≤ n 2 ≤ n 2 e −n/10 . Indeed, this follows from the fact that for m < k, we have where π BD (k) = n k /(2 n − 1) [2, (5) and the following in the proof of Proposition 2]. We now take a union bound over all the proper subgroups H . The averaging period In the next stage, the counts n a (σ t ) go toward their average value. We actually analyze this stage in two substages, looking at a "proportion vector" and "proportion matrix", as described below. Proportion vector chain For a configuration σ ∈ S, we consider the Q-dimensional vector (n a (σ )/n) a∈G , which we call the proportion vector of σ . One may check that for a typical σ ∈ S, each n a (σ )/n is about 1/Q. For each δ > 0, we define the δ-typical set where · denotes the 2 -norm in R G . The following lemma implies that starting from σ ∈ S non (1/3), we reach S * (δ) in O δ (n) steps with high probability. The proof is given in Sect. 3.4. Lemma 2.2 Consider any σ ∈ S non (1/3) and any constant δ > 0. There exists a constant C G,δ depending only on G and δ such that for any T ≥ C G,δ n, we have for all large enough n. Proportion matrix chain We actually need a more precise averaging than what is provided by Lemma 2.2. Fix a configuration σ 0 ∈ S. For any σ ∈ S and for any a, b ∈ G, define If we run the Markov chain (σ t ) t≥0 with initial state σ 0 , then n σ 0 a,b (σ t ) is the number of sites that originally contained the element a (at time 0) but now contain b (at time t). Note that We can then associate with (σ t ) t≥0 another Markov chain n σ 0 a,b (σ t ) a,b∈G for t ≥ 0, which we call the proportion matrix chain (with respect to σ 0 ). The state space for the proportion matrix chain is {0, 1, . . . , n} G×G , and the transition probabilities depend on σ 0 . 
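The counting quantities introduced above are straightforward to compute; the following Python sketch (illustrative only, with group elements encoded as integer labels 0, ..., Q−1, so the group law itself is not needed for the counts) evaluates the proportion vector (n_a(σ)/n)_{a∈G}, tests membership in the δ-typical set S*(δ), and tabulates the proportion-matrix counts n_{a,b}(σ) with respect to a reference configuration σ_0.

```python
import numpy as np

def proportion_vector(sigma, Q):
    """(n_a(sigma)/n)_{a in G}: fraction of sites holding each group element."""
    counts = np.bincount(sigma, minlength=Q)
    return counts / len(sigma)

def is_delta_typical(sigma, Q, delta):
    """sigma in S*(delta): the proportion vector lies within delta of the
    uniform vector (1/Q, ..., 1/Q) in the l2-norm."""
    v = proportion_vector(sigma, Q)
    return np.linalg.norm(v - 1.0 / Q) <= delta

def proportion_matrix(sigma0, sigma, Q):
    """n_{a,b}(sigma) with respect to sigma0: number of sites that held a at
    time 0 and hold b now.  Rows are indexed by a, columns by b."""
    N = np.zeros((Q, Q), dtype=np.int64)
    for a, b in zip(sigma0, sigma):
        N[a, b] += 1
    return N

# toy usage with Q = 4 element labels and n = 12 sites
Q, n = 4, 12
rng = np.random.default_rng(1)
sigma0 = rng.integers(0, Q, size=n)
sigma = rng.integers(0, Q, size=n)
print(proportion_vector(sigma, Q))
print(is_delta_typical(sigma, Q, delta=1.0 / (4 * Q)))
N = proportion_matrix(sigma0, sigma, Q)
# row sums of the proportion matrix recover the time-0 counts n_a(sigma0)
assert N.sum(axis=1).tolist() == np.bincount(sigma0, minlength=Q).tolist()
```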
The proportion matrix acts like a "sufficient statistic" for analyzing our Markov chain started at σ * , because of the permutation invariance of our dynamics. In fact, as the following lemma shows, the distance to stationarity of the proportion matrix chain is equal to the distance to stationarity of the original chain. . Let π σ * be the stationary measure for the for the set of configurations with N as their proportion matrix. Since the distribution of σ t is invariant under permutations on sites i ∈ [n] preserving the set {i : σ * (i) = a} for every a ∈ G, the conditional probability measures P σ * σ t ∈ · | σ t ∈ X (N ) and π( · | X (N ) ) are both uniform on X (N ) . This implies that for each σ ∈ X (N ) , and summing over all σ ∈ X (N ) and all N , we obtain the claim. For σ 0 ∈ S and r > 0, define the set of configurations Roughly speaking, the following lemma shows that starting from a typical configuration σ * ∈ S * 1 4Q , we need about 1 2 n log n steps to reach S * σ * , R √ n , where R is a constant. We show this fact in a slightly more general form where the initial state need not be σ * ; the proof is given in Sect. 3.5. Lemma 2.4 Consider any σ * , σ * ∈ S * 1 4Q , and let T := 1 2 n log n . There exists a constant C G > 0 depending only on G such that for any given R > 0, we have for all large enough n. The coupling period After reaching S * σ * , R √ n , we show that only O(n) additional steps are needed to mix in total variation distance. The main ingredient in the proof is a coupling of proportion matrix chains so that they coalesce in O(n) steps when they both start from configurations σ,σ ∈ S * σ * , R √ n . We construct such a coupling and prove the following lemma in Sect. 4. Lemma 2.5 Consider any Then, there exists a coupling (σ t ,σ t ) of the Markov chains with initial states (σ,σ ) such that for a given β > 0 and all large enough n, To translate this coupling time into a bound on total variation distance, we need also the simple observation that the stationary measure π concentrates on S * σ * , R √ n except for probability O(1/R 2 ), as given in the next lemma. Lemma 2.6 For the stationary distribution π of the chain (σ t ) t≥0 , for every R > 0 and for all n > m, Moreover for every δ < 1/(2Q), for every R > 0 and for all n > m, where C G and m are constants depending only on G. Proof Observe that since the stationary distribution π is uniform on S, it is given by the uniform distribution Unif on G n conditioned on S. Note that we can always generate G using each of its |G| elements, so we have an easy lower bound of |S| ≥ |G| n−|G| . Consequently, we have Concerning the second assertion, we note that n a (σ * ) ≥ (1/Q − δ)n for each a ∈ G; the rest follows similarly, so we omit the details. Remark 2.7 In Lemma 2.6 above, we have given a very loose bound on C G for sake of simplicity. Actually, it is not hard to see that holding G fixed, we have lim n→∞ |S|/|G| n = 1. See also [9, Section 6.B.] for more explicit bounds for various families of groups. Together, Lemmas 2.4, 2.5, and 2.6 imply the following bound for total variation distance. where C G is a constant depending only on G. Proof Letσ be drawn from the stationary distribution π . Define where (σ t ) is a Markov chain started atσ . Let π σ * denote the stationary distribution for the proportion matrix with respect to σ * . Sinceσ was drawn from π , the proportion matrix ofσ t remains distributed as π σ * for all t. We first run σ andσ independently up until time T 1 := 1 2 n log n . 
For a parameter R to be specified later, consider the events Lemma 2.4 implies that P(G c ) ≤ C G e −R + 1 n , and Lemma 2.6 implies that P(G c ) ≤ Let T 2 := βn . Starting from time T 1 , as long as both G andG hold, we may use Lemma 2.5 to form a coupling (σ t ,σ t ) so that Setting R = β 1/4 , we conclude that We have T = T 1 + T 2 , and recall that the proportion matrix forσ is stationary for all time. This yields The result then follows by Lemma 2.3. Proof of the main theorem We now combine the lemmas from the burn-in, averaging, and coupling periods to complete the proof of the upper bound in Theorem 1.1. Let τ 1/3 be the first time to hit S non (1/3) as in Lemma 2.1. Then, Lemma 2.1 implies that for any σ 1 ∈ S and any t ≥ 0, we have Next, by Lemma 2.2, for any σ 2 ∈ S non (1/3) and when β and n are sufficiently large, we have that P Finally, Lemma 2.8 states that Thus, combining (3), (4), and (5), we obtain for any σ ∈ S that sending n → ∞ and then β → ∞ yields (1). Proofs for the averaging period In this section, we prove Lemmas 2.2 and 2.4. The proofs are based on analyzing stochastic difference equations satisfied by the Fourier transform of the proportion vector or matrix. The Fourier transform for G We first establish some notation and preliminaries for the Fourier transform. Let G * be a complete set of non-trivial irreducible representations of G. In other words, for each ρ ∈ G * , we have a finite dimensional complex vector space V ρ such that ρ : G → G L(V ρ ) is a non-trivial irreducible representation, and any non-trivial irreducible representation of G is isomorphic to some unique ρ ∈ G * . Moreover, we may equip each V ρ with an inner product for which ρ ∈ G * is unitary. For a configuration σ ∈ S and for each ρ ∈ G * , we consider the matrix acting on V ρ given by so that x ρ (σ ) is the Fourier transform of the proportion vector at the representation ρ. We write x(σ ) := (x ρ (σ )) ρ∈G * . Let V := ρ∈G * End C (V ρ ), and write d ρ := dim C V ρ . For an element x = (x ρ ) ρ∈G * ∈ V , we define a norm · V given by where A, B HS = Tr (A * B) denotes the Hilbert-Schmidt inner product in End C (V ρ ) and · HS denotes the corresponding norm. (Note that ·, · HS and · HS depend on ρ, but for sake of brevity, we omit the ρ when there is no danger of confusion.) The Peter-Weyl theorem [7, Chapter 2] says that where the isomorphism is given by the Fourier transform. The Plancherel formula then reads Thus, in order to show that σ ∈ S * (δ), it suffices to show that x(σ ) V is small. A similar argument may be applied to the proportion matrix instead of the proportion vector. Finally, for an element A ∈ End C (V ρ ), we will at times also consider the operator norm A op := sup v∈V ρ ,v =0 Av / v . We will also sometimes use the following (equivalent) variational characterization of the operator norm: The special case of G = Z/q On a first reading of this section, the reader may wish to consider everything for the special case of G = Z/q for some integer q ≥ 2. In that case, each representation is one-dimensional, and the representations can be indexed by = 0, 1, 2, . . . , q − 1. The Fourier transform is then particularly simple: the coefficients are scalar values where ω := e 2πi q is a primitive q-th root of unity. This special case already illustrates most of the main ideas while simplifying the estimates in some places (e.g. matrix inequalities we use will often be immediately obvious for scalars). 
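For G = Z/q the Fourier data reduce to the scalar coefficients just described; the short numerical sketch below (with the natural normalization x_ℓ(σ) = (1/n) Σ_i ω^{ℓσ(i)}, which may differ from the paper's convention by a constant factor) also verifies the elementary Plancherel identity relating these coefficients to the ℓ2 distance of the proportion vector from uniform, the scalar analogue of the relation used in the text.

```python
import numpy as np

def fourier_coefficients(sigma, q):
    """x_l(sigma) = (1/n) * sum_i omega^(l * sigma(i)) for l = 1, ..., q-1,
    where omega = exp(2*pi*i/q): the Fourier coefficients of the proportion
    vector at the non-trivial characters of Z/q."""
    n = len(sigma)
    omega = np.exp(2j * np.pi / q)
    return np.array([np.sum(omega ** (l * sigma)) / n for l in range(1, q)])

# toy check of the Plancherel relation for G = Z/5:
# sum_a (n_a/n - 1/q)^2  ==  (1/q) * sum_{l != 0} |x_l|^2   (with this normalization)
q, n = 5, 200
rng = np.random.default_rng(2)
sigma = rng.integers(0, q, size=n)
p = np.bincount(sigma, minlength=q) / n
x = fourier_coefficients(sigma, q)
lhs = np.sum((p - 1.0 / q) ** 2)
rhs = np.sum(np.abs(x) ** 2) / q
assert np.isclose(lhs, rhs)
```

Thus, exactly as in the text, showing that all the |x_ℓ| are small is equivalent (up to normalization) to showing that the configuration lies in a δ-typical set.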
A stochastic difference equation for the n a For a ∈ G, we next analyze the behavior of n a (σ t ) over time. For convenience, we write n a (t) = n a (σ t ). Let F t denote the σ -field generated by the Markov chain (σ t ) t≥0 up to time t. Then, our dynamics satisfy the equation Note that |n a (t + 1) − n a (t)| ≤ 1 almost surely. Thus, for each a ∈ G, we can write the above as a stochastic difference equation where E[M a (t + 1) | F t ] = 0 and |M a (t)| ≤ 2 almost surely. It is easiest to analyze this equation through the Fourier transform. Writing x ρ (t) = x ρ (σ t ), we calculate from (8) that so that our equation becomes Note that we have and thus, A general estimate for stochastic difference equations Before proving Lemma 2.2, we also need a technical lemma for controlling the behavior of stochastic difference equations, which will be used to analyze (9) as well as other similar equations. (z(t)) t≥0 be a sequence of [0, 1]-valued random variables adapted to a filtration (F t ) t≥0 . Let ε ∈ (0, 1) be a small constant, and let ϕ : R + → (0, 1] be a non-decreasing function. Suppose that there are F t -measurable random variables M(t) for which and which, for some constant D, satisfy the bounds Then, for each t and each λ > 0, we have for constants c D,ϕ , C D,ϕ depending only on D and ϕ. Taking conditional expectations in the inequality relating z(t + 1) to z(t), we have Rearranging and using the fact that ϕ(t) is non-decreasing, we have Consequently, is a supermartingale, and its increments are bounded by Recall that ϕ is non-decreasing, so that for all t ≥ s ≥ 0, we have Using this with (11), we see that the sum of the squares of the first t increments is at most By the Azuma-Hoeffding inequality, this yields which in turn implies The result then follows upon shifting and rescaling of λ. Proportion vector chain: Proof of Lemma 2.2 We first prove a bound for the Fourier coefficients x ρ (t). (1/3) and any ρ ∈ G * . We have a constant c G depending only on G for which Lemma 3.2 Consider any σ ∈ S non for all large enough n. This immediately implies Lemma 2.2. Proof of Lemma 2.2 With c G defined as in Lemma 3.2, take C G,δ large enough so that for any T ≥ C G,δ n, Then, Lemma 3.2 and Plancherel's formula yield for large enough n, as desired. We are now left with proving Lemma 3.2, which relies on the following bound on the operator norm. Lemma 3.3 There exists a positive constant γ G depending on G such that for any ρ ∈ G * and any σ ∈ S non (1/6), Proof Let G denote the set of all probability distributions on G, and for c ∈ (0, 1), let Consider a representation ρ ∈ G * , and consider the function h : Then, h(μ) is hermitian, and since ρ is unitary, we clearly have We claim that λ(μ) < 1 for each μ ∈ G (c). Indeed, suppose the contrary. Then, there exists a non-zero vector v ∈ V ρ such that Re ρ(a)v, v = 1 for all a ∈ G with μ(a) > 0. This implies that the support of μ is included in the subgroup Since ρ is a (non-trivial) irreducible representation, H is a proper subgroup of G, and μ(H ) = 1, contradicting the assumption that μ ∈ G (c). Note that μ → λ(μ) is continuous. We may define Then, we have for any σ ∈ S non (1/6), Taking 0 < γ G < 1 −γ G , and plugging this into the definition of X ρ gives X ρ (σ ) − γ G n I d ρ . Note that X ρ (σ ) − 2 n−1 I d ρ . Combining these together gives the result. Remark 3.4 A much more direct approach is possible in the case G = Z/q. The condition σ ∈ S non (1/6) implies that n 0 (σ ) ≤ 5 6 . Then, we have for some positive γ G . 
Some rearranging of equations then yields the desired result. Proof of Lemma 3.2 Fix where γ G is taken as in Lemma 3.3. Since our chain starts at σ ∈ S non (1/3), Lemmas 2.1 and 3.3 together imply that P σ (G c n 2 ) ≤ C G n 2 e −n/10 . Next, we turn to (9). Rearranging (9) and squaring, we have Substituting into (12), we obtain Note that we have the bounds We now apply Lemma 3.1 with ε = 1 n , ϕ(t) = γ G , D = 6Q 2 d ρ , and λ = n 1/4 . This yields Consequently, The lemma with c G = γ G /2 then follows from union bounding over all 1 ≤ t ≤ n 2 and taking n sufficiently large. Proportion matrix chain: Proof of Lemma 2.4 We carry out a similar albeit more refined strategy to analyze the proportion matrix. Throughout this section, we assume our Markov chain (σ t ) t≥0 starts at an initial state σ * ∈ S * 1 4Q . We again write n a (t) = n a (σ t ) and n a,b (t) = n σ * a,b (σ t ), and similar to before, the n a,b (t) satisfy the difference equation where We can again analyze this equation via the Fourier transform. In this case, for each a ∈ G, we take the Fourier transform of n a,b (t)/n a (σ * ) b∈G . For ρ ∈ G * , let becomes y a,ρ (t + 1) − y a,ρ (t) = y a,ρ (t)X ρ (t) + M a,ρ (t + 1). (14) Note that E σ [ M a,ρ (t +1) | F t ] = 0. Also, since we assumed σ * ∈ S * 1 4Q , it follows that n a (σ * ) n ≥ 1 2Q . Thus, we also know M a,ρ (t + 1) HS ≤ Again, our main step is a bound on the Fourier coefficients y a,ρ (t), which will also be useful later in proving Lemma 2.5. Lemma 3.5 Consider any σ * , σ * ∈ S * 1 4Q . There exist constants c G , C G > 0 depending only on G such that for all large enough n, we have for all t and R > 0. The above lemma directly implies Lemma 2.4. Proof of Lemma 2.4 We apply Lemma 3.5 to each a ∈ G and ρ ∈ G * . Recall that T = 1 2 n log n , so that Then, Lemma 3.5 implies Union bounding over all a ∈ G and ρ ∈ G * and using the Plancherel formula, this yields for sufficiently large C G and n. We now prove Lemma 3.5. Before proceeding with the main proof, we need the following routine estimate as a preliminary lemma. By spherical symmetry, we have which is the first inequality. Again by spherical symmetry, the eigenvalues of the Hessian ∇ 2 θ n (x) can be directly computed to be f ( x ) and f ( x )/ x . But these are bounded by Thus, ∇ 2 θ n (x) √ n I , and the second inequality follows from Taylor expansion. where the second inequality follows from the variational formula for operator norm (i.e. that B A HS ≤ A op B HS ), and the third inequality follows from the fact that θ n is convex with θ n (0) = 0. Thus, we may write Now, let z t := 1 H t θ n (y a,ρ (t)), and note that since X ρ (σ ) n whenever H t holds. Thus, We may then apply Lemma 3.1 with ε = 1 n and D = 8Q 4 d ρ . Note that for all large enough n. Thus, Lemma 3.1 implies that Consequently, as desired. Construction of the coupling: Proof of Lemma 2.5 For each δ > 0, we define a subset of {0, 1, . . . , n} G×G by for every a, b ∈ G and a,b∈G Lemma 4.1 Consider a configuration σ * ∈ S and a constant 0 < δ ≤ 1 2Q 2 , and assume that (1 − δ)n/Q 2 is an integer. Let (σ t ) t≥0 and (σ t ) t≥0 be two product replacement chains started at σ andσ , respectively. Then, there exists a coupling (σ t ,σ t ) of the Markov chains satisfying the following: Let Proof Let us abbreviate n a,b (t) = n σ * a,b (σ t ) andñ a,b (t) = n σ * a,b (σ t ). Let m a,b (t) := min(n a,b (t),ñ a,b (t)). For each a ∈ G, we define the quantity so that D t = a∈G d a (t). 
For accounting purposes, it is helpful to introduce two sequences of elements of G × G. These sequences are chosen so that the number of x k equal to (a, b) is exactly n a,b , and similarly the number ofx k equal to (a, b) isñ a,b . Moreover, we arrange their indices in a coordinated fashion, as described below. We define three families of disjoint sets: P a,b , Q a , and R a ⊂ [n]. • For each a, b ∈ G, let P a,b be a set of size (1 − δ)n/Q 2 such that for any k ∈ P a,b , we have x k =x k = (a, b). (This is possible provided that (n a,b (t)), (ñ a,b (t)) ∈ M δ holds.) • For each a ∈ G, let Q a be a set of size b∈G (m a,b − |P a,b |) such that for any k ∈ Q a , x k =x k = (a, b) for some b. (Note that Q a may be empty.) Fig. 1 Illustration of cases (i)-(iv) Case (iv) • For each a ∈ G, let R a be a set of size d a such that for any k ∈ R a , x k andx k both have a as their first coordinate. (This R a is well-defined since b n a,b = bñ a,b for each a; it may also be empty.) Suppose that D t > 0, so that for some a * , b * , b * ∈ G we have n a * ,b * >ñ a * ,b * and n a * ,b * <ñ a * ,b * . Let us consider all possible ways to sample a pair of indices and a sign (k, l, s) ∈ {1, 2, . . . , n} 2 × {±1} with k = l. Suppose x k = (a k , b k ) and x l = (a l , b l ). We think of (k, l, +1) as corresponding to a move on (n a,b (t)) where n a k ,b k is decremented and n a k ,(b k ·b l ) is incremented. Similarly, (k, l, −1) corresponds to a move where n a k ,b k is decremented and n a k ,(b k ·b −1 l ) is incremented. We may also think of (k, l, ±1) as corresponding to moves on (ñ a,b (t)) in an analogous way. We now analyze four cases, as illustrated in Fig. 1. (i) Case (k, l) ∈ (P Q)×(P Q). For all but an exceptional situation described below, we apply the move corresponding to (k, l, s) to both states (n a,b (t)) and (ñ a,b (t)). In these cases, D t+1 = D t . We now describe the exceptional situation. Define Then, the exceptional situation occurs when s = +1 and (k, l) ∈ S S . Take any bijection τ from S to S . If (k, l) ∈ S, then we apply (k, l, +1) to (n a,b (t)) while applying (τ (k, l), +1) to (ñ a,b (t)). This increments n a * ,b * , decrements n a * ,b * , and has no effect on the (ñ a,b (t)). The overall effect is that D t+1 = D t − 1. The exceptional event occurs with probability (1−δ) 2 2Q 3 , and when it occurs, D t increases or decreases by 1 with equal probability. Thus, the exceptional situation plays the role of introducing some unbiased fluctuation in D t and gives us (17). This occurs with probability Apply the move corresponding to (k, l, s) to both states. This increases D t by at most 1. We will see later that the effect of this case is small compared to the other cases. (iii) Case (k, l) ∈ P × R. This occurs with probability Apply the move corresponding to (k, l, s) to both states. Again, this increases D t by at most 1, but there is also a chance not to increase. Suppose that x l = (a 1 , b 1 ) andx l = (a 1 ,b 1 ), and suppose that k ∈ P a 2 ,b 2 . Then the move has the effect of decreasing n a 2 ,b 2 andñ a 2 ,b 2 while increasing n a 2 ,(b 2 ·b s 1 ) andñ a 2 ,(b 2 ·b s 1 ) . Note that conditioned on this case happening, (a 2 , b 2 ) is distributed uniformly over G × G. When (a 2 , (b 2 ·b s 1 )) = (a * , b * ) or (a 2 , (b 2 · b s 1 )) = (a * , b * ), the move does not increase D t . Therefore there is at least a 2/Q 2 chance that D t is actually not increased. Hence, the probability that D t is increased by 1 is at most (iv) Case (k, l) ∈ R × P. 
This occurs with probability Suppose that x k = (a, b) andx k = (a,b). Let τ be a permutation of P such that for l ∈ P a,c , one has τ (l) ∈ P a,b −1 ·b·c s . Then apply (k, l, s) to (n a,b (t)) and apply (k, τ (l), s) to (ñ a,b (t)). This always decreases D t by 1. Let us now summarize what we know when (n a,b (t)), (ñ a,b (t)) ∈ M δ and D t > 0. From Cases (i), (ii), and (iii), we have From Cases (i) and (iv), we have verifying (16). To fully define the coupling, when D t = 0, we can couple σ t and σ t to be identical, and if either (n a,b (t)) / ∈ M δ or (ñ a,b (t)) / ∈ M δ , we may run the two chains independently. Proof of Lemma 2.5 Since σ ∈ S * σ * , R √ n , we must have for each a ∈ G and ρ ∈ G * that y σ * a,ρ (σ ) HS ≤ R √ n . Note that for large enough n, we have S * σ * , R √ n ⊆ S * 1 5Q 3 . Thus, we may apply Lemma 3.5 to obtain for large enough n. Define the event G t := σ s ∈ S * σ * , 1 5Q 3 for all 1 ≤ s ≤ t . The Plancherel formula applied to (18) implies that P(G c n 2 ) ≤ 3Q 2 n . We may analogously define an eventG t forσ and let A t := G t ∩G t . Thus, P(A c n 2 ) ≤ 6Q 2 n . Pick δ ∈ 2 5Q 2 , 3 7Q 2 so that (1 − δ )n/Q 2 is an integer. Note that when A t holds, we have σ t ∈ S * σ * , 1 5Q 3 and σ * ∈ S * 1 5Q 3 ⇒ (n a,b (t)) ∈ M 2 5Q 2 ⊆ M δ , and similarlyσ t ∈ M δ . Thus, we may invoke Lemma 4.1 to give a coupling between σ andσ where on the event A t , the quantity D t is more likely to decrease than increase. Letting D t := 1 A t D t , we see that (D t ) is a supermartingale with respect to (F t ). Recall that T = βn and D 0 ≤ √ QR √ n. As long as β is large enough, we may apply (19) with u = T to get for all large enough n, as desired. Proof of Theorem 1.1 (2) The lower bound is proved essentially by showing that the estimates of Lemmas 2.1 and 2.4 cannot be improved. Let a 1 , a 2 , . . . , a k be a set of generators for G. Let σ ∈ S be the configuration given by otherwise. We will analyze the Markov chain started at σ and show that it does not mix too fast. Recall from Sect. 2 the notation n {id} non (σ ) = |{i ∈ [n] : σ (i) = id}| for the number of sites in σ that do not contain the identity. We first show that if we run the chain for slightly less than n log n steps, most of the sites will still contain the identity. Next, we show that it really takes about 1 2 n log n steps for the Fourier coefficients x ρ to decay to O 1 √ n , as suggested by Lemma 2.4. Note that it suffices here to analyze the x ρ instead of the y a,ρ , which simplifies our analysis. Actually, it suffices to consider (the real part of) the trace of x ρ . Here the orthogonality of characters reads 1 Q a∈G Tr ρ(a) = 0, and it takes about 1 2 n log n steps for ReTr x ρ (t) to decay to O 1 √ n . Lemma 5.2 Consider any ρ ∈ G * and any R > 5. Let T := 1 2 n log n − Rn , and suppose that σ ∈ S satisfies n {id} non (σ ) ≤ n 3 . Then, Proof Let z(t) := (1/d ρ )Tr (x ρ (t) + x ρ (t) * )/2. Then, noting that (9) also holds for x ρ (t) * since x ρ * (t) = x ρ (t) * , we have where E[M(t + 1) | F t ] = 0 and |M(t)| ≤ 2Q n .
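The quantities tracked in this lower-bound argument are easy to observe in simulation. The sketch below (illustrative only; the group Z/3, the size n = 400 and the checkpoint times are arbitrary choices) runs the chain from a configuration of the kind defined above, with one site per generator (here a single generator of Z/3) and the identity elsewhere, and records the fraction of identity sites together with |x_1(t)|, i.e. the quantities appearing in Lemmas 5.1 and 5.2.

```python
import numpy as np

rng = np.random.default_rng(3)

def product_replacement_step(sigma, q):
    """One step of the chain on generating n-tuples of Z/q (written additively):
    pick distinct sites k != l and replace sigma(k) by sigma(k) +/- sigma(l)."""
    n = len(sigma)
    k, l = rng.choice(n, size=2, replace=False)
    s = rng.choice([1, -1])
    sigma[k] = (sigma[k] + s * sigma[l]) % q

def x1(sigma, q):
    """Fourier coefficient of the proportion vector at the character l = 1."""
    return np.mean(np.exp(2j * np.pi * sigma / q))

# lower-bound starting configuration: one site holds a generator, the rest the identity
q, n = 3, 400
sigma = np.zeros(n, dtype=np.int64)
sigma[0] = 1

checkpoints = {int(a * n * np.log(n)): a for a in (0.5, 1.0, 1.5, 2.0)}
for t in range(1, max(checkpoints) + 1):
    product_replacement_step(sigma, q)
    if t in checkpoints:
        frac_id = np.mean(sigma == 0)
        print(f"t = {checkpoints[t]:.1f} * n log n: "
              f"fraction of identity sites = {frac_id:.3f}, |x_1| = {abs(x1(sigma, q)):.4f} "
              f"(compare with 1/sqrt(n) = {1 / np.sqrt(n):.4f})")
```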
10,222.8
2020-02-18T00:00:00.000
[ "Mathematics" ]
On enhancing the noise-reduction performance of the acoustic lined duct utilizing the phase-modulating metasurface This work proposes a noise-reduction structure that integrates phase-modulating metasurface (PMM) with acoustic liners (ALs) to enhance the narrow band absorption performance of a duct with relatively small length-diameter ratio. The PMM manipulates the wavefront by introducing different transmission phase shifts based on an array of Helmholtz resonators, so that the spinning wave within the duct can be generated. Compared with the plane wave, the generated spinning wave has a lower group velocity, which results in a greater traveling distance over the ALs in the duct. The optimization design is performed to determine the final structural parameters of the PMM, which is based on the predictions of the amplitude and phase shift of the acoustic wave at the outlet of the PMM using the theory of passive phased array. With the manipulation of the PMM, the incident plane wave is modulated into a spinning wave, and then enters into the acoustic liner duct (ALD), whose structural parameters are optimized by maximizing the transmission loss using the mode-matching technique. Finally, the noise-reduction performance of this combined structure is evaluated by numerical simulations in the presence of grazing flow. The results demonstrate that, compared with the traditional ALD, the proposed structure exhibits a significant increase in transmission loss within the considered frequency band, especially near the peak frequency of the narrow band noise. M x Mach number of grazing flow n Radial mode N 1 Transfer matrix between the inlet and the first hole for PMM section N 2 Transfer matrix between adjacent holes for PMM section p Acoustic pressure p 1 Acoustic pressure from the inlet to the first hole for PMM section p i (i ≥ 2) Acoustic pressure between the (i − 1)th and the ith holes for PMM section P ± i Propagation coefficient in the directions of ±x in the ith hole for PMM section PMM Phase-modulating metasurface R Duct radius S e Equivalent area of the duct for PMM section S h Section area of the hole for PMM section t ij Corresponding element (i, j) in T T Transmission coefficient of PMM TL Transmission loss T 1 Transfer matrix from the inlet to the first hole for PMM section T i (i ≥ 2) Transfer matrix between the (i − 1)th and the ith holes for PMM section U 1 Volume velocity from the inlet to the first hole for PMM section U i (i ≥ 2) Volume velocity between the (i − 1)th and the ith holes for PMM section V 2 Volume of the cavity of ALD section W Acoustic power Z c Acoustic impedance of the cavity for PMM section Z d Acoustic impedance in duct above the cavity for PMM section Z h Acoustic impedance of the hole for PMM section Z l Specific acoustic impedance of ALD Z t Impedance in duct for PMM section α Correction factor Acoustic wavelength σ Perforation rate µ Dynamic viscosity of air ρ 0 Density of air Reduction of noise generated due to ground or air traffic is currently and will remain an important topic in the future.One of the most effective ways to solve this problem is to use the acoustic absorbers.Absorbers placed on the surface of a structure is usually called acoustic liners (ALs), which is widely employed in aircraft gas turbine engine noise-reduction 1 .Traditionally, ALs are typically fabricated by a hard-backed honeycomb and micro-perforated plate (MPP), called single-degree-of-freedom (SDOF) liners 2,3 .However, the traditional SDOF liners produce a narrow absorption 
spectra and the maximum absorption occurs at the resonant frequency which is dependent on the depth of honeycomb core.To achieve the aim of noise-reduction over a broad range of frequencies, a septum can be used to separate honeycomb into two parts, forming two-degree-of-freedom (2DOF) liners 4,5 .2DOF liners are capable to cover the necessary source spectrum.However, due to construction difficulties and larger thickness, 2DOF liners are seldom used in aircraft engineering. In recent years, further enhancement of acoustic absorbing performance of a lined duct in a limited length has received widespread attention.To this end, the non-uniform AL structure, characterized by the spatial variations of the impedance, has been proposed.Many theoretical analysis methods of the duct lined with circumferential [6][7][8] and axial non-uniform 9,10 absorbers have been developed.In this regard, Watson evaluated the acoustic absorbing performance of circumferentially segmented duct, and concluded that the circumferentially segmented absorption structure yields better broadband performance than the uniform absorption structure 11 .Palani et al. developed a novel non-uniform acoustic metasurface that incorporates a slanted porous septum design with varying open areas and a multiple folded cavity metasurface concept to enhance broadband absorption 12 .Jiang et al. developed an axially symmetrical flow duct with an azimuthally non-uniform AL on the duct wall to improve the absorption efficiency of spinning wave 13 . As is well known, the noise-reduction performance of a lined duct is closely related to its geometric dimensions.Suppose a cylindrical duct with length L and diameter D is mounted with ALs on its inner wall, then the transmission loss of the duct is proportional to the length-diameter ratio 14 , that is TL ∝ L/D .Therefore, it is difficult to achieve good noise-reduction performance for the duct with relatively small length-diameter ratio.Fortunately, wave manipulation technique based on metasurface may be a promising remedy to this issue.Wave manipulation using artificial materials is a hot topic in the field of materials physics.Introducing the concept of metasurfaces to the fields of materials science and physics via the generalized Snell's law has created opportunities for manipulating optical waves and led to many new applications 15,16 .Based on these pioneering works in optics, significant advancements have also been achieved in acoustics, including the utilization of acoustic metasurfaces for wave manipulation.Acoustic metasurfaces, as a type of wavefront manipulation devices, possess the capabilities to achieve anomalous reflection, anomalous refraction, focusing and absorption of acoustic waves [17][18][19][20] .Recently, there has been a significant surge of interest in the field of acoustic wave manipulation using PMM that is usually composed of a set of Helmholtz resonators.This kind of PMM is widely utilized to manipulate wavefront by adjusting its geometrical dimensions [21][22][23] .Li et al. have developed a comprehensive theory for analyzing transmission and reflection properties of a metascreen consisting of four Helmholtz resonators in series with a straight duct 24 .In this study, the refracted acoustic field was controlled by selecting an appropriate phase profile.Xia et al. utilized a two-layer Helmholtz resonator structure to implement an acoustic focusing lens 25 .Sun et al. 
utilized metagratings, which are complex elements composed of three Helmholtz resonators, to manipulate wavefront orientation by combining traditional diffraction and interference with the free phase modulation capability of the local resonant structures 26 .Ismail et al. presented a study on transmission loss through an acoustic metasurface based on Helmholtz resonators 27 .To achieve the expected noise-reduction performance, they performed a parametric study on the sensitivity of design variables, including the number of cells, thickness of metasurface and multilayering.Tang et al. achieved an asymmetric accelerating beam both numerically and experimentally by utilizing a bilayer binary acoustic metasurface consisting of a rectangular cavity (bit '0') and a waveguide with seven Helmholtz resonators (bit '1') 28 .Based on the conversion of angular momentum of acoustic orbits, Liu et al. proposed the concept of acoustic geometric phase element arrays.Welldefined geometric phases can be obtained through a variety of topological charge conversion processes, which provided a new approach to acoustic wave control 29 . This paper focuses on the design of a noise-reduction structure to absorb the narrowband exhaust noise in a duct with relatively small length-diameter ratio.The spectrum of narrowband noise typically has a high peak representing the main harmonic component.Narrowband noise usually occurs in the scenes of the car exhaust pipe 30 , the engine cooling fan 31 , the water pipe 32 , etc. Narrowband noise usually exhibits a whistle-like sound.Kanai 30 reported a narrowband noise with a peak frequency of 3800 Hz in the exhaust duct, which is the target frequency of the noise-reduction structure in this paper.As mentioned above, achieving good acoustic absorption becomes more challenging when dealing with the duct with a relatively small length-diameter ratio.In this case, it may not be feasible to use the ALs alone in the duct.Based on the manipulation capabilities of acoustic waves of metasurface, this paper proposed a combined noise-reduction scheme to create an efficient acoustic absorption for a duct with a relatively small length-diameter ratio.To be specific, the new noise-reduction structure consists of PMM and ALD in series.The PMM is meticulously designed to generate a gradient phase distribution, which transforms the plane wave into a desired spinning wave, and then efficiently absorbed by the ALs.Compared with the plane wave, the generated spinning wave in higher-order circumferential mode exhibits a lower group velocity along the axis of the duct, which yields greater travelling distance in the lined duct 33 .It makes sense, therefore, that a better noise-reduction performance can be achieved. Overall design of the noise-reduction structure The basic idea of enhancing the noise-reduction performance of a lined duct is to utilize a well-designed metasurface to manipulate the phase of plane wave such that a spinning wave in the higher-order mode is produced and then enters into a lined duct to achieve a more efficient acoustic absorption.In this way, the noise-reduction structure proposed in this paper can be divided into two parts, namely, PMM and ALD, they are shown in Fig. 1. 
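The group-velocity argument behind this design can be made quantitative with standard hard-wall duct-mode relations. The following Python/SciPy sketch is not part of the paper: it assumes c0 = 343 m/s, no grazing flow, and the usual cut-off relation κ_mn = j'_mn/R (j'_mn the n-th zero of the Bessel derivative J'_m). For R = 50 mm and 3800 Hz it lists which modes are cut-on and their axial group velocities, consistent with the later statement (cf. Table 1) that only modes (0,1), (1,1) and (2,1) propagate.

```python
import numpy as np
from scipy.special import jnp_zeros

c0 = 343.0   # speed of sound in air [m/s] (assumed value)
R = 0.05     # duct radius [m], as in the paper
f = 3800.0   # target frequency [Hz]
k = 2 * np.pi * f / c0

def mode_properties(m, n):
    """Hard-wall circular-duct mode (m, n): returns (cut-off frequency [Hz],
    axial group velocity [m/s] at f, or None if the mode is cut off).
    Mode (0, 1) is the plane wave with zero cut-off frequency."""
    if m == 0 and n == 1:
        kappa = 0.0
    else:
        nt = n - 1 if m == 0 else n      # scipy omits the zero of J_0' at x = 0
        kappa = jnp_zeros(m, nt)[-1] / R
    f_cut = kappa * c0 / (2 * np.pi)
    if kappa >= k:
        return f_cut, None               # evanescent at f
    kx = np.sqrt(k**2 - kappa**2)
    return f_cut, c0 * kx / k            # axial group velocity (no flow)

for m, n in [(0, 1), (1, 1), (2, 1), (3, 1), (0, 2)]:
    f_cut, vg = mode_properties(m, n)
    status = f"group velocity ~ {vg:.0f} m/s" if vg is not None else "cut off at 3800 Hz"
    print(f"mode ({m},{n}): cut-off ~ {f_cut:.0f} Hz, {status}")
```

In particular, the (2,1) spinning mode generated by the PMM travels axially at roughly half the plane-wave group velocity at 3800 Hz, which is the mechanism by which it spends longer over the liner.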
The PMM structure uses a set of Helmholtz resonators in series to manipulate the phase of the transmitted wave.In our design, the PMM is arranged on the inner wall of the duct, shifting the phase of the acoustic wave travelled in each passage to generate a spinning wave at the PMM exit.Compared with the plane wave, the generated spinning wave in higher-order circumferential mode exhibits a lower group velocity.As a result, the contact time between the spinning wave and the ALD is prolonged, thus a greater transmission loss is expected.To attenuate the generated spinning wave transmitted from the PMM exit, the ALs mounted on the inner wall of the duct is constructed with micro-perforated plates and backing cavities.The ALs, together with the duct, is called the ALD structure.The flowchart of the overall design process, including theoretical calculation methods and parameter optimization, is depicted in Fig. 2. Fundamental phase-modulating unit and transmission properties To obtain a circumferential acoustic mode, the complete phase coverage (that is an integer multiple of 2π ) is necessary.To this end, the PMM structure is designed to partition the cross-section of the duct into eight phasemodulating units, as shown in Fig. 3.In each unit, a total of four Helmholtz resonators in series are used for phase-modulating purpose.This design means that the phase difference between two adjacent units is π/2 .In this case, the entire PMM structure is capable of generating a phase difference of 4π so that the spinning wave in a circumferential mode of 2 is generated.In Fig. 3, unit 1 employs a null structure, while units 2-4 are designed with different structural parameters to achieve distinct phase shifts. Helmholtz resonator is used as the fundamental resonance element in unit 2-4, which comprises a perforated plate and a backing cavity, as illustrated in Fig. 4. The diameter and height of the hole on the perforated www.nature.com/scientificreports/plate is denoted by d 1 and h 1 , respectively.The length, width and height of the backing cavity are a 1 , b 1 and l 1 , respectively.Each fundamental phase-modulating metasurface includes four resonance elements, as illustrated in Fig. 5. Integrating the fundamental phase-modulating metasurface with a 1/8 sector duct yields the fundamental phase-modulating unit, as shown in Fig. 6.The impedance transfer method 34 is used to establish a set of theoretical prediction formula for the transmission characteristics of the fundamental phase-modulating unit.First, the acoustic impedance of the hole and the cavity can be written as where, S h = π (d 1 /2) 2 is the sectional area of the hole, ρ 0 , c 0 are the density of air and the acoustic velocity in air, respectively.In the absence of grazing flow, the axial component of wave number k x = k , k is the wave number.In the presence of grazing flow with Mach number M x , k x = k/(1 + M x ). According to the impedance transfer method, the acoustic impedance in duct above the cavity can be expressed as where, h a = h 1 + αd 1 is the corrected height of the hole, and α = 0.85 is the correction factor 35 . The impedance in the duct can be expressed as in which, S e = 1.072S d is the equivalent area of the duct section, which is obtained from COMSOL simulations to account for the irregularity of the duct cross-section. The acoustic pressure and volume velocity from the inlet to the first hole are where, P + 1 and P − 1 are the propagation coefficients in the directions of +x and −x.At x = 0 , we have So, Eq. 
( 5) can be reduced to the following form (1) where, T 1 is the transfer matrix, is expressed as Acoustic pressure and volume velocity between the first and the second holes can be written as Appling the continuous conditions on acoustic pressure and volume velocity at z = D 1 , we have where, U d is the velocity component of the resonator at z = D 1 .Substituting Eqs. ( 5) and ( 8) into Eq.( 9), the following transfer matrix are obtained where Similarly, we have the following transfer relationships where, P ± 3 and P ± 4 are the propagation coefficients between the resonator 2 and 3, 3 and 4, respectively.P ± 5 is the propagation coefficient between the resonator 4 and the outlet of the duct, N 2 is At x = A 1 , the relationship between P ± 5 and transmitted acoustic pressure and velocity can be written as where Now, we have the following transfer relationship where, the transfer matrix Transmitted wave Incident wave From Eq. ( 16), we have where, t ij is the corresponding element (i, j) in T. Finally, the transmission coefficient T of PMM is The transmission characteristics obtained by the above theoretical formulae are shown in Fig. 7.The simulation parameters are taken as follows: the hole diameter is d 1 = 3.6 mm , and the height is h 1 = 2 mm .The length, width and height of the backing cavity are a 1 = 10 mm , b 1 = 10 mm , and l 1 = 5 mm , respectively.The length, width and height of fundamental phase-modulating metasurface are A 1 = 45mm , B 1 = 12 mm and L 1 = 8 mm , respectively.The distance from the hole center to the boundary is D 1 = 6 mm , the distance between two adja- cent holes is D 2 = 10 mm .The radius of the duct is R = 50 mm and the central angle is θ = 45 • .As depicted in Fig. 7, the theoretical results obtained from Eq. ( 18) exhibit good agreements with the COMSOL predictions, thus confirming the effectiveness of the theoretical method.( 7) In order to ensure that each unit has a specified phase shift, optimal design of the metasurface was performed by genetic algorithm using with the theoretical formulas given in this section.Note that unit 1 employs a null structure, hence only unit 2-4 need to be optimized.The optimization variable is taken as the hole diameter of the unit.Assuming that the four Helmholtz resonators in one unit have the same hole diameter, hence a total of three optimization variables are used.In addition, during the optimization process, except for design variables, all the other parameters remain unchanged and their values are the same as those used in Fig. 7.The objective of optimization is to minimize the discrepancy between transmission phase shift and a predetermined value.Optimization was carried out at an incident wave frequency of 3800Hz, the population size and the mutation rate are taken as 20 and 0.2, respectively.The optimization process terminates when after 1500 generations.At this time, the transmittance is kept above 95%.The optimized hole diameters of unit 2-4 are d 12 = 3.748 mm , d 13 = 3.841 mm and d 14 = 3.949 mm , respectively.The acoustic pressures and phase shifts generated by the optimized PMM are illustrated in Figs. 
8 and 9, respectively.It can be seen that for the incident wave frequency of 3800Hz, the phase shifts (multiples of 2π ) generated by unit 1-4 are 0.01, 0.76, 0.51 and 0.26, respectively.When converted to multiples of π/2 , the phase shifts of unit 1-4 are 0.04, 3.04, 2.04 and 1.04, respectively.As expected, the phase difference of π/2 at 3800Hz between two adjacent units is successfully achieved.In addi- tion, the phase shifts under different grazing flow Mach numbers are shown in Fig. 10, in which we can see that the phase curves shift to the higher frequency with the increase of Mach number.To verify the effectiveness of the spinning wave generated by the optimized PMM structure, the COMSOL simulations were conducted with an incident plane wave at 3800 Hz.The pressure acoustic physical field is adopted, the sound pressure of the incident wave is 20 Pa and the direction of propagation is the positive z-axis.The calculated acoustic pressure www.nature.com/scientificreports/field is presented in Fig. 11.As expected, the plane wave is successfully transformed into a spinning form at the exit of PMM. Theoretical predictions of noise-reduction characteristics of ALD The fundamental resonance element of ALs consists of a micro-perforated plate with 16 holes and an enclosed backing cavity, as illustrated in Fig. 12.The diameter and height of the hole in perforated plate are d 2 , and h 2 , respectively.The length, width and height of the backing cavity are a 2 , b 2 , and l 2 , respectively.A total of five fundamental resonance elements, named acoustic absorption unit, are arranged longitudinally along the duct, as illustrated in Fig. 13.The acoustic field within the duct is governed by the wave equation 36 where, p is the acoustic pressure.In a polar coordinate system, k is the acoustic wave number, the radial distance and angle of the duct section are denoted by r and θ respectively, while x represents axial coordinate, M x is the Mach number of the grazing flow. Separating the variables r , θ and x , modal solutions of Eq. ( 19) can be expressed as 37 where, m and n are the circumferential and radial modes, respectively.k 1 and κ 1 are the axial and circumferential wave numbers of the rigid wall sections, respectively.Symbols " + ", " − " represent the running modes to the right and left, respectively.J m is the mth order Bessel function, A m,n is the amplitude of the (m, n) mode.k ± 1m,n can be written as A modal solution of Eq. ( 20) is The boundary condition on the wall of the duct (r = R) in the ALD section is www.nature.com/scientificreports/where, R is the radius of the duct.In which, the specific acoustic impedance of the ALD Z l is expressed as 38 where, is the acoustic wavelength, σ the perforation rate, µ the dynamic viscosity of air, V 2 = a 2 b 2 l 2 the volume of the cavity.From Eqs. 
( 22) and ( 23), the following eigenvalue problem can be obtained For the rigid wall section, the circumferential wave number can be obtained by solving the following nonlinear equation In a rigid duct, only a limited number of modes can propagate and transfer acoustic power at a fixed frequency, while the remaining modes are truncated.The cut-off ratio of different modes are defined as x .If g m,n > 1 , the mode (m, n) is cut-on and can propagate in duct.The frequency corresponding to g m,n = 1 is called cut-off frequency and is shown in Table 1 for each mode (m, n) .It can be seen that only modes (0, 1) , (1, 1) and (2, 1) can propagate under the condition of the duct radius of 0.05m and the incident wave of 3800Hz in this paper, and all modes are truncated except for the first order radial mode.Therefore n = 1 is utilized in subsequent derivation. Acoustic scattering may arise from variations in the wall impedance of a duct.The rigid and impedance boundaries are represented as a superposition of Fourier-Bessel modes 33 , encompassing right (+) and left (−) modes.For the circumferential mode m , we have where, κ 2 and k 2 are the axial and circumferential wave numbers of the ALD section, respectively. The inlet and outlet sections are characterized as rigid wall boundaries, while the acoustic absorption section is applied the impedance boundary.The axial component of acoustic velocity in each section can be determined through application of the momentum equation where, m,1 is the amplitude of the inlet incident acoustic source and is a known quantity.The reflection amplitude at the exit defaults to 0, i.e., A 3− m,1 = 0 .To determine the amplitude of other sections, it is necessary to ensure acoustic pressure and velocity continuity at the junction of each region by matching them accordingly.( 23) www.nature.com/scientificreports/where, D 3 is the distance between the inlet of the duct and the ALD section.Converting Eq. ( 29) into matrix form, we obtain where, E 1 , E 2 are denoted by with Equation (32a) can be solved analytically, given by Matrices D 1 , D 2 are expressed as From Eq. ( 30), the relationship between A 1+ m,1 and A 3+ m,1 can be finally obtained.In the section with rigid walls, the total acoustic power is obtained by summing up the acoustic power in all transmission modes.The modal acoustic power W ± m,1 can be mathematically expressed as follows Assuming that all acoustic energy is contained in the first-order radial mode at the entrance, the transmission loss of acoustic energy can be defined as Further analysis indicates that mode (m, n) = (2, 1) exhibits a transmitted acoustic power level of 1.58 × 10 −3 W at 3800 Hz, which is much higher than the other modes, as shown in Table 2.It is further dem- onstrated that the PMM section can transform plane waves into incident waves of (2,1) mode. Optimization design of the acoustic absorbing structure The radius of ALD structure is set to R = 50 mm , and the length, width of the acoustic absorption unit in Fig. 
13 are set to A 2 = 76 mm , B 2 = 21 mm , respectively.The length and width of the cavity are a 2 = 14 mm and b 2 = 19 mm , respectively.The number of cavities along the axial and circumferential directions is 5 and 16, respectively.Genetic algorithm is used to obtain the optimized structural parameters of acoustic absorption segment of the PMM-ALD structure.The objective of optimization is to maximize the transmission loss at 3800 Hz.The optimization variables are the aperture of the microperforated plate and the height of the cavity.The population size and the mutation rate are taken as 20 and 0.2, respectively.The optimization process terminates when after 1500 generations.The constraint condition is that the radial dimension of the acoustic absorbing structure does not exceed the PMM thickness.The optimized structural parameters are as follows: the hole diameter and the height of the perforated plate are d 2 = 0.88 mm and h 2 = 0.796 mm , respectively, the height of the cavity is l 2 = 6.11 mm .The traditional ALD structure is optimized using the same algorithm as above, and the optimized structural parameters are as follows: the hole diameter and the height of the perforated plate are d 2 = 0.85 mm and h 2 = 0.823 mm , respectively, the height of the cavity is l 2 = 6.62 mm .In addition, the total length of the traditional ALD structure is the same as the sum of the lengths of the PMM and ALD segments for the PMM-ALD structure.After optimization, the PMM-ALD structure and the traditional ALD structure have the same resonance frequency of 3800 Hz. To demonstrate the advantages of the proposed noise-reduction structure, we also designed a traditional ALD structure consisting of only acoustic absorbers as a contrast, as shown in Fig. 14.For the purpose of comparison, the axial dimension of the traditional ALD structure is designed as the sum of the PMM and ALD lengths of the PMM-ALD structure.The corresponding simulation results are given in Section "Simulation results". Simulation results In this section, the noise-reduction performances of the proposed structure are validated through COMSOL Multiphysics platform.In acoustic simulations, the inlet section is modeled as the background pressure field.The incident plane wave has a frequency of 3300-4300Hz and an amplitude of 20 Pa.According to Table 1, the port boundary conditions with modes from (0,1) to (3,2) are implemented at both the inlet and outlet sections of the duct, which encompass all modes where reflected and transmitted waves occur.The remaining boundaries are subject to hard boundary conditions.Figure 15 illustrates the schematic of acoustic finite element meshes, wherein boundary layers are adopted on the duct wall and hole wall to account for viscosity effects.In simulations, the number of boundary layers is taken as 6 and the stretch factor of boundary layers is 1.2.The free The predicted acoustic pressure fields of both structures are depicted in Fig. 16.Different form the traditional ALD (see Fig. 16b), it can be observed form Fig. 16a that the spinning wave is formed after passing through the PMM section and subsequently absorbed by the ALD section.By comparing the outlet sections depicted in Fig. 
16c and d, it can be inferred that the PMM-ALD structure exhibits a higher degree of pressure concentration along the duct wall, whereas the traditional ALD structure tends to a uniform pressure distribution.Figure 17 gives transmission losses of the PMM-ALD and the traditional ALD structures obtained by theoretical calculations and COMSOL simulations.It can be seen that the results obtained by two methods are in good agreement, and the transmission loss of the proposed structure is much higher than that of the traditional one near the 21) and ( 25), an increase in circumferential mode m results in an increase in circumferential wave number κ m,n and a decrease in axial wave number k m,n .As a result, the axial velocity component of acoustic waves in the duct decreases, leading to a prolonged contact time of spinning wave with the ALD structure.In this way, the overall noise-reduction performance is improved.In addition, after the acoustic wave leaves the duct, the spinning wave exhibits a more uniform scattering compared to the plane wave, which results in a further reduction of the acoustic energy per unit area.To demonstrate this, the far-field acoustic pressure is calculated and is shown in Fig. 18.In simulations, the radius of the far-field hemisphere is taken as 0.3 m.The boundary condition of perfect matching layer is applied in the far-field.The predicted acoustic pressure field on the hemisphere in the absence of grazing flow is illustrated in Fig. 19.According to Fig. 19a,b, it is evident that acoustic waves exhibit a helical divergent propagation pattern upon exiting the PMM-ALD structure, whereas they propagate in a concentrated horizontal manner for the traditional ALD structure.Figure 19c,d displays the absolute value of internal sound pressure within a hole with diameter of 0.2m excavated at the far field boundary.By comparing the results of Fig. 19c,d, we can observe that the far-field acoustic pressure per unit area of the PMM-ALD structure is significantly lower than that of the traditional ALD structure.Three curves on the far-field hemisphere surface are chosen to calculate the far-field acoustic pressure, as shown in Fig. 20.Curves 1 and 2 represent the arcs that follow the maximum horizontal and vertical circumferences, respectively.Curve 3 located at the right end of the hemisphere represent a circle with an area equal to that of the duct.Figures 21 and 22 gives the calculated absolute value of acoustic pressure on curves 1-3 in differnet Mach numbers of grazing flows.These results clearly demonstrate the advantages of the PMM-ALD structure in noise-reduction.Besides, for the transmission loss from the duct outlet to the circle area enclosed by curve 3, the PMM-ALD structure generates a transmission loss of 7.5 dB in the absence of grazing flow at 3800 Hz, while the ALD structure is 3.9 dB.The total transmission loss from the entrance of the structure to the far-field hemispherical surface is 32.5 dB for the PMM-ALD structure and 17.5 dB for the traditional ALD structure.Obviously, the spinning acoustic wave produced by the developed PMM can greatly enhance the noise-reduction performance of the traditional ALD with small length-diameter ratio. 
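To make the impedance-transfer bookkeeping of the PMM section more concrete, a generic lumped-element version can be coded as below. This is an illustrative Python sketch, not the paper's exact formulae: resistive losses, grazing flow and the equivalent-area correction S_e are omitted, air properties (ρ0 = 1.21 kg/m³, c0 = 343 m/s) are assumed, the hole and cavity dimensions are those quoted for the PMM unit, and the duct area is taken as roughly one 1/8 sector of the R = 50 mm duct.

```python
import numpy as np

rho0, c0 = 1.21, 343.0  # air density [kg/m^3] and sound speed [m/s] (assumed)

def duct_segment(k, L, S):
    """Transfer matrix of a uniform duct segment of length L and area S,
    acting on the state vector (pressure, volume velocity)."""
    Z0 = rho0 * c0 / S
    return np.array([[np.cos(k * L), 1j * Z0 * np.sin(k * L)],
                     [1j * np.sin(k * L) / Z0, np.cos(k * L)]])

def helmholtz_branch(omega, d_hole, h_hole, V_cav, alpha=0.85):
    """Shunt transfer matrix of a side-branch Helmholtz resonator: neck treated
    as a lumped mass with end correction alpha*d_hole, cavity as a lumped
    compliance; resistive losses are ignored."""
    S_h = np.pi * (d_hole / 2) ** 2
    h_a = h_hole + alpha * d_hole
    Z = 1j * (omega * rho0 * h_a / S_h - rho0 * c0 ** 2 / (omega * V_cav))
    return np.array([[1, 0], [1 / Z, 1]])

def transmission_loss(freq, S_duct, spacing, resonators):
    """Chain the transfer matrices of the resonators separated by duct segments
    of length `spacing`, and return the TL for anechoic terminations."""
    omega = 2 * np.pi * freq
    k = omega / c0
    T = np.eye(2, dtype=complex)
    for d_hole, h_hole, V_cav in resonators:
        T = T @ helmholtz_branch(omega, d_hole, h_hole, V_cav) @ duct_segment(k, spacing, S_duct)
    Z0 = rho0 * c0 / S_duct
    t11, t12, t21, t22 = T[0, 0], T[0, 1], T[1, 0], T[1, 1]
    return 20 * np.log10(abs(t11 + t12 / Z0 + t21 * Z0 + t22) / 2)

# toy usage: four identical resonators with the hole/cavity sizes quoted for the PMM unit
res = [(3.6e-3, 2e-3, 10e-3 * 10e-3 * 5e-3)] * 4
for f in (3000, 3800, 4600):
    print(f"{f} Hz: TL = {transmission_loss(f, S_duct=1e-3, spacing=10e-3, resonators=res):.1f} dB")
```

Each side branch contributes a shunt matrix [[1, 0], [1/Z, 1]] and each spacing a plane-wave propagation matrix; chaining them and applying the standard anechoic-termination formula yields the transmission characteristics, in the same spirit as the transfer-matrix development leading to Eq. (18).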
Conclusion In order to enhance the noise-reduction performance of the ALD with small aspect ratios, a solution based on phase modulation of the incident wave was proposed. The basic idea is to transform the incident plane wave into a spinning one by using the optimized PMM, and then use the ALD for noise-reduction purposes. The simulation results demonstrated that the optimized PMM structure achieves the expected gradient phase distribution and successfully converts an incident plane wave into a spinning wave in a higher-order circumferential mode, so that the noise-reduction performance of the ALD can be greatly improved. By comparison with the COMSOL results, the effectiveness of the theoretical formulae for predicting phase shift and transmission loss has been demonstrated. Compared with the traditional ALD structure, the designed PMM-ALD structure exhibits excellent noise-reduction performance in the frequency range from 3300 to 4300 Hz in the presence of grazing flow. In addition, the far-field acoustic pressure also decreases significantly. This study provides a new approach for narrow-band noise reduction in a duct with relatively small length-diameter ratio.

Figure 6. The fundamental phase-modulating unit of PMM.
Figure 7. The transmission properties of the fundamental phase-modulating unit. (a) Amplitude versus frequency. (b) Phase shift versus frequency.
Figure 10. Transmitted phase shifts produced by unit 2 for different grazing flow Mach numbers; the theoretical results are drawn with solid lines and the COMSOL results with circular lines.
Figure 11. Acoustic pressure field (Pa) of the generated spinning wave.
Figure 12. The fundamental resonance element of the acoustic absorber.
Figure 13. The acoustic absorption unit of ALD.
Figure 16. Acoustic pressure fields of the noise-reduction structure (Pa). (a) Acoustic pressure field of the PMM-ALD structure. (b) Acoustic pressure field of the traditional ALD structure. (c) Absolute value of acoustic pressure field at the exit plane of the PMM-ALD structure. (d) Absolute value of acoustic pressure field at the exit plane of the traditional ALD structure.
Figure 17. Transmission loss of the noise-reduction structure for different grazing flow Mach numbers; the theoretical results are drawn with solid lines and the COMSOL results with circular lines. (a) Mx = 0. (b) Mx = 0.05. (c) Mx = 0.1. (d) Mx = 0.15.
Figure 19. The far-field acoustic pressure (Pa). (a) PMM-ALD structure. (b) Traditional ALD structure. (c) Absolute value of internal acoustic pressure of the PMM-ALD structure. (d) Absolute value of internal acoustic pressure of the traditional ALD structure.
Figure 20. Curves used for far-field acoustic pressure calculation.
Table 2. Acoustic power level of different modes.
7,327.6
2023-12-13T00:00:00.000
[ "Engineering", "Physics" ]
Gravitationally modulated quantum correlations: Discriminating classical and quantum models of ultra-compact objects with Bell nonlocality We investigate the relation between quantum nonlocality and gravity at the astrophysical scale, both in the classical and quantum regimes. Considering particle pairs orbiting in the strong gravitational field of ultra-compact objects, we find that the violation of Bell inequality acquires an angular modulation factor that strongly depends on the nature of the gravitational source. We show how such gravitationally-induced modulation of quantum nonlocality readily discriminates between black holes (both classical and inclusive of quantum corrections) and string fuzzballs, i.e., the true quantum description of ultra-compact objects according to string theory. These findings promote Bell nonlocality as a potentially key tool in comparing different models of classical and quantum gravity and putting them to the test. I. INTRODUCTION The development of a consistent and predictive theory of quantum gravity is one of the main unresolved conundrums in contemporary physics [1].Relentless efforts in the attempt to reconcile quantum mechanics and general relativity have produced a number of promising candidate models, including asymptotic safety [2], causal dynamical triangulations [3], non-commutative geometry [4], loop quantum gravity [5], doubly special relativity [6] string theory [7] and the more recent proposal of "gravitizing" quantum mechanics [8].All the aforementioned theoretical schemes have their own characteristics and predictions which make them profoundly different among each other.Despite that, it is still possible to recognize similar aspects which are thus likely to be part of a general treatment of quantum gravity (see for instance Refs.[9,10] and therein for a review on this topic).Prominent examples of features foreseen by many of the above models are the emergence of an intrinsic non-local behavior in the theoretical description of quantum gravity (i.e., see Refs.[11,12]) and the existence of a minimal length at the Planck scale with the ensuing modifications of the canonical commutation relations of quantum mechanics and the associated Heisenberg uncertainty principle [13,14]. Concerning the notion of a minimal spatial resolution, this can be deduced also from gedanken experiments involving large [15] and micro [16] black holes, in proximity of which quantum gravitational effects are expected to become dominant.As a matter of fact, the strong gravity regime near a black hole 1 prevents the use of any known approximation in the study of quantum systems.To quote a relevant example along this direction, it is worth observing that, although the extension of quantum field theory to curved backgrounds has provided successful fundamental predictions (such as the Hawking radiation), these findings are still plagued by unphysical divergences when considered beyond their limits of applicability.However, this fact does not undermine the validity of the aforementioned results, but it rather points towards the quest for a unified description of quantum and gravitational phenomena.Achievements in addressing these difficulties would yield major progress towards a viable theory of quantum gravity able to settle open issues such as the information paradox and the singularity problem that arise in the context of classical and semi-classical approaches to gravitational phenomena. 
An interesting resolution for both of the above issues in the framework of superstring theory is represented by the fuzzball proposal [17,18], according to which the supposed black hole is in fact conceived as a massive object made of a very large number of microscopic strings which, by definition, feature a minimal length extension qualitatively of the order of the Planck scale.Even though the original arguments leading to the fuzzball solution were purely theoretical, it has been recently pointed out that concrete realizations of fuzzballs lead to a phenomenology that might be accessible, for instance via the observational investigation of gravitational waves [19]. Nevertheless, the fuzzball proposal is not the only self-consistent and robust alternative concerning the generalization of black hole physics that incorporates quantum gravitational effects.Indeed, also loop quantum gravity predicts the settlement of the problems discussed above by relying on the underlying spacetime discretization [20], whilst for the asymptotic safety paradigm the solution is to be found in the quantum scale invariance [21].On a final note, it is worth stressing that another important class of black holes can be derived in the context of higher-derivative theories and non-local gravity [22].These models cure the issues stemming from merging quantum field theory and gravitation (i.e., non-renormalizability, non-unitarity, etc.) by adding higher-derivative terms in the Lagrangian of the gravitational interaction.Interestingly, these contributions are able to remove the unwanted features that affect the canonical quantization of Einstein's general relativity and may give rise to potentially detectable effects in a significant number of physical phenomena.In a parallel development, the community active in quantum information science, atomic physics and quantum optics has picked up in recent years on the original ideas by Bronstein and Feynman [23][24][25], suggesting to test the hypothetical quantum nature of gravity in the laboratory by measuring witnesses of the bipartite entanglement between two test masses induced by a quantized gravitational field, i.e. a quantum gravitational mediator [26,27]. Motivated by the above considerations, in the present work we address the broader question of gravitationallyinduced modifications of nonlocal quantum correlations.Given that a classical gravitational mediator cannot induce any form of quantum nonlocality, be it, in ascending hierarchical order, entanglement, steering, or Bell nonlocality, we investigate whether classical and quantum gravity can have different effects on already existing quantum correlations, previously established by other physical interactions on pairs of test masses.To this end, we study a gedanken experiment which revolves around the dynamics of the Bell nonlocality of particle pairs in the gravitational field generated by ultra-compact objects of diverse nature, such as black holes and fuzzballs, in order to assess whether and how different gravitational sources affect the dynamical evolution of quantum nonlocality. 
Historically, establishing a relation between cosmological objects and quantum entanglement was the central result of a celebrated paper by Maldacena and Susskind [28], where it was conjectured that the entanglement shared by two particles can be interpreted as a non-traversable wormhole; such a correspondence may be viewed as a precondition for the unification of quantum and gravitational effects. In addition to this achievement, the interplay between gravity and entanglement can be identified in a significant number of relevant frameworks. For instance, it is worth recalling that, by means of entanglement entropy, it is possible to deduce further theoretical evidence for black hole thermodynamics related to the area law [29]. Along the same direction, by means of thermodynamical arguments one can also show that Einstein's field equations of general relativity must necessarily be fulfilled if entanglement equilibrium is established [30]. Here, instead, by relying on Einstein-Podolsky-Rosen (EPR) nonlocal correlations [31] shared by particle pairs orbiting around ultra-compact objects, we investigate what insights Bell nonlocality, rather than entanglement, can provide into the nature and properties of gravitational structures. In this respect, it is important to recall once more that nonlocality and entanglement are distinct concepts that stand in a hierarchical relation: whilst a violation of Bell inequality always implies entanglement, the opposite implication does not necessarily hold, a well-known counterexample being that of the Werner mixed two-qubit states [32], which can be entangled without violating Bell inequality (a hierarchy illustrated numerically in the sketch following this paragraph). Proceeding to evaluate explicitly the amount of Bell nonlocality in an extreme astrophysical scenario, we resort to the physically transparent Clauser-Horne-Shimony-Holt (CHSH) form of Bell inequality [33][34][35] for massive spin-1/2 particle pairs, and we find that gravity in the strong-field regime significantly affects the quantum nonlocality shared by the test particles. Indeed, the overall degree of violation of the CHSH inequality is modulated by an angular factor that strictly depends on the nature of the ultra-compact object under consideration. This result, which is completely general and may be adapted to different frameworks, is elucidated by focusing on three relevant cases: the classical Schwarzschild black hole, the Schwarzschild black hole within a quantum-corrected treatment at leading (perturbative) order, and the string fuzzball solution. We find that the gravitational modulation of bipartite Bell nonlocality discriminates unambiguously between all of them. In order to proceed in our investigation of the thought experiment, we make use of some recently introduced techniques that allow one to evaluate EPR correlations in different gravitational scenarios [36][37][38]; as a side result of the main analysis, we generalize such techniques and extend their range of validity to include any static and spherically symmetric spacetime whose metric tensor is expressed in isotropic coordinates.
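To make this hierarchy concrete, the following minimal numerical sketch checks it on the Werner family ρ(p) = p|ψ−⟩⟨ψ−| + (1 − p)I/4: the Peres-Horodecki criterion certifies entanglement for p > 1/3, while the Horodecki bound on the attainable CHSH value, 2√2 p, stays below the local bound 2 for p ≤ 1/√2. The script and its helper names are purely illustrative and not part of the analysis that follows.

```python
import numpy as np

# Pauli matrices and the two-qubit singlet |psi-> = (|01> - |10>)/sqrt(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
psi_minus = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

def werner(p):
    """Werner state: p |psi-><psi-| + (1 - p) I/4."""
    return p * np.outer(psi_minus, psi_minus.conj()) + (1 - p) * np.eye(4) / 4

def is_entangled(rho):
    """Peres-Horodecki: a negative partial transpose certifies entanglement (2x2 case)."""
    rho_pt = rho.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)
    return np.linalg.eigvalsh(rho_pt).min() < -1e-12

def chsh_max(rho):
    """Horodecki criterion: the largest CHSH value is 2*sqrt(m1 + m2), where m1, m2
    are the two largest eigenvalues of T^T T and T_ij = Tr[rho (sigma_i x sigma_j)]."""
    paulis = [sx, sy, sz]
    T = np.array([[np.trace(rho @ np.kron(a, b)).real for b in paulis] for a in paulis])
    m = np.sort(np.linalg.eigvalsh(T.T @ T))
    return 2 * np.sqrt(m[-1] + m[-2])

for p in [0.30, 0.50, 1 / np.sqrt(2), 0.90]:
    rho = werner(p)
    print(f"p = {p:.3f}: entangled = {is_entangled(rho)}, max CHSH = {chsh_max(rho):.3f}")
# For 1/3 < p <= 1/sqrt(2) the state is entangled yet never violates the CHSH bound 2.
```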
The paper is organized as follows: in Sec.II we introduce the necessary mathematical tools based on the concept of Wigner rotation in curved spacetime, as it is needed in the analysis of the EPR correlations shared by two spin-1/2 particles in the gravitational field of an ultra-compact object.Section III is devoted to the explicit computation of the Wigner rotation in various regimes; this result is then applied to the evaluation of the EPR correlations in Sec.IV.In Sec.V we discuss and compare three relevant instances of static and spherically symmetric ultra-compact gravitational objects, i.e. the string fuzzball, the classical Schwarzschild black hole and the quantum-corrected Schwarzschild black hole, and we show how for each of them the orbiting particle pairs feature a different degree of Bell nonlocality.Finally, in Sec.VI we comment on our results and perspectives on future research. II. WIGNER ROTATION IN CURVED SPACETIME For a consistent treatment of spin-1/2 particles in curved spacetime it is necessary to make use of the tetrad (or vierbein) formalism; for a comprehensive introduction on this subject, the interested reader can consult Ref. [39].A tetrad field e µ a evaluated at a spacetime point x is completely characterized by the relation where the summation over repeated indexes is understood, g µν (x) is the metric tensor defined on the Riemannian manifold and η ab is the Minkowski metric acting on the flat plane tangent to the manifold in the point x.Henceforth, to discriminate between the indexes of the manifold and of the tangent bundle, we employ Greek letters for the former and Latin letters for the latter. The expression ( 1) is essential to analyze spin-1/2 particle states in curved backgrounds, since they are defined as the states that belong to the spin-1/2 representation of the local Lorentz transformation (LLT) group, while general relativity is based upon invariance under diffeomorphisms.Precisely in order to build a bridge between these two notions, one can introduce tetrads, as they allow to "project" diffeomorphism-covariant tensors of the differentiable manifold onto local Lorentz-covariant quantities defined on a flat tangent plane. By virtue of this procedure, a generic spin state with four-momentum k µ = mu µ (where u µ u µ = −1) at the spacetime point x can be unambiguously labeled with |k a , σ; x⟩, where k a = e a µ k µ and σ =↑, ↓ is the third component of the spin.Naturally, the field e a µ is the inverse of the one appearing in Eq. ( 1); consequently, the following identities hold: If we now want to describe the dynamical evolution of a spin-1/2 particle moving in curved spacetime, we have to account for different flat tangent spaces, each of which is associated to a given point of the particle's trajectory.As a first step, we consider what happens after an infinitesimal interval of proper time dτ , after which the particle is located at the new point x ′µ = x µ + u µ dτ .Accordingly, the shift in momentum is given by where the variation is made of two distinct contributions, namely 2 : By defining the four-acceleration a µ = u ν ∇ ν u µ originated by an external force and recalling that k µ k µ = −m 2 as well as k µ a µ = 0, it is straightforward to observe that the first variation of Eq. ( 4) becomes On the other hand, the second factor of Eq. ( 4) can be rewritten by introducing the expression for the connection one-form [39], that is, ω a µb = e a ν ∇ µ e ν b , and hence In so doing, exploiting Eqs. ( 5) and ( 6) to rewrite Eq. 
( 4), one can identify an infinitesimal local Lorentz transformation occurring for the quantity k a .As a matter of fact where 2 When there is no need for disambiguation, the dependence on the spacetime position will be omitted. is an infinitesimal LLT.This means that the momentum of the particle as viewed by a local reference frame (i.e., the one belonging to the tangent space) undergoes the transformation which is precisely a LLT.Consequently, the evolution of a spin-1/2 state must be described in terms of a representation of the spin-1/2 local Lorentz group.Bearing this in mind, we recall that, in the context of flat spacetime, under the action of a given Lorentz transformation Λ a b , the spin-1/2 one-particle state |k a , σ⟩ transforms as follows [40,41]: (1/2) being a 2 × 2 unitary matrix that allows for the the Wigner rotation W a b (Λ, k) of the spin.The Wigner rotation [42] can be written as where L a b is the Lorentz boost with Ξ = | ⃗ k| 2 + m 2 and the indexes i, j = 1, 2, 3. When generalizing to include the case of a curved spacetime, we have to resort to local Wigner rotations stemming from the LLTs described in Eq. ( 9).Accordingly, Eq. ( 11) becomes Notice that a similar scenario holds true not only for spinors, but for Dirac bispinors as well; for recent applications of the latter, see Refs.[43] and references therein.The form of the infinitesimal local Wigner rotation can be extracted from Eq. ( 9); indeed, one can verify that where i, j = 1, 2, 3, as they are the only non-vanishing terms of ϑ a b . III. WIGNER ROTATION FOR A GENERIC METRIC IN ISOTROPIC COORDINATES In the following, we compute the Wigner rotation angle for a general class of static and spherically symmetric spacetime solutions, thus going beyond the standard Schwarzschild case treated in Ref. [36] and the weak-field limit considered in Ref. [38].To this aim, we make use of a generic line element that can be cast in isotropic spherical coordinates as follows: We note in passing that, by setting f (r) = (1 − M/2r) 2 /(1 + M/2r) 2 and g(r) = (1 + M/2r) 4 , one recovers the results of Ref. [36] within a different coordinate system, while if f (r) = 1 + 2ϕ(r) and g(r) = 1 − 2ψ(r), with ϕ(r) and ψ(r) being weak gravitational potentials arising in extended theories of gravity, one recovers the findings of Ref. [38]. As the metric tensor is diagonal, we can compute the tetrads rather straightforwardly From the above equation, it is possible to deduce the non-vanishing components of the connection one-form where the prime denotes derivation with respect to the coordinate r. Without loss of generality, we can investigate the circular motion3 of the entangled particles around the ultracompact object by assuming that the dynamics takes place on the equatorial plane θ = π/2.Additionally, we let the EPR source be located at φ = 0 and the two observers performing the local spin measurements at ±φ.A sketch of the physical setup is shown in Fig. 1. FIG. 1: A pair of spin-1/2 particles initially sharing a perfect EPR correlation is produced at φ = 0. Particles travel along a circular orbit around the ultra-compact object in opposite directions.At the end of each propagation, the spins are rotated due to the presence of a non-trivial background spacetime. 
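Since the explicit tetrad components for the line element (15) are not written out above, the short symbolic check below adopts the natural diagonal guess e^a_μ = diag(√f, √g, √g r, √g r sin θ) and verifies the defining relation (1); the Schwarzschild potentials quoted after Eq. (15) are used only as a sanity check of the weak-field limit. The symbol names are illustrative.

```python
import sympy as sp

t, r, th, ph, M = sp.symbols('t r theta phi M', positive=True)
f = sp.Function('f')(r)  # lapse-like potential of Eq. (15)
g = sp.Function('g')(r)  # conformal spatial potential of Eq. (15)

# Diagonal metric of Eq. (15) in isotropic coordinates, signature (-,+,+,+)
g_mn = sp.diag(-f, g, g * r**2, g * r**2 * sp.sin(th)**2)
eta = sp.diag(-1, 1, 1, 1)

# Natural tetrad guess for a diagonal metric: square roots of the components
e = sp.diag(sp.sqrt(f), sp.sqrt(g), sp.sqrt(g) * r, sp.sqrt(g) * r * sp.sin(th))

# Defining relation of Eq. (1): g_{mu nu} = e^a_mu e^b_nu eta_{ab}
diff = (e.T * eta * e - g_mn).applyfunc(sp.simplify)
assert diff == sp.zeros(4)

# Classical Schwarzschild potentials in isotropic coordinates (text below Eq. (15))
fs = (1 - M / (2 * r))**2 / (1 + M / (2 * r))**2
gs = (1 + M / (2 * r))**4
# Weak-field checks: f(r) ~ 1 - 2M/r and g(r) ~ 1 + 2M/r far from the source
print(sp.limit(r * (1 - fs), r, sp.oo))  # -> 2*M
print(sp.limit(r * (gs - 1), r, sp.oo))  # -> 2*M
```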
Due to the above requirements, the expression of the four-velocity is simplified, as where ζ denotes the rapidity in the local reference frame.We recall that the motion under investigation is not a geodesic one; therefore, there must be an external force acting on the system that perfectly compensates the presence of gravity and prevents the emergence of geodesic instabilities in proximity of the horizon.Such a force produces the non-vanishing acceleration We can now compute the Wigner angle introduced in Eq. ( 14).First, by means of simple algebraic manipulations one determines the quantities ξ a b in Eq. ( 6), finding the following non-vanishing components: By virtue of the above expressions, we can write the infinitesimal LLT (8) explicitly.Indeed, recalling that k a = mu a and u a = e a µ u µ , then u a = (cosh ζ, 0, 0, sinh ζ).In a similar fashion, a a = e a µ a µ , and thus the only non-vanishing component of a a is a 1 = e 1 r a r = √ ga r .Therefore, After the evaluation of the infinitesimal LLT, the only quantity left to compute is the infinitesimal Wigner rotation (14). Because of the choice of the physical setup summarized in Fig. 1, the only terms of ϑ a b different from zero are Next, we consider the finite transformation as a Dyson series of infinitesimal ones [36], whose formal sum reads where T denotes the time ordering operator. IV. GRAVITATIONALLY-INDUCED MODULATION OF BELL NONLOCALITY According to the setting of our gedanken experiment, the EPR source emits a pair of particles, A and B, moving away from the source in opposite directions with constant four-momenta k a ± = (m cosh ζ, 0, 0, ±m sinh ζ) after having been prepared in the maximally entangled spin singlet The CHSH inequality [33] and the associated CHSH measurements are a powerful toolbox to access and test the degree of quantum nonlocality in the correlations between two dichotomous variables; for the problem at hand, such variables are the spins of the entangled particles.As a key ingredient, we need two sets of measurements { Â1 , Â2 } and { B1 , B2 } performed on parties A and B, respectively, with the aim of detecting the orientation of the third component of the spin.If correlations of the spins in a given shared state are local in the sense of Bell theorem [34,35], then the inequality [33,35] holds, where ⟨ Âi Bj ⟩ = ⟨Ψ| Âi Bj |Ψ⟩.If Eq. ( 25) is violated, spin correlations are nonlocal and local hidden-variable theories are falsified.Together with the state described in Eq. ( 24), the employment of the observables allows to reach the maximum violation of the inequality allowed by quantum mechanics, namely S[|ψ⟩] = 2 √ 2, also known as the Tsirelson bound [44].Now, the maximally entangled initial state evolves in a curved spacetime, and because of the Wigner rotation the spins of the entangled particles undergo a precession motion that prevents the perfect EPR correlation of the initial state from being preserved.Clearly, we expect that whether and how much the propagation of the particles along a closed path will change the orientation of the spins and, in turn, the violation of the CHSH inequality, should depend on the nature of the gravitational object around which the particles are orbiting.Now, assume that, after a finite proper time τ f − τ i = r √ g φ/ sinh ζ, particles A and B have reached their respective detection points; in this proper time interval, the Wigner transformation can be viewed as a rotation about the 2axis [36][37][38] where Θ can be derived from Eq. 
( 23) The physical meaning of the rotation angle Θ and the spin precession can be readily visualized from Fig. 1. Having the explicit expression for the Wigner rotation, the transformation acting on the spin states can be computed as shown in Eq. ( 13).Specifically, one can verify that [36][37][38] with σ y being the Pauli matrix with imaginary entries.Crucially, we see that, as the particles progress travelling along the orbit that circles the ultra-compact object, the initial spin-singlet state gets embroiled in a linear superposition with the spin-triplet states, which implies that measurements of the spin along the same direction are no longer perfectly correlated in the local reference frame for ±φ [36][37][38]. In order to preserve perfect correlation in the local reference frame, it is sufficient to rotate the bases by ∓φ while keeping the 2-axis fixed in the point that is denoted by ±φ.In so doing, we obtain and so that the evolved state reads Before we can evaluate the CHSH inequality (25) for the observables introduced in Eq. ( 26), the measurement operators must be rewritten in the new reference frame obtained as a result of the rotation, that is Note that, in terms of experimental complexity, the preparation of the detectors does not require difficult steps.Indeed, as the observers lie in a locally inertial reference frame, the only required additional action with respect to a standard CHSH test would consist in rotating the experimental apparatus so as to match the angular distance spanned by the pair of particles (that is, ±φ).However, this distance is known a priori once the endpoints (and thus the location of the observers) have been determined on the circular orbit.Collecting all the above results, we finally obtain We can interpret Eq. ( 35) as follows: when a CHSH-like experiment is carried out after the observables have been rotated, the maximal initial violation of the CHSH inequality 2 √ 2 becomes modulated by a factor cos 2 ∆.Inspecting Eq. ( 33), we see that, in the presence of the gravitational interaction, the phase shift parameter ∆ responsible for the overall violation of the CHSH inequality acquires contributions that depend on the details of the spacetime in which the entangled particles propagate.Therefore, depending on the actual nature of the ultra-compact gravitational source being considered, we expect to find distinct and possibly significantly different modulations in the violation of the CHSH inequality.A comment is in order here: since the action of the Wigner rotation on the initial state is but a local unitary map, the total degree of nonlocality should not be influenced by it.As a matter of fact, one can check that, with a suitable selection of different directions for the observables [36], the perfect EPR correlation would be restored even after the propagation of the two spins along the orbit.Hence, the effect of gravity and of the external acceleration required to maintain a circular trajectory only amounts to changing the orientation of the measurements; in other words, nonlocality remains essentially unaffected. On a final note, it is worth observing that, as long as the initial state is prepared in close analogy to the one introduced in Eq. ( 24) and the ensuing observables are chosen in such a way to reproduce the maximally allowed degree of nonlocality, no qualitative deviation from the current analysis is expected to arise when a different pair of particles with arbitrary spins is used to study the problem at hand. V. 
COMPARING MODELS OF ULTRA-COMPACT OBJECTS WITH BELL NONLOCALITY The gravitational modulation of the Bell nonlocality derived in the previous Section, i.e., Eqs. ( 33) and (35), can be exploited to compare relevant alternative models of ultra-compact structures, such as the string fuzzball and the black hole (classical or with perturbative quantum corrections).We would like to stress again that these equations are completely general, and can thus be applied to an arbitrary black hole-like solution whose metric can be cast in the form (15). A. String fuzzballs Within string-theory inspired cosmology, fuzzballs [17,18,45] are spheres of strings of definite, finite volume that simulate the behavior of black holes, but having the two main problems plaguing the latter (the singularity and the information paradox) removed by the finite length extension of their microscopic components. A concrete fuzzball solution amenable to quantitative investigation is obtained from N = 2 four-dimensional supergravity, with a non-minimal coupling between gravity, four U (1) gauge fields and three complex scalars.This particular case allows for some explicit phenomenological predictions that might soon be tested via gravitational waves detection by studying ringdown (gravitational wave peak in merging events), quasi-normal modes, and spectroscopy [19]. In isotropic spherical coordinates, a four-dimensional fuzzball geometry can be described by the line element [19] where A being electric and magnetic charges.The total mass of the fuzzball is given by M , and one recovers the extremal Reissner-Nordström black hole solution when all the charges are equal.A straightforward comparison with Eq. ( 15) yields the identification Before concluding, we observe that, for the sake of comparing different spherically-symmetric scenarios, we consider a metric that does not include "tidal" effects due to the non-overlapping of the three centers with which fuzzball multicenter microstate solutions are built in [19].Effectively, this amounts to considering only the metric of the extremal black hole solution.However, this approximation becomes more precise the more massive the compact object is, since the impact of tidal forces on nearby test masses scales as the inverse of the mass [46].Therefore, by focusing on supermassive gravitational sources, we could actually regard such effects as negligible from a physically sound perspective. B. Classical and quantum Schwarzschild black holes Black holes can be investigated both in a general-relativistic classical context as well as in a quantum-corrected one.The last instance occurs when one considers gravity as an effective field theory, so that quantum gravitational radiative corrections influence the energy-momentum tensor appearing in Einstein equations.In turn, such a modification gives rise to long-range corrections appearing in the expression of the metric tensor g µν .In general, the magnitude of such corrections is extremely small and can be neglected, but in the proximity of a black hole they actively affect the metric and the ensuing gravitational phenomenology.Therefore, we can investigate the implications of the CHSH experiment in the strong-gravity regime, both in the classical and in the quantum-corrected framework. 
Specifically, we are interested in the extrapolation of the Schwarzschild-like solution inclusive of quantum corrections in the isotropic coordinate system [47]. The line element associated with the quantum-corrected spacetime reads where M is the mass of the black hole. The standard Schwarzschild solution is recovered when the additive corrections that depend on 1/r^3 in Eq. (38) are removed. By comparing the metric of Eq. (38) with the one of Eq. (15), the following identification holds: C. Comparison We can now compare the different ultra-compact objects and establish if and how the fuzzball and black hole solutions differ in their response to the CHSH quantum nonlocality test. Firstly, we observe that the two classes of objects already differ at the classical level in the behavior of the gravitational potential in regions sufficiently close to the event horizon, as illustrated in Fig. 2. Next, in order to estimate quantitatively the distinct predictions in the quantum regime, we need to evaluate the gravitational modulation parameter of Bell nonlocality ∆, i.e., Eq. (33), in each case. Since we have specialized the general line element (15) to the instances (36) and (38), we are left with the task of taking advantage of the expressions of f(r) and g(r) appearing in Eqs. (37) and (39) and deriving the explicit form of ∆. As a preliminary step, we determine the form ∆_CS of the parameter ∆ holding for the isotropic, classical standard Schwarzschild solution, that is Accounting for the quantum perturbative corrections appearing in Eq. (38), one can derive the form ∆_QS of the parameter ∆ holding for the quantum-corrected Schwarzschild solution at leading order, where β(r) = 54000π^3 r^9 − M^3 (31 − 15πr^2)^2 (79 + 15πr^2) + 900Mπ^2 r^6 (451 − 45πr^2) − 60M^2 πr^3 (2263 − 3930πr^2 + 675π^2 r^4). (42) Finally, making use of the potentials f(r) and g(r) in Eq. (37), we obtain the form ∆_SF of the parameter ∆ holding for the string fuzzball solution. The three expressions ∆_CS, ∆_QS, and ∆_SF and the ensuing modulations differ significantly when evaluated along orbits sufficiently close to the ultra-compact objects, as illustrated in Fig. 3 and Fig. 4, respectively. We see that, in the strong-gravity regime, the specific nature of the gravitational source dramatically affects the degree of violation of the CHSH inequality; hence, the gravitational modulation of Bell nonlocality allows one to distinguish different models of ultra-compact objects and to discriminate between the string theory and quantum field theory approaches to quantum gravity. In particular, in Fig. 4 we plot the oscillatory modulation of Bell nonlocality, as measured by the degree of violation of the CHSH inequality, for orbits close to the event horizon, that is, for an interval of the radial coordinate that does not deviate exceedingly from the Schwarzschild radius. For a wider range of values of the radial coordinate, the frequency of the oscillations of cos 2∆ grows very rapidly, thereby blurring the interpolation patterns. Moreover, as the quantities ∆_CS, ∆_QS, and ∆_SF share the same behavior in the limit r ≫ 2M, the phase shifts of the CHSH correlations can no longer be resolved in this regime.
In connection with the above reasoning, it is important to recall that, by selecting a given circular trajectory, we are essentially fixing the value of the radius.Therefore, the figures and the modulating parameters do not really have to be interpreted as functions of r, because once its value is chosen they cannot vary with the particles' dynamical evolution.Hence, the evaluation of the differences between the predictions of the distinct models has to be intended for a fixed value of the radius.For instance, by looking at Fig. 4, it is immediate to verify that, for certain values of r (i.e., for certain circular orbits), the magnitude of S ′ is exactly zero according to some models, thereby signaling a complete absence of correlations between the two spins.For these exact values, instead, the other models predict a nonvanishing degree of CHSH correlations, thus entailing that the outcome of the gedanken experiment is unambiguously distinguishable. VI. DISCUSSION We have discussed a gedanken experiment that shows how quantum nonlocality in strong gravitational fields leads to predictions that discriminate between different models of quantum gravity phenomenology, both perturbative and non-perturbative ones.CHSH nonlocality tests with entangled particle pairs on circular orbits near ultra-compact objects show that the spin precession occurring in curved spacetime is responsible for a modulation of the degree of quantum nonlocality.In the presence of a non-trivial spacetime background, the standard maximally allowed violation 2 √ 2 of the CHSH inequality becomes 2 √ 2 cos 2 ∆, with the angular modulation factor ∆ strictly dependent on the metric tensor components and heavily influenced by the conjectured underlying nature of the ultra-compact object considered.A simple measurement of quantum nonlocality can thus be used to validate or falsify some phenomenological models of quantum gravitational effects in the strong-field regime. The thought experiment we have conceived provides further evidence supporting the use of quantum information concepts like entanglement and Bell nonlocality as a key tool in the investigation of yet hypothetical quantum gravitational phenomena.In deriving the main result of our work, we have also generalized the formalism of Refs.[36,38] on the study of quantum correlations in gravitational fields, so as to make it applicable in general, beyond the Schwarzschild solution and weak-field limit in the isotropic coordinate system. 
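As a purely numerical cross-check of this modulation, the sketch below applies opposite rotations about the 2-axis by an angle ±∆ to the two halves of the singlet (24), mimicking the residual Wigner precession that survives after the measurement bases have been realigned, and evaluates the CHSH combination. Since the observables of Eq. (26) are not reproduced in the text, the standard settings A1 = σz, A2 = σx, B1 = −(σz + σx)/√2, B2 = (σz − σx)/√2 are assumed; with this convention the value comes out as 2√2 cos 2∆, reducing to the Tsirelson bound at ∆ = 0, and the identification of this per-particle angle with the paper's ∆ is made only for illustration.

```python
import numpy as np

# Pauli matrices and the spin singlet of Eq. (24)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
psi = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

def ry(angle):
    """Spin-1/2 rotation about the 2-axis (y) by the given angle."""
    return np.cos(angle / 2) * np.eye(2) - 1j * np.sin(angle / 2) * sy

def chsh(state):
    """CHSH combination for standard settings (assumed here in place of Eq. (26))."""
    A1, A2 = sz, sx
    B1, B2 = -(sz + sx) / np.sqrt(2), (sz - sx) / np.sqrt(2)
    corr = lambda A, B: np.real(state.conj() @ np.kron(A, B) @ state)
    return corr(A1, B1) + corr(A2, B1) + corr(A2, B2) - corr(A1, B2)

for delta in np.linspace(0.0, np.pi / 2, 7):
    # Residual Wigner precession: +delta on particle A, -delta on particle B
    rotated = np.kron(ry(delta), ry(-delta)) @ psi
    print(f"Delta = {delta:5.3f} rad: S = {chsh(rotated):+.4f}, "
          f"2*sqrt(2)*cos(2 Delta) = {2 * np.sqrt(2) * np.cos(2 * delta):+.4f}")
# Delta = 0 recovers the Tsirelson bound 2*sqrt(2); increasing Delta suppresses
# (and can even reverse) the violation, in line with the gravitational modulation
# of the CHSH value discussed above.
```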
Concerning possible experimental tests probing the quantum nature of the gravitational field, while high-energy scattering processes are still too far from the experimental scales needed to detect coexisting quantum and gravitational effects, it is a largely shared belief that tiny signatures of such a coexistence might still be revealed with currently available means through satellite experiments, gravitational wave analysis, or tabletop laboratory tests centered around foundational aspects of quantum mechanics. In this respect, it is worth remarking that credited proposals which aim at obtaining a direct measurement of quantum gravity phenomenology are essentially based either upon decoherence/gravity-based wave function collapse models [48-57] or, as already mentioned, upon the detection of gravitationally-induced quantum correlations by quantum gravitational mediators [26,27,58-64]. The hard challenge facing the experimental implementation of these table-top laboratory tests is that of realizing correlated and delocalized superpositions of large enough masses. So far, spatial superpositions have been observed with masses at most of the order of 10^-23 kg (large molecules) [65], while the faintest gravitational field that can currently be measured is the one generated by masses of the order of 10^-4 kg [66]. While one may hope to significantly improve these numbers by considering some amplification mechanisms, tabletop probing of gravitational effects on Bell inequality might turn out to be significantly less challenging and might open the way to laboratory simulations of extreme cosmological conditions suitable for the verification of quantum-gravity-induced modulations of quantum nonlocality as described in the present work. In this respect, a particularly promising avenue might involve designing experimental tests of the CHSH inequality near the horizon of sonic and optical analogues of black holes [67-70]. On a final note, our findings strongly suggest that resorting to the entire spectrum of quantum resources, from nonlocality, entanglement, and steering to discord, coherence, and complementarity, may provide very useful insights into the investigation of the actual nature of gravity. FIG. 2: The gravitational potential of an ultra-compact object as a function of the distance (radius) from the object center, in units of the Schwarzschild radius, for different spacetimes. Sample values are fixed at M = 2.5, Q1 = 1, Q2 = 2, Q3 = 3, and Q4 = 4.
7,711.2
2023-04-21T00:00:00.000
[ "Physics" ]
Exterior powers in Iwasawa theory The Iwasawa theory of CM fields has traditionally concerned Iwasawa modules that are abelian pro-p Galois groups with ramification allowed at a maximal set of primes over p such that the module is torsion. A main conjecture for such an Iwasawa module describes its codimension one support in terms of a p-adic L-function attached to the primes of ramification. In this paper, we study more general and potentially much smaller Iwasawa modules that are quotients of exterior powers of Iwasawa modules with ramification at a set of primes over p by sums of exterior powers of inertia subgroups. We show that the higher codimension support of such quotients can be measured by finite collections of p-adic L-functions under the relevant CM main conjectures. Introduction Iwasawa theory studies the growth of Selmer groups in towers of number fields. In the commutative setting, these towers have Galois groups isomorphic to Z r p for some r ≥ 1, and their Iwasawa algebras are isomorphic to a power series ring in r variables over Z p . The Selmer groups are typically attached to Galois-stable lattices in p-adic Galois representations that come from geometry. The local conditions defining the Selmer groups are chosen so that the Pontryagin dual of a limit up the tower is a finitely generated torsion module over the Iwasawa algebra. For example, when the Galois representation is the trivial representation, these dual Selmer groups are abelian pro-p Galois groups with restricted ramification. In many instances, one can construct a power series that gives rise to a p-adic L-function attached to the lattice and the Selmer conditions. In what is known as a main conjecture, this power series is conjectured to generate the characteristic ideal of the Iwasawa module. In this paper, we develop a method to study the support of Iwasawa modules in arbitrary codimension, focusing specifically on the Iwasawa theory of CM fields for onedimensional Galois representations. To study the codimension n support of a finitely generated Iwasawa module, we use the nth Chern class of its maximal codimension n submodule. This Chern class, as defined in [2], is the sum of the lengths of its localizations at the prime ideals of codimension n. For instance, the first Chern class of a finitely generated torsion Iwasawa module is the divisor defining its characteristic ideal. A CM main conjecture describes the first Chern class of an Iwasawa module unramified outside of a (p-adic) CM type of primes over p in terms of a Katz p-adic L-function. Recall that a CM type is a set of one from each pair of complex conjugate primes over p in a CM field, supposing that the primes over p split from the maximal totally real subfield. We aim to construct an Iwasawa module which has support in higher codimension related to a tuple of p-adic L-functions for distinct CM types. For this, we take the quotient of the top exterior power of a p-ramified Iwasawa module by a sum of top exterior powers of composites of inertia groups at certain of the primes. The main results of this paper relate higher Chern classes of these exterior quotients to the first Chern classes of Iwasawa modules unramified outside of a CM type, and therefore to Katz p-adic L-functions if the relevant CM main conjectures hold. The idea of taking top exterior powers occurs frequently in number theory, as characteristic ideals arise as determinants. 
The quotient of the top exterior power of a finitely generated free module by the top exterior power of a free submodule of full rank has first Chern class equal to that of the quotient of the two free modules. For this reason, exterior powers figure heavily in equivariant formulations of main conjectures using determinants, as in the work of Fukaya and Kato [4]. They also appear prominently in Stark's conjectures, in which one considers the top exterior powers of isotypic components of unit groups in order to arrive at regulators which are related to the special values of derivatives of Artin L-series. Our work has the seemingly unique aspect that we take a quotient of a top exterior power of an Iwasawa module by a sum of two or more top exterior powers of submodules. Let us briefly describe our main theorems, as we shall state after introducting the necessary framework. Theorem A relates the codimension 2 support of an exterior quotient to a pair of first Chern classes corresponding to arbitrary distinct choices of CM types. In Theorem B, by localizing away from bad primes, we obtain an isomorphism between an exterior quotient and the quotient of an Iwasawa algebra by the ideal generated by a tuple of first Chern classes. Theorem C involves two CM types differing in a degree one prime, in which case our quotient is the classical Iwasawa module unramified outside the intersection of the two CM types. We relate the sum of second Chern classes of this module and another for the complex conjugate set to the ideal generated by the two first Chern classes of the CM types. Finally, in Theorem D, we describe a quotient of second exterior powers as a Galois group with restricted ramification. We turn to details of our work, starting with the formal definition of our key invariant. An index of notations is given in Section B at the end of the paper. For a finitely generated Iwasawa module M, we let t n (M) denote the nth Chern class of the maximal submodule over height n prime ideals P in the Iwasawa algebra. In the case that M = T n (M), this is the nth Chern class c n (M) of M considered in [2]. The invariant t 1 (M) is naturally identified with the characteristic ideal of the torsion submodule of M, matching the classical definition. Note that t n is not additive on arbitrary exact sequences of finitely generated modules, but it is on exact sequences of modules supported in codimension at least n. Now, let p be an odd prime, and let E be a CM field of degree 2d. We suppose that each prime over p in the maximal totally real subfield E + of E splits in E. Let F be a finite abelian extension of E of degree prime to p containing the pth roots of unity. Let K be the compositum of F with all of the Z p -extensions of E, and let Γ = Gal(K/F ) and G = Gal(K/E). Let Σ be a subset of the set of primes of E over p. We consider the Σramified Iwasawa module X Σ that is the Galois group over K of the maximal unramified outside of Σ abelian pro-p extension of K. Then Γ is isomorphic to Z r p for some integer r ≥ d + 1, where r = d + 1 if the Leopoldt conjecture is true. Let ψ : ∆ = Gal(F/E) → W × be a p-adic character, where W denotes the Witt vectors of an algebraic closure F p of F p . (In our main results, W may be replaced by the ring generated by the values of ψ.) Let Λ = W [[Γ]] be the completed group ring of Γ over W , which is a power series ring in r variables over W . 
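As a minimal illustration of these invariants (toy examples only, not drawn from the arithmetic situation studied below), one can record how c_1 and c_2 behave on the simplest cyclic quotients of Λ ≅ W[[T_1, . . . , T_r]].

```latex
% Toy examples for the Chern class invariants, with Lambda = W[[T_1,...,T_r]].
% Codimension one: for a nonzero f in Lambda, the first Chern class of Lambda/(f)
% is the divisor of f, recovering the characteristic ideal (f):
\[
  c_1\bigl(\Lambda/(f)\bigr)
    \;=\; \sum_{\operatorname{ht}\mathfrak{p}=1}
          \operatorname{ord}_{\mathfrak{p}}(f)\,[\mathfrak{p}].
\]
% Codimension two: for a height-two prime P, the module Lambda/P is pseudo-null,
% its localization at P is the residue field, and its localization at every other
% height-two prime vanishes, so
\[
  c_2\bigl(\Lambda/\mathfrak{P}\bigr)
    \;=\; \operatorname{length}_{\Lambda_{\mathfrak{P}}}
          \bigl((\Lambda/\mathfrak{P})_{\mathfrak{P}}\bigr)\,[\mathfrak{P}]
    \;=\; [\mathfrak{P}].
\]
```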
We are interested in the finitely generated Λ-module X ψ Σ = X Σ⊗Zp[∆] W for the map Z p [∆] → W induced by ψ, which is to say the ψ-isotypical component of X Σ , or more precisely of its completed tensor product with W . Let S f be the set of all primes over p in E. A (p-adic) CM type Σ is a subset of S f which contains exactly one prime of each conjugate pair. One has a power series L Σ,ψ ∈ Λ that gives rise to a certain Katz p-adic L-function attached to Σ and ψ. Hida and Tilouine [8] showed that X ψ Σ is Λ-torsion and stated an Iwasawa main conjecture that says that the characteristic ideal of X ψ Σ is generated by L Σ,ψ . They proved an anticyclotomic variant of this conjecture under certain hypotheses. Work of Hsieh [9] shows that the characteristic ideal of X ψ Σ is divisible by L Σ,ψ under certain assumptions. In particular, this relates the codimension one support of the algebraically defined module X ψ Σ to that of the analytically defined module Λ/(L Σ,ψ ). We will use L Σ,ψ to denote a choice of generator of the characteristic ideal of X ψ Σ . The CM main conjecture for Σ is then the statement that (L Σ,ψ ) = (L Σ,ψ ). Fix a set S of primes over p properly containing a CM type. Let us write S as a union of two distinct CM types S 1 and S 2 . Let θ be a greatest common divisor in Λ of L S 1 ,ψ and L S 2 ,ψ . For a discussion of a possible construction of examples in which θ is a non-unit, see Remark 5.8. The first Chern class of the quotient Λ/(L S 1 ,ψ , L S 2 ,ψ ) is the ideal Λθ. Our interest in this paper is the more subtle information contained in the pseudo-null module We aim to relate the codimension two support of the module (1.1) to that of some naturally defined algebraic modules, as was done in [2] for imaginary quadratic fields E under the assumption of coprimality of L S 1 ,ψ and L S 2 ,ψ . This requires overcoming a serious obstruction for E an arbitrary CM field. Namely, the Λ-rank ℓ of X ψ S may now be larger than 1: that is, we show in Lemma 3.1 that where Σ is any CM type contained in S. If ℓ > 1, then the first Chern class of X ψ S i for i ∈ {1, 2} is insufficient to identify, up to errors supported in codimension greater than 2, the Λ-submodule I ψ T i of X ψ S generated by inertia groups at primes over T i = S − S i . We make the simple but key observation that the ℓth exterior powers of X ψ S and the I ψ T i are indeed rank one Λ-modules. We therefore replace the quotient X ψ S /(I ψ T 1 + I ψ T 2 ) ∼ = X ψ S 1 ∩S 2 found in the imaginary quadratic setting by the exterior quotient where a subscript "tf" denotes maximal Λ-torsion-free quotient. Here, we view each ( ℓ I ψ T i ) tf as a submodule of ( ℓ X ψ S ) tf and take their sum within the latter group. We will compare the second Chern classes of the maximal pseudo-null submodules of (1.2) and of Λ/(L S 1 ,ψ , L S 2 ,ψ ). For a compact Λ-module A, we let A(1) be the Tate twist of A by the cyclotomic character of Γ. Let A ι denote the Λ-module which as a topological Z p -module is A and on which γ ∈ Γ now acts by γ −1 . For A finitely generated, we define , we let ℓ A denote the ℓth exterior power of A over Λ, and we let Fitt(A) denote the 0th Fitting ideal of A. Write S c for the set of primes over p not in S. Then X ωψ −1 S c is a torsion Λ-module because S c is contained in a CM type of primes over p. To simplify statements of our main theorems as stated in the body of this paper, we suppose in this introduction that ψ (resp. ωψ −1 ) is nontrivial on all decomposition groups in ∆ at primes p ∈ S (resp. 
p ∈ S), for S the complex conjugate set to S. Under this assumption, each I ψ (The latter comment applies to the theorems in this introduction, so we omit the "tf" notation on such groups in them.) Theorem A. For a union S of two distinct CM types S 1 and S 2 and its complement S c , we have an equality of second Chern classes where ℓ = rank Λ X ψ S , where θ is a gcd of the characteristic elements L S i ,ψ of X ψ S i for i ∈ {1, 2}, and where θ 0 is a generator of t 1 ( ℓ X ψ S ). Remark 1.1. In Theorems 5.6 and 5.9, we generalize Theorem A to treat n-tuples of CM types, without any assumption on ψ. for each of the 2 ℓ CM types Σ containing S c , each of which has first Chern class (L Σ,ωψ −1 ), and these lack obvious dependencies in general. When ℓ > 1, we therefore suspect that the Λ-module X ωψ −1 S c frequently has annihilator of height greater than 2, in which case the last term in (1.3) vanishes. (Recall that for a Cohen-Macaulay ring R, the height of the annihilator of a finitely generated Rmodule M is at most the smallest i such that Ext i (M, R) is nonzero [14,Theorem 17.4].) In fact, the proof of Theorem A and a spectral sequence argument lead to the following. Theorem B. Let S be a subset of S f that properly contains a CM type. Let q be a prime of Λ not in the support of (X ωψ −1 S c ) ι (1). Then the following hold. (ii) Let S 1 , . . . , S n be distinct CM types contained in S for some n ≥ 1. Then ℓ X ψ S,q ℓ I ψ T 1 ,q + · · · + ℓ I ψ The rank ℓ of X ψ S equals 1 if and only if S is a union of two CM types S 1 and S 2 that differ in a single completely split prime. In this case, supposing that L S 1 ,ψ and L S 2 ,ψ are relatively prime, we prove the following remarkably clean refinement of Theorem A, which rests on proving that X ψ S 1 ∩S 2 and X ωψ −1 are pseudo-null under this assumption. Theorem C. Suppose that ℓ = 1, and suppose that L S 1 ,ψ and L S 2 ,ψ are relatively prime. Then we have [2], the fact that X ψ S f has rank [E + : Q] stood as a serious obstacle to a generalization to arbitrary CM fields. While one can derive Theorem C itself through Theorem A (in particular, as X ψ S is torsion-free when the torsion module X ωψ −1 S c is pseudo-null), we give a finer and more subtle version without assumption on ψ and an entirely separate proof in Theorem 5.12. We will show in Proposition 5.10 that if ℓ = 1, then L S 1 ,ψ and L S 2 ,ψ are relatively prime if and only if both X ψ S 1 ∩S 2 and X ωψ −1 Remark 1.3. Let us elaborate on a comment made earlier. One can ask about the relationship between X ψ S 1 ∩S 2 and Λ/(L S 1 ,ψ , L S 2 ,ψ ) when ℓ > 1. The maximal pseudo-null submodules of X ψ S 1 and X ψ S 2 are trivial. Therefore, L S 1 ,ψ and L S 2 ,ψ are annihilators of X ψ S 1 and X ψ S 2 , respectively, so they annihilate their common quotient X ψ S 1 ∩S 2 . Consequently, any prime ideal in the support of X ψ S 1 ∩S 2 should contain both L S 1 ,ψ and L S 2 ,ψ , and hence should be in the support of Λ/(L S 1 ,ψ , L S 2 ,ψ ). However, even under the simplifying assumption that X ψ S is a free Λ-module, the converse is unlikely to hold in general. A prime ideal P of Λ could be in the support of both X ψ S /I ψ T 1 and X ψ S /I ψ T 2 but fail to be in the support of X ψ S /(I ψ For example, Λ-module bases for I ψ T 1 and I ψ T 2 (assuming they are free) could each be linearly dependent modulo P, but their union might easily contain a linearly independent subset modulo P. 
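The phenomenon just described, and the reason the exterior quotient retains information that the naive quotient loses, can be seen in a bare-hands toy example (purely module-theoretic, not of arithmetic origin).

```latex
% Toy example over Lambda, with standard basis e_1, e_2 of X = Lambda^2.
% Take the free rank-two submodules
%   I_1 = Lambda(p e_1) + Lambda(e_2),   I_2 = Lambda(e_1) + Lambda(p e_2).
% Then X/I_1 and X/I_2 are both isomorphic to Lambda/(p), so the height-one
% prime P = (p) lies in the support of each quotient, yet
\[
  I_1 + I_2 = X, \qquad X/(I_1 + I_2) = 0:
\]
% the chosen basis of each I_i becomes linearly dependent modulo P, while the
% union of the two bases contains e_1, e_2, which remain independent modulo P.
% The top exterior powers instead see the determinants of the inclusions,
\[
  \wedge^2 I_1 \;=\; p\,\Lambda\,(e_1 \wedge e_2) \;=\; \wedge^2 I_2,
  \qquad
  \frac{\wedge^2 X}{\wedge^2 I_1 + \wedge^2 I_2} \;\cong\; \Lambda/(p) \;\neq\; 0,
\]
% so the exterior quotient remembers the common divisor p that the naive
% quotient loses.
```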
When ℓ > 1, it is natural to ask there is an interpretation of the first term on the righthand side of (1.3) as the second Chern class of a suitable Galois group. We provide such an interpretation in the case that ℓ = 2. Definition 1.4. Let L be the maximal abelian pro-p extension of K that is unramified outside of S = S 1 ∪ S 2 , so that X S = Gal(L/K). Let N be the maximal abelian pro-p extension of L unramified outside S with the following properties: We show that there is a canonical square root of the conjugation action of G = Gal(K/E) on U and on V ; see Remark 7.6. We consider the ψ-isotypical components U whose kernels are supported in codimension at least 3. Theorem D is proved in Theorem 7.9. For a field diagram summarizing the groups and fields it involves, see Appendix A. The significance of this theorem is that when ℓ = 2, a particular graded piece of a higher term in the lower central series of the Galois group of the maximal unramified outside S pro-p extension K (p) S of K arises when one seeks a Galois-theoretic interpretation of natural modules defined by p-adic L-functions. If V √ ψ is pseudo-null, one has However, t 2 is not an exact functor on exact sequences of modules that are not pseudonull, and we do not know in general whether V √ ψ is pseudo-null. We end this introduction with two comments on potential research directions. First, we remark that though we have restricted ourselves to classical Iwasawa modules, we expect that the approach we have outlined in this paper will apply to general Selmer groups. This is already illustrated in the recent work of Lei and Palvannan on Selmer groups of supersingular elliptic curves [11] and tensor products of Hida families [12]. Secondly, we note that congruences between Eisenstein series and cusp forms play a key role in proofs of one of the divisibilities in main conjectures, whereby the existence of residually-reducible Galois representations with certain ramification behavior leads to lower bounds for the support of Selmer groups. One can ask how to apply such techniques to directly study the higher codimension behavior of Iwasawa modules. The right hand side of (1.4) has two terms measuring the size of Galois groups of extensions unramified outside the intersection of two CM types. It would be interesting if one could construct Galois representations that separately control each of the two terms. For instance, one might consider congruences between Hida families modulo Eisenstein ideals attached to Λ-adic Eisenstein series with constant terms arising from different p-adic L-functions. Duality Let p be a prime, let E be a number field, and let F be a finite Galois extension of E of prime-to-p degree. We suppose that F has no real places if p = 2. Let ∆ = Gal(F/E). Let K be a Galois extension of E that is a Z r p -extension of F for some r ≥ 1, and set Γ = Gal(K/F ). Note that K/F is unramified outside p as a compositum of Z pextensions. Set G = Gal(K/E) and Let S = S p,∞ be the set of all primes of E over p and ∞, and let S f be the set of all primes of E over p. For any algebraic extension F ′ of F , let G F ′ ,S denote the Galois group of the maximal extension F ′ S of F ′ that is unramified outside the primes over S. ]-module T , we consider the Iwasawa cochain complex that is the inverse limit of continuous cochain complexes under corestriction maps, with F ′ running over the finite extensions of F in K. It has the natural structure of a complex of Ω-modules. 
We let RΓ Iw (K, T ) denote its class in the derived category and H i Iw (K, T ) its ith cohomology group. We similarly let for any p ∈ S f , where G F ′ P denotes the absolute Galois group of the completion F ′ P . For a finitely generated Ω-module, we have ]-projective). We employ the notation where M ι is the Ω-module M with the new action · ι given by f · ι m = ι(f )m for f ∈ Ω, where ι : Ω → Ω is the continuous Z p -linear involution given on G by inversion. This is a bit cleaner for the purposes of duality, as it alleviates the need to place involutions in the statements of various results. We set M * = E 0 (M) = Hom Ω (M ι , Ω). For later use, we note that there are natural isomorphisms of Ω-modules where M(n) for n ∈ Z is the Ω-module that is M with the modified G-action g · m = χ n p (g)gm for χ p : G → Z × p the p-adic cyclotomic character. Let Σ be a subset of S f . Let Σ c = S f − Σ. We let RΓ Σ,Iw (K, T ) be the class in the derived category of the cone and define H i Σ,Iw (K, T ) to be its ith cohomology group. We define RΓ Σ c ,Iw (K, T ) and H i Σ c ,Iw (K, T ) similarly. We have the following two spectral sequences. ]-module that is finitely generated and free over Z p , and let T # be its Z p -dual. There are convergent spectral sequences of Ω-modules Proof. By definition, we have the commutative diagram of exact triangles (of which we write three terms) with the dashed arrow being the induced morphism. The derived Iwasawa-theoretic versions of Poitou-Tate and Tate duality found in [15,Section 8.5] then yield isomorphisms in the derived category of finitely generated Ω-modules where the lower two isomorphisms yield the isomorphism of cones. (That these are morphisms in the derived category of Ω-modules and not simply Z p [[Γ]]-modules follows from their definitions and the fact that The case that ∆ is abelian is treated in [15], and this can be found in a more general context in [13, Theorem 4.5.1].) Let us now focus on the case of Z p (1)-coefficients. (1)) vanishes unless Σ c is empty, in which case it is isomorphic to Z p as an Ω-module. Proof. The first statement is a consequence of the fact that G E,S and G Ep for all p ∈ S f have p-cohomological dimension 2, the vanishing in degree 0 following from the fact that Γ is infinite. The first map in the exact sequence Let X Σ denote the Σ-ramified Iwasawa module over K. Let X ♭ Σ denote the maximal quotient of X Σ that is completely split at the primes in S f − Σ. We also set For p ∈ S f , let G p denote the decomposition group in G at a place over the prime p in K, , which has the natural structure of a left Ω-module. Set be the kernel of the sum of the augmentation maps. For p ∈ S f , let Γ p = G p ∩ Γ be the decomposition group in Γ at a prime over p in K, and let By [2, Lem. 4.1.13], we have the following. Let D p denote the Galois group of the maximal abelian, pro-p quotient of the absolute Galois group of the completion K p of K at a prime over p. Define I p to be the inertia subgroup of D p . We have completed tensor products These have the structure of Ω-modules by left multiplication. Set (2.5) Proof. We have a long exact sequence By Poitou-Tate duality, the second term is X S f , and by Tate duality, the first term is D Σ c , and the cokernel of the resulting restriction map and again by class field theory, the map K Σ c → Z p is given by summation. In the remainder of this section, we make the following hypothesis: Hypothesis 2.5. The field K contains all p-power roots of unity. 
This allows us to pull twists out of our Iwasawa cohomology groups and to apply Weak Leopoldt where helpful. One could remove this assumption with appropriate modifications, but we do not need to do so for our applications. Our assumption on K implies that r p ≥ 1 for each p ∈ Σ c , so the canonical injection X Σ ֒→ Y Σ has torsion cokernel which is pseudo-null if r p ≥ 2 for each p ∈ Σ c . Using the spectral sequences of Proposition 2.1, we obtain the following. and for i ≥ 1, there are isomorphisms of Ω-modules. If Σ = S f , then the above statements hold upon localization at any prime of Ω outside the support of Z p , while if Σ = ∅, they hold outside the support of Z p (1). More precisely, if Σ = S f , then (2.6) becomes exact upon replacing the rightmost zero by Z p , and the maps in (2.7) are isomorphisms for i ≥ 2. For i = 1, the map in (2.7) is surjective with procyclic kernel unless it happens that r = 2 and it is injective with finite cyclic cokernel. Proof. Let us first suppose that Σ / ∈ {∅, S f }. Consider the spectral sequence F i,j 2 (Z p ) ⇒ F i+j (Z p ) of Proposition 2.1. By Lemma 2.2 and the fact that Σ = ∅ (resp., . The spectral sequence then yields an exact sequence of base terms and isomorphisms of Ω-modules for i ≥ 1. We then obtain our results by applying two isomorphisms: the first 1 by the vanishing of the terms H i,0 (Z p ) that occurs since Σ = S f , and the second follows by our assumption that K contains all p-power roots of unity. If Σ = ∅, then we have The above arguments go through so long as we localize all terms at a prime of Ω outside of the support of Z p (1), as well as if r = 1. For the more precise statements for Σ = S f , we can use the results of [2], as we explain. Set U = H 1 Iw (K, Z p ) for brevity of notation. As in the proof of [2, Cor. 4.1.6], we have an exact sequence [2,Cor. A.9]). For r = 1, this implies that the map E 1 (U) → Z p given by taking Ext-groups of (2.9) is an isomorphism, forcing the map Z p → E 1 (U) in (2.8) to also be an isomorphism, hence the result. Finally, suppose that r = 2 and the map Y * S f → Z p of (2.9) is nontrivial, hence has image isomorphic to Z p . Taking Ext-groups, we then have an exact sequence of the form in which the first term is finite (again by [2, Cor. A.9]). Since E 3 (Y ∅ ) is finite as well, it follows that the map Z p → E 1 (U) in (2.8) must be injective, and so we also have an exact sequence From these two sequences and a simple application of the snake lemma, we obtain that Proof. We apply Proposition 2.7 with Σ and Σ c reversed. Note that Σ c = 0, the exact sequence (2.6) gives the remaining statements. Remark 2.9. The result of Corollary 2.8 remains true for Σ = S f after localization at a prime away from the support of Z p (1) (and without localization if r = 1), as follows by Proposition 2.7. Let us set Proof. By Corollary 2.8 and Remark 2.9, we have Since Y Σ,q is a finitely generated module over the regular local ring Ω q with vanishing higher Ext-groups to Ω q , it is free (cf. [1, (4.12)]). Proposition 2.11. For any nonempty subset P of Σ, we have a map of exact sequences of Ω-modules in which the vertical maps are the canonical ones. If the primes of K over each p ∈ P have infinite residue field degree, then D P = I P and E 1 (K P ) = 0. Proof. The exactness of the lower sequence was shown in Proposition 2.7. The exactness of the upper sequence is shown in [2, Thm. 
4.1.14] via the spectral sequence of derived Tate duality (see (2.11) below), and the map of exact sequences from the corresponding map of spectral sequences. That D P = I P is [2, Lem. 4.2.2], and E 1 (K P ) = 0 follows from Remark 2.3 and r p ≥ 2 (since K is assumed to contain all p-power roots of unity and its completion at p to contain the unramified Z p -extension). Let us refine the above result in the local setting. Moreover, the following statements hold. is Ω-free and fits in an exact sequence is Ω-free and fits in an exact sequence , and there is an exact sequence Proof. The local spectral sequence in the proof of Proposition 2.1 for T = Z p has the form the spectral sequence (2.11) yields an exact sequence We note that Lemma 2.12 tells us that the reflexive Ω-module D * p is not free if r p ≥ 3, since in that case its first Ext-group is nonzero. The following corollary is proven in the same manner as Theorem 2.10 but using Lemma 2.12. Corollary 2.13. Let p ∈ S f , and let q be a prime ideal of Ω that is either • of codimension less than r p or • outside the support of K ι p (1) and, if r p ≥ 3, also outside the support of K p . CM fields Unless otherwise stated, we maintain the notation of the previous section. Let E be a CM extension of Q of degree 2d and E + its maximal totally real subfield. Let p be an odd prime such that each prime over p in E + splits in E. By a (p-adic) CM type, we shall mean a set consisting of one prime of E over each of the primes over p in E + . Let E be the compositum of all Z p -extensions of E. If Leopoldt's conjecture holds for E and p, then E is the compositum of the cyclotomic Z p -extension E cyc and the anticyclotomic Z d p -extension E acyc of E. We set Γ = Gal( E/E). As before, we let r = rank Zp Γ and r p = rank Zp Γ p , and we also set (ii) The extension E/E has infinite residue field degree at p. Proof. Let Σ be a CM type containing p. To prove (ii), it suffices to show that p has infinite order in the inverse limit of the ray class groups of E of conductor a power of q∈Σ q. Let α ∈ O E generate a positive power of p. By class field theory, it suffices to prove that no positive power of α lies in the closure U of the image of the unit group Let T be the set of embeddings of E into Q p that send some prime in Σ into the maximal ideal of the integral closure of Z p in Q p . Then N (α) = σ∈T σ(α) is a product of non-units of the ring of all algebraic integers, so is certainly not a root of unity. Thus, no positive power of α lies in ker N , so no such power lies in U and we have (ii). From (ii), we see that r p = rank Zp J p + 1, where J p denotes the inertia group in Γ p . Local reciprocity maps provide a homomorphism under complex conjugation is finite, the sum of the Z pranks of the inertia subgroups at q ∈ Σ in Gal(E acyc /E) is d. As q∈Σ d q = d, this forces rank J q = d q for all q ∈ Σ. In particular, we have (i). We let ψ denote a one-dimensional character of the absolute Galois group of E of finite order prime to p, and we let E ψ denote the fixed field of its kernel. We set F = E ψ (µ p ) and ∆ = Gal(F/E). Let ω denote the Teichmüller character of ∆. We set K = F E. We take G = Gal(K/E). We shall make the identification Γ = Gal(K/F ) for the isomorphism given by restriction. Let for the map Z p [∆] → W induced by ψ. In particular, we have Ω ψ ∼ = Λ. When dealing with finitely generated Λ-modules M, we abuse notation and set E j (M) = Ext j Λ (M ι , Λ), much as before but now with W -coefficients. 
For any subset P of S f , let us set where Y S is as in (2.1). Moreover, the canonical map I ψ T → X ψ S is injective with torsion cokernel. Proof. We first note that X S = X ♭ S because of Lemma 3.1(ii). By Lemma 2.4, the cokernel of the injection X S ֒→ Y S is isomorphic to the Λ-torsion module K S c ,0 (noting Γ p = 0). Therefore the ranks of X ψ S and Y ψ S are the same. We know that X ψ S f has Λ-rank d = to have image of rank d in X ψ S f . As S c ⊂ Σ c , the image of I ψ S c in X ψ S f has rank d S c , and therefore X ψ and the kernel of the map I ψ T → X ψ S is then Λ-torsion. On the other hand, the Λ-torsion in I ψ T is isomorphic to a subgroup of (E 1 (K T )(1)) ψ by Proposition 2.11, but the latter group is zero by Remark 2.3 since r p ≥ 2 for all p ∈ S f by Lemma 3.1. As mentioned, for a CM type Σ, the Λ-module X ψ Σ is torsion. We will use L Σ,ψ to denote a generator of c 1 (X ψ Σ ). The Iwasawa main conjecture for Σ and the character ψ states that L Σ,ψ can be taken to be the Katz p-adic L-function for Σ and ψ (or more precisely a power series that determines it). For p ∈ S f , let ∆ p be the decomposition group in ∆ = Gal(F/E). We have K ψ . It follows from Remark 2.3 that is zero unless j = r p and ωψ −1 | ∆p = 1. If nonzero, the latter Λ-module is isomorphic to The codimension s primes of W [[Γ]] in the support of the latter module have the form i is a positive divisor of q i for each i, and Φ n is the nth cyclotomic polynomial. F where χ p denotes the p-adic cyclotomic character on Γ. Remark 3.5. For a CM type Σ, the primes in the support of K ψ p for p ∈ Σ and the primes in the support of (K ωψ −1 p ) ι (1) for p ∈ Σ yield trivial zeros of the Katz p-adic L-functions for Σ and ψ (cf. [10,Sect. 5.3]). In our terminology, this says that L Σ,ψ lies in each of these primes. Exterior powers In this section, we prove some abstract lemmas on exterior powers that we shall use in our study. We fix an integral domain R. Let X and F be R-modules of rank ℓ ≥ 1 with F free. Let λ : X → F be an Rmodule homomorphism with torsion kernel T 1 (X ) and torsion cokernel E, which in our applications will be pseudo-null. The induced homomorphism ℓ λ : ℓ X → ℓ F on exterior powers fits in an exact sequence essentially by definition. We note that if I is an R-submodule of X of rank ℓ, then the induced map ( ℓ I) tf → ( ℓ X ) tf on maximal torsion-free quotients is injective, so we can and do identify ( ℓ I) tf with its image in ( ℓ X ) tf . Lemma 4.1. Suppose that R is a Noetherian UFD. For n ≥ 1 and 1 ≤ i ≤ n, let I i be a rank ℓ submodule of X mapped injectively under λ into a free submodule J i of F with pseudo-null cokernel B i := J i /λ(I i ). Let θ 0 , θ 1 , and L i be generators of of t 1 (X ), c 1 (E), and c 1 (X /I i ), respectively. Then We have an exact sequence where the leftmost map has pseudo-null kernel with support contained in that of the Λmodules Q(B i ). Proof. The existence of and statements about θ 0 , θ 1 , and L i follow from the assumption that R is a UFD. For 1 ≤ i ≤ n, since I i → J i is injective with pseudo-null cokernel, the sequence of morphisms is exact when localized at any codimension one prime of R. We conclude that Since J i and F are free of rank ℓ, we see from (4.1) that the exterior power ℓ J i is equal to the free rank one submoduleL i · ℓ F of ℓ F . We have a commutative diagram of R-modules with exact rows We can pick generators for the free rank one R-modules ℓ J i and ℓ F so that the map g : R n → R has the form g(α 1 , . . . , α n ) = n i=1L i α i . 
The snake lemma then yields an exact sequence of R-modules on cokernels as in the statement, where the kernel of the first map is the cokernel of the map ker g → ker g ′ induced by h. Let θ be a gcd in R of L 1 , . . . , L n . Then θ 0 divides θ, so ν = θ 1 θ/θ 0 is in R. The maximal pseudo-null submodule of N is and we have an exact sequence of pseudo-null modules where g, g ′ , and h are as in (4.2). In particular, if Q(B i ) = 0 for all i thenL i ∈ Fitt(E) for all i and (4.3) becomes a short exact sequence where g(α 1 , . . . , α n ) = n i=1L i α i , the map h is induced by the canonical quotient map R n → n i=1 Q(B i ), and g ′ is the map induced by g. Alternatively, we have where α i denotes the image of α i ∈ R in Q(B i ). Main theorems We keep the notation and assumptions of Section 3. That is, we work with a CM field E of degree 2d, a prime p such that all primes over it split in E/E + , and a p-adic character ψ of the absolute Galois group of E. We again have • the fields F = E ψ (µ p ) and K = F E for the compositum E of Z p -extensions of E, • the Galois groups G = Gal(K/E) and Γ = Gal( E/E), and For the definitions of the Iwasawa modules X P , X ♭ P , Y P , K P , K P,0 , I P , and D P , ranks r P , and degrees d P attached to subsets P of the set S f of primes over p, we refer the reader to (2.1)-(2.5) and just prior, as well as to (3.2). Recall that for a compact Ω-module A, we denote by ℓ A ψ the ℓth exterior power over Λ of the eigenspace A ψ of A defined in (3.1). Moreover, if A ψ is a finitely generated Λ-module, then Fitt(A ψ ) denotes its 0th Fitting ideal in Λ. For n ≥ 1, let S 1 , . . . , S n be distinct CM types of primes over p viewed as subsets of the set S f of all primes over p in E. Let The complement of S is then given by and note that ℓ = rank Λ I ψ T i for all i by (3.3). Recall that L S i ,ψ ∈ Λ is taken to be an element satisfying c 1 (X ψ S i ) = (L S i ,ψ ). We have that r p = d p + 1 ≥ 2 for each p ∈ S f by Lemma 3.1. Thus, by Remarks 2.6 and 2.3, for every P ⊂ S f we have • X P → X ♭ P is an isomorphism, • K P is supported in codimension min{r p | p ∈ P }, and • X P → Y P is an injective pseudo-isomorphism. We will use these facts without further reference. Since we next work with eigenspaces that are Λ-modules, it is useful to compare their support with those of the original Ω-modules. For this, we have the following remark. in Ω is in the support of M. This will allow us to apply the results of Section 2 to study the ∆eigenspaces of our arithmetically-interesting Ω-modules, as we shall do below. Let We may now state and prove our first main theorem. Proof. Let q be a prime of Λ. If X ψ S,q is free, then we have an isomorphism ℓ X ψ S,q ∼ = Λ q . If I ψ T i ,q is free, then since c 1 (X ψ S,q /I ψ T i ,q ) = (L S i ,ψ ), this isomorphism takes the free rank one submodule ℓ I ψ T i ,q to (L S i ,ψ ). So, we need only avoid those q such that X ψ S,q or some I ψ T i ,q is not free. By Theorem 2.10 (noting Remark 5.1), the module Y ψ S,q is free for q outside the support of (Y ωψ −1 S c ) ι (1) ⊕ Z ψ S , with Z S as in (2.10). Lemma 2.4 provides an exact sequence So, Y ψ S,q is free for q not in the support of (X ωψ −1 Similarly, the homomorphism X ψ S,q → Y ψ S,q is an isomorphism for q not in the support of K ψ S c ,0 by Lemma 2.4. Finally, Corollary 2.13 tells us that every I ψ T i ,q is free for q not in the support of K ψ T ⊕ (K ωψ −1 T ) ι (1). 
Together, the above conditions say that the desired isomorphism holds if we avoid primes in the support of This may be simplified to the statement of the theorem by the following observations. If n = 1, then S = S f , so Z ψ S = 0, and T = ∅, so K T = 0. Moreover S c = S in this case. If n ≥ 2, then note that S = S c ∪ T and T ⊂ S. Both S and its conjugate set S have more than one element. This implies that Z ψ p is a subquotient of K ψ S,0 and Z ωψ −1 p (1) is a subquotient of (K ωψ −1 S,0 ) ι (1). In turn, these two facts yield that the supports of the third, fourth, and fifth terms in (5.1) are contained in the support of K ψ , and the support of the last term is contained in the support of the second. Remark 5.3. Regarding the disallowed primes in Theorem 5.2, note that as Λ-modules as well), but we have written it as we have to exhibit a certain symmetry. The following notation is used in the statements of the various theorems in this section. Define Z Σ,ψ to be the free abelian group on V Σ,ψ = U Σ c ,ψ ∪ U Σ,ψ , which we view a direct summand of the free abelian group on the codimension two primes of Λ. The groups Γ p ⊗ Zp Q p and Γ p ⊗ Zp Q p are the same inside Γ ⊗ Zp Q p if p and p are conjugate primes in S f . For any CM type Σ, we have from the proof of Lemma 3.1. Thus, if p and p ′ are distinct, non-conjugate primes, then Γ p ∩ Γ p ′ has rank at most one and r ≥ 3, so U p,ψ ∩ U p ′ ,ψ = ∅ and U p,ψ ∩ U p ′ ,ψ = ∅. Since Γ p acts trivially on W [[Γ/Γ p ]] and via the p-adic cyclotomic character on W [[Γ/Γ p ]](1), we have that U p,ψ ∩ U p ′ ,ψ = ∅ for all p, p ′ ∈ S f , as can also be seen from Remark 3.4. The following theorem is an extension of Theorem A without its assumption on ψ. In Theorem 5.9 below, we will provide a more general result in which we eliminate the appearance of Z S,ψ at the cost of introducing kernels and cokernels of maps between pseudo-null modules which are difficult to compute explicitly. Proof. To match the notation of Section 4 and Lemma 4.1, let R be the localization of Λ at a codimension two prime q not in V S,ψ , and set X = X ψ S,q and F = (X ψ S,q ) * * . Since q / ∈ U S c ,ψ , Lemma 2.4 tells us that the injection X ψ S,q → Y ψ S,q is an isomorphism. Similarly, since q / ∈ U S,ψ , we have that is an isomorphism. By Proposition 2.11, we then have E = E 2 (X ωψ −1 S c )(1) q , so θ 1 is a unit. Moreover, Q(E) is pseudo-null as the cokernel of the map from ℓ X to its reflexive hull. We also set I i = I ψ T i ,q and J i = (I ψ T i ,q ) * * . The canonical maps I i → J i are isomorphisms of free Λ q -modules by Corollary 2.13 since q / ∈ U T ,ψ . We may therefore identify the image ( ℓ I i ) tf of ℓ I i in ℓ X with ℓ I i . As B i = 0 in the notation of Lemma 4.1, the result follows from the short exact sequence (4.4) in Corollary 4.2. Corollary 5.7. If n = 2 and V S,ψ = ∅, then the following are equivalent. (ii) One of L S 1 ,ψ and L S 2 ,ψ divides the other, so Proof. The equivalence of (i) and (ii) follows from [2,Lem. A.3]. The fact that (ii) and (iii) are equivalent follows from the fact that the length of the localization of a module at a prime is a nonnegative integer when this localization has finite length. Remark 5.8. We suspect that the greatest common divisor θ in Corollary 5.7 is sometimes nontrivial. To be precise, we believe that this may happen if ψ satisfies the condition ψ · (ψ • j) = ω, where j is the involution of Gal(E ab /E) given by conjugating by any lift of the generator of Gal(E/E + ). 
The nontrivial θ should be Θ = γ cyc − χ p (γ cyc ), where γ cyc is a topological generator for Γ + and χ p is the p-power cyclotomic character. (In this remark, we assume the validity of Leopoldt's conjecture for E so that Γ + is topologically cyclic.) Note that χ p (γ cyc ) is a principal unit and the square root should be chosen to be a principal unit. There exist continuous characters Ψ of G satisfying the conditions We have Ψ(γ cyc ) = χ p (γ cyc ) for any such Ψ and hence Ψ(Θ) = 0. Conversely, Ψ(Θ) = 0 implies that Ψ · (Ψ • j) = χ p . Let Σ be any CM type, and let L Σ,ψ ∈ Λ be the Katz p-adic L-function attached to Σ and ψ. (This L-function is given up to a certain power of p by integrating the inverse of a character against the Katz measure.) It follows that Θ divides L Σ,ψ if and only if Ψ L Σ,ψ = 0 for all Ψ satisfying the above conditions. In fact, if Ψ 0 is one such Ψ, it is sufficient to have Ψ L Σ,ψ = 0 for all Ψ of the form Ψ = Ψ 0 · ρ, where ρ is a character of Γ − of finite order. It is possible to choose Ψ 0 to be the Galois character attached to a Grössencharacter of type A 0 for E whose infinity type lies in the interpolation range for L Σ,ψ . The corresponding complex L-function will have a functional equation relating that L-function to itself. If the sign in that functional equation is −1, then the central critical value will be forced to vanish. The same thing will be true for Ψ = Ψ 0 · ρ for any finite order character ρ of Γ − . That would mean that Ψ L Σ,ψ = 0 for such Ψ if the corresponding sign is −1. Now it turns out that for a given Σ and ψ, the signs will be constant, either all +1 or all −1. We suspect that each sign will occur for half of the CM types, possibly under some extra assumptions on ψ and E. Therefore, assuming this is the case, if there are at least four p-adic CM-types for E, then at least two will have the corresponding signs equal to −1. Hence the corresponding p-adic L-functions will both be divisible by Θ. Thus, examples where θ is nontrivial may possibly occur when E has at least four primes above p. An illustration of the kind of behavior described above can be found in [6]. That paper considers a case where E is an imaginary quadratic field in which p splits. Note however that there are just two primes above p in that case, and it is proved that Θ is actually not a common divisor of the two p-adic L-functions. The following result provides a more general version of Theorem 5.6 that avoids working modulo Z S,ψ at the expense of a longer statement that includes a new "error term" c 2 (C S,ψ ). Theorem 5.9. Let θ 0 be a generator of t 1 (Y ψ S ), which divides a gcd θ of L S 1 ,ψ , . . . , L Sn,ψ . Let g : Λ n → Λ be given by and let C S,ψ be the cokernel of the map induced by the canonical quotient map, where g ′ is the map induced by g. There is an equality of second Chern classes of pseudo-null modules Proof. Let q be a codimension 2 prime of Λ. Then the localization Y * * S,q is free as a reflexive module over the local ring Λ q of Krull dimension 2. Note that ( Theorem 5.9 then follows from Corollary 4.2, with Remark 4.3 providing the term c 2 (C S,ψ ). We have ℓ = 1 in Theorem 5.6 if and only if n = 2 and the CM types S 1 and S 2 differ by only one prime, which is of degree 1 (i.e., r p = 2). In this case, we obtain the following more explicit results. In particular, Proposition 5.10 and Theorem 5.12 imply Theorem C. Set L i = L S i ,ψ for brevity. As we have remarked, Suppose that (b) holds. 
In this case, since both L 1 and L 2 annihilate X ψ Σ by definition and are relatively prime by assumption, X ψ Σ is pseudo-null. We now conclude from Proposition 2.11 and [2, Prop. 4.1.17] that there is a map of exact sequences for i ∈ {1, 2}. The leftmost vertical map in (5.3) for a given i has torsion cokernel with first Chern class c 1 (X ψ S i ) = (L i ). This forces the map (I ψ T i ) * * → (Y ψ S ) * * between free Λ-modules of rank one to be injective. From the diagram, we then see that the first Chern class of the torsion Λ-module E 1 (Y ωψ −1 Σ )(1) divides (L i ). Since L 1 and L 2 are relatively prime, this forces E 1 (Y ωψ −1 Σ ) to be pseudo-null, which can only occur if the torsion mod- is pseudo-null as well. Now suppose that (a) holds. We again use the diagram (5.3) but now have that the is a map between free Λ-modules of rank 1, we see that upon appropriate choices of Λ-bases it is given by multiplication by L i . Applying the direct sum of the vertical maps in (5.3) for i ∈ {1, 2}, we get a composite map on cokernels which is a pseudo-isomorphism by the snake lemma. Since X ψ Σ is pseudonull, so is Λ/(L 1 , L 2 ), and therefore L 1 and L 2 are relatively prime. Remark 5.11. We claim that c 2 (E 2 (M)) = c 2 (M ι ) for any finitely generated pseudo-null Λ-module M. Since E 2 (M) ι = Ext 2 Λ (M, Λ), we need only verify that c 2 (Ext 2 Λ P (M P , Λ P )) = c 2 (M P ) upon localization at a height 2 prime P of Λ. Since Λ P is regular of dimension 2, the localization M P has a finite filtration with graded pieces isomorphic to Λ P /P Λ P (cf. [2, Lem. A.2]). For any short exact sequence 0 → N → M P → Λ P /P Λ P → 0 of Λ Pmodules, we have Ext 1 Λ P (N, Λ P ) = 0 since N is pseudo-null, and Ext 3 Λ P (Λ P /P Λ P , Λ P ) = 0 since Λ P has dimension 2. Since Ext 2 Λ P (Λ P /P Λ P , Λ P ) = Λ P /P Λ P and second Chern classes are additive with respect to short exact sequences of pseudo-null modules, our claim now follows by induction. Theorem 5.12. Let ℓ = 1, and suppose that X ψ S 1 ∩S 2 and X ωψ −1 are both pseudo-null. Then there is an equality of second Chern classes of pseudo-null modules Proof. If E is imaginary quadratic, this is [2, Thm. 5.2.5], so we assume in what follows that [E : Q] > 2. As in the proof of Proposition 5.10, we let Σ = S 1 ∩ S 2 and Σ = S c = S 1 ∩ S 2 and set L i = L S i ,ψ for i ∈ {1, 2}. Consider the set T = T 1 ∪ T 2 of cardinality 2. The maps of (5.3) for i ∈ {1, 2} yield a diagram of exact sequences (Note that Z S = 0 since S = S f , so we have the right exactness in the lower row.) We show that f 3 is an injection up to modules supported in codimension greater than 2, so c 2 (coker(f 2 )) = c 2 (coker(f 1 )) + c 2 (coker(f 3 )). From the exact sequence of Lemma 2.4 and the pseudo-nullity of X ωψ −1 Σ , we have an exact sequence of Ext-groups ) as a direct summand. It follows that f 3 is an injection. Since E 3 (K ωψ −1 S,0 ) is supported in codimension greater than 2, using (5.5) and (5.6), we obtain the last equality following from Remark 5.11. As in the proof of Proposition 5.10, the cokernel of f 2 is pseudo-null with second Chern class c 2 (coker(f 2 )) = c 2 (Λ/(L 1 , L 2 )). The cokernel of f 1 is similarly pseudo-null by assumption, and it has second Chern class The result now follows. Remark 5.13. The last two terms in equation (5.4) give "common trivial zeros in codimension 2" for L S 1 ,ψ and L S 2 ,ψ . 
Here, by "common zeros", we mean codimension two points which are in the support of the maximal pseudo-null submodule of Λ/(L S 1 ,ψ , L S 2 ,ψ ). To illustrate this, note that T 1 and T 2 in Z p [[T 1 , T 2 ]] share a common zero at the point (T 1 , T 2 ) = (0, 0), viewed as functions on the product of two p-adic open discs of radius 1 around the origin in Q p . This corresponds to the fact that Z p [[T 1 , T 2 ]]/(T 1 , T 2 ) is a nontrivial pseudo-null module supported on the codimension two prime (T 1 , T 2 ). By "trivial zeros", we mean arising from trivial zeros of the corresponding Katz p-adic L-functions, as in Remark 3.5. The common trivial zeros of codimension two arise from the triviality of characters on decomposition groups and are described by Remark 5.5. That is, K ψ p for p ∈ S 1 ∩ S 2 (resp., (K ωψ −1 p ) ι (1) for p ∈ S 1 ∩ S 2 ) has nontrivial second Chern class if and only if ψ| ∆p = 1 (resp., ωψ −1 | ∆p = 1) and r p = 2. For such a p, the resulting second Chern class comes from the ideal determining the corresponding quotient in Remark 3.4. Canonical subquotients in the lower central series Let Π be a profinite group. The lower central series of Π is defined by Π 0 = Π, and by letting Π i be the closure of [Π, Π i−1 ] for i ≥ 1. The maximal abelian quotient of Π in the category of profinite groups is Π ab = Π/Π 1 . We have a canonical commutator pairing where [x, y] = xyx −1 y −1 and x is the image of x in Π ab . (Note that Π 1 /Π 2 is central in Π/Π 2 , so this is well-defined.) This is an alternating pairing, and the image of the pairing generates all of Π 1 /Π 2 . Suppose Φ is a subgroup of the group Aut(Π) of continuous automorphisms of Π. Then Φ acts on all terms in the lower central series of Π. The pairing , is equivariant for this action in the sense that σ(x), σ(y) = σ( x, y ) for σ ∈ Φ. The following lemma is clear. There is a largest quotient (Π 1 /Π 2 ) Φ,s of Π 1 /Π 2 by a Φ-stable subgroup of the abelian group Π 1 /Π 2 such that the pairing is self-adjoint in the sense that σ(x), y Φ = x, σ(y) Φ for all σ ∈ Φ and x, y ∈ Π ab . Remark 6.2. We add an "s" to the subscript so that there is no confusion of (Π 1 /Π 2 ) Φ,s with the coinvariants of Φ acting on Π 1 /Π 2 . Suppose that Π is a closed normal subgroup of a profinite groupΠ. The conjugation action ofΠ on Π gives a subgroup Φ of Aut(Π) to which one can apply Lemma 6.1. The following result is a topological variant on exercises in [3]. The key ingredient is the universal coefficient theorem for group homology and group cohomology; see [ Proof. The map θ is the topological version of the map defined in Exercise 8 of §IV.3 of [3]. In part (c) of this exercise, the kernel of θ is identified with Ext 1 (H, A). The steps involved in showing that (6.1) is exact are outlined in Exercise 5 of §V.6 of [3]. For the remainder of this section, G will be a profinite group and Π will be its maximal pro-p quotient. Let X = Π ab be the maximal abelian, pro-p quotient of Π. Applying Proposition 6.3 in this context, we get a surjective homomorphism θ X : H 2 (X, Q p /Z p ) → Hom(X ∧ Zp X, Q p /Z p ), (6.2) and the kernel of θ X is the set of [f ] ∈ H 2 (X, Q p /Z p ) which represent abelian group extensions of X by Q p /Z p . Let us take B = ker(G → X), which is a closed subgroup of G. We have the Hochschild-Serre spectral sequence Lemma 6.4. Suppose that H 2 (G, Q p /Z p ) = 0. Both θ X and the transgression map are isomorphisms, yielding a composite isomorphism Proof. 
The spectral sequence (6.3) and the triviality of H 2 (G, Q p /Z p ) gives a four-term exact sequence of base terms The inflation map Inf is surjective as Q p /Z p is a direct limit of p-groups and X is the maximal abelian pro-p quotient of Π. Thus Tra is an isomorphism. We know from Proposition 6.3 that θ X is surjective. Since Tra is an isomorphism, we may write any element in the kernel of θ X as Tra(φ) for some φ ∈ Hom(B, Q p /Z p ) X . Then ker(φ) is a subgroup of B such that B/ ker(φ) ∼ = im(φ) is a finite cyclic p-group. We have a central extension of pro-p groups since G/B = X and φ is fixed by X. This extension provides the class of −Tra(φ) (see [16,Lemma 1.1]). By Proposition 6.3 and the discussion which follows it, the statement that θ X (Tra(φ)) = 0 is equivalent to the statement that G/ ker(φ) is an abelian group. However, G/ ker(φ) is then an abelian quotient of Π, and X is the maximal abelian quotient of Π. This proves that B/ ker(φ) is trivial in (6.5). But then φ is trivial on B, so φ = 0. Corollary 6.5. Let Q be the maximal quotient of Π that is a central extension of X, and let Z = ker(Q → X) be the abelian pro-p group giving the extension. Then Proof. Inflation provides an injection from Hom(Z, Q p /Z p ) to Hom(B, Q p /Z p ) X . It is an isomorphism because the kernel of an element of Hom(B, Q p /Z p ) X defines a central extension of X. The corollary now follows upon taking the Pontryagin dual of the isomorphism in (6.4). Central self-adjoint extensions We continue with the notation of Sections 3 and 5, supposing that n = 2 and that ℓ = rank Λ X ψ S = 2. This is equivalent to saying we have two CM types S 1 and S 2 with the property that when S = S 1 ∪S 2 , the sum of the local degrees of the primes in T 1 = S −S 1 is 2, and the same is true for T 2 = S − S 2 . We let K (p) S be the maximal pro-p extension of K inside the maximal S-ramified extension K S of K. Set G K,S = Gal(K S /K), Π = Gal(K (p) S /K), and let L i denote the fixed field of Π i for i ≥ 1. In particular, using our previous notation, L 1 = L is the maximal abelian pro-p extension of K which is unramified outside of S and X S = Gal(L/K) = Π ab . The conjugation action ofΠ = Gal(K (p) S /E) on Π gives a subgroup Φ of Aut(Π) to which one can apply Lemma 6.1, as in Remark 6.2. The resulting pairing on Π ab is the projection of the commutator pairing to the maximal quotient of Π 1 /Π 2 for which it becomes self-adjoint with respect to theΠ-action. The actions ofΠ on Π ab and on Π 1 /Π 2 factor through Gal(K/E) = G = ∆ × Γ, where ∆ = Gal(F/E) is finite, abelian and of order prime to p and Γ = Z r p . That is, Π ab = X S and Π 1 /Π 2 are modules for the group ring The following lemma is clear. We also need the following consequence of weak Leopoldt, which we prove for more general sets S. Lemma 7.2. For any subset S of S f containing a CM type, the group H 2 (G K,S , Q p /Z p ) is trivial. Proof. First, we recall that the weak Leopoldt conjecture implies the statement in the case of S f . That is, [7, Props. 3 and 4] imply that H 2 (Gal(K S f /F ′ E cyc ), Q p /Z p ) = 0 for any number field F ′ in K S f . Since E cyc ⊂ K, we then need only take the direct limit over all finite extensions F ′ of F contained in K to see that H 2 (G K,S f , Q p /Z p ) = 0. Given this, the exact sequence of base terms of the Hochschild-Serre spectral sequence arising from the exact sequence yields an exact sequence (7.1) Thus, it will suffice to show that the restriction map Res is surjective. 
Setting G = G K,S to shorten notation and letting J denote the maximal abelian pro-p quotient of Gal(K S f /K S ), the Pontryagin dual of Res is the map on Galois groups J G → X S f from the G-coinvariant group of J to the p-ramified Iwasawa module over K. It then suffices to see that this map is injective. By definition, J is generated by its inertia groups at places of K S over S c . By the usual transitivity of the Galois action on places, any two decomposition groups at primes over the same prime of K become identified in the coinvariant group J G . In particular, we may speak of the inertia group T w of J G at a prime w of K lying over a prime in S c . As any such w is unramified in K S /K, any decomposition group in G at a place over w is procyclic. Let N be the subfield of K S f which is the fixed field of the kernel of the natural surjection Gal(K S f /K S ) → J G . We have an exact sequence Consequently, any decomposition group in Gal(N/K) at a place over w is a central extension of a procyclic group by an abelian group and is therefore itself abelian. In particular, T w is a quotient of the inertia group I w in the Galois group of the maximal abelian pro-p extension of the completion K w . The product of all I w over primes w lying over primes in S c can be identified with I S c of (2.5). Since J G is generated by its inertia groups T w , we obtain a surjective map I S c → J G . Composing this with J G → X S f , it remains only to show that I S c → X S f is injective. This follows from the injectivity in Lemma 3.2, since S contains a CM type and the character ψ therein was arbitrary. Because of Lemma 7.2, θ X S of (6.2) is an isomorphism by Lemma 6.4 applied with G = G K,S . Dually, we then have canonical isomorphisms Remark 7.3. Since X S is rank two over Ω, and Ω is free of infinite rank over Z p , the (completed) wedge product X S ∧ Zp X S is not finitely generated over Ω. Thus Gal(L 2 /L) is by Lemma 6.4 also not finitely generated over Ω. In other words, the second graded quotient in the lower central series of the maximal pro-p quotient of G K,S is too big for us to readily attach to it invariants arising from finitely generated Ω-modules. We remedy this by taking (completed) wedge products over Ω and considering the associated quotients of Gal(L 2 /L). (ii) Under the isomorphism in (i), the action of g ∈ G = Gal(K/E) on Gal(N/L) by conjugation corresponds to the action of g 2 on X S ∧ Ω X S which sends v 1 ∧ v 2 to Proof. An element h ∈ Hom(X S ∧ Zp X S , Q p /Z p ) = Hom(Gal(L 2 /L), Q p /Z p ) lies in the subgroup Hom(X S ∧ Ω X S , Q p /Z p ) if and only if for all g ∈ G and x 1 , x 2 ∈ X S , so if and only if h is self-adjoint for the action of G. In view of the definitions of L 2 and N, this shows (i). For (ii), note that the commutator pairing is equivariant with respect to conjugation. Since the commutator pairing is G-adjoint when we take its values in Gal(N/L), we find For part (iii), we have where the sum is over the characters ψ : Thus v 1 ∧v 2 = 0 if ψ 1 = ψ 2 , and X ψ S ∧ Ω W X ψ S is the ψ-isotypical component of X S ∧ Ω X S . By Remark 7.4, the canonical surjection is an homomorphism of Λ-modules which identifies X ψ S ∧ Ω W X ψ S with the quotient of X ψ S ∧ Λ X ψ S by the closure of the subgroup generated by all elements of the form gv ∧ v ′ − v ∧ gv ′ with g ∈ G and v, v ′ ∈ X ψ S . However, G = ∆ × Γ, and all such elements are zero both for g ∈ ∆ and for g ∈ Γ, so we conclude µ is an isomorphism. Remark 7.6. 
Phrased differently, part (ii) of Proposition 7.5 says that the action of g ∈ G on X S ∧ Ω X S given by g(v 1 ∧v 2 ) = g(v 1 )∧v 2 = v 1 ∧g(v 2 ) for v 1 , v 2 ∈ X S is identified via part (i) with a canonical square root for the action of g by conjugation on Gal(N/L). Part (iii) tells us that X ψ S ∧ Λ X ψ S is identified with the ψ-isotypical component of Gal(N/L) with respect to this square root action. Let P be one of T 1 or T 2 . We need to characterize the image of 2 Ω I P in 2 Ω X S , for I P associated to inertia groups at the primes over those in P , as defined in (2.5). Proposition 7.7. Let N P be the maximal extension of L inside N such that all the inertia subgroups in Gal(N P /K) of primes over P in N P are abelian. Under the map induced by the commutator pairing, the cokernel of the map I P ∧ Ω I P → X S ∧ Ω X S induced by the canonical map I P → X S is identified with Gal(N P /L). Proof. We show that the kernel of the restriction map Hom(X S ∧ Ω X S , Q p /Z p ) → Hom(I P ∧ Ω I P , Q p /Z p ) is Hom(Gal(N P /L), Q p /Z p ). Let f ∈ Hom(B, Q p /Z p ) X S determine h = θ X S • Tra(f ) ∈ Hom(X S ∧ Ω X S , Q p /Z p ) via the isomorphism (6.2). We must determine when h has trivial restriction to Hom(I P ∧ Ω I P , Q p /Z p ). The interpretation of h as a commutator pairing says that this will be the case if and only if inside the central extension G K,S / ker(f ) of X S = G K,S /B by B/ ker(f ), the inverse imageĨ P in G K,S / ker(f ) of the image of I P in X S is abelian. The subgroup I ⋄ P ofĨ P generated by inertia groups of primes over P surjects onto I P . So since G K,S / ker(f ) is a central extension of X S by B/ ker(f ), the commutators of any two elements ofĨ P will be trivial if and only if the same is true of I ⋄ P . Thus the condition that h has trivial restriction to Hom(I P ∧ Ω I P , Q p /Z p ) is the same as requiring that I ⋄ P is abelian. Define M P /L to be the maximal subextension of N/L such that M P /L is unramified at all primes of M P over P . One has M P ⊂ N P because the inertia groups in Gal(M P /K) at primes over P inject into inertia groups of primes over P in the abelian group X S = Gal(L/K), hence are themselves abelian. On the other hand, N P /L need not be unramified at primes over P , so N P may be a nontrivial extension of M P . The following lemma shows that this makes no difference from the point of view of second Chern classes. Proof. Since K ⊂ L ⊂ M P ⊂ N P ⊂ N and Gal(N/K) is finitely generated as an Ω-module, the group Gal(N P /M P ) is finitely generated as an Ω-module. Since M P is the maximal extension of L in N that is unramified over P , it is equal to (N P ) J P for J P the subgroup of Gal(N P /L) generated by the inertia groups of primes of N P over P . Thus Gal(N P /M P ) is generated as an Ω-module by finitely many inertia subgroups J Q of Gal(N P /L) for primes Q over P in N P . Let p ∈ P , and let Q be a prime of N P above p. By Lemma 3.1(ii) and the definition of N P , the completion of N P at Q is contained in the maximal abelian pro-p extension K ab,(p) p of the completion K p of K at the prime under Q. Since M P /L is completely split at all primes over p, the completions of M P and L at primes under Q are equal. Thus J Q is a quotient of the Galois group H p of K ab,(p) p over the completion of L at the prime under Q. Since the J Q for Q over p ∈ P generate Gal(N P /M P ) as an Ω-module, this implies that Gal(N P /M P ) is a quotient of the Ω-submodule of I P given by p∈P Ω⊗ Zp[[Gp]] H p . (7. 
2) The ψ-isotypical component of (7.2) is contained in the kernel of the homomorphism I ψ P → Y ψ S since this homomorphism factors through the injection X ψ S → Y ψ S . By Proposition 2.11, Remark 2.3 and Lemma 3.1, the homomorphism I ψ P → (I ψ P ) * * is injective. The localization at codimension two primes of the map (I ψ P ) * * → (Y ψ S ) * * is a map between free modules of the same rank which has torsion cokernel and is therefore injective. Thus, the kernel of I ψ P → Y ψ S must be supported in codimension at least three. 3) whose kernels are supported in codimension at least 3. (Here, we use "im" to denote the not necessarily isomorphic image of a module under a canonical map.) Moreover, we have a congruence of second Chern classes of Ω-modules. Proposition 7.5 further identifies the ψ-isotypical component of the lefthand side of (7.5) with ψ-isotypical component of the right-hand side for the square root of the conjugation action on Gal(N T i /L). From (7.5), we get an isomorphism 2 Ω X S 2 Ω I T 1 + 2 Ω I T 2 ∼ = Gal((N T 1 ∩ N T 2 )/L). (7.6) By Lemma 7.8, Gal ((N T 1 ∩ N T 2 )/(M T 1 ∩ M T 2 )) is supported in codimension at least 3 as a module for Ω so (7.6) gives (7.3). Substituting these facts into Theorem 5.6, we obtain Theorem 7.9.
18,365.2
2019-03-21T00:00:00.000
[ "Mathematics" ]
Precision Study of η′ → γπ+π− Decay Dynamics

Using a low background data sample of 9.7 × 10^5 J/ψ → γη′, η′ → γπ+π− events, which are 2 orders of magnitude larger than those from the previous experiments, recorded with the BESIII detector at BEPCII, the decay dynamics of η′ → γπ+π− are studied with both model-dependent and model-independent approaches. The contributions of the ω and the ρ(770)−ω interference are observed for the first time in the decays η′ → γπ+π− in both approaches. Additionally, a contribution from the box anomaly or the ρ(1450) resonance is required in the model-dependent approach, while the process-specific part of the decay amplitude is determined in the model-independent approach.
PACS numbers: 13.20.Gd, 14.40.Be

The radiative decay η′ → γπ+π− is the second most probable decay mode of the η′ meson, with a branching fraction of (28.9 ± 0.5)% [1], and is frequently used for tagging η′ candidates. In the vector meson dominance (VMD) model [2], this process is dominated by the decay η′ → γρ(770) (hereafter referred to as ρ0). In the past, the dipion mass distribution was studied by several experiments, e.g., JADE [3], CELLO [4], PLUTO [5], TASSO [6], TPC/γγ [7], and ARGUS [8], and a peak shift of about +20 MeV/c² for the ρ0 meson with respect to the expected position was observed. Dedicated studies, using about 2000 η′ → γπ+π− events, concluded that a lone ρ0 contribution in the dipion mass spectrum did not describe the experimental data [9]. This discrepancy could be attributed to a higher term of the Wess-Zumino-Witten anomaly, known as the box anomaly, in the chiral perturbation theory (ChPT) Lagrangian [10]. To determine the ratio of these two contributions, it was suggested to fit the dipion invariant mass spectrum by including an extra nonresonant term in the decay amplitude to account for the box anomaly contribution [11]. Using a sample of 7490 ± 180 η′ events, evidence for the box anomaly contribution with a 4σ significance was reported by the Crystal Barrel experiment [12], whereas the observation was not confirmed by the L3 experiment [13] using 2123 ± 53 events.

A recently proposed model-independent approach [14], based on ChPT and dispersion theory, relates the η/η′ → γπ+π− decay amplitudes directly to the e+e− → π+π− process, which dominates the hadron production cross section at low energies and gives the largest hadronic contribution to the muon anomalous magnetic moment [15]. The amplitudes for η/η′ → γπ+π− therein are given as a product of the pion vector form factor F_V(s) and a reaction-specific part P(s), where s is the π+π− invariant mass squared. The F_V(s) term is extracted from the e+e− → π+π− cross section or from P-wave isovector ππ phase shifts. The P(s) term, which can be expanded into a Taylor series around s = 0, is expected to be similar for η and η′ decays [16], and has been determined in η decays by WASA-at-COSY [17] and KLOE [18], but not yet for η′ decays due to the limited statistics.

In this Letter, we present a precision measurement of the dipion mass distribution for the η′ → γπ+π− process originating from the radiative decays J/ψ → γη′, based on (1310.6 ± 7.0) × 10^6 J/ψ events [19] produced in e+e− annihilation and collected with the BESIII detector [20]. Both model-dependent and model-independent approaches are used to investigate the decay dynamics.

Candidates of J/ψ → γη′, η′ → γπ+π− are required to have two charged tracks with opposite charge and at least two photons. The selection criteria for charged tracks and photon candidates are the same as those in Ref. [21], except for the minimum energy requirement of the photon candidates in the barrel showers, which is 40 MeV instead of 25 MeV in this analysis.
A four-constraint (4C) energy-momentum conservation kinematic fit is performed under the γγπ + π − hypothesis, and a loose requirement of χ 2 4C < 100 is imposed.This requirement removes 39.3% background while the efficiency loss is 2.1%.For events with more than two photon candidates, the combination with the smallest χ 2 4C is retained.In order to remove background events with a π 0 in the final states (e.g., J/ψ → π + π − π 0 , γπ + π − π 0 ), we require that the γγ invariant mass is outside the π 0 mass region, |M (γγ)−m π 0 | > 0.02 GeV/c 2 , where m π 0 is the nominal mass of the π 0 [1].Since the radiative photon from the η ′ is always more soft than that from the J/ψ decays, the γπ + π − combinations closest to the nominal η ′ mass (m η ′ ), are kept as η ′ candidates.After the above selection, a clear η ′ signal is observed in the γπ + π − invariant mass spectrum, as shown in Fig. 1.To select candidate events from η ′ decays, |M (γπ An inclusive Monte Carlo (MC) sample of 1.2 × 10 9 J/ψ decay events that are generated with the lundcharm and evtgen models [22,23] is used to investigate possible background processes.These include events with no η ′ 's in the final state (non-η ′ ) and those from η ′ → π + π − π 0 .We use the events in the η ′ mass sideband regions (0.04 to estimate the non-η ′ background contribution, which is at a level of 1.42%.For the η ′ → π + π − π 0 (γγ) back-ground, a MC study predicts the number of background events to be 0.16%, and its effect is not included in the fit, but taken into consideration in the systematic uncertainty study. With the η ′ mass window requirement, a low background sample of about 9.7 × 10 5 η ′ candidates is obtained, which is about 120 times larger than the previous largest sample reported by the Crystal Barrel experiment [12].The background subtracted and efficiency corrected angular distribution of π + in the helicity frame of the π + π − system, | cos θ π + |, is shown in Fig. 2. The distribution is very well described by dN/d cos θ π + ∝ sin 2 θ π + , which is expected for a P -wave dipion system.A detailed MC study indicates that the reconstructed π + π − invariant mass M (π + π − ) has a small shift with respect to the true value, and this is corrected as a function of M (π + π − ) according to the values obtained in MC studies.The maximum shift is less than 0.75 MeV/c 2 .The M (π + π − ) distribution with the mass shift correction is illustrated as dots with error bars in Fig. 3.The dipion mass dependent differential rate is given by [12] dΓ dM(π and A is the decay amplitude.Both the model-dependent and modelindependent approaches are carried out to investigate the decay dynamics. In the model-dependent study, by assuming that the possible non-ρ 0 contributions are from ω, ρ(1450) (hereafter referred to as ρ ′ ), and the box anomaly, we have [11,12,24] where δ and β are complex numbers representing the con-tributions of the ω and ρ ′ mesons relative to the ρ 0 ; α is a constant accounting for the box anomaly contribution [11]; and BW GS ρ (s), BW ω (s), and BW GS ρ ′ (s) are the propagators for the ρ 0 , ω, and ρ ′ mesons, respectively.Since the ρ 0 component is dominant in the M (π + π − ) distribution, its shape parametrization plays a vital role in the determination of other components, and is represented with the Gounaris-Sakurai approach (GS) [25,26]. 
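As a rough illustration of how such a model-dependent amplitude can be assembled, the sketch below combines a ρ0 propagator (a plain relativistic Breit-Wigner is used as a stand-in for the Gounaris-Sakurai form), a ρ-ω interference term weighted by δ, a constant box-anomaly-like term α, and a ρ(1450) term weighted by β. The exact propagator combination, normalization, and phase-space factors of the published analysis are not reproduced in this excerpt, so this is only a hedged sketch, not the collaboration's amplitude.

```python
import numpy as np

M_RHO, W_RHO = 0.775, 0.149        # assumed rho(770) mass/width in GeV
M_OMEGA, W_OMEGA = 0.78266, 0.00868
M_RHOP, W_RHOP = 1.465, 0.400      # rho(1450)
M_PI = 0.13957

def bw(s, m, w):
    """Simple relativistic Breit-Wigner, used here in place of the GS propagator."""
    return m**2 / (m**2 - s - 1j * m * w)

def amplitude(m_pipi, delta, beta, alpha):
    """Toy model-dependent amplitude: rho0 with a rho-omega interference term,
    a constant (box-anomaly-like) term alpha, and a rho(1450) term.
    The precise combination used in the analysis may differ."""
    s = m_pipi**2
    return bw(s, M_RHO, W_RHO) * (1.0 + delta * bw(s, M_OMEGA, W_OMEGA)) \
           + alpha + beta * bw(s, M_RHOP, W_RHOP)

def dgamma_dm(m_pipi, delta, beta, alpha):
    """Differential rate up to normalization; a P-wave q^3 phase-space factor
    for the pion pair is included as an illustrative choice only."""
    q = np.sqrt(np.maximum(m_pipi**2 / 4.0 - M_PI**2, 0.0))
    return q**3 * np.abs(amplitude(m_pipi, delta, beta, alpha))**2

# toy usage over the fit range quoted in the text
m = np.linspace(0.34, 0.90, 561)
spectrum = dgamma_dm(m, delta=0.1 + 0.05j, beta=0.0, alpha=0.5)
print("toy spectrum peaks at M =", round(float(m[np.argmax(spectrum)]), 3), "GeV/c^2")
```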
, where M ω and Γ ω are the ω-meson mass and width, respectively.The ρ ′ is also described with the GS parametrization.The masses and widths for the ω and ρ ′ mesons are fixed to their nominal values [1], while those for ρ 0 are floated in the fit. Binned maximum likelihood fits are performed to the M (π + π − ) distribution between 0.34 and 0.90 GeV/c 2 with different scenarios, where the decay amplitude is corrected by a M (π + π − )-dependent detection efficiency and is smeared with a M (π + π − )-dependent Gaussian function to account for the experimental mass resolution.The non-η ′ background is represented by the η ′ sideband events as discussed above, and is fixed in the fit.Fits with only the ρ 0 contribution and with additional ρ 0 -ω interference give the goodness of fit χ 2 /ndf =3365/110 and 3094/108, respectively, where ndf is the number of degrees of freedom.The results indicate that these components are insufficient to describe the data and extra contributions are necessary.To improve the description of the data, we performed a fit, shown in Fig. 3(a), including the additional box anomaly term together with ρ 0 -ω interference, and much better agreement with χ 2 /ndf =207/107 is obtained.An alternative fit by replacing the box anomaly with the ρ ′ component gives considerably worse agreement with χ 2 /ndf =303/106, as illustrated in Fig. 3(b).Fit results of the above two cases summarized in Table I.Both cases yield ρ 0 mass and width close to those in the PDG [1].A fit including both the ρ ′ and box anomaly gives a reasonable goodness of fit (χ 2 /ndf =134/105).However, a very strong correlation in amplitude between the box anomaly and the ρ ′ components, i.e. the correlation coefficient is -0.986, is observed, due to the tail of the ρ ′ having a similar line shape as that of the box anomaly.Thus they are not well under control, and it is hard for one to distinguish them in the fitting.Whereas the mass and width of the ρ 0 are stable, which are 776.43 ± 0.36, 150.26 ± 0.56 MeV/c 2 , respectively.Therefore a refined model dependent amplitude beyond including just the ρ ′ or the box anomaly contribution is desirable. A fit to the data gives κ = 0.992 ± 0.039 GeV −2 , λ = −0.523± 0.039 GeV −4 , ξ = 0.199 ± 0.006 with χ 2 /ndf =145/109, where the uncertainties are statistical only.The fit result is shown in Fig. 4, and the statistical significances of nonzero quadratic term and ω term are 13σ and 34σ, respectively, which are estimated with the changes of the log likelihood value and the number of degree of freedoms.An alternative fit without the ω contribution yields κ = 1.420 ± 0.047 GeV −2 and λ = −0.951± 0.046 GeV −4 , which is compatible to a recent prediction λ = −1.0 ± 0.1 GeV −4 [32].However, this fit corresponds to a very poor goodness of fit (χ 2 /ndf =1351/110) and fails to describe the data.Different from the measurements of η → γπ + π − decays [17,18], which are not sensitive to the quadratic term, both the quadratic term and the ω contribution are significant in the η ′ → γπ + π − decays.The systematic uncertainties in the model-dependent and model-independent approaches are discussed in detail in the following and are summarized in the Supplemental Material [33].The total systematic uncertainty is the quadrature sum of the individual values by assuming them to be independent. 
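For the model-independent approach described above, the fitted quantity is the product of the pion vector form factor F_V(s) and the process-specific part P(s) expanded around s = 0. The sketch below evaluates a toy version of this spectrum using the quoted central values κ = 0.992 GeV^-2, λ = -0.523 GeV^-4, and ξ = 0.199; the specific ω admixture term, the stand-in form factor, and the phase-space factor are illustrative assumptions and may differ from the forms actually used in the analysis.

```python
import numpy as np

M_PI = 0.13957

def p_of_s(s, kappa, lam, xi, m_omega=0.78266, w_omega=0.00868):
    """Toy process-specific factor: Taylor terms 1 + kappa*s + lam*s^2 plus an
    omega admixture weighted by xi (the exact omega term is not reproduced here)."""
    bw_omega = m_omega**2 / (m_omega**2 - s - 1j * m_omega * w_omega)
    return 1.0 + kappa * s + lam * s**2 + xi * bw_omega

def fv_toy(s, m_rho=0.775, w_rho=0.149):
    """Stand-in pion vector form factor; the analysis takes F_V(s) from
    e+e- -> pi+pi- data or P-wave pipi phase shifts instead."""
    return m_rho**2 / (m_rho**2 - s - 1j * m_rho * w_rho)

def spectrum(m_pipi, kappa, lam, xi):
    s = m_pipi**2
    q = np.sqrt(np.maximum(s / 4.0 - M_PI**2, 0.0))
    return q**3 * np.abs(p_of_s(s, kappa, lam, xi) * fv_toy(s))**2

# toy usage with the quoted central values (statistical fit result)
m = np.linspace(0.34, 0.90, 281)
y = spectrum(m, kappa=0.992, lam=-0.523, xi=0.199)
print("toy spectrum maximum near M =", round(float(m[np.argmax(y)]), 3), "GeV/c^2")
```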
The uncertainty associated with the 4C kinematic fit originates from the difference between data and MC simulation.This difference is reduced by correcting the track helix parameters of the MC sample as described in Ref. [34].To estimate the corresponding uncertainty, the analysis is repeated without the track helix parameters correction, and the resultant change is assigned as the uncertainty. The MDC tracking and photon detection efficiencies are studied based on a clean sample of J/ψ → ρπ.The differences between data and MC simulation are investigated as a function of momentum (energy), and are less than 1% for each charged track and 1% for each photon [35].To evaluate their impact on the results, an event-by-event correction on the tracking and photon detection efficiency is performed as a function of momentum (energy).The resultant changes on the results are taken as the systematic uncertainties. The uncertainty from the η ′ mass window requirement is evaluated by varying the required values by ± 6 MeV/c 2 , which is the mass resolution from the MC simulation, and the maximum change of the results is taken as the uncertainty. Systematic sources related with the fit procedure include the binning, the fit range, the background, the mass resolution of M (π + π − ), and the input parameters in the fit.The uncertainty from binning is studied with the same fit procedure with varied bin width.For the uncertainty due to the fit range, we take the larger change of the fit result with varied fit ranges as the uncertainty.Two systematic sources, i.e. the η ′ sideband and the small contribution of η ′ → π + π − π 0 , are considered as the uncertainty related with the background in the fit.The former one is estimated by changing the sideband region, while the latter one is studied by including the background in the fit with a fixed magnitude and shape in accordance with the MC study.We assign the quadratic sum of the two uncertainties as the total background uncertainty.The impact caused by the π + π − mass resolution is estimated by varying the resolution by ±10% in the fit, and the maximum change of the fit result is assigned as the uncertainty.For the model dependent study, the uncertainty due to the mass and width of ω, ρ ′ resonances is estimated by varying the input values with ±1σ of the corresponding uncertainties from the PDG [1], respectively, and taking the quadratic sum of the maximum change of the fit results as the uncertainty of the resonance parameters. For the measurement of the branching fraction of η ′ decays into γρ 0 , γω, γ box anomaly and γρ ′ , the ad-ditional uncertainties from the branching fractions of J/ψ → γη ′ [1] and the number of J/ψ events [19] are also taken into account. In the model independent approach, the uncertainty associated with the input pion vector form factor F V (s), is estimated by an alternative fit incorporating the line shape of F V (s) from Ref. [36].The resulting differences, 16.4%, 34.7%, and 3.4% for the κ, λ, ξ parameters, respectively, determine the systematic uncertainty.Since this uncertainty is theoretically dependent, it is treated as a separated uncertainty in the final results. 
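The statement that the total systematic uncertainty is the quadrature sum of independent contributions corresponds to a one-line computation; the individual source values below are placeholders, not the numbers from the Supplemental Material.

```python
import numpy as np

# placeholder relative uncertainties (%) for independent systematic sources
sources = {"kinematic fit": 0.3, "tracking": 0.5, "photon detection": 0.5,
           "eta' mass window": 0.2, "fit procedure": 0.8}
total = np.sqrt(sum(v**2 for v in sources.values()))
print(f"total systematic uncertainty (toy inputs): {total:.2f}%")
```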
In summary, the η′ → γπ+π− decay dynamics is studied based on a sample of 9.7 × 10^5 events originating from the radiative decay J/ψ → γη′ in 1.31 × 10^9 J/ψ events collected with the BESIII detector. We have measured the dipion invariant mass distribution and performed fits using model-dependent and model-independent approaches. For the first time, the ω contribution is observed in the dipion mass spectrum in the decays η′ → γπ+π−. The model-dependent fit indicates that the ρ0 and ω components together with their interference alone fail to describe the data, and an extra significant contribution, i.e., the box anomaly or the ρ′, is found to be necessary for the first time. The corresponding fit results and the measured branching fractions are summarized in Table I. The data call for a more complete model-dependent amplitude beyond just including the box anomaly or ρ′ contribution for the M(π+π−) spectrum.

The model-independent approach [14] provides a satisfactory parametrization of the dipion invariant mass spectrum and yields the parameters of the process-specific part P(s) to be κ = 0.992 ± 0.039 ± 0.067 ± 0.163 GeV^-2, λ = -0.523 ± 0.039 ± 0.066 ± 0.181 GeV^-4, and ξ = 0.199 ± 0.006 ± 0.011 ± 0.007, where the first uncertainties are statistical, the second are systematic, and the third are theoretical. In contrast to the conclusion in Ref. [14] based on the limited statistics from the Crystal Barrel experiment [12], our result indicates that the quadratic term and the ω contribution in P(s), corresponding to statistical significances of 13σ and 34σ, respectively, are necessary.

FIG. 1. Invariant mass spectrum of γπ+π−. Dots with error bars represent the data, and the hatched histograms are MC simulations, where the backgrounds are normalized to the expected contributions as described in the text.

FIG. 2. Background subtracted and efficiency corrected angular distribution of π+ in the helicity frame of the π+π− system. Dots with error bars are data, and the curve is the fit with a sin²θπ+ function.

FIG. 3. Model-dependent fit results in case (a) ρ0-ω-box anomaly and (b) ρ0-ω-ρ′. Dots with error bars represent data, the green shaded histograms are the background from η′ sideband events, the red solid curves are the total fit results, and others represent the separate contributions as indicated. To be visible, the small contributions of ω, the box anomaly (ρ′), and the interference between ω and the box anomaly (ρ′) are scaled by a factor of 20.

FIG. 4. Results of the model-independent fit with ω interference. Dots with error bars represent data, the (green) shaded histogram is the background contribution from η′ sideband events, and the (red) solid curve is the fit result.

The BESIII Collaboration thanks the staff of BEPCII and the IHEP computing center for their strong support. This work is supported in part by National Key Basic Research Program of China under Contract No. 2015CB856700; National Natural Science Foundation of China (NSFC) under Contracts No. 11565006, No. 11235011, No. 11335008, No. 11425524, No. 11625523, No. 11635010, No. 11675184, No. 11735014; the Chinese Academy of Sciences (CAS) Large-Scale Scientific Facility Program; the CAS Center for Excellence in Particle Physics (CCEPP); Joint Large-Scale Scientific Facility Funds of the NSFC and CAS under Contracts No. U1332201, No. U1532257, No. U1532258; CAS Key Research Program of Frontier Sciences under Contracts No. QYZDJ-SSW-SLH003, No. QYZDJ-
4,680.2
2017-12-05T00:00:00.000
[ "Physics" ]
6G Green IoT Network: Joint Design of Intelligent Reflective Surface and Ambient Backscatter Communication Ambient backscatter communication (AmBC) is one of the candidate solutions for the 6G green internet of things (IoT) network. However, the uncontrollability of the radio frequency (RF) environment is one of the main obstacles hindering the popularization of AmBC. The intelligent reflective surface (IRS) can improve the radio frequency environment by adjusting the phase and amplitude of the incident signal, which provides the possibility for the widespread deployment of AmBC. Currently, there is no discussion about the joint optimization of AmBC and IRS. In this paper, we introduce a novel IRS and AmBC joint design method. The purpose of this method is to jointly design the beamforming vector, the IRS phase shift, and the reflection coefficient of AmBC to minimize the AP’s transmit power while ensuring the quality of service of the AmBC system and the primary communication system. Due to the nonconvexity of the problem, the time complexity of solving the problem through exhaustive search will be very high. Therefore, we propose a joint design method based on an iterative beamforming vector, IRS phase shift, and reflection coefficient to minimize the AP’s transmit power. This method can effectively reduce the transmission power of the access point (AP), and the simulation results prove the effectiveness of the method. Introduction With the rapid development of mobile communication technology, the internet of things (IoT) has been greatly developed and popularized. In particular, B5G/6G further promotes the application range of IoT, such as smart home, smart manufacturing, and smart cities [1,2]. However, due to the increase in diversified requirements for the application scenarios of the IoT, diversified requirements are also put forward for the needs of the IoT devices. If the IoT device [3,4] actively generates signals for wireless communication, it will consume a lot of energy, which will undoubtedly reduce the standby time of the device. Increasing the battery capacity will increase the standby time of the device, but this will undoubtedly increase the size and cost of the device. Especially for IoT devices such as wearable devices, they are very sensitive to device size and standby time. The battery capacity and size of IoT devices are the restrictive factors for their widespread popularity. Therefore, low-energy IoT device transmission solutions are an important research direction to realize the potential of the IoT [3][4][5][6][7]. Radio frequency (RF) energy harvesting technology can obtain energy from external radio frequency sources and is one of the important research directions of low-energy consumption IoT device transmission solutions. RF energy harvesting technology has been widely used in low-power IoT devices. Wireless IoT devices can use RF energy harvesting technology to collect energy to maintain their normal operations. In this way, the wireless device can run for a long time without any manual intervention, thereby reducing the operation and maintenance costs of the device. Therefore, RF energy harvesting is particularly suitable for powerconstrained wireless networks. There are three main types of RF energy harvesting schemes, including the synchronous wireless information and power transmission network (SWIPT), wireless power communication network (WPCN), and wireless power transmission (WPT) [8]. 
(1) The SWIPT scheme allows the transmitter to send information and energy at the same time, and the user can choose to decode the information or collect energy. (2) The WPCN scheme allows user equipment to collect energy from RF energy signals and then actively send data. (3) The WPT scheme allows the power transmitter to transmit energy to the user equipment. Although these solutions have their application value in wireless networks, there are still some limitations. First, these solutions require a dedicated RF source to send RF energy or information to users. Secondly, active RF data transmission requires a complicated circuit design and consumes a lot of power. As a green communication technology, ambient backscatter communication (AmBC) can effectively solve the above-mentioned limitations of traditional radio frequency energy harvesting technology [8,9]. In the AmBC system, backscatter devices can communicate by using broadcast signals from RF sources such as cellular base stations, FM towers, and TV towers. In the AmBC system, the backscatter transmitter can modulate the data to the surrounding ambient signal and reflect it to the backscatter receiver. Therefore, AmBC does not need a dedicated frequency spectrum for data transmission. Therefore, AmBC has advantages that other communication methods do not have. First, AmBC does not require a dedicated spectrum for data transmission, which improves spectrum utilization. Secondly, since AmBC does not require a dedicated RF source, maintenance costs and deployment costs are reduced. These advantages can make AmBC widely used in many practical applications. AmBC has huge application potential in future low-energy scenarios, but it still faces many challenges. The quality of service (QoS) of AmBC is affected by factors such as the location of the RF, the type of RF, and the RF environment. Therefore, AmBC must be designed specifically for specific RF sources. In addition, to use ambient signals from licensed sources, the AmBC protocol must ensure that it does not interfere with the QoS of licensed users. Intelligent reflective surfaces (IRSs) [10][11][12] can realize an intelligent and reconfigurable radio propagation environment for the B5G/6G wireless communication system [13][14][15][16][17][18][19][20]. The IRS is a plane containing a large number of lowcost passive reflective elements, each of which can independently change the phase and/or amplitude of the incident signal. The IRS can improve the required channel conditions, thereby achieving a substantial increase in wireless communication capacity and reliability. Intelligent reflective surfaces (IRSs) also have various practical advantages in implementation. First, compared with traditional active antenna arrays, IRS can only passively reflect impact signals without generating radio frequency resonance. Second, IRS does not have any noise amplification and self-interference. Third, due to the simple structure of the IRS, it can be easily deployed in any desired location. Finally, IRS has good compatibility and compatibility and can be integrated into existing communication systems. There are many studies on AmBC or IRS [3][4][5][6][7][8][9][10][11], but there are no articles on the joint optimization design of AmBC and IRS. For example, [9] evaluated the performance of the environmental backscattering system but did not consider the role of IRS. Reference [10] used the IRS to enhance the active communication system to achieve the goal of minimum transmission power. 
Reference [11] combined IRS beamforming and reflection design to enhance bistatic backscatter networks. The IRS is a means to optimize the performance of AmBC, so it is necessary to study the joint optimization design of IRS and AmBC. Therefore, in this article, we carry out a joint optimization design for IRS and AmBC that guarantees the quality of service of both the active communication and the AmBC while minimizing the transmit power. The innovations of this paper are as follows: (1) We consider an IRS-assisted spectrum sharing system, where AmBC rides on the primary communication system. The receivers in the two systems are the same receiver, which can demodulate the signals of both systems; we call this receiver the cooperative receiver (CR). Specifically, after the CR demodulates the signal of the primary communication system, the signal of the AmBC is then demodulated based on the demodulated primary signal. (2) Under the condition that both the primary communication system and the AmBC are constrained by their quality-of-service requirements, we study the problem of minimizing the transmit power of the access point (AP) with IRS assistance. This problem is nonconvex, so convex optimization methods cannot be applied directly, and solving it by exhaustive search would incur very high time complexity. Therefore, we propose an iterative optimization method to minimize the transmit power of the AP. Through joint beamforming and IRS phase shift design, the proposed iterative optimization method can effectively reduce the minimum transmit power of the AP. The rest of this paper is organized as follows. Section 2 introduces the system model and problem formulation. Section 3 presents the optimization algorithm based on alternate iteration. Section 4 presents numerical results, and Section 5 concludes the paper. Notations: scalars are represented by italic letters, vectors by bold lowercase letters, and matrices by bold uppercase letters. |x| represents the modulus of a complex number. ‖x‖ represents the Euclidean norm of the complex-valued vector x. diag(x) represents a diagonal matrix whose diagonal entries are the entries of x. tr(X) represents the trace of the square matrix X. X ≽ 0 means that X is a positive semidefinite matrix. System Model and Problem Formulation 2.1. System Setup. The intelligent reflective surface- (IRS-) enhanced spectrum sharing system includes a primary communication system and a secondary communication system, as shown in Figure 1. The primary communication system is a MISO downlink communication system, which consists of a receiver and an access point (AP) with M antennas. The secondary communication system is an AmBC system, which consists of a receiver and a backscatter device (BD). The backscatter device in the secondary system is a passive device, and its information transmission depends on the AP signal. We assume that the receiver in the primary communication system and the receiver in the secondary communication system are the same receiver; in other words, this receiver receives and demodulates the signal from the backscatter transmitter and the signal from the AP simultaneously. For ease of exposition, we refer to this receiver as the cooperative receiver (CR). To improve the QoS, an IRS with N passive reflective elements is used to assist the communication of this spectrum sharing system.
The IRS is equipped with an intelligent controller that adapts to the signal propagation environment, and each reflective element can dynamically adjust the amplitude and phase shift of the incident signal. 2.2. IRS Model. The IRS is a very promising green communication technology that can reconfigure the wireless propagation environment through software. The IRS can modify the wireless channel between the transmitter and the receiver through highly controllable reflection units, which paves the way for the realization of a controllable wireless environment. Since the IRS has no RF chain, it has the advantages of low cost and low energy consumption. Because the beam of the IRS is controllable, there is no need for complex interference management between IRSs. We assume that the IRS is a frequency-selective surface, i.e., it can pass, absorb, or reflect specific RF signals. That is to say, the IRS can reflect RF signals in a specific frequency band but cannot reflect RF signals in other frequency bands. The IRS consists of N reflecting elements, and each element n ∈ {1, 2, ⋯, N} can reflect the incident signal with a complex reflection coefficient. The complex reflection coefficient of the nth reflection element can be expressed as β_n e^{jθ_n}, where β_n ∈ [0, 1], ∀n ∈ {1, 2, ⋯, N}, is the amplitude gain and θ_n ∈ [0, 2π), ∀n ∈ {1, 2, ⋯, N}, is the phase shift. Although in theory the amplitude gain can be adjusted within the interval [0, 1], adjusting the amplitude gain and the phase shift at the same time would greatly increase the complexity of the system. Therefore, without loss of generality, we take the upper bound of the interval [0, 1] as the amplitude gain of all reflection elements, i.e., β_n = 1, ∀n ∈ {1, 2, ⋯, N}. Then, the reflection coefficient matrix can be written as Θ = diag(e^{jθ_1}, e^{jθ_2}, ⋯, e^{jθ_N}). 2.3. Backscatter Model. Since both the IRS and the BD can reflect signals, a signal could in principle be reflected multiple times between the IRS and the BD, which would greatly complicate the problem. We assume that the AP transmits a continuous-wave signal with carrier frequency f_c and bandwidth B to communicate with the CR. To avoid the above-mentioned problem, the BD adopts the following modulation method. First, the BD uses an FSK-like modulation to shift the signal from frequency f_c to frequency f_c + Δf_c (this step only performs the frequency shift and does not carry BD data), and then modulates the data that the BD needs to send at frequency f_c + Δf_c, where Δf_c represents the frequency shift of the carrier after BD modulation. We assume that the IRS can only reflect signals within a specific frequency band: it reflects the RF signal with carrier frequency f_c and bandwidth B but cannot reflect the RF signal with carrier frequency f_c + Δf_c and bandwidth B. Therefore, to ensure that the signal sent by the AP and the signal reflected by the BD do not overlap in frequency, Δf_c needs to satisfy the constraint Δf_c ≥ B. Although this process occupies additional spectrum resources, it effectively eliminates multiple reflections between the IRS and the BD. In ambient backscatter communication, we also need to consider the power consumption constraint of the BD circuit; that is, the ambient signal energy received by the BD must satisfy the circuit power consumption constraint in order to activate the BD circuit for backscatter communication.
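Before turning to the circuit power constraint below, the following minimal NumPy sketch shows how the unit-modulus reflection matrix Θ = diag(e^{jθ_1}, ⋯, e^{jθ_N}) defined above reshapes the AP-IRS-BD cascade. The channel realizations, dimensions, and names (H_ai, h_ab, h_ib) are illustrative assumptions rather than quantities taken from this paper.

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 8, 100  # AP antennas and IRS elements (illustrative values)

# Illustrative Rayleigh-fading channels (placeholders, not the paper's realizations)
H_ai = (rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M))) / np.sqrt(2)  # AP -> IRS
h_ab = (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)            # AP -> BD direct
h_ib = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)            # IRS -> BD

theta = rng.uniform(0.0, 2.0 * np.pi, N)      # phase shifts theta_n in [0, 2*pi)
Theta = np.diag(np.exp(1j * theta))           # beta_n = 1 for all elements, as assumed above

# Effective AP -> BD channel at carrier f_c: direct path plus the IRS-reflected path.
# (The backscattered signal at f_c + delta_f_c bypasses the IRS by assumption.)
g_ab = h_ab + h_ib @ Theta @ H_ai

w = rng.standard_normal(M) + 1j * rng.standard_normal(M)
w /= np.linalg.norm(w)                        # unit-power beamformer, for illustration only
print("power impinging on the BD:", np.abs(g_ab @ w) ** 2)
```

Choosing the phases θ_n so that the reflected path adds coherently with the direct path is what the phase-shift optimization in Section 3 formalizes.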
Assume that the minimum received signal power required to maintain the normal operation of the BD circuit is P_min. When the BD is semipassive, the BD needs another power source to supply power; in this case, all of the received signal is used for reflection communication. When the BD is passive, the energy of the input signal must be greater than P_min; in this case, part of the received signal is used to power the BD circuit, and the other part is used for backscatter communication. Since the passive BD has no battery, it has advantages in volume and cost compared with the semipassive BD. Therefore, in the following analysis we mainly consider the passive BD. 2.4. Transmission Model. We assume that the channel is flat fading and does not change during the coherence time. We denote the channels of AP-IRS, AP-BD, AP-CR, IRS-CR, IRS-BD, and BD-CR accordingly; in particular, the BD-CR channel is h_bc ∈ ℂ^{1×1}. We assume that the AP applies a linear beamforming vector, denoted as w ∈ ℂ^{M×1}. Then, the signal transmitted by the AP is given as follows: where s is the signal that the primary communication system needs to send and E(|s|²) = 1. In this paper, we assume that the IRS reflects the RF signal with carrier frequency f_c but cannot reflect the RF signal with carrier frequency f_c + Δf_c (Figure 1: IRS-enhanced spectrum sharing system). Then, the signal received by the BD is mainly composed of two parts: one part comes from the AP and the other part is reflected by the IRS. The signal received by the BD can be expressed as Since no signal processing is performed in the BD, there is no noise term in (2), which is consistent with the backscatter literature. Since the BD is a passive device, it needs to collect energy to power its circuit operation. Therefore, the signal received by the BD is divided into two parts, which are used for circuit operation and signal reflection. Denoting the reflection efficiency by α, the coefficient α needs to satisfy the following constraint: Let c denote the signal of the BD; then the signal reflected by the BD is given by The remaining part is used to support the normal operation of the BD circuit. The power of the signal input for energy harvesting can be expressed as where η denotes the energy conversion efficiency of the BD. Assuming that the minimum power required to support the operation of the BD circuit is P_min, the following constraint should be satisfied: Denote the received signal at the CR as y_c(n), which is mainly composed of the signals from the AP, the IRS, and the BD. Then, y_c(n) is given by where n ∼ CN(0, σ²) denotes the Gaussian noise. Then, the received signal-to-interference-plus-noise ratio (SINR) of the demodulated s(n) at the CR is given by We assume that the primary communication system has a minimum SINR requirement, denoted γ_p^th. Then, the QoS constraint of the primary communication system is given by After the CR successfully demodulates s(n), the CR can decode the received signal c(n) by performing successive interference cancellation (SIC). Then, the instantaneous received SNR of the demodulated c(n) at the CR is given by We assume that the AmBC has a minimum SNR requirement γ_a^th. To ensure the QoS of the AmBC, the following condition must be met: 2.5. Problem Formulation. We study the minimum transmit power problem under the condition that the CR and the BD meet their SNR requirements. Therefore, we need to jointly optimize the beamforming vector of the AP, the phase shifts of the IRS, and the backscatter coefficient of the BD to minimize the transmit power of the AP.
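Since the displayed equations of this subsection are referenced only by number above, the sketch below merely illustrates the structure of the constraints entering the problem formulation: the BD splits its received power between energy harvesting and reflection, the CR treats the backscattered signal as interference when demodulating s(n), and after SIC the BD signal is decoded against noise alone. The composite channels g_ab and g_ac, the scalar h_bc, and every numerical threshold are assumed placeholders, not values from the paper.

```python
import numpy as np

def qos_check(w, g_ab, g_ac, h_bc, alpha,
              sigma2=1e-11, eta=0.8, p_min=1e-6, gamma_p_th=100.0, gamma_a_th=20.0):
    """Illustrative evaluation of the QoS and circuit-power constraints.

    w     : AP beamforming vector, shape (M,)
    g_ab  : composite AP->BD channel (direct + IRS path), shape (M,)
    g_ac  : composite AP->CR channel (direct + IRS path), shape (M,)
    h_bc  : scalar BD->CR channel
    alpha : reflection efficiency in (0, 1]
    """
    p_bd_in = np.abs(g_ab @ w) ** 2               # signal power impinging on the BD
    p_reflect = alpha * p_bd_in                   # power of the backscattered signal
    p_harvest = eta * (1.0 - alpha) * p_bd_in     # power fed to the BD circuit

    # Primary link: the backscattered signal acts as interference before SIC.
    sinr_primary = np.abs(g_ac @ w) ** 2 / (p_reflect * np.abs(h_bc) ** 2 + sigma2)
    # Backscatter link: after SIC removes s(n), only noise remains.
    snr_backscatter = p_reflect * np.abs(h_bc) ** 2 / sigma2

    return (sinr_primary >= gamma_p_th,           # QoS of the primary system
            snr_backscatter >= gamma_a_th,        # QoS of the AmBC system
            p_harvest >= p_min)                   # BD circuit power constraint
```

A design (w, Θ, α) is feasible for the optimization problem stated next exactly when all three returned flags are true.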
Then, the corresponding optimization problem can be written as 0 ≤ θ_n < 2π, ∀n = 1, 2, ⋯, N. (12f) Obviously, (P1) is a nonconvex problem, and its globally optimal solution cannot be obtained efficiently in general. Next, we analyze and simplify this problem so that it can be solved effectively. Optimization Algorithm Based on Alternate Iteration It can be seen that problem (P1) involves multiple coupled variables, which makes it difficult to solve. We use alternating optimization to solve problem (P1), which iteratively optimizes one variable while holding the others constant. In this section, we introduce in detail how to solve problem (P1). Transmit Beamforming Optimization. When Θ and α are given, problem (P1) reduces to the beamforming subproblem (P2), which can be recast in terms of the matrix variable X = ww^H as problem (P3), where the objective (14a) is linear in X and the constraints (14b)-(14d) are linear inequalities in X. X ≽ 0 means that X is a positive semidefinite matrix, and the set of positive semidefinite matrices is convex. Note that the rank constraint in (14d) is the only nonconvex constraint. Therefore, we can use the semidefinite relaxation (SDR) method to relax this constraint. Then, problem (P3) can be rewritten as Obviously, problem (P4) is a standard convex semidefinite program (SDP), which can be solved by a convex optimization solver such as CVX. In general, the rank of the solution of problem (P4) is not equal to 1, which means that the optimal value of (P4) is a lower bound on that of (P3). Therefore, the solution of problem (P4) needs to be further processed to satisfy the constraints of problem (P3). First, we perform the eigenvalue decomposition X = UΣU^H, where U is a unitary matrix and Σ is a diagonal matrix. Then, a suboptimal solution of problem (P3) can be expressed as w = UΣ^{1/2}e, where e is uniformly distributed on the unit sphere. This w may not satisfy the constraints (15b)-(15d); however, all constraints can be satisfied by simply scaling w to obtain a feasible weight vector. IRS Phase Shift Optimization. Since the objective function (14a) in problem (P1) depends only on w, the optimization of Θ can take the form of a feasibility problem. When w and α are given, problem (P1) can be expressed as where v_n = e^{jθ_n}, ∀n = 1, 2, ⋯, N. Then, the constraints in (16f) are equivalent to |v_n| = 1, ∀n = 1, 2, ⋯, N. In problem (P5), the variables related to Θ are g_ab, g_ac, and g_bc, so we need to rewrite g_ab, g_ac, and g_bc in forms that are explicit in Θ. Obviously, the optimal value of α depends on w and Θ. Therefore, if the optimal values of w and Θ cannot be determined, we cannot obtain the optimal value of α within a limited time. However, from Algorithms 1 and 2 we can draw the following conclusions: for given α and Θ, we can find the optimal w of problem (P2) according to Algorithm 1; for given α and w, we can find the optimal Θ of problem (P5) according to Algorithm 2. Therefore, for a given α, we can alternately apply Algorithms 1 and 2 to obtain a locally optimal solution of problem (P1). To facilitate practical implementation, we consider that the backscattering coefficient can only take a limited number of discrete values. Let L denote the number of backscattering coefficient levels. For simplicity, we assume that these discrete backscattering coefficients are obtained by uniformly quantizing the interval (0, 1]. Thus, the set of discrete backscattering coefficients is given by where Δα = 1/L. Let α_l = lΔα, l = 1, 2, ⋯, L. Then, we can obtain the locally optimal values Θ_l and w_l of problem (P1) for each α_l. Let W = [‖w_1‖², ‖w_2‖², ⋯, ‖w_L‖²].
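Returning to the beamforming step described above, the SDR relaxation and the randomization-plus-scaling recovery can be sketched with cvxpy as follows. Because the exact expressions of constraints (14b)-(14d) are not reproduced here, they are abstracted as generic linear constraints tr(A_i X) ≥ b_i; the lists A_list and b_list are hypothetical inputs that a full implementation would assemble from the channels, the SNR targets, and the circuit-power constraint.

```python
import numpy as np
import cvxpy as cp

def sdr_beamforming(A_list, b_list, M, n_rand=50, seed=0):
    """Sketch of the SDR step: min tr(X) s.t. tr(A_i X) >= b_i, X PSD (rank constraint dropped),
    followed by Gaussian randomization and feasibility scaling to recover a beamformer w."""
    X = cp.Variable((M, M), hermitian=True)
    constraints = [X >> 0]
    constraints += [cp.real(cp.trace(A @ X)) >= b for A, b in zip(A_list, b_list)]
    cp.Problem(cp.Minimize(cp.real(cp.trace(X))), constraints).solve()

    # X = U diag(lam) U^H; draw w = U diag(lam)^{1/2} e with e uniform on the unit sphere.
    lam, U = np.linalg.eigh(X.value)
    lam = np.clip(lam, 0.0, None)
    rng = np.random.default_rng(seed)
    best_w, best_power = None, np.inf
    for _ in range(n_rand):
        e = rng.standard_normal(M) + 1j * rng.standard_normal(M)
        e /= np.linalg.norm(e)
        w = U @ (np.sqrt(lam) * e)
        # Scale w up until every constraint w^H A_i w >= b_i holds.
        scale = max(1.0, max(b / max(np.real(w.conj() @ A @ w), 1e-12)
                             for A, b in zip(A_list, b_list)))
        w = np.sqrt(scale) * w
        if np.linalg.norm(w) ** 2 < best_power:
            best_w, best_power = w, np.linalg.norm(w) ** 2
    return best_w, best_power
```

With a concrete constraint set, the returned best_w plays the role of the output of Algorithm 1; collecting the per-α results then gives the vector W introduced above.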
Then, the optimal solution of problem (P1) can be given by Then, we take the Θ_l, w_l, and α_l that minimize ‖w_l‖² as the solution to problem (P1). Algorithm 3 gives a detailed description of the alternate optimization algorithm, where ε is a convergence threshold on the increment of the objective value. Simulation Results In this section, we evaluate the performance of the algorithm. To effectively evaluate the proposed algorithm, we consider the settings shown in Figure 2. The default locations of the AP, IRS, CR, and BD are (0, 0), (50, 10), (50, 0), and (d, 0), with all coordinates in meters hereafter. The default number of IRS elements is N = 100, while the AP has 8 antennas. We assume that all channels are independent Rayleigh fading, the path loss exponent is set to 2.2, and the reference distance is 1 m. For all channels, the path loss at 1 meter (m) is set to 30 dB. For ease of analysis, we assume that each channel phase is uniformly randomly generated from [0, 2π). Since there is occlusion between the AP and the CR (BD), we assume that the penetration loss is 10 dB. At the same time, we assume that the antenna gains of the AP, BD, and CR are all 0 dBi, and the antenna gain of each reflective element is set to a fixed value. (Algorithm 1: 1: Initialize random IRS phase shifts Θ and a random backscatter coefficient α. 2: Solve problem (P4) by CVX and obtain X. 3: Compute U and Σ, where X = UΣU^H. 4: Compute w = UΣ^{1/2}e, where e is uniformly distributed on the unit sphere. 5: Scale w so as to satisfy constraints (14b)-(14d).) The threshold ε is set to 0.01. We set the minimum SNR required to demodulate the primary signal and the BD signal to 20 dB and 13 dB, respectively. First, we verified the convergence of the algorithm. When verifying convergence, we do not consider the impact of the threshold but only the impact of the number of iterations. To qualitatively analyze the convergence of the algorithm, we place the BD at (52, 0). At the same time, in order to illustrate the influence of the number of IRS units on the convergence of the algorithm, we considered two cases for the number of IRS units. As shown in Figure 3, as the number of iterations increases, the AP's transmit power gradually decreases and tends to stabilize. This shows that the proposed algorithm has good convergence. As shown in Figure 3, when the threshold is taken into account, the transmit power converges within at most three iterations. That is to say, when the maximum number of iterations is set to 3, the proposed algorithm can obtain a satisfactory transmit power. Since the complexity of the algorithm is proportional to the number of iterations, fewer iterations mean lower complexity. Therefore, Figure 3 also shows that the complexity of the proposed algorithm is low. At the same time, we can see from Figure 3 that the greater the number of IRS units, the smaller the transmit power required by the AP, and the performance at α = 0.4 is better than that at α = 0.8. Although Figure 3 shows the effect of different reflection efficiencies on system performance, it does not reveal the optimal reflection efficiency found by the proposed algorithm. In Figure 4, we show the effect of different reflection efficiencies on the AP transmit power. We can see that although increasing the number of IRS units reduces the transmit power, the optimal reflection coefficient setting is independent of the number of reflection units.
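For completeness, here is a compact sketch of the outer procedure that produced the results discussed in this section: for each quantized reflection coefficient α_l, Algorithms 1 and 2 are alternated until the objective changes by less than ε, and the (α, Θ, w) triple with the smallest ‖w‖² is kept. The callbacks optimize_w and optimize_theta stand in for Algorithms 1 and 2, and the grid size L = 10 and the random phase initialization are illustrative choices.

```python
import numpy as np

def alternating_design(alpha_levels, optimize_w, optimize_theta,
                       n_irs=100, eps=0.01, max_iter=20, seed=0):
    """Sketch of the alternate-iteration design (Algorithm 3)."""
    rng = np.random.default_rng(seed)
    best = {"power": np.inf}
    for alpha in alpha_levels:
        theta = rng.uniform(0.0, 2.0 * np.pi, size=n_irs)  # random initial IRS phases
        prev_power = np.inf
        for _ in range(max_iter):
            w = optimize_w(alpha, theta)        # Algorithm 1: beamforming for fixed (alpha, Theta)
            theta = optimize_theta(alpha, w)    # Algorithm 2: phase shifts for fixed (alpha, w)
            power = np.linalg.norm(w) ** 2
            if abs(prev_power - power) < eps:   # convergence threshold epsilon
                break
            prev_power = power
        if power < best["power"]:
            best = {"power": power, "alpha": alpha, "w": w, "theta": theta}
    return best

# Example grid of L = 10 uniformly quantized reflection coefficients on (0, 1].
alpha_grid = [(l + 1) / 10 for l in range(10)]
```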
At the same time, we can see that the performance of the proposed algorithm is better than that of the system without IRS assistance. However, it can be seen from Figure 5 that the choice of the best reflection coefficient is related to the horizontal distance between the AP and the BD. We can see from Figure 5 that as the horizontal distance between the AP and the BD increases, the optimal reflection coefficient increases. We can also see from Figure 5 that as the horizontal distance between the AP and the BD increases, the transmit power required to ensure the QoS also increases. Conclusions Intelligent reflective surfaces (IRSs) can improve the radio frequency environment by adjusting the phase and amplitude of the incident signal, which makes the widespread deployment of AmBC possible. In this paper, we introduced a novel IRS and AmBC joint design method, which iteratively designs the beamforming vector, the IRS phase shifts, and the reflection coefficient jointly to minimize the AP's transmit power. The method effectively reduces the transmit power of the access point, and the simulation results demonstrate its effectiveness. Data Availability The data in this paper are based on MATLAB simulation. Following the method described in this paper, all of the data can be reproduced in MATLAB. Conflicts of Interest The authors declare that they have no conflicts of interest.
6,064.8
2021-06-21T00:00:00.000
[ "Engineering", "Environmental Science", "Computer Science" ]
2D CFT partition functions at late times We consider the late time behavior of the analytically continued partition function Z(β + it)Z(β − it) in holographic 2d CFTs. This is a probe of information loss in such theories and in their holographic duals. We show that each Virasoro character decays in time, and so information is not restored at the level of individual characters. We identify a universal decaying contribution at late times, and conjecture that it describes the behavior of generic chaotic 2d CFTs out to times that are exponentially large in the central charge. It was recently suggested that at sufficiently late times one expects a crossover to random matrix behavior. We estimate an upper bound on the crossover time, which suggests that the decay is followed by a parametrically long period of late time growth. Finally, we discuss gravitationally-motivated integrable theories and show how information is restored at late times by a series of characters. This hints at a possible bulk mechanism, where information is restored by an infinite sum over non-perturbative saddles. Introduction and summary Quantum black holes have finite entropy and a discrete spectrum of states. The details of this spectrum are inaccessible in the semi-classical approximation: the density of states one obtains from the Bekenstein-Hawking entropy is a smooth function of the energy. In this work we address the question of how the discrete spectrum arises in 2d conformal field theories and their holographic duals. Maldacena suggested that one may address this question by studying the late time behavior of correlation functions [1], which is a sharp probe of the discrete energy levels in the spectrum. For unitary systems with discrete spectra, connected thermal correlators of the form ⟨O(t)O(0)⟩ (where O is a Hermitian operator) tend to decay exponentially until times of order the entropy S, and then proceed to oscillate erratically about zero with an RMS amplitude of order e^{−S}. 1 On the other hand, correlation functions computed in a classical black hole background tend to decay exponentially forever. This decay is often referred to as 'information loss'. Holography may be a useful setting for studying the question of how a discrete black hole spectrum arises. From the boundary field theory point of view, the fact that the spectrum is discrete is trivial if we place the theory on a compact spatial manifold. Similarly, the qualitative features of the late time behavior follow easily from mild assumptions about the spectrum (such as the fact that the theory is chaotic). The challenge is then to describe this behavior in 'bulk language', using objects that are natural from a gravity point of view. In this work we focus on another quantity that is also sensitive to information loss at late times. Spectral form factor and information loss Consider the thermal partition function Z(β), and let us analytically continue β → β + it. The parameter t should be thought of as real time. Let E_n be the discrete energy levels, each with degeneracy N_n, and consider the following quantity: g(β, t) ≡ |Z(β + it)|² = Σ_{n,m} N_n N_m e^{−β(E_n+E_m)+it(E_n−E_m)} . (1.1) If we formally set β = 0 then (1.1) becomes a well-studied quantity in quantum chaos called the spectral form factor (for reviews, see [5,6]). We will use the same name to refer to g(β, t) at any β.
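As a quick numerical illustration of the definition (1.1), the following NumPy sketch evaluates g(β, t) for an arbitrary synthetic spectrum (not a CFT spectrum) and compares a crude long-time average with Z(2β), anticipating the bound discussed next; β, the number of levels, and the time window are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
energies = rng.uniform(0.0, 10.0, 200)    # synthetic, non-degenerate spectrum (N_n = 1)
beta = 0.5

def Z(b):
    return np.sum(np.exp(-b * energies))

def g(b, t):
    return np.abs(np.sum(np.exp(-(b + 1j * t) * energies))) ** 2   # |Z(b + i t)|^2

times = np.linspace(0.0, 5000.0, 20001)
g_vals = np.array([g(beta, t) for t in times])

print("crude long-time average of g :", g_vals[len(g_vals) // 2:].mean())
print("Z(2*beta)                    :", Z(2 * beta))
# For a non-degenerate spectrum these two numbers should roughly agree.
```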
In the context of black hole physics this quantity was first discussed in [7], and was recently studied in [10] in the context of information loss in the Sachdev-Ye-Kitaev model [8,9]. See also [4] for a related discussion. At late times the double sum in (1.1) essentially localizes onto terms with E_n = E_m. As we review in section 2, the time average of g(β, t) obeys the bound ḡ(β) ≡ lim_{t_o→∞} (1/t_o) ∫_0^{t_o} g(β, t) dt ≥ Z(2β) . (1.2) The bound is saturated when the spectrum has no degeneracies. The non-zero time average reflects a weighted counting of the discrete energy levels in the spectrum. The quantity on the right-hand side is of order e^S. On the other hand, suppose we have a bulk theory with a black hole background and focus on the BTZ black hole for simplicity. We approximate the exact partition function by the BTZ black hole partition function Z(β) = exp(π²c/(3β)), which is the dominant contribution for temperatures above the Hawking-Page transition. We then find a spectral form factor that decays to 1 at late times. If we also include the 1-loop determinant we find that the spectral form factor decays to zero, representing no discrete states in the corresponding spectrum. 2 ( 1 By the notation e^{−S} we mean that the quantity scales as e^{−n_dof} where n_dof is the number of degrees of freedom. We will be interested in 2d CFTs with large central charge c, for which n_dof ∼ c. We note that at very late times of order e^{e^S} we expect recurrences, which do not play a role in this work. See [2][3][4] for a discussion of recurrences in the context of information loss.) We see that the spectral form factor, just like the correlation function, is sensitive to information loss. See [11,12] for related discussions. It was suggested in [1] that one may improve the situation by adding subleading bulk saddle points such as thermal AdS_3. It is easy to check that including the thermal AdS_3 contribution indeed raises the time average, but this contribution is not sufficient for the time average to obey the bound (1.2) at high temperature. Indeed, we will see that no finite number of subleading saddles is enough to obey the bound (1.2) at high temperature. For 2d conformal field theories, the question of information loss in the thermal two-point function was studied in [13] and for collapsing black holes in [14]. Recently, the authors of [15][16][17][18] considered the four point function of two heavy operators O_H, with ∆_H ∼ c, and two light operators O_L on the cylinder, ⟨O_H|O_L(φ, it)O_L(0)|O_H⟩. This is a microcanonical version of the calculation described above. In the large c limit, corresponding to the classical black hole limit in the bulk, one finds that the correlation function reproduces the thermal two-point function on a line, with temperature set by the heavy operator's dimension, and thus decays in time. In [17,18] it was speculated that perhaps the late time decay is avoided (and information is restored) within each Virasoro block in an OPE expansion of this four-point function. This question is difficult to answer because the relevant Virasoro blocks are not known exactly. 3 We are able to answer this question in our context, by considering instead the spectral form factor, which has a decomposition in terms of Virasoro characters, analogous to the Virasoro blocks that show up in the OPE expansion of the heavy-heavy-light-light correlator. 4 The Virasoro characters have known closed-form expressions, and each relevant Virasoro character decays at late times.
We conclude that in chaotic 2d theories information is not restored kinematically in general, namely as a consequence of Virasoro symmetry, but rather dynamically, due to an interplay between infinitely many characters. (Integrable theories will be discussed separately. For such theories information loss still occurs at the level of Virasoro characters, but is explicitly restored in the characters of the extended chiral algebra.) The authors of [20,21] studied the discrete spectrum of chaotic 2d CFTs by working directly with the thermal partition function. Our conclusion agrees with their results. The authors of [20] considered a modular invariant partition function that is made up of the vacuum character plus its modular images (appropriately regulated). They found that the corresponding density of states is essentially smooth and captures almost none of the discrete states. Here we advertise that if one is interested only in whether or not the spectrum contains discrete states (rather than in the detailed properties of these states), it is enough to check whether the time-averaged spectral form factorḡ(β) vanishes, a potentially simpler computation. 2 That the BTZ partition function decays to 1 (if we do not include the 1-loop determinant) is related to the fact that the inverse Laplace transform of e 1/β is given by E −1/2 I1(2E 1/2 ) + δ(E) which includes a single discrete state. 3 See [19] for recent developments. 4 The torus partition function can be written as a correlator involving 4 heavy twist operators. The Virasoro characters are the blocks that appear in an OPE expansion of this correlator. This discussion of information loss has been phrased in terms of the boundary Virasoro characters, but also has a natural bulk interpretation. The character which dominates at high temperature corresponds to the bulk BTZ saddle. The O(1/c) correction to the character corresponds to a one-loop determinant in the bulk. Therefore, a resolution of information loss phrased in terms of Virasoro characters would probably shed light on how information is restored in the bulk. Late times and random matrix theory The late time behavior of the spectral form factor is only sensitive to the structure of small energy differences. We generally expect that if we probe any chaotic system at sufficiently small energy differences, then the Hamiltonian can be approximated by a random matrix chosen from a suitable Gaussian ensemble. The authors of [10] made the observation that the late time behavior of chaotic theories should therefore be described by random matrix theory (see [5,6] for a review of RMT). This was verified for the Sachdev-Ye-Kitaev model [8,9] in [22]. We thus now turn to RMT as a guide for what to expect for the late time behavior of the spectral form factor. Figure 1 shows the spectral form factor for random matrices selected from the Gaussian Unitary Ensemble (GUE). We will discuss this curve in more detail below. For now we merely point out that (i) the shape of the curve before its minimum (the dip) is dominated by the coarse-grained shape of the spectrum (in the case of Gaussian random matrices this is Wigner's semicircle law), and that (ii) after the dip time the curve starts probing the discrete energy levels. In particular, the period of linear growth is related to the spectral rigidity of random matrix energy levels (essentially the fact that energy levels repel). 
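The slope/dip/ramp/plateau shape described here is straightforward to reproduce numerically; the sketch below ensemble-averages the spectral form factor over GUE samples. The matrix dimension, the number of samples, and β are illustrative and are not taken from figure 1.

```python
import numpy as np

def gue_eigenvalues(dim, rng):
    """Eigenvalues of one matrix drawn from the Gaussian Unitary Ensemble."""
    a = rng.standard_normal((dim, dim)) + 1j * rng.standard_normal((dim, dim))
    return np.linalg.eigvalsh((a + a.conj().T) / 2.0)

rng = np.random.default_rng(0)
dim, n_samples, beta = 200, 100, 0.1
times = np.logspace(-1, 4, 400)

g_avg = np.zeros_like(times)
for _ in range(n_samples):
    e = gue_eigenvalues(dim, rng)
    z = np.exp(-(beta + 1j * times[:, None]) * e[None, :]).sum(axis=1)  # Z(beta + i t)
    g_avg += np.abs(z) ** 2
g_avg /= n_samples
# Plotted against time on log-log axes, g_avg shows the early decay (slope),
# a minimum (the dip), an approximately linear ramp, and a late-time plateau.
```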
In [10] it was conjectured that the existence of a dip time, followed by a period of linear growth, is a generic feature of chaotic systems, including black holes. The value of the dip time is non-universal and depends on the detailed properties of the theory, including the coarse-grained shape of the spectrum that determines the early time decay. Here we test this conjecture in the context of 2d CFTs, and find evidence that the approximate dip time is robust for chaotic CFTs dual to gravity. In RMT (and in the SYK model) the spectral form factor is defined by averaging over an ensemble of Hamiltonians. This averaging leads to a smooth curve at late times. In trying to apply the conjecture to an ordinary quantum field theory, one has to confront the fact that there is only one Hamiltonian. As a result, the late time behavior is expected to be erratic. Figure 2 shows the spectral form factor computed from a single GUE matrix. Beyond the dip time the fluctuations become large and the features of figure 1 are barely visible (see also [23]). However, as explained in [10], one can replace ensemble averaging by time averaging over a parametrically small window (in the limit of a large Hilbert space dimension), restoring the late time features. It is therefore meaningful to discuss the random matrix theory ramp and plateau at late times even in an ordinary quantum field theory. In this work we estimate the dip time at which a generic 2d CFT crosses over into the RMT regime. This is done by estimating the shape of the early decay of the curve using modular invariance, and assuming that at late times we have the linear growth predicted by RMT. Our estimate relies on identifying the dominant contribution from a single Virasoro character at each point in time. We estimate that the spectral form factor decays at late times in an erratic way, with an envelope that decays as We will see this implies a parametrically long period of linear growth following the dip time. We expect the same to be true of black holes in AdS 3 . Summary of results Here is a brief summary of the key points of this paper. 1. We consider the spectral form factor |Z(β + it)| 2 as a probe of the spectrum. At early times it diagnoses the mean density of states, while at late times its behavior serves as a useful diagnostic of the discreteness of the spectrum [10]. As in the case of two-point functions, a decay at late times indicates that we are not probing the discrete states of the spectrum, and signals information loss. 2. In 2d CFTs the partition function has an expansion in terms of the Virasoro characters. Each character decays at late times, and therefore Virasoro symmetry is not enough in general to restore information. 3. We identify a universal contribution to the early time behavior of the spectral form factor, which follows from Virasoro symmetry and modular invariance. It includes sharp peaks at times t = 2πn for integer n, where the height of the peaks decays as a power law in time. We conjecture that this is contribution dominates the early time behavior in generic 2d CFTs. 4. In chaotic theories we expect the late time behavior (t e c ) to be described by random matrix theory. In particular, we expect there to be a characteristic time scale (the 'dip time') beyond which the RMT description is valid. Based on our (uncontrolled) analysis of the early time behavior, we conjecture that the dip time scales as e c . 
Beyond the dip time we expect there to be a period of linear growth (with large fluctuations) that is parameterically long at large c and high temperature. 5. For certain integrable models, or BPS sub-sectors of generic models, we identify a precise infinite set of bulk saddles which restore the information naively lost in the leading thermodynamic approximation. The rest of the paper is organized as follows. In section 2 we discuss the spectral form factor and information loss in 2d CFTs. In section 3 we review the Virasoro character expansion and the modular properties of the torus partition function, and provide simple estimates of its decay before the dip time. Then, in section 4 we give an improved estimate of the decay by identifying the dominant character at any rational time. We conclude that these contributions are not sufficient to avoid information loss. In section 4.4 we estimate the dip time, beyond which we expect the system to have an effective random matrix theory description. In section 5 we discuss integrable theories. We show that for certain integrable theories, or BPS sectors of generic theories, information is restored by identifying the dominant saddle point at each particular time using modular invariance. Appendix A gives a short review of black holes in AdS 3 . Spectral form factor In this section we define the spectral form factor and discuss its properties in relation to information loss. Consider a unitary quantum field theory with a holographic dual. Place the theory on a compact manifold so that it has a discrete spectrum. The spectrum consists of energy levels E n , each with degeneracy N n . The density of states is given by (2.1) The thermal partition function at inverse temperature β is We assume for simplicity that this function is finite for any β > 0 (this is always true for 2d CFTs). Let us generalize the partition function and define One can obtain this function by analytically continuing Z(β), taking β → β + it. The parameter t is conveniently thought of as real time. We then define the spectral form factor by This is an important quantity in the study of random matrix theory [5,6]. In this work we will study the late time behavior of g(β, t). In a general chaotic theory this behavior is complicated as it involves a sum over many oscillators with different frequencies (E n − E m ). Things simplify if we only consider the long-time average, where only terms with E n = E m contribute. We see that, on average, g(β, t) approaches a non-zero value at late times. In (2.5) we implicitly assumed that there is a minimal level spacing in the spectrum. The long-time average obeys the bound The bound is saturated when the spectrum has no degeneracies. 5 In this case the late-time average of g, namely Z(2β), is exponentially smaller than the initial value Z 2 (β). Indeed, in general we have For a CFT in d spacetime dimensions the right-hand side is equal to exp − 2 Information loss We now consider the long-time averageḡ(β) in the context of the AdS 3 /CFT 2 duality. Consider a 2d CFT on a circle of length L = 2π that has a holographic bulk dual, and assume as before that the theory has a discrete spectrum. At high tempereature the thermal state of the theory is dual to a BTZ black hole. Its partition function is given approximately by Z(β) = exp 8π 2 k/β where k = c/24 and c is the central charge of the field theory. This is an approximation to the full partition function of the quantum theory. 
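As a back-of-the-envelope check of what follows, continuing this approximate (smooth-density) partition function gives a spectral form factor that simply relaxes to an order-one value; a sketch, using only the exponent quoted above:

```latex
g_{\rm BTZ}(\beta,t) \;=\; e^{\frac{8\pi^{2}k}{\beta+it}}\,e^{\frac{8\pi^{2}k}{\beta-it}}
\;=\; \exp\!\Big(\frac{16\pi^{2}k\,\beta}{\beta^{2}+t^{2}}\Big)
\;\xrightarrow[\;t\to\infty\;]{}\; 1\,,
\qquad\text{while}\qquad
Z(2\beta) \;=\; e^{\,4\pi^{2}k/\beta}\,.
```

So the smooth approximation relaxes to an O(1) value rather than the exponentially large time average required by the bound, which is the statement of information loss elaborated next.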
The BTZ contribution to the spectral form factor can be computed by continuing β → β + it, and it decays at late times as In taking the late time limit we will always keep β (the real part) fixed. In this approximation we find that the time average is 1, violating the bound (2.6). This is a form of information loss. The BTZ contribution to the partition function is given by the modular image of the vacuum state. As we will see below, no finite number of additional primary operators is sufficient to avoid information loss. Let us think clearly about what this means. Given an approximate partition function Z(β) we can compute the corresponding density of states ρ(E) by an inverse Laplace transform. For the BTZ black hole this is well approximated at high energies by the Cardy formula ρ cardy (E) = e 4π √ 2kE , which is an approximation to the density of states in the dual field theory. The important difference between this and the exact density of states (2.1) of the quantum theory is that the Cardy density is a smooth and finite function of the energy (see [11] for a related discussion in the context of large N gauge theories). Indeed, given a partition function of the form Z(β) = dEρ s (E)e −βE where ρ s is a smooth and finite function, it is easy to see that the time-averaged spectral form factor violates the bound regardless of the details of ρ s . We see that the late time behavior of the spectral form factor directly probes the discreteness of the spectrum of the theory. In particular, the time-averagedḡ(β) counts discrete states in the spectrum (weighted by a Boltzmann factor and by degeneracy). Information loss occurs when we approximate the density of states by a smooth function that does not capture the individual energy levels. This type of information loss occurs in classical black holes in arbitrary dimension. Equivalently, it occurs in the dual field theory when we use the thermodynamic approximation to the partition function. 2d CFTs In this section we discuss in more detail the torus partition function and spectral form factor in 2d CFTs, focusing on theories with large central charge. We discuss possible corrections to the leading answer (including certain non-perturbative corrections) and show that they are not sufficient to restore information in the spectral form factor. Consider the partition function of a 2d CFT on a torus with parameter τ = iβ 2π + µ 2π . From now on we set the chemical potential µ = 0. The partition function can be written as a sum over all states, Here q(τ ) ≡ exp(2πiτ ), N h,h is the degeneracy of the state with conformal weights (h,h), and we took the central charges to be c L = c R = c = 24k for convenience. All states have h,h ≥ 0. The full partition function can also be written as a sum over Virasoro characters, Here we use the notationf (z) = f (z). Each term captures the contribution from a Virasoro primary with dimensions (h,h) and its descendants, and we have isolated the vacuum contribution from the sum. Each character appears with degeneracy n h,h . The characters are given by where η(τ ) is the Dedekind eta function. These expressions are exact even at finite c. We assume the theory is modular invariant, which means We can write the partition function as a sum over states after performing any SL(2, Z) transformation γ. We will refer to the γ-image of a particular character as the contribution of that character in the γ frame. 
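For orientation, a standard form of the c > 1 Virasoro characters consistent with the statements above (nondegenerate modules, c = 24k) is the following; it is quoted here as background rather than as a result of this paper:

```latex
\chi_{0}(\tau) \;=\; (1-q)\,\frac{q^{-(c-1)/24}}{\eta(\tau)}\,,\qquad
\chi_{h}(\tau) \;=\; \frac{q^{\,h-(c-1)/24}}{\eta(\tau)}\quad (h>0)\,,\qquad
q \equiv e^{2\pi i\tau}\,,\quad
\eta(\tau) = q^{1/24}\prod_{n=1}^{\infty}\bigl(1-q^{n}\bigr)\,.
```

With c = 24k the prefactors become q^{-k+1/24} and q^{h-k+1/24}, and the extra (1−q) factor of the vacuum character is what distinguishes its late-time behavior in the estimates below.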
To obtain the high-temperature approximation to the partition function we can write the sum over characters in the S frame. (3.5) The first term, which is the vacuum character contribution in the S frame, is the dominant contribution at high temperatures (when β < 2π) [24]. It is given by In writing this we used the fact that η(−1/τ) = √(−iτ) · η(τ). The leading part of (3.6) at large c comes from the vacuum state itself. It also has an O(1/c) correction coming from the sum over the vacuum's descendants. Much of this structure is echoed on the gravitational side. The asymptotic symmetry algebra of pure gravity in AdS_3 is the Virasoro algebra, with central charge c = 3ℓ/(2G) [25]. 6 The contribution of thermal AdS_3 to the partition function can be evaluated exactly, and is given by the vacuum character contribution In the bulk, the leading contribution comes from evaluating the action of the classical gravity solution, while the O(1/c) correction is due to a 1-loop determinant. There are no higher order corrections, so this result is 1-loop exact in bulk language. The contribution of the BTZ black hole geometry is given by the vacuum character in the S frame, eq. (3.6). Here, again, the leading large c contribution comes from the classical (black hole) solution, and there is an O(1/c) correction from a 1-loop determinant. More generally, as we review in appendix A, at fixed temperature and chemical potential there are an infinite number of classical bulk solutions that are related by SL(2, Z) transformations [26]. 7 The solution that corresponds to γ ∈ SL(2, Z) makes a contribution to the partition function equal to χ_0(γ(τ))χ̄_0(γ(τ̄)). For a general theory, there will be many additional contributions to the partition function that correspond to states involving matter fields. Analytic continuation to real time Equation (3.5) is a useful starting point for the analytic continuation β → β + it to real time because (i) at t = 0 the vacuum character contribution provides a good approximation, and (ii) this dominant contribution has a clear bulk interpretation as the BTZ black hole. This contribution remains dominant at sufficiently early times. We now discuss the various pieces of eq. (3.5) after analytic continuation to late times. We will find that the contribution coming from each individual character decays to zero at late times, violating the bound (2.6). We start by focusing on the contribution of the vacuum character (3.6), which is equal to the BTZ black hole contribution and is the dominant contribution at high temperature. We analytically continue β, taking β → β + it; notice that after the analytic continuation τ̄ is not the complex conjugate of τ. After a time of order a few βs we find that the vacuum character contribution to the spectral form factor decays as shown in (3.9). 8 ( 6 Here ℓ is the AdS length, and G is the 3d Newton constant. 7 The family of solutions is labeled by elements of Γ_∞\SL(2; Z), where we quotient by τ → τ + 1 on the left. 8 The continued eta function η(τ) oscillates in time, never giving a substantial contribution to (3.9).) The leading, vacuum state contribution decays exponentially to an O(1) amplitude at times t ∼ √k. The subleading contribution coming from the descendants is then responsible for the 1/t⁶ power law decay down to zero. Curiously, including additional states (the vacuum's descendants) in the S-frame makes the violation of the bound (2.6) worse.
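A rough sketch of where this behavior comes from, under the same continuation and keeping track only of the moduli of the various factors, is the following (the symbol q_S denotes the nome in the S frame):

```latex
q_{S} \;\equiv\; e^{-2\pi i/\tau}\Big|_{\tau=\frac{i(\beta+it)}{2\pi}}
      \;=\; e^{-\frac{4\pi^{2}}{\beta+it}}\,,\qquad
\bigl|q_{S}^{\,-k}\bigr| \;=\; e^{\frac{4\pi^{2}k\,\beta}{\beta^{2}+t^{2}}}
      \;\xrightarrow[\;t\to\infty\;]{}\; 1\,,\qquad
\bigl|1-q_{S}\bigr| \;\simeq\; \frac{4\pi^{2}}{t}\,,\qquad
\bigl|\eta(-1/\tau)\bigr| \;=\; \sqrt{|\tau|}\;\bigl|\eta(\tau)\bigr| \;\sim\; \sqrt{\tfrac{t}{2\pi}}\,.
```

Each of Z(β + it) and Z(β − it) carries two such (1 − q)/η structures (one holomorphic, one antiholomorphic), so four factors of 1/t and four factors of 1/√t combine into the 1/t⁶ power law quoted above, while the exponential prefactor drops to an O(1) amplitude around t ∼ √k.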
Next, the contribution to (3.5) coming from each non-vacuum character can be written as and deacys as at late times, regardless of the conformal dimensions. (In writing these equations we assumed for simplicity that neither h norh are equal to zero, i.e. we are excluding additional conserved currents.) We arrive at the following conclusion: including a finite number of characters in the S-frame does not bring us closer to obeying the bound (2.6). Universal late time decay In this section we will attempt to understand universal properties of the late time partition function in AdS 3 /CFT 2 . For gravity in weakly curved AdS 3 the partition function undergoes a phase transition between the dominant low temperature saddle, thermal AdS 3 , and the high temperature saddle, the BTZ black hole [27]. The partition function in these two regimes is given approximately by This phase structure is replicated in sufficiently sparse, large c CFTs [24]. As long as the number of states grows sub-exponentially, the partition function is dominated by the vacuum state at low temperatures, and by the vacuum state in the S frame at high temperatures. As discussed above, starting with the dominant high temperature contribution and continuing β analytically to real time does not reproduce the correct late time behavior. The spectral form factor satisfies the bound (2.6), while the thermal partition function corresponding to the BTZ black hole leads to a decaying spectral form factor (3.9). This contribution decays exponentially to an O(1) amplitude at times of order √ k. As we will show, this decay significantly underestimates the correct late time behavior of the partition function. In section 4.1 we identify a universal contribution to the partition function which decays significantly slower than (3.9). Then, in section 4.2 we estimate corrections to the -11 -JHEP08(2017)075 universal decay using Cardy's formula, and find that they are negligible in this approximation (though with important caveats). We also show that the free compact boson exhibits the universal early time behavior identified in section 4.1. In section 4.3 we give a refined version of the universal contribution to the partition function for all times and temperatures. This, together with the late time plateau for the spectral form factor, lends evidence to a universal picture for the time dependence of the partition function that we lay out in section 4.4. Universal contribution The partition function (3.5) expanded in the S frame is dominated by the vacuum character at t = 0. This suggests a strategy for approximating the partition function at later times: at any given time, identify the apropriate modular transformation such that the image of the vacuum character in this frame is larger than in any other frame. Consider the partition function at times t n ≡ 2πn, with corresponding modular parameters To study the partition function at these discrete times, it is convenient to perform a timedependent modular transformation γ n (τ ) ≡ −1/(τ + n). This transformation removes all of the holomorphic time dependence. It maximizes the contribution from the vacuum character among all modular transformations. Explicitly, the vacuum character in the γ n frame is given by It decays at late times (large n) as Notice that the vacuum state itself decays in this frame to the exponentially large value e 4π 2 k/β , which is much larger than the asymptotic value of the vacuum state in the S frame. 
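To make explicit why this frame works at the times t_n (with the continuation conventions used above), note the following sketch; the second expression is the antiholomorphic argument, which is the piece that retains the n-dependence:

```latex
\tau_{n} \;=\; \frac{i\,(\beta+i t_{n})}{2\pi} \;=\; \frac{i\beta}{2\pi}-n
\;\;\Longrightarrow\;\;
\gamma_{n}(\tau_{n}) \;=\; \frac{-1}{\tau_{n}+n} \;=\; \frac{2\pi i}{\beta}\,,
\qquad
\gamma_{n}(\bar\tau_{n}) \;=\; \frac{-1}{\bar\tau_{n}+n} \;=\; \frac{-1}{\,2n-\frac{i\beta}{2\pi}\,}\,.
```

Evaluating the vacuum character at 2πi/β produces the time-independent amplitude e^{4π²k/β} mentioned above, while the antiholomorphic argument tends to zero as n grows, which is what produces the residual power-law decay of the peaks.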
The power law decay is due entirely to the O(1/c) piece of the vacuum character. Figure 3. The spectral form factors corresponding to the BTZ black hole contribution g BTZ (β, t) (blue) and corresponding to the dominant image of the vacuum g n (β, t) (red). Here, for t = 2πn, we interpolate by taking n = integer part(t/2π). This accounts for the discontinuities in the red, dashed line. The peaks of this contribution are attained at discrete times t n (purple dots). Going to the late dominant frame does not avoid late time decay, violating the late time bound (2.6) (black, dotted). Inset: the dominant contribution at t n (purple) with a fit to a t −3 power law (black). (4.7) Figure 3 shows g n (β, t) compared with the late time bound (2.6) and the decay from the vacuum character in the S frame. Notice that the amplitude of the power law decay in (4.7) is in fact greater than the value of the late time bound (2.6), Z(2β) ≈ exp 4π 2 k β . Next, let us consider the contribution of a non-vacuum character Z h,h (τ,τ ) = χ h (τ )χh(τ ). We assume that the state is 'light', namely that the conformal weights h,h are fixed as we take k large. We will also assume for simplicity that there are no extra currents, i.e. h,h are both strictly positive. At time t n the γ n frame again maximizes the contribution of the character among all SL(2, Z) frames. At late times this contribution to the partition function decays as The faster decay compared with the vacuum character (4.6) can be traced back to the fact that the vacuum character has an additional (1 − q) factor that decays as 1/t n . The matter character contribution to the spectral form factor behaves at late times as (4.9) -13 - JHEP08(2017)075 In section 4.3 we will generalize these considerations and find a universal contribution to the spectral form factor for arbitrary rational times. The result will be bounded above by (4.9) if we replace t n by a rational time. We conjecture that the universal contributions from the vacuum (4.7) together with the contribution from the light states (4.9) correctly describe the spectral form factor for generic non-integrable CFTs up to the dip time. For a putative CFT that is dual to pure gravity there are no light matter fields, and we conjecture that correct description is given by (4.7). We provide an argument for this in the next subsection. As discussed in the introduction, beyond the dip time we expect another universal contribution, one due to random matrix theory, to become dominant and lead to a ramp and a plateau. This universal contribution we have identified, (4.9), has a nice connection with classical bulk saddles. As we review in appendix A, for each n there is a black hole solution in the bulk, with the contribution Z vac (γ n (τ ), γ n (τ )) to the gravitational partition function. We can thus identify the universal decay of the spectral form factor with the contribution of these black hole solutions. Dominance of the universal contribution In the previous subsection we identified a universal contribution to the partition function. It is natural to ask for what class of theories (if any) this contribution correctly describes the early time behavior of the partition function. In this section we show that this is the case for at least one theory. We then argue that the universal contribution is the dominant one at early times in a large class of theories. First, consider the theory of a free compact scalar with internal radius r. 
9 The partition function (with τ = iβ 2π ) is given by (4.10) The spectral form factor has a period that is determined by the radius r. By choosing r appropriately (either very large or very small) one can have a long period, exposing the universal behavior (4.9) at early times. The result is shown in figure 4. 10 We now argue that the universal contribution identified in the previous subsection provides a good approximation to the partition function before the dip time even in generic 2d CFTs, namely before the universal contribution due to random matrix theory becomes dominant. The argument has important caveats that will be discussed below. 9 We are thankful to Alexandre Belin for useful discussions. 10 Note that, due to the small central charge, c = 1, the behavior of the initial decay is somewhat different than the universal large k behavior identified previously. As in that case, the heights of the peaks of the partition function exhibit an initial power law decay. In this case, however, there is no exponential decay. Never the less, the behavior is reproduced by the modular images of the free bosonic vacuum character. The difference stems from the fact that the free boson vacuum character looks different than the c > 1 Virasoro vacuum. Z c=1 vac (γn(τn), γn(τn)) = 1/η(γn(τn))η(γn(τn)), which has a pure power law decay at large n. t g(t) with c=1 β=0.5 r=0.005 Figure 4. The spectral form factor of a compact scalar, displaying the behavior (4.9) of the universal contribution at times t = 2πn. The subleading peaks at times t = 2πn + π can also be explained using universal properties such as modular invariance, as will be discussed in section 4.3. The dashed lines show the 1/t envelope of the leading and subleading peaks. Focusing again on the discrete times t n = 2πn the full partition function can be written as a sum over states in the γ n frame, The factor in front on the right-hand side is equal to the vacuum state contribution in the γ n frame. This is the amplitude of the universal contribution (4.6). Our goal is to argue that the sum (4.11) is well approximated by the universal contribution, (4.6), until the dip time. We begin by explaining why the sum over the heavy states gives a subdominant contribution to the partition function, and then why the light states and descendants reproduce the amplitude and power law decay of (4.6). The correction to the leading amplitude in (4.11) is In the second line we separated the sum over all states into a sum f L over 'light' states, and a sum f H over 'heavy' states. Let us discuss these two sums separately. Heavy states. We consider first the sum over heavy states, which we can write as β+4πin . JHEP08(2017)075 Hereĥ ≡ h − k, and ρ(ĥ,ĥ) is the density of heavy states. This density of states can be approximated by the Cardy density ρ c [28], which is defined by the equation (4.14) The integral on the right is exactly the integral that appears on the right-hand side of (4.13) if we approximate the full density of states ρ by the Cardy density ρ c , and replace τ = 4π 2 i β andτ = − 4π 2 i β+2itn . Therefore, in the Cardy approximation we find that In the large k, high temperature limit we see that f H 1 at arbitrarily late times, and so the contribution from the heavy states cannot significantly change the amplitude in (4.11). It is instructive to verify that this suppression of heavy states does not rely on detailed properties of the Cardy distribution. The solution to (4.14) is In the last line we expanded to leading order in largeĥ. 
It is easy to check that this leading piece (including theĥ −3/4 factor) also leads to a suppressed contribution from the heavy states. We now mention an important caveat regarding the argument above, which relies on the assumption that the density of states is well-approximated by the Cardy density for heavy states. One can apply the same argument to the S image of the vacuum at time t = 2πn (instead of to the γ n image), and again conclude that the corrections to the image of the vacuum state due to heavy states are negligible. But the S image of the vacuum simply decays in time and does not exhibit the peaks seen in the γ n frame, leading to a contradiction. Perhaps the simplest resolution of this problem is that there are subleading corrections to the Cardy density that reproduce the peaks when working in the S frame. We showed that the detailed properties of the Cardy density are not important for the argument to work, and so such corrections should take a special form. We hope to return to this question in future work. Light states. The contribution from light states is more subtle. To constrain the contribution of the light states, we would like to appeal to sparsity. In other words, we would like to consider theories without too many light states. However, we always have, at the very least, Virasoro descendants of the vacuum. As the light state contribution, JHEP08(2017)075 has no suppression, it is difficult to argue that the light states give an O(1) contribution at late times. Indeed, if this were the case, it would contradict the power law decay of our universal contribution (4.9). To address this fact, and to give teeth to the assumption of sparsity, we turn our attention to the expansion of the partition function in terms of characters rather than states. Light and heavy characters. The universal contribution (4.9) contains an amplitude and a subleading power-law decay, which comes from summing over descendants. The descendants include heavy states which contribute to the Cardy relation (4.14). To show the dominance of the full contribution (4.9) (including the power law decay) we re-expand the partition function in characters instead of in states, in the γ n frame. We define σ n ≡ γ n (τ n ) andσ n ≡ γ n (τ n ) to reduce clutter. Here ρ χ (ĥ,ĥ) denotes the density of characters with conformal dimensions (h,h), and we took out factors of q k as in (4.13). As before,ĥ ≡ h − k. The term χ 0 (σ n )χ 0 (σ n )) is the universal vacuum contribution (4.7). The sum on the second line is the contribution from light characters. The primaries we are describing as light here consist of any state with either h orh smaller than k. These are referred to as censored primaries in [21]. One way to justify limiting the number of such states, is that those with either h h orh h are close to conserved currents, and we expect there to be few such states in a typical chaotic CFT. More generally, we would like to consider CFTs that are dual to gravitational theories without too much matter. For us, sparseness means simply that the contribution from these light primaries is well approximated by the vacuum character, with at most an order one number of additional light primaries. 11 Finally, on the last line we have the contribution of the heavy characters, which we claim is negligible in the Cardy approximation. 
We can approximate the density of the heavy characters by a Cardy density ρ χ ≈ ρ χ,c , which is defined by the equation (4.20) As in the case of heavy states, the integral on the right-hand side is the same integral that appears in (4.19), and the same argument implies that this contribution will be negligible. It is worth briefly connecting this argument back to the case of the free boson and discussing what role a sparse light spectrum played in getting the early time universal 11 Note, this is more strict then what is sometimes imposed (see [24] for instance), and requires a separation of scales between the AdS length and the string scale in the bulk. figure 4. This behavior manifests itself when the radius of the boson is taken to be very large, or very small, providing a long enough period to see the power law decay. For simplicity, let's focus on the case r 1. In this case the winding modes, m = 0 in (4.10), are parametrically heavy. The contribution of the light modes is then given by, exhibiting the same power law decay as the vacuum. Here we can view the r → 0 limit as producing a sparser light spectrum by decoupling the winding modes. If instead we take r ∼ 1, so that there is no separation between momentum and winding modes, there is no early time window for which the partition function decays. The arguments above seem to imply a decaying spectral form factor at arbitrarily late times, but we know that they must fail at some point in order for the lower bound (2.6) on the plateau height to be satisfied. In particular, the assumption that the density of characters is well approximated by the Cardy density becomes invalid at sufficiently late times. The left-hand side of (4.20) includes only the vacuum state. In the full theory the left-hand side includes other states, whose contribution becomes important at late times. In this work we assume that at late times the only important physical effects are the universal decay before the dip time, and the random matrix theory behavior of a ramp + plateau beyond it. This is equivalent to assuming that the density of characters ρ χ is well approximated by the Cardy density until the dip time. Rational times and hot saddles So far we focused on the discrete times t n = 2πn. The story at generic times is slightly more elaborate. We begin by considering the times t n+1/2 = 2π(n + 1/2), n ∈ Z, and the corresponding modular parameters τ n+1/2 andτ n+1/2 . There are now two modular transformations of the vacuum that vie for dominance at high temperatures: γ n and γ 2,2n+1 , where we define γ c,d (τ ) ≡ aτ +b cτ +d (where a, b are uniquely determined from c, d). Indeed, we have our previous choice, And we have the competing modular frame, Figure 5. Here, we show the upper half-plane tiled by fundamental domains of SL(2; Z). As we increase the temperature, which corresponds to lowering the red line, we cross more and more fundamental domains. At late times (large n) we compare the two contributions, For sufficiently high temperature, β < π √ 3 , the second contribution is larger and gives the dominant contribution, while for π √ 3 < β < 2π the first contribution dominates. More generally, for any rational time, t n/m = 2πn m , there exists an inverse temperature, β m,n , such that for β < β m,n , the vacuum in the modular frame γ m,n gives a bigger contribution than the vacuum in any other frame. We can understand this from the Γ ∞ \SL(2; Z) tiling of the upper half plane, see figure 5. 
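The frame competition just described can be checked numerically. Since Im γ(τ) = Im(τ)/|cτ + d|², and at leading order the vacuum image grows like an exponential of the vacuum weight times Im γ(τ), the dominant frame is simply the coprime pair (c, d) that minimizes |cτ + d|. The sketch below is a simplification that keeps only this leading criterion (it ignores the subleading character factors, and it reads the garbled threshold in the text as β = π/√3); it reproduces the statements above: the frame (c, d) = (1, n) dominates at t = 2πn, and at the half-integer time t = π the winner switches from the S frame (1, 0) to (2, 1) as β drops below π/√3.

```python
# Hedged sketch of the frame competition in the Gamma_infty \ SL(2,Z) tiling:
# at leading order the vacuum image in the frame gamma_{c,d} is controlled by
# Im gamma(tau) = Im(tau)/|c*tau + d|^2, so the dominant frame is the coprime
# pair (c, d) minimizing |c*tau + d|.
from math import gcd
import numpy as np

def dominant_frame(beta, t, cd_max=30):
    """Return the coprime (c, d) minimizing |c*tau + d| at tau = (i*beta - t)/(2*pi)."""
    tau = (1j * beta - t) / (2.0 * np.pi)
    best = None
    for c in range(0, cd_max + 1):
        for d in range(-cd_max, cd_max + 1):
            if (c, d) == (0, 0) or gcd(c, abs(d)) != 1:
                continue
            val = abs(c * tau + d)
            if best is None or val < best[0]:
                best = (val, c, d)
    return best[1], best[2]

# At t = 2*pi*n the frame (c, d) = (1, n) should win; at the half-integer time
# t = pi the winner should switch from (1, 0) to (2, 1) as beta drops below
# pi/sqrt(3), as stated in the text.
print(dominant_frame(beta=0.5, t=2 * np.pi * 3))          # expect (1, 3)
for beta in [2.5, np.pi / np.sqrt(3) + 0.05, np.pi / np.sqrt(3) - 0.05, 0.5]:
    print(round(beta, 3), dominant_frame(beta, t=np.pi))  # (1, 0) -> (2, 1)
```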
As we increase temperature, we decrease Im(τ ), and intersect more and more fundamental domains. Each such fundamental domain corresponds to a different modular image of the vacuum dominating. At a given temperature, we can refine our identification of the universal contribution to the partition function, Z (β, t n/m ) ≡ χ 0 (γ (τ n/m ))χ 0 (γ (τ n/m )) . (4.26) Here γ is the modular transformation that maximizes the vacuum character contribution at given temperature and time. 12 At high temperatures, (4.26) gives a complicated contribution to the partition function. See figure 6 for an example. At late times, it is easy to check that taking the decaying result, e 8π 2 k/β t 3 n , for the spectral form factor, and replacing t n by an arbitrary time t n/m , leads to a result that is always greater than or equal to |Z | 2 . 12 Explicitly, given τ n/m ,τ n/m it is defined by γ * ≡ argmax γm,n |χ0(γm,n(τ n/m ))χ0(γm,n(τ n/m ))| 2 . β= π 25 Figure 6. Here, in the top line, we display the behavior of our universal contribution, g (β, t) at various temperatures. On the bottom line, for comparison, we display the spectral form factor for a sample modular invariant function, ψ 2 (τ ), defined in section 5. As we increase temperature both are controlled by more and more saddles. Dip time estimate In this section we derive an upper bound on the time at which the spectral form factor of a generic chaotic CFT is expected to cross over to random matrix theory behavior. We call this the dip time. The derivation assumes that the universal contribution computed in previous sections correctly describes the late time behavior of the spectral form factor up to exponentially late times, right up to the dip time t d . The universal contribution, which we shall call the slope, is bounded from above by where s = 3 for the vacuum character in the γ n frame, and s = 1 for non-vacuum characters (where both h,h are non-zero). While the result (4.28) was derived for the discrete times t n = 2πn, as we saw in section 4.3 it provides an upper bound on the universal contribution and that will suffice for the purpose of deriving a bound. 13 The decaying contribution (4.28) cannot be the full answer for a theory with a discrete spectrum at arbitrarily late times, because it violates the bound (2.6). Going to late times in the spectral form factor is equivalent to probing small energy differences in the spectrum. We expect the properties of the spectrum at small energy differences (and therefore the behavior of the spectral form factor at very late times) to be goverened by random matrix theory [10]. As described in section 1.2, random matrix theory gives another universal contribution. While this contribution is expected to have large fluctuations, on average its behavior is relatively simple. Roughly speaking, it grows linearly in time until the plateau time t p , beyond which it levels off at its asymptotic value which we shall denote g p . In this section we estimate the dip time t d , which is the crossover time from the universal decay of (4.28) to the random matrix theory behavior. We find that the ratio 13 Notice that the universal contribution at non-integer times is exponentially smaller in k than (4.28). Therefore, in practice we expect the random matrix theory contribution (the ramp) to 'peak through' at non-integer times even before our estimate of the dip time. We thank Steve Shenker for pointing this out. 
-20 -JHEP08(2017)075 t p /t d is exponentially large in k, which implies that there is a long period during which we expect the spectral form factor to grow linearly (on average) in a generic theory. To get the late time behavior of the ramp and the plateau, recall that the thermodynamic partition function is given by the BTZ black hole partition function, Z(β) = e 8π 2 k/β . The plateau height g p is bounded below by Z(2β) (it can be pushed higher by degeneracies, which we ignore for now). The plateau time can be approximated by counting the available states at 2β, so it is given by 14 The ramp grows linearly in time, and should reach the plateau height at the plateau time. The spectral form factor on the ramp is then given by The dip time t d is defined by g slope (t d ) = g ramp (t d ), and is given by For both the vacuum and matter contributions it is parametrically smaller than the plateau time: (4.33) Fine spectral probe As we have seen discreteness of the spectrum in the original SL(2, Z) frame is a necessary and sufficient condition for the partition function not to decay at late times. However, modular invariance means that we should be able to present the partition function as a sum over states in any SL(2; Z) frame. In other frames, discreteness of the spectrum is not sufficient to guarantee the correct late time behavior, for instance a discrete set of states in the BTZ frame, may certainly decay. Thus, the late time behavior probes slightly different features of the spectrum when viewed in each frame. Of course, if we have a modular invariant spectrum these are all equivalent, but if one doesn't know a-priori that a given spectrum is modular invariant, 14 The factor of 2 comes from the two terms in the exponent e −β(En+Em) that appears in the sum over energy states. the late time behavior in other frames provides a detailed probe. To demonstrate this phenomenon, consider the time dependence depicted in figure 7, where we compare the exact partition function, to the behavior of an approximate partition function built out of a discrete spectrum with exponentially small modifications to the degeneracies. For long enough times, these two putative partition functions diverge despite the similarity in their spectra. In this way, the time dependence in different frames probes detailed aspects of the CFT spectrum. Information restoration in integrable theories So far we have discussed information loss in chaotic CFTs. In section 4 we have identified a decaying universal contribution to the spectral form factor, and commented on the expected late time behavior from random matrix theory. In integrable theories we can say significantly more about the time dependence of the spectral form factor. 15 Such theories are not chaotic and are not described by random matrix theories at small energy differences. Therefore, their spectral form factors do not exhibit a dip, ramp, and plateau at late times. Nevertheless, such theories do exhibit information loss at the level of individual Virasoro characters: each Virasoro character still decays to zero at late times. It is interesting to ask how information is restored in these simpler cases. In this section we will answer this question for chiral CFTs. The existence of chiral CFTs with large central charge that are dual to some form of semiclassical gravity is somewhat speculative [29][30][31][32][33][34][35][36][37]. 
Here we will work under the assumption that such theories do exist, and that they have a sensible bulk interpretation (though the calculation itself will be done purely in field theory). We will identify a set of modular transformations whose vacuum images are sufficient to restore information. In generic non-chiral theories, the same set of transformations is responsible for the universal late time decay discussed in section 4. In chiral theories, these transformations are enough to avoid the late time decay. JHEP08(2017)075 While we will focus on chiral theories, we note that much of what we say here also applies to holomorphic objects in general non-chiral theories, such as the elliptic genus which counts BPS states in theories with N = (1, 1) supersymmetry. We now turn to a brief review of the properties of chiral CFTs. In two spacetime dimensions, the vector representation of the Lorentz group is reducible into left-moving and right-moving representations. Chiral conformal field theories are theories of purely left-moving degrees of freedom in Lorentzian signature, or purely holomorphic fields in Euclidean signature. The symmetry algebra of these theories contains a single left-moving copy of the Virasoro algebra, and correspondingly a chiral CFT is labeled by a single central charge c. Operators are labeled by a single conformal dimension, h = ∆ = J, where J is the spin. The torus partition functions of chiral CFTs can be written in a similar fashion to a generic 2d CFT. We again will be focusing on the case of modular invariant theories, As above, we will focus on sparse theories with N h e 2πh , for which the thermal partition function undergoes a sharp phase transition in temperature. At high temperature the BTZ contribution dominates and is given by where τ = iβ 2π as before. 16 We again analytically continue β → β + it, with the modular parameter given by (5.5) 16 We are calling this the 'BTZ partition function' because it is dual to the contribution from the BTZ configuration in chiral gravity. See appendix A for details. JHEP08(2017)075 We consider the spectral form factor g(β, t) = |Z(β + it)| 2 . Just as in the non-chiral case, the BTZ contribution decays to zero at late times, We see that we have a phenomenon of information loss even in chiral theories. It is now easy to see how information is restored. The partition function is manifestly 2π-periodic in time as a result of modular invariance, Z(τ ) = Z(τ + 1). At time t n = 2πn, n ∈ Z, the partition function is dominated by the modular image χ 0 (γ n (τ n )) of the BTZ contribution. This image is simply equal to γ 0 (−1/τ 0 ) due to the periodicity. As advertised, the modular transformation at time t n is the same one that gives the universal late time decay discussed in section 4. Saddle point expansion Our next goal is to describe, in bulk language, the mechanism by which information is restored. The modular-invariant partition function includes contributions from SL(2, Z) images of the vacuum character. They are dual to a family of black holes in the bulk. In this section we will explain that the partition function can be written as a sum over these saddle point contributions. This description of the partition function is evocative of a bulk path integral. In the next section we will discuss how information is restored in this saddle point expansion, and what this may teach us about the bulk. As mentioned above, meromorphic modular invariant functions are entirely fixed by their poles and their constant term. 
For a chiral CFT, this means that the full partition function, is fixed by the light spectrum -those states with h ≤ k. Here the generating function for the light states is denoted by Z L . The way in which the spectrum of heavy states is fixed is relatively simple, and goes back to the work of Rademacher [38,39]. 17 We would like to complete Z L (τ ) into a fully modular invariant function. One way to do this is to sum over the modular group, SL(2; Z). One generator, τ → τ + 1 acts trivially on q, so we only actually need to sum over Γ ∞ \SL(2; Z). Here, the sum runs over the elements, JHEP08(2017)075 which can be parameterized by the pair (c, d) satisfying gcd(c, d) = 1. The sum is primed to indicate that there is a regularization needed. There is some freedom in how to regularize, but choices that preserve modular invariance can differ by at most an additive constant. 18 The sum takes on a particularly attractive meaning when thought of in the context of large k CFTs dual to large radius gravity. It is tempting to identify this sum with the sum over bulk geometries. In this description the first and second terms correspond to the vacuum and BTZ black hole respectively, and the remaining terms correspond to the subleading geometries M c,d and their appropriate generalization for gravitational theories with matter. As we review in appendix A, this can be made precise in the context of chiral gravity. Late time behavior in saddle point expansion Equipped with our expression of the partition function as an infinite sum over saddles, (5.8), we can gain more insight into how the thermal partition function avoids late time decay. Initially, at high temperatures, the partition function is well approximated by the BTZ contribution. This contribution, however, quickly begins to underestimate the partition function. Focusing on times t ≈ t n = 2πn and taking n > 0, the dominance of the BTZ saddle is eclipsed by the appropriate saddle, labeled by (c, d) = (1, n). For each integer n the given saddle goes from subdominant to dominant and then exponentially decays again. Only by summing over this infinite class of saddles do we get a partition function that exhibits the appropriate, non-decaying behavior, see figure 8. For non-integer time, we again have the spaghetti like behavior of section (4.3). For each time t = n/m there is a phase transition such that for all β < β m,n we are dominated by the (m, n) saddle. In this way, reproducing the correct late time behavior at all temperatures depends crucially on including the appropriate set of saddles. 18 One simple way to regularize is to promote Z(τ ) from a modular invariant function to a modular form of weight w, Zw(γ(τ )) = (cτ + d) w Zw(τ ) = γ∈Γ∞\SL(2;Z) Z L (γ(τ )) (cτ +d) w . The partition function Z(τ ) is then defined by analytic continuation. g 0 (β,t) g(β,t) Figure 8. The spectral form factor g(β, t) (dashed-dotted), and the contribution of six individual saddles g n (β, t), n = 0, . . . , 5 (solid lines). Each individual saddle g n is dominant around t = t n and exponentially sub-dominant at other times. Discretizing the spectrum Throughout this paper we have emphasized the connection between the late time behavior of the spectral form factor and the discrete nature of the spectrum. In this section we review how the naively smooth spectral density is rendered discrete by the SL(2; Z) saddle point expansion. 
Including a large but finite number of saddles in the expansion yields a smooth density of states with sharp peaks around the locations of the underlying states, while including all saddles leads to a fully discrete density of states (cf. eq. (5.22)). To be concrete, we will study weight w modular forms ψ n;w , with polar part consisting of a single pole of weight n. ψ n;w (τ ) ≡ 1 q n + O(q) . (5.13) They have the following property under modular transformation. ψ n;w (γ(τ )) = (cτ + d) w ψ n;w (τ ) . (5.14) To make contact with the previous discussion, the functions ψ n;0 can be used as a basis for constructing a partition function. Strictly speaking, the manipulations we present are only valid for w > 1, but we may think of introducing w as a regulator. 19 The final results can be analytically continued to w = 0. They match careful computations performed in the w < 1 regime with a subtraction based scheme [38,39]. For w = 0 the only holomorphic modular function is a constant, and so any scheme that preserves modular invariance is guaranteed to reproduce the same modular function, up to a constant. This constant may be important for understanding whether theories of pure 3d gravity exist [33,36], but will not effect our discussion here. 19 A special case of this is the differential regularization advocated in [20,41]. JHEP08(2017)075 Given any T invariant function, f (τ + 1) = f (τ ), we may write, F w (τ ) = γ∈Γ∞\SL(2;Z) 1 (cτ + d) w f (γ(τ )) . (5.15) To see how F w (τ ) transforms, we apply an element of SL(2; Z). The one subtlety in the above argument is working with the cosets, Γ ∞ \SL(2; Z) rather then the full group, but as f is T invariant, and {c, d} do not change when acting with T on the left, we are free to work in the coset space. We are interested in the special case, In terms of a real inverse temperature, we can write ψ n;w (β) = e βn + ∞ 0 d∆ ρ (n;w) (∆)e −β∆ , (5.18) and perform an inverse Laplace transform to read off the density of states. The term involving the density of states can be written explicitly as, It is useful to organize the sum over Γ ∞ \SL(2; Z) as a double sum first over Γ ∞ \SL(2; Z)/Γ ∞ , and a sum over right action by T . Then, by using the identity, The delta function in the last line is exactly the discreetness of the spectrum we were after. Notice that including a finite number of saddles, by placing a cutoff on | |, leads to a smooth density of states that becomes progressively sharper around the discrete states as we increase the cutoff. Put differently, by including an increasing number of saddles in the expansion we can witness the discreteness of the spectrum emerge out of the smooth density. Discussion In this paper we have examined the time dependence of the partition function in twodimensional conformal field theories. We identified a universal contribution which decays slowly in time. By apealing to the late time behavior of random matrix theory we were able to conjecture a dip time, where we expect the crossover to RMT to set in. In integrable models, in particular chiral conformal field theories, we were able to identify an infinite set of saddle point contributions to the partition function, corresponding to black holes in the bulk, which serve to restore information for all time. All of these discussions, however, leave open many avenues of future inquiry. One important question is when do the correction to the Cardy formula (4.20) describing the density of characters become important enough to affect the late time behavior. 
In theories with sufficiently sparse spectra, we expect such corrections to be responsible for the late time transition to random matrix theory behavior. They may also affect the universal decay worked out in section 4 before the dip time. A possible starting point for investigating these questions is to include non-vacuum states on the left-hand side of (4.20). An important assumption we use was sparsity of the light spectrum in gravitational theories. An obvious question is how the notion of sparsity imposed here connects to other such criteria one may wish to impose for a conformal field theory dual to gravity. For instance those coming from requiring a Hawking-Page phase transition, appropriate behavior of Rényi entropies, saturation of Lyapunov bounds, or from demanding a bulk point singularity [24,[42][43][44][45][46][47][48][49][50]. We have mentioned that the discussion of information loss in integrable theories can in principle be applied to counts of BPS states in generic supersymmetric theories. It would be interesting to study this in detail. It would be especially interesting if one can leverage information about how the BPS spectrum solves its information paradox to make statements about the full supersymmetric theory. As we investigate the analytically continued partition function at higher and higher temperature, it's time dependance becomes very featured, see figure 6. For our universal -28 -JHEP08(2017)075 contribution, as well as for chiral CFTs, there are spikes that occur at regular, rational times. An ambitious question is whether there is an experimental observable (perhaps considering a two point function rather than a partition function) which might be able to detect these rational spikes for experimentally realizable 1+1d systems.
14,774.2
2017-08-01T00:00:00.000
[ "Physics" ]
Migration check tool: automatic plan verification following treatment management systems upgrade and database migration Software upgrades of the treatment management system (TMS) sometimes require that all data be migrated from one version of the database to another. It is necessary to verify that the data are correctly migrated to assure patient safety. It is impossible to verify by hand the thousands of parameters that go into each patient's radiation therapy treatment plan. Repeating pretreatment QA is costly, time‐consuming, and may be inadequate in detecting errors that are introduced during the migration. In this work we investigate the use of an automatic Plan Comparison Tool to verify that plan data have been correctly migrated to a new version of a TMS database from an older version. We developed software to query and compare treatment plans between different versions of the TMS. The same plan in the two TMS systems are translated into an XML schema. A plan comparison module takes the two XML schemas as input and reports any differences in parameters between the two versions of the same plan by applying a schema mapping. A console application is used to query the database to obtain a list of active or in‐preparation plans to be tested. It then runs in batch mode to compare all the plans, and a report of success or failure of the comparison is saved for review. This software tool was used as part of software upgrade and database migration from Varian's Aria 8.9 to Aria 11 TMS. Parameters were compared for 358 treatment plans in 89 minutes. This direct comparison of all plan parameters in the migrated TMS against the previous TMS surpasses current QA methods that relied on repeating pretreatment QA measurements or labor‐intensive and fallible hand comparisons. PACS numbers: 87.55.T, 87.55.Qr Received 1 January, 2013; accepted 1 July, 2013 Software upgrades of the treatment management system (TMS) sometimes require that all data be migrated from one version of the database to another. It is necessary to verify that the data are correctly migrated to assure patient safety. It is impossible to verify by hand the thousands of parameters that go into each patient's radiation therapy treatment plan. Repeating pretreatment QA is costly, time-consuming, and may be inadequate in detecting errors that are introduced during the migration. In this work we investigate the use of an automatic Plan Comparison Tool to verify that plan data have been correctly migrated to a new version of a TMS database from an older version. We developed software to query and compare treatment plans between different versions of the TMS. The same plan in the two TMS systems are translated into an XML schema. A plan comparison module takes the two XML schemas as input and reports any differences in parameters between the two versions of the same plan by applying a schema mapping. A console application is used to query the database to obtain a list of active or in-preparation plans to be tested. It then runs in batch mode to compare all the plans, and a report of success or failure of the comparison is saved for review. This software tool was used as part of software upgrade and database migration from Varian's Aria 8.9 to Aria 11 TMS. Parameters were compared for 358 treatment plans in 89 minutes. 
This direct comparison of all plan parameters in the migrated TMS against the previous TMS surpasses current QA methods that relied on repeating pretreatment QA measurements or labor-intensive and fallible hand comparisons. previously treated and may require treatment in the future. Maintaining the integrity of the plan information is of the utmost importance to assure accurate and safe treatment after the migration. If one were doing a failure mode and effects analysis of data transfer, then migrating from one database to another would be consider a high-risk situation with low detectability. (3,4) In this work we describe a software tool for checking the parameters of a migrated treatment plan against the same plan before migration. The purpose of the tool is to replace a fallible and slow comparison done by users with a comprehensive direct parameter check done by welldesigned software. In a previous upgrade in our clinic from Aria 7.4 to Aria 8.9, we chose to repeat the patient-specific pretreatment quality assurance measurements for patients under treatment at the time of upgrade to verify the integrity of the treatment plan data. This was approximately a day's worth of effort and included verification of other patient parameters, such as the number of treatments and so on. The tool developed here was used in conjunction with a TMS upgrade and migration from Varian Medical Systems Aria 8.9 to Aria 11 and for a minor upgrade (Aria 11.0.5 to 11 MR1). II. MAtErIALS And MEtHodS Migration Check Tool (MCT) was developed by our in-house software development team using Microsoft.NET technology. The primary purpose of the tool is to compare all plans newly imported or under treatment in the database to the same plan in the software upgraded database that contains the migrated data. We refer to the "reference" as the current clinically used database that is to be copied and migrated as part of a software upgrade. The "test" database is the database resulting from the software upgrade and database migration that will go to clinical use after vendor acceptance testing and quality assurance tests have been passed. A. System design MCT is a software system comprised of three University of Michigan Radiation Oncology (UMRO) web services and a console application. The web services are part of a larger software architecture and interact with Varian Medical Systems Aria Oncology information system (Varian Medical Systems, Palo Alto, CA) using structured query language (SQL) queries and stored procedure calls. The web services in our system are titled "UMRO Aria WS 8.9", "UMRO Aria WS 11", and "UMRO Plan Comparison". "UMRO Aria WS" are simple access object protocols (SOAP) (5) style web services which provide an interface that exposes key objects stored in the Aria database in XML format. MCT uses each UMRO Aria WS service to retrieve a complete radiotherapy plan object in XML format from the reference database. UMRO Aria WS 11 is exactly the same as the Aria 8.9 version of the service, except that it is adapted to connect to the Aria 11 database. The UMRO Plan Comparison Service is also a SOAP-style web service, which directly compares two radiotherapy plan objects returning the result of that comparison as an XML document. The use of the SOAP interfaces provides for a platform-neutral service to provide machine readable information from the Aria database to other applications in need of that information. 
The UMRO Plan Comparison Console is a Windows Console Application that coordinates use of the three services to compare all plans that are imported or under active treatment in the reference database with the migrated version of each plan found in the test database. Figure 1 is a schematic diagram of the software services and Aria databases. The end user interacts only with the UMRO Plan Comparison Console. B. console application The user runs the migration check from the UMRO Plan Comparison Console which provides two distinct features. The first is a feature that searches the reference database to generate a list of active plans to be checked after migration. Active plans are ones that are in an active course and have status of "Unapproved", "Planning Approved" or "Treatment Approved". This tool is designed to be run before the TMS database migration to create a list of plans currently in the database that will migrate and likely be used later. This feature allows the users to know ahead of time which plans are targeted for QA during the migration process. The second feature of the Plan Comparison Console is the batch mode comparison feature. The Plan Comparison Tool takes as input each plan from the two different databases for comparison and runs a comparison algorithm. The active plan list includes the patient ID, course ID, and plan ID, along with other data in XML format. A user can create their own list to include a plan missing from the active plan list or to include plans with other statuses such as "Completed". c. Aria web services The Aria web service is part of a larger software infrastructure developed at The University of Michigan. The web service translates information in the Aria database into a common XML format for use by other software applications. This XML format with schema mapping is needed because the TMS database schema may be changed in newer versions such that tables cannot be directly compared to one another. For the Aria 8.9 to Aria 11 upgrade, not only did many data tables change, the database server platform also changed from Sybase to Microsoft SQL Server. Use of the common XML format simplifies the comparison algorithm by abstracting the representation of the plan in the database to a common format. Table 1 lists the major radiotherapy parameters that are represented in the XML schema. d. the Plan comparison Service The Plan Comparison Service accepts as input two radiotherapy plan XML documents, one retrieved from UMRO Aria WS 8.9 (reference), and the other from the UMRO Aria WS 11 service (test) by the Plan Comparison Console. The plan from the reference database is considered to be correct, and is the standard to which the migrated plan (test) is compared. This service uses an XML style sheet to make the comparison between the two parsed XML plans using the tolerance levels defined in Table 1. The service generates a report in both XML (for software agents) and PDF (for humans) for each comparison, detailing the parameters that differed between the reference and test plan. E. the Plan comparison tool The Plan Comparison Tool accepts as input the list of plans that are to be compared between the reference and test systems. Figure 2 illustrates the comparison process. The Plan Comparison Tool loops through the list, first getting the plan XML from the Aria WS 8.9, then getting the plan XML from the Aria WS 11, and finally passing both XML documents to the Plan Comparison Service. 
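The control flow just described can be illustrated with a short sketch. The real MCT is a .NET console application talking to SOAP web services, so the element names, tolerance values, and XML structure below are purely hypothetical stand-ins for the common XML schema and the Table 1 tolerances; the sketch only mirrors the logic of comparing the reference and test versions of a plan parameter by parameter against per-parameter tolerances.

```python
# Illustrative sketch only: element names and tolerances are hypothetical.
import xml.etree.ElementTree as ET

TOLERANCES = {"MU": 0.1, "GantryAngle": 0.1, "CollimatorAngle": 0.1, "JawX1": 0.05}

reference_xml = """<Plan id="HN1"><Field id="F1"><MU>120.0</MU>
  <GantryAngle>180.0</GantryAngle><JawX1>-5.00</JawX1></Field></Plan>"""
test_xml = """<Plan id="HN1"><Field id="F1"><MU>120.0</MU>
  <GantryAngle>180.0</GantryAngle><JawX1>-5.20</JawX1></Field></Plan>"""

def compare_plans(ref_xml, test_xml):
    """Return a list of (path, reference value, test value) differences."""
    diffs = []
    ref_fields = {f.get("id"): f for f in ET.fromstring(ref_xml).findall("Field")}
    test_fields = {f.get("id"): f for f in ET.fromstring(test_xml).findall("Field")}
    for fid, ref_field in ref_fields.items():
        test_field = test_fields.get(fid)
        if test_field is None:
            diffs.append((f"Field {fid}", "present", "missing"))
            continue
        for ref_param in ref_field:
            name = ref_param.tag
            test_param = test_field.find(name)
            ref_val = float(ref_param.text)
            test_val = float(test_param.text) if test_param is not None else None
            tol = TOLERANCES.get(name, 0.0)
            if test_val is None or abs(ref_val - test_val) > tol:
                diffs.append((f"Field {fid}/{name}", ref_val, test_val))
    return diffs

# Batch mode over the active plan list would wrap this in a loop and write a
# pass/fail summary per plan; here we compare a single plan.
for path, ref_val, test_val in compare_plans(reference_xml, test_xml):
    print(f"MISMATCH {path}: reference={ref_val}, test={test_val}")
```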
The Plan Comparison Service compares the two plans, field by field, control point by control point, parameter by parameter (see Table 1), and generates a report that details whether the two plans are identical or, if not, how they differ. A summary report is generated listing the success or failure of each plan comparison from the active plan list. For each set of plan comparisons, an individual report is generated indicating success or failure. A risk and hazards analysis of the system was performed to determine the possible deleterious impacts that MCT could have if it is used as part of a database migration and upgrade. A major hazard identified was that plan data would be corrupted but MCT would miss the difference between the test and reference plan. The other major hazard identified was that the wrong databases would be selected for comparison. This could happen in situations where multiple test and development systems exist, such as in our department. To mitigate this hazard, MCT reports which two databases are used in the comparison and that information is verified by physicists and IT staff. In addition, newly imported plans that only exist in the reference database before the migration are noted in the results to verify that the correct databases were used. MCT was specifically designed to check external beam plans and does not check brachytherapy planning information. By design, the Aria web services do not read or translate information related to images and structures in the Aria database. As such, reference DRRs and CT scans and the structures used for image guidance are not verified by MCT. These data need to be verified by other means. MCT checks the treatment plan geometry and not the dose tracking reference points used to enforce daily, session, and overall dose limits. F. testing and implementation The Plan Comparison Tool was tested as part of clinical acceptance testing. The software was tested by modifying plans in the reference database with key parameters changed. Plan Comparison Tool was used to compare modified and unmodified plans. The following modifications were made to plans: energy, mode, couch position and angle, deleted IMRT MLC sequence, MLC leaf position for static fields submillimeter changes to MLC leaf position in an IMRT control point, monitor units, jaw positions, gantry and collimator angles, and patient orientation. The tool makes no assumptions with respect to which two fields between two plans should be matched. Instead, it used a measure of agreement calculated by comparing parameters like gantry, couch and collimator angle energy, and treatment mode and treatment type (e.g., static vs. IMRT). The comparison tool has two ways to note a difference between fields. When two fields are matched between two plans, it reports any differences found. If a field is modified a great deal, it will not match any other field and the report will note that no corresponding field was found. Either result is considered a successful detection of a modified field. MCT was tested using clinical Aria 8.9 of VMS Aria against a migrated and upgraded version 11.0.5 of VMS Aria. Tests were run after-hours, so as not to impact clinical operations. MCT was implemented during the software upgrade and database migration. Special treatment plans were entered into the database before the upgrade to fully test the available combinations of beam energies and applicators. 
We labeled these as "one of everything" plans, where fields were included to test every possible combination of X-ray energy and wedge, electron energy and cone, and various add-on accessories. The clinical database also includes a large number of test and QA cases for static and IMRT commissioning and other QA applications. III. rESuLtS & dIScuSSIon MCT was used in conjunction with two separate software upgrades at The University of Michigan Radiation Oncology Department. The first upgrade was from Varian Medical Systems Aria 8.9 to Aria 11.0.5, which not only involved a database migration but a change in server software from Sybase to Microsoft SQL Server. MCT ran against the clinical database Aria 8.9 at the end of the treatment day to generate an active plan list before the upgrade and migration. A total of 358 plans were listed as active and were tested by MCT after the upgrade. Of the 358 active plans, 212 were test or QA plans and 146 were clinical radiotherapy treatment plans. A copy of the clinical Aria 8.9 database was set up on an alternate server and activated while the Aria 11.0.5 upgrade and migration was in progress on the clinical server. Once completed, acceptance testing was done and the software was released to our department by the vendor. At that time, a new active plan list was made by running MCT against the Aria 11 database. No differences in the plans listed were found. MCT was then launched in batch mode and each plan in the migrated Aria 11 database was compared to the previous Aria 8.9 database. MCT checked 358 plans in 89 minutes. Reports were generated for each plan tested and no differences were found in any of the plan data. As implemented, the MCT requires two versions of the database to be running concurrently. This may present an obstacle when an additional server is not available to run the previous version of the TMS software and database to compare the migrated system. The use of multiple servers is helpful because it permits a timely investigation of any differences. Additional database migration testing was done by loading the "one of everything" plans in clinical mode on each treatment unit to verify that the plan parameters were correct and could be loaded by the treatment control software at the treatment unit without errors or warnings. Selected IMRT QA plans that were migrated and tested using MCT were rerun to verify proper IMRT delivery. Each plan that uses CBCT image guidance was tested to verify proper functioning, because MCT does not check image or structure data. The second use of MCT was for a minor upgrade from Aria 11.0.5 to Aria 11 MR1. Once again, a copy of the clinical database was set up and accessible by SQL queries upon which the web service application relies. In that instance, MCT ran on a virtual machine environment and performance was degraded to where it took nearly 5 hours. An example of the output of MCT is show in Figs. 3, 4, and 5. Figure 3 shows the summary page of all results with links to individual plan comparison reports. In this example, QA cases were modified between the clinical database and test database. The first result, "none / C1 / 1.1-1 C2 SBRT", demonstrates an instance where a patient doesn't exist in the migrated database. The second comparison that shows a failure, "none / C1 / 1.1-1 HN", is an example where the plan was artificially modified in Aria to produce differences between the plans. The beginning of this plan comparison report is shown in Fig. 4. The differences are noted in Table 2. 
MCT reports all differences found and, as a result, the entire report is too long to show here. Figure 5 shows an individual report for a plan that passed the comparison. Migration Check Tool joins other previous work that used automated software checks of plan parameters at different phases of the radiotherapy planning and delivery process. (6,7) These computer-based checks have proven useful in finding errors in process or parameters, and lead to an improvement in patient care. The use of such tools supports patient safety directly and saves invaluable personnel time during the migration process and acceptance. Many critical parameters that exist in any radiation oncology database are not checked by MCT because it is focused on radiation therapy plan parameters. Items not checked by MCT include, for example, universal IDs, unique serial numbers, patient identifiers, and image data. One critical aspect of treatment not checked by MCT is the dose accumulated to reference points. Reference point data were checked manually by printing a paper report before the migration and then verifying the correct dose after the migration. In our clinic, a physicist verifies the total dose delivered to date for each plan prior to the final migration of the database and then confirms the information in the upgraded system prior to using it for patient treatments. Also, because MCT is only focused on active plans, errors created in plans that have been completed would go undetected. Checking all this type of data would have to be part of a much more comprehensive database integrity check. IV. concLuSIonS Software upgrades require careful checking of database configuration information stored in the database to assure patient safety and guarantee that the treatment delivery data are correct. The impact of incorrect plan data caused by a faulty migration could be devastating for a patient. With complex IMRT and dynamic arc treatments, it is not possible to manually check all the plan data after a migration. Repeating IMRT QA measurements would be labor-intensive and even if the repeated measurement passes QA that may not reveal corrupted data used to control treatments. We developed an automated software check of all treatment plans either under treatment or being prepared for treatment that have been migrated from a clinical version of the TMS to an updated and migrated version. Using a software-based check of all plan parameters, we were able to make direct comparisons of all parameters used to control complex treatments. The software tool allowed for more parameters to be checked in a more detailed level than users Fig. 5. Example report of a plan that had no differences between the pre-and postmigrated and updated database. Table 2. List of artificial plan differences. The example plan used was: "none / C1 / 1.1-1 HN".
4,653.6
2013-11-01T00:00:00.000
[ "Engineering", "Medicine" ]
A Novel Recovery Method of Soft X-ray Spectrum Unfolding Based on Compressive Sensing In the experiment of inertial confinement fusion, soft X-ray spectrum unfolding can provide important information to optimize the design of the laser and target. As the laser beams increase, there are limited locations for installing detection channels to obtain measurements, and the soft X-ray spectrum can be difficult to recover. In this paper, a novel recovery method of soft X-ray spectrum unfolding based on compressive sensing is proposed, in which (1) the spectrum recovery is formulated as a problem of accurate signal recovery from very few measurements (i.e., compressive sensing), and (2) the proper basis atoms are selected adaptively over a Legendre orthogonal basis dictionary with a large size and Lasso regression in the sense of ℓ1 norm, which enables the spectrum to be accurately recovered with little measured data from the limited detection channels. Finally, the presented approach is validated with experimental data. The results show that it can still achieve comparable accuracy from only 8 spectrometer detection channels as it has previously done from 14 detection channels. This means that the presented approach is capable of recovering spectrum from the data of limited detection channels, and it can be used to save more space for other detectors. Introduction Investigation of laser-driven inertial confinement fusion (ICF) is of great significance in the search for the ideal energy resource, as well as the application of fundamental scientific research [1]. ICF is a process in which nuclear fusion reactions are initiated by heating and compressing a fuel capsule containing a mixture of Deuterium and Tritium [2]. The purpose of laser fusion diagnostics is to reveal the state and behavior of the target plasma by measuring the characteristics of the plasma radiation and fusion reaction products, and then give insight into the absorption mechanism and regular characteristics of the laser energy. Figure 1 is the ICF experimental equipment Shenguang II. In the ICF experiment, the laser plasma emits strong X-ray radiation whose energy spectrum is mainly concentrated in the soft X-ray energy region, and its photon energy ranges from tens of electron volts to thousands of electron volts. The spectrometer will detect the power spectrum and then transmit the detected signals to the computer, and the spectrum information of the target can be obtained [3]. The spectral information can be used to research the implosion process with the plasma. Thus, it is crucial to investigate the soft X-ray radiation spectrum in the ICF diagnosed experiment. The solution of the unfolding problem is an ever-present issue in X-ray spectrometry. To recover the original spectrum it is necessary to use the detector response function, by solving the so called inverse problem [4]. There are many different methods that can be used to solve this problem, and each different approach leads to a different unfolding method and a different approximate solution [5], but in general the strategy is familiar: Search for a solution that is close to a reasonable estimate of the spectrum which could give a good data fit, without over-fitting or under-fitting. The maximum entropy unfolding technique [4] solves the inverse problem by imposing a set of physical constraints artificially. 
The stochastic methods [5], such as the Monte Carlo methods [6], Genetic algorithms [5], and Neural networks [7], are used to derive the solution spectrum, and are successful in some specific applications. The weighted method, which modifies the portion of each spectrometer response, was introduced by Fehl [8] to recover the radiation flux. However, the combined flat response is not perfectly smooth, and errors will be introduced into the recovered radiation flux. The basis function method uses a linear combination of basis functions [9] to express the spectrum. There are mainly two techniques to this method. One technique is selecting a reasonable basis function to reconstruct the spectrum. The response function [10], Piece-wise B-spline [10,11], and Gaussian Bumps, among others [12][13][14], are shapes of the basis function frequently used in spectrum unfolding experiments. By using these basis functions, basis atoms must be selected by certain rules in advance. Thus, these selected basis atoms might not be sufficient and are fixed, and these basis functions may have difficulty representing the spectrum signal. The other technique is choosing an appropriate calculation method to determine the coefficients of the basis function. The following calculation methods are commonly used at present: Iteration based on maximum entropy [11,15,16], the Bayesian theorem [16], singular value decomposition (SVD) [10], and Least Squares (LS) [10], among others. Since these coefficients are the solution in the sense of the ℓ2 norm, calculated over a fixed basis dictionary, the solution may be under-fitting with some measured data, and these methods cannot recover the spectrum precisely. These methods may need more measurement data to achieve good recovery performance for a spectrum. Nevertheless, as the number of laser beams increases, it is too difficult to obtain more detection channel data with the soft X-ray spectrometer, which yields some difficulties in a practical ICF facility. Since there is a lot of other physical information that needs to be measured to research the soft X-ray, including the temperature and radiation, the possible installation positions and the opening of the center target available for spectrometers are limited, and their orientations also lead them to interfere with each other. It is hard to add detection channels to achieve good performance in recovering the spectrum. Thus, it is necessary to find a new recovery method to unfold a spectrum with finite detection channels. Compressive sensing (CS), proposed by Donoho and Candès [17], is a new method to reconstruct signals from significantly fewer sampling points than is required by traditional methods. In recent years, compressive sensing has attracted considerable attention in areas of medical imaging (MI) [18], Analog-digital Conversion [19], Computational Biology [20], Computer Graphics [21], and other aspects. Owing to the significance of compressive sensing, CS can be applied in this paper to solve the problem of spectrum unfolding with limited detection channel data, and achieve the following: (1) The spectrum recovery is formulated as a problem of accurate signal recovery from a few measurements (i.e., compressive sensing), which applies the measurement matrix to convert the spectrum signal into a voltage signal, and enables the signal to be reconstructed with a small amount of measured data. (2) The proper basis atoms are selected adaptively over the Legendre orthogonal basis dictionary with a large size and Lasso regression in the sense of the ℓ1 norm, which enables the spectrum to be recovered with high accuracy from the small amount of measured data of the limited detection channels.
(3) By employing this method, since the soft X-ray spectrometer may recover the spectrum with limited detection channel data, it provides the possibility of saving space for other detectors. The rest of this paper is organized as follows. In the next section, we introduce the soft X-ray spectrometer and the previous principle of the energy spectrum recovery with multi-channels, followed by a brief review of compressive sensing theory. Section 3 gives the entire formulation of this novel method of spectrum unfolding based on compressive sensing. Then in Section 4 we discuss the numerical experiments of spectrum recovery, and the results which support our viewpoints. Finally, the paper comes to an end with a summary of some significant conclusions. Soft X-ray Spectrometer and Compressive Sensing In this section, we first present the soft X-ray spectrometer in the ICF experiment, including its structure, operating principle, and drawbacks in the process of spectrum unfolding. Then we demonstrate compressive sensing theory through discussion of sparse representation, the measurement matrix, and sparse coefficients reconstruction. Soft X-ray Spectrometer In indirect-drive inertial confinement fusion (ICF) experiments, high power laser beams irradiate the high-Z inner wall of the hohlraum and the energy is converted into soft X-ray radiation, which is used to drive the capsule located inside the hohlraum to implode, or to irradiate a package of materials to study their properties [9,22,23]. In order to investigate the physical process of ICF, it is important to measure the radiation spectrum distribution of the soft X-ray. Multi-channel spectrometers composed of filtered X-ray diodes (XRDs) are routinely used in ICF experiments to measure the radiation flux and recover the spectrum from the laser produced soft X-ray radiation source [9,24]. This kind of spectrometer includes the Dante on Omega and NIF (National Ignition Facility) in the U.S (Rochester, NY, USA). Ref. [13,24,25], the DMX in France [26], and the soft X-ray spectrometer used on Shenguang laser facilities in Mianyang, China [27]. The soft X-ray spectrometer works using the filter method shown in Figure 2, and it's structure contains the following parts: Collimator, filter, mirror, XRD, cable, attenuator, and oscillograph [10]. The filtering method is composed of a windowless soft X-ray diode as a detector, and it can be regarded as a soft X-ray detection system. Firstly, the energy spectrum from the soft X-ray is divided into several energy channels with different combinations of filters and mirrors [27], in which the thickness of the filters can influence the soft X-ray transparency, and the mirrors have characteristics that can cut off the high energy tails. Then, the detecting response may be obtained to measure the spectral signal. Depending on these detecting responses, the XRD of the high voltage power supplying system can transmit the spectral signal to the digital oscilloscope through the microwave cable, and the data can be collected. Finally, voltage signals may be displayed on a high-speed oscilloscope. Taking a 14 channel spectrometer as an example, the detecting response is shown in Figure 3. In Figure 3, normalized channel response functions of a 14-channel spectrometer are depicted. This spectrometer covers photon energy from 50 eV to 6000 eV. Different channels from the filtered X-ray diode array use filters and mirrors made of different materials to realize roughly band-pass measurements [9]. 
The signal of the channels from the filtered XRD arrays is determined by the following integral:

D_k = \int S(E) \, M_k(E) \, dE, \quad k = 1, 2, \ldots, m, \qquad (1)

where m is the number of detecting channels, E is the photon energy, S(E) is the soft X-ray spectrum, M_k(E) is the response function of the kth channel, and D_k is the voltage signal from the kth channel recorded by a high-speed oscilloscope. Equation (1) can usually be deemed a Fredholm integral equation of the first kind, and spectrum recovery can be obtained by solving this inverse problem. The recovery method [10] based on a basis function was introduced to approximate the original spectrum with a linear combination of basis functions B_j(E), j = 1, 2, ..., N, such as the Piece-wise B-spline functions, where S_j, j = 1, 2, ..., N, are the coefficients, N is the number of basis atoms, and errors are generated. It can be described as

S(E) \approx \sum_{j=1}^{N} S_j B_j(E). \qquad (2)

The purpose of the diagnosis is to recover the radiation spectrum S(E) from the known response functions M_k(E) and voltage signals D_k. Combining Equations (1) and (2), the voltage signal D_k is written as

D_k \approx \sum_{j=1}^{N} S_j \int B_j(E) \, M_k(E) \, dE.

The coefficient vector S_j can be calculated based on the Least Squares algorithm.
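As a concrete illustration of Equations (1) and (2), the following is a hedged numerical sketch of this classical basis-function unfolding with synthetic data; the channel responses, test spectrum, and Gaussian basis atoms are invented for illustration and are not the real 14-channel responses of Figure 3.

```python
# Hedged sketch of classical least-squares unfolding: synthetic responses M_k(E),
# a synthetic "true" spectrum S(E), Gaussian basis atoms B_j(E), and a
# least-squares solve for the coefficients S_j.
import numpy as np

E = np.linspace(50.0, 6000.0, 2000)      # photon energy grid (eV)
dE = E[1] - E[0]
m, N = 14, 14                            # channels, basis atoms

# Synthetic band-pass channel responses M_k(E) (Gaussian bumps for illustration).
centers = np.linspace(150.0, 5500.0, m)
M = np.exp(-((E[None, :] - centers[:, None]) / 300.0) ** 2)

# Synthetic "true" spectrum and the channel signals D_k from Eq. (1).
S_true = E * np.exp(-E / 800.0)
D = (M * S_true[None, :]).sum(axis=1) * dE

# Basis atoms B_j(E); the paper also uses Piece-wise B-splines.
B = np.exp(-((E[None, :] - np.linspace(100.0, 5800.0, N)[:, None]) / 350.0) ** 2)

# G[k, j] = integral of B_j(E) * M_k(E) dE, then least-squares solve for S_j.
G = (M[:, None, :] * B[None, :, :]).sum(axis=2) * dE
S_coeff, *_ = np.linalg.lstsq(G, D, rcond=None)
S_rec = S_coeff @ B                      # recovered spectrum via Eq. (2)
rel_err = np.linalg.norm(S_rec - S_true) / np.linalg.norm(S_true)
print(f"relative L2 error with {m} channels: {rel_err:.3f}")
```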
The spectrum S(E) can be determined through Equation (2). In this method, the basis atoms are chosen artificially from a fixed basis family, which offers only a limited number of atoms for recovering the spectrum. More measurement data would be needed to calculate the coefficient vector; otherwise there may be under-fitting in the process of spectrum recovery. However, the number of detection channels is constrained by the spectrometer structure. Much physical information must be measured to study the soft X-ray, including the temperature and radiation, while the installation positions and the openings around the target available for detection channels are limited, and their orientations also interfere with each other. This makes it hard to add detection channels for more measurement data. To illustrate this, suppose that, according to the response data from the 14 detection channels shown in Figure 3, the artificially selected basis atoms are fixed and the number of basis atoms is 14. With this method, the coefficients are calculated over a fixed basis dictionary and the solutions are in the sense of the ℓ2 norm, which may not be computed accurately with limited measured data. The recovered spectrum can thus easily be under-fitted. This motivates a method that can reconstruct signals from significantly fewer sampling points than traditional methods require, which is exactly what compressive sensing offers. We present a novel spectrum unfolding method based on compressive sensing, which may enable basis atoms to be selected adaptively over a large basis dictionary with high-order basis atoms, so that the spectrometer may achieve high-accuracy spectral information from limited detection channels. Compressive Sensing Method Compressive sensing, also known as compressive sampling, is a novel sensing paradigm that is widely used in data acquisition. CS theory asserts that one can recover certain signals and images from far fewer samples or measurements than traditional methods use [28]. To be more specific, the idea of compressive sensing is to reconstruct a signal from a few samples by exploiting its sparse characteristics over some representation basis, such as the Fourier basis, discrete cosine basis, or discrete wavelet basis, among others. Compressive sensing generally consists of the following parts: sparse representation, a measurement matrix, and sparse coefficients reconstruction. Sparse Representation The theory of sparse representation was first proposed in the signal processing field and later widely applied to solving underdetermined equations [29]. It reveals an interesting phenomenon: a function or signal can be represented with just a few significant terms from a set of base functions. This means there are only a few large (in magnitude) coefficients corresponding to the base functions, and most of them are zero or may be ignored with minimal perceptual loss. Given the set of sampling points x = [x_1, x_2, ..., x_n]^T on the interval [a, b], and the corresponding actual signal F = [F(x_1), F(x_2), ..., F(x_n)]^T, any signal F ∈ R^{n×1} can be represented by a linear combination of a complete basis:

F = Φ·Θ, (4)

where the sparse basis matrix (so-called dictionary) Φ ∈ R^{n×n} is composed of a set of basis atoms, and Θ ∈ R^{n×1} is the coefficient vector. If Θ has only ρ ≪ n non-zero elements, where the sparseness ρ denotes the number of non-zero elements, Θ is called the sparse coefficient vector.
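A minimal numerical sketch of such a sparse representation, assuming for illustration a random orthonormal dictionary rather than any specific transform:

```python
import numpy as np

rng = np.random.default_rng(0)
n, rho = 200, 5                                      # signal length and sparseness rho << n

Phi, _ = np.linalg.qr(rng.standard_normal((n, n)))   # a random orthonormal dictionary (stand-in)

Theta = np.zeros(n)                                  # sparse coefficient vector
support = rng.choice(n, size=rho, replace=False)
Theta[support] = rng.standard_normal(rho)            # only rho significant coefficients

F = Phi @ Theta                                      # the signal F admits a rho-sparse representation
```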
Additionally, if ρ is small enough, the signal F may be accurately reconstructed from a few sampling points. Measurement Matrix Supposing a measurement matrix A ∈ R^{m×n} (m ≪ n), the function is reduced from n dimensions to m dimensions by the linear transformation

Y = A·F, (5)

Y = A·Φ·Θ = Ψ·Θ, (6)

where Ψ ∈ R^{m×n} is obtained by multiplying A by Φ and can be called the sensing matrix, and Y ∈ R^{m×1} is the transformed vector. The theory asserts that the function F ∈ R^{n×1} can be recovered from Y ∈ R^{m×1} [30] under two conditions. The first one is sparsity, which demands that the function be sparsely represented on the dictionary Φ. The other is incoherence, which means these two matrices need to satisfy the condition of the Restricted Isometry Property (RIP); that is, the measurement matrix A should be incoherent with the dictionary Φ, which can be described as

(1 − δ_s)·‖Θ‖_2^2 ≤ ‖Ψ·Θ‖_2^2 ≤ (1 + δ_s)·‖Θ‖_2^2.

The restricted isometry constant δ_s (0 < δ_s < 1) is defined as the smallest constant for which this property holds for all sparse coefficient vectors Θ. It is difficult to construct a matrix that satisfies the RIP condition, or to verify that a given matrix satisfies it. The non-coherence property was proposed by Candès and Wakin [17]; that is, if the measurement matrix A is linearly independent of the dictionary Φ, or the correlation of both is weak, the two matrices will satisfy the RIP criterion with high probability. It has been proven that random matrices are largely incoherent with any fixed basis [17]. Since the dictionary Φ is reasonably designed, only the measurement matrix A ∈ R^{m×n} may be varied. There are several ways to construct the measurement matrix; examples include Circulant [31], Toeplitz [32], Chirp sensing [33], Gaussian, and uniform random matrices. As the random matrix is easy to implement in the current spectral recovery experiment and can be incoherent with a fixed basis, it was used to construct the measurement matrix. With this choice, the measurement matrix can be selected randomly, and random matrices A are largely incoherent with any fixed basis Φ. Sparse Coefficients Reconstruction In order to recover the function with limited sampling points, we must seek the sparse representation over the dictionary. The sparse coefficient vector Θ can be obtained by solving the inverse problem of Equation (6) with an appropriate compressive sensing reconstruction algorithm, which can be seen as solving the following problem:

min ‖Θ‖_0 subject to Y = Ψ·Θ.

The above problem is called the P0 question, which is known to be NP (non-deterministic polynomial-time) hard. To solve this problem, Greedy algorithms, the Bayesian category [34], and Convex Relaxation Techniques have been proposed. Greedy algorithms are efficient but have poor robustness to noise; examples include Matching Pursuit (MP) [35] and Orthogonal Matching Pursuit (OMP) [36], among others. The Bayesian category [37] can balance small recovery error against short recovery time in large-scale problems, with an example being the Bayesian Relevance Vector Machine (BCS-RVM) [38]. Convex Relaxation Techniques, such as Basis Pursuit (BP) [39] and Lasso regression [40], have an excellent ability in basis atom selection and coefficient shrinkage. The latter has higher accuracy, and its performance can be maintained even with added noise in the system. In this paper, we prefer to apply Lasso regression to obtain the coefficient vector Θ, which can not only compress most of the coefficients to zero but also avoid over-fitting.
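A small sketch of the measurement step under these assumptions (a Gaussian random A and an orthonormal stand-in dictionary). Note that with m ≪ n the system Y = Ψ·Θ is underdetermined, which is why a sparsity-promoting solver such as the Lasso described next is needed rather than a plain least-squares fit:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, rho = 200, 20, 5

Phi, _ = np.linalg.qr(rng.standard_normal((n, n)))   # fixed orthonormal dictionary (stand-in)
A = rng.standard_normal((m, n)) / np.sqrt(m)         # Gaussian random measurement matrix
Psi = A @ Phi                                        # sensing matrix, m x n with m << n

Theta = np.zeros(n)
Theta[rng.choice(n, size=rho, replace=False)] = rng.standard_normal(rho)
Y = Psi @ Theta                                      # m measurements of an n-dimensional signal

# Mutual coherence of the sensing-matrix columns (low values favour sparse recovery).
cols = Psi / np.linalg.norm(Psi, axis=0)
G = np.abs(cols.T @ cols)
np.fill_diagonal(G, 0.0)
print(f"m = {m}, n = {n}, mutual coherence = {G.max():.3f}")
```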
Lasso regression makes this possible by relaxing the ℓ0 norm to the ℓ1 norm. The ℓ1-norm question is a convex problem which is the closest convex surrogate of the ℓ0 norm. Using a Lagrange multiplier, it can be written as

Θ̂ = argmin_Θ ( ‖Y − Ψ·Θ‖_2^2 + β·‖Θ‖_1 ).

The last equation is the famous Lasso (least absolute shrinkage and selection operator) model, which consists of a least-squares estimate and an ℓ1-norm penalty. Here β ≥ 0 denotes the penalty factor; it may be regarded as the weight between the cost function ‖Y − Ψ·Θ‖_2^2 and the penalty β·‖Θ‖_1. The larger the value of β, the more elements of Θ are driven close to zero. This means that the ℓ1-norm regularization term β·‖Θ‖_1 contributes to the sparse solution. The accuracy of reconstruction also depends on an appropriate value of β. Thus, it is necessary to find the "coefficient path", which represents the relationship between the solution Θ̂(β) and β. There are many solvers for the Lasso problem, such as quadratic programming [40] and the least angle regression lasso (LAR-Lasso) [41] algorithm. Quadratic programming uses the ℓ1-norm penalty on the regression coefficients, and it tends to produce simpler models [42]. LAR-Lasso is a method that can be viewed as a vector-based version of the Lasso that accelerates the computations [42]. Zou et al. [43] proposed an efficient algorithm called LARS-EN, based on LARS, to solve the Lasso with a regularization parameter. The algorithm complexity is equivalent to that of least squares. It is particularly useful when the number of predictors is larger than the number of observations. We chose LARS-EN in this article and downloaded the Matlab toolbox from http://www2.imm.dtu.dk/projects/spasm/. Spectrum Unfolding based on Compressive Sensing In this section, spectrum recovery is formulated as a problem of accurate signal recovery from a few measurements; that is, compressive sensing. We detail how the proper basis atoms are selected adaptively over a large Legendre orthogonal basis dictionary using Lasso regression in the sense of the ℓ1 norm, which enables the spectrum to be recovered with high accuracy from the minimal measured data of the limited detection channels. The entire process of the spectrum unfolding method based on compressive sensing is given in Figure 4.
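Before detailing each step of the formulation, the short sketch below illustrates the role of the penalty factor β in the Lasso model discussed above, using scikit-learn's Lasso as a stand-in for the LARS-EN Matlab toolbox and purely illustrative data (the parameter `alpha` plays the role of β):

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(2)
n, m, rho = 200, 20, 5
Psi = rng.standard_normal((m, n)) / np.sqrt(m)        # sensing matrix (illustrative)

Theta_true = np.zeros(n)
Theta_true[rng.choice(n, size=rho, replace=False)] = rng.standard_normal(rho)
Y = Psi @ Theta_true + 1e-4 * rng.standard_normal(m)  # noisy measurements

# Larger penalties drive more coefficients to exactly zero (sparser solutions).
for alpha in (1e-1, 1e-2, 1e-3):
    theta_hat = Lasso(alpha=alpha, fit_intercept=False, max_iter=50_000).fit(Psi, Y).coef_
    print(f"beta ~ {alpha:g}: {np.count_nonzero(theta_hat)} nonzero coefficients")
```

How a single point on the resulting coefficient path is selected (via AIC over the LARS path) is formalized in the subsections that follow.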
Sparse Representation of Spectrum with Legendre Polynomial The spectrum signal Ŝ is represented by a sum of base functions with their coefficients, and it can be written as

Ŝ(E) = Σ_{i=1}^{N_b} θ_i·ϕ_i(E), (11)

and its matrix form is

Ŝ = Φ·Θ, (12)

where E = [E_1, E_2, ..., E_{N_e}]^T are the discrete, equidistant sampling points of the photon energy, and N_e is the number of photon energy points. {θ_i}, i = 1, 2, ..., N_b, are coefficients, Θ = [θ_1, θ_2, ..., θ_{N_b}]^T is the coefficient vector, {ϕ_i(E)}, i = 1, 2, ..., N_b, are basis functions, Φ = [ϕ_1, ϕ_2, ..., ϕ_{N_b}] is the set of base functions, and N_b is the number of basis functions, which determines the dictionary size. Since ϕ_i(E) is a polynomial, a set of orthogonal polynomials is commonly used as atoms to describe signals in the theory of compressive sensing. In the process of sparse representation of the spectrum, we prefer to choose basis functions which have orthogonal properties, like the Legendre polynomials. The Legendre orthogonal basis function is good at expressing the spectral curve with high-order basis atoms, and it is easily constructed and calculated. By measuring with the sparse representation of the signal on the orthogonal basis, a few coefficients can be used to perform spectrum unfolding. In addition, the Legendre polynomials are orthogonal with respect to the ℓ2 norm on the interval [−1, 1], so the energy range should be mapped into [−1, 1] in advance. The Legendre polynomials can be determined by the recursive definition. Suppose that L_0(E) = 1 and L_1(E) = E; then

(n + 1)·L_{n+1}(E) = (2n + 1)·E·L_n(E) − n·L_{n−1}(E), n = 1, 2, ..., N_b. (13)

The basis functions of the Legendre orthogonal polynomials can then be taken as

ϕ_i(E) = L_{i−1}(E), i = 1, 2, ..., N_b. (14)

Given the sampling points of the photon energy, the sparse basis matrix Φ ∈ R^{N_e×N_b} (the so-called dictionary) composed of this set of atoms is written as

Φ = [ϕ_j(E_i)], i = 1, 2, ..., N_e, j = 1, 2, ..., N_b. (15)

The basis dictionary can be generated with a large number of basis atoms. Due to the high-order expansion on the Legendre basis functions and the plentiful basis dictionary, the proper basis atoms can be selected adaptively, and higher accuracy spectral information may be achieved. Measurement Data for Sparse Spectrum Reconstruction The measurement matrix M ∈ R^{m×N_e} (m ≪ N_e) is constructed from multiple sets of response values, which are generated by the corresponding detection channels. To obtain different response data, there are many configuration parameters that can be adjusted in each detection channel, such as the filter material, the thickness of the filter, the installation of the plane mirror, and the size of the solid angle. The values of these configuration parameters depend on the experimental conditions, so they have to be chosen from the existing experimental devices. For example, there are about ten available filter materials, such as Al, B, C, Ti, Cr, Fe, Ni, Cu, and Zn, and the other configuration parameters can likewise be selected within definite ranges. The combinations of all the configuration parameters can be compiled into a labelled data set, with p representing the number of all combinations. However, there are only m positions to install the detecting channels in the laser facility. There are many methods for taking m detection channels from the p combinations.
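Putting the pieces together, the sketch below builds the Legendre dictionary defined above, forms the measurement matrix by randomly drawing m channel responses out of p labelled candidate configurations (anticipating the random selection described next), and recovers the spectrum with a LARS Lasso path plus an AIC-style selection step, as detailed in the following subsection. All numerical inputs (the response bank, the test spectrum, and the exact AIC form) are illustrative stand-ins, not the instrument's actual data:

```python
import numpy as np
from sklearn.linear_model import lars_path

rng = np.random.default_rng(3)
m, Ne, p = 14, 500, 60                        # channels used, energy samples, candidate configs
Nb = 6 * m                                    # dictionary size, N_b = 6m

E = np.linspace(50.0, 6000.0, Ne)
x = 2.0 * (E - E.min()) / (E.max() - E.min()) - 1.0          # map energies onto [-1, 1]
Phi = np.polynomial.legendre.legvander(x, Nb - 1)            # Legendre dictionary, Ne x Nb

bank = rng.random((p, Ne))                    # stand-in bank of p labelled candidate responses
picked = rng.choice(p, size=m, replace=False)                # random selection of m channels
M = bank[picked]                              # measurement matrix, m x Ne

S_true = np.exp(-((E - 1500.0) / 400.0) ** 2)                # stand-in spectrum
dE = E[1] - E[0]
D = M @ S_true * dE                                          # voltage signals
Psi = M @ Phi * dE                                           # sensing matrix, m x Nb

# Lasso path via LARS, then an AIC-style score (a common n*log(RSS/n) + 2k stand-in,
# not necessarily the exact form used in the paper) picks one coefficient vector.
_, _, coefs = lars_path(Psi, D, method="lasso")
scores = [len(D) * np.log(np.sum((D - Psi @ c) ** 2) / len(D) + 1e-30)
          + 2 * np.count_nonzero(c) for c in coefs.T]
Theta_best = coefs[:, int(np.argmin(scores))]
S_rec = Phi @ Theta_best                                     # recovered spectrum
```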
To improve the incoherence of the samplings, a random matrix is used to select m detection channels, and the labels are then used to look up the corresponding configuration parameters. The spectrometer is calibrated with these configuration parameters to obtain the m responses, and each response can be represented as a vector of N_e samplings. Finally, the measurement matrix M ∈ R^{m×N_e} can be constructed, and it can be represented as

M = [M_1(E), M_2(E), ..., M_m(E)]^T. (16)

After the measurement matrix M is constructed, it can be used to measure the original spectral signal and obtain the new converted signal. Based on the spectrum unfolding principle of the soft X-ray spectrometer and Equation (1) in the last section, the spectral signal can be converted into the voltage signals D = [D_1, D_2, ..., D_m]^T. The conversion process can be described with Equation (17):

D = M·Φ·Θ = Ψ·Θ, (17)

where Ψ ∈ R^{m×N_b} is the sensing matrix, a non-full-rank matrix generated from the measurement matrix M ∈ R^{m×N_e} and the basis dictionary Φ ∈ R^{N_e×N_b}. To make it clearer, suppose that, according to the response data from the 14 detection channels shown in Figure 3, the number of basis atoms N_b is set to 84 (this value follows the compromise scheme based on the theoretical results in Reference [44]; i.e., let N_b = 6m), which means the basis dictionary is large and the highest order of the basis function reaches 83. Compared with the Piece-wise B-spline method referred to in the last section, the spectrum may therefore be expressed more accurately over this large basis dictionary. Sparse Coefficient Recovery for the Soft X-ray Spectrum From Equation (17), recovering the spectrum turns into solving the problem

min ‖Θ‖_0 subject to D = Ψ·Θ. (18)

Equation (18) is an NP-hard problem, which has infinitely many candidate solutions. As outlined in Section 2, Lasso regression can relax the ℓ0 norm to the ℓ1 norm; thus Equation (19) can be obtained:

Θ̂ = argmin_Θ ‖D − Ψ·Θ‖_2^2 subject to ‖Θ‖_1 ≤ t. (19)

Because the sensing matrix Ψ ∈ R^{m×N_b} contains a large number of high-order basis atoms, the spectrum may be represented precisely. It also has the traits of completeness and redundancy, which may make the spectrum unfolding over-fit. To solve this problem, Least Angle Regression (LAR) [45] is employed to calculate the entire path of solutions as t is varied, in which t should be adaptively chosen to minimize an estimate of the expected prediction error. As the tuning parameter t varies, making t sufficiently small forces some of the coefficients to be exactly zero and the sparseness ρ to approach zero; conversely, as t increases, the number of nonzero coefficients ρ increases and the spectrum recovery may become more accurate. Next, Akaike's Information Criterion (AIC) [46] is applied to trade off sparsity against accuracy and to select the best point on the coefficient path, defined in Equation (20), where K(Θ) represents the number of nonzero elements of Θ and r^2 denotes the residual variance, which can be calculated by Equation (21) using the Moore-Penrose pseudo-inverse Ψ^+ of Ψ. I_best denotes the index of the best coefficient vector. {Θ̂^(z)}, z = 1, 2, ..., Z, is the sequence generated by LARS-EN, where Θ̂^(z) is the coefficient vector calculated in the zth iteration and Z is the maximum number of iterations.
Therefore, we have

I_best = argmin_{z=1,...,Z} AIC(Θ̂^(z)). (22)

Subsequently, the best coefficient vector Θ̂^(I_best) can be acquired, and the best reconstructed voltage signal can be expressed as

D̂ = Ψ·Θ̂^(I_best). (23)

Finally, the best recovered spectrum Ŝ_best can be calculated as

Ŝ_best = Φ·Θ̂^(I_best). (24)

Overview of the Soft X-ray Spectrum Unfolding Process The process of the proposed spectrum unfolding method based on compressive sensing is shown in Figure 4. The spectrum recovery is an inverse problem, which relies on the measured voltage signal D from the soft X-ray spectrometer to calculate the spectral information. At first, the voltage signal D is obtained by the oscilloscope, and then the basis dictionary, which can sparsely represent the spectrum, is built based on the Legendre polynomials; the range of the photon energy E should be mapped into [−1, 1] in advance. To improve the incoherence of the samplings, the measurement matrix M is constructed by modifying the configuration parameters of the detection channels at random; these random parameters include the filter material, the thickness of the filter, the installation of the plane mirror, and the size of the solid angle. Next, the sensing matrix is obtained by multiplying the measurement matrix M and Φ. Finally, LARS-EN is employed to calculate the coefficient path, and the AIC criterion is adopted to select the best coefficient vector, and thus the spectrum is recovered. Numerical Experiments In this section, we provide two groups of contrasting experiments to show that this novel method enables accurate signal recovery from a few measurements. Accuracy Assessment Criteria To verify the spectrum unfolding method, we conducted self-inspection numerical experiments. In order to imitate the real experiment, we input a known spectrum S, called the original spectral signal, and then acquired the voltage signal values by simulation of the soft X-ray spectrometer. To solve the inverse problem, the voltage signal values were taken as the input vector, and the recovered spectrum Ŝ was obtained by the spectrum unfolding method. Comparing the two sets of data, S and Ŝ, we could determine the spectral recovery accuracy of this spectrum unfolding method. For this experiment, it was very important to verify the accuracy of the spectrum recovery. In addition to the self-inspection numerical experiments, we also reproduced the experiment on recovery performance with different spectrum unfolding methods, in which both the Piece-wise B-spline and Gaussian Bump methods had been used in past research. We contrasted these two methods with the compressive sensing method in terms of recovery performance. Furthermore, in the experiment we also explored the accuracy of the three methods with a small amount of measured data from the limited detection channels of the soft X-ray spectrometer. The recovery error is a metric to evaluate the error between the original spectrum signal and the recovered one. In order to calculate the recovery error, the following formulas were used in this work: 1. Root mean squared error (RMSE). Since the actual spectrum of the benchmark functions was known, the actual errors at any point could be computed. The RMSE is given by

RMSE = √( (1/n)·Σ_{k=1}^{n} (S(k) − Ŝ(k))^2 ), (25)

where S(k) is the actual spectrum, Ŝ(k) is the spectrum unfolding at the kth test point, and n is the number of test points.
2. Mean absolute error (MAE). The mean absolute error is given by

MAE = (1/n)·Σ_{k=1}^{n} |S(k) − Ŝ(k)|. (26)

Experimental Settings To show the robustness of the method, two types of spectra were regarded as the original spectrum: the Plank spectrum and the Triple-peak spectrum, which are commonly used in spectrum recovery experiments. In order to assess the recovery performance, three methods were applied for contrast. The first is the compressive sensing method proposed in this paper, the second is the Piece-wise B-spline spectral method, and the third is the Gaussian Bump method. The basic parameter settings for the compressive sensing method were as follows: the photon energy ranged from 50 eV to 6000 eV, the number of sampling points for the photon energy was N_e = 500, and the number of Legendre basis functions was N_b = 6m. To better illustrate the capacity of spectrum unfolding with limited detection channels, the number of detecting channels m was regarded as the experimental variable. We assumed that the number of detection channels in spectrometers is at most 14 and that the spectrum unfolding accuracy may change as the number decreases; hence, we set m = 14, 12, 10, 8. The experiment was run on a desktop computer with an Intel quad-core CPU at 4.2 GHz and 8 GB of memory, and MATLAB R2016a was used to validate the efficiency of the proposed approach. Experimental Results and Analysis The experimental figures show the spectrum recovery performance of the three different spectrum unfolding methods on the different test spectra as the number of measurements decreased. The line of blue stars represents the original spectral signal, that is, the Plank spectrum or the Triple-peak spectrum. The other lines, in different colors and styles, represent the recovery signals based on the three different methods, namely Piece-wise B-spline, Gaussian Bump, and compressive sensing. m denotes the number of measurements, that is, the number of detection channels. Figure 5 shows the Plank spectrum recovery performance with the three spectrum-unfolding methods with respect to a decreasing number of measurements. In Figure 5a, it can be seen that when the number of detection channels is high enough (m = 14), good recovery performance can be achieved by all the methods. Figure 5b shows the results of decreasing the number of detection channels to 12. The recovery spectra for two of the methods still keep in line with the original spectrum; however, the Piece-wise B-spline result is slightly under-fitted. Figure 5c shows that when the number of detection channels is reduced to 10, the recovery spectrum with the Gaussian Bump method is under-fitted. A similar result can be seen in Figure 5d, where the number of detection channels is reduced to 8. The different recovery performance of the three recovery spectra is obvious: only the recovery spectrum based on the compressive sensing method stayed in accord with the original spectrum. Spectrum Unfolding Method Contrast: Plank Spectrum To illustrate the problem more evidently, the recovery error was used to show the recovery performance of the different spectrum unfolding methods, and the RMSE and MAE were calculated by Equations (25) and (26). The smaller the value, the better the recovery performance. As shown in Table 1 (the smallest value is bold), as the number of detection channels decreases from 14 to 8, the recovery error based on the compressive sensing method is almost unchanged; the RMSE varied from 0.001992 to 0.002013, and the MAE varied from 0.001293 to 0.001315.
A significant change took place in both the Piece-wise B-spline and Gaussian Bump methods, for which the RMSE varied from 0.002647 to 0.094138 and from 0.007489 to 0.532886, respectively, and the MAE varied from 0.001740 to 0.026802 and from 0.006479 to 0.432161, respectively. The accuracy of the spectrum recovery is reasonable, as the error ratio ∆ for compressive sensing is 0.31%, which is lower than the 3.5% reported for Plank spectrum recovery in previous work [11]. ∆ can be calculated by Equation (27), where S̄ is the mean of the spectrum, S(k) is the actual spectrum, Ŝ(k) is the spectrum unfolding at the kth test point, and n is the number of test points. Figure 6 shows the recovery errors of the three spectrum-unfolding methods with respect to the decreasing number of measurements, which describes the accuracy variation tendency. When the number of measurements was m = 8, compressive sensing showed better performance than the other spectrum unfolding methods. Furthermore, it can be seen that the accuracy tendency based on compressive sensing is the most stabilized of these three methods.
Figure 7 shows the Triple-peak spectrum recovery performance based on the three spectrum-unfolding methods with respect to a decreasing number of measurements. In Figure 7a, the behavior of the three recovery methods is similar. As the number of detection channels decreases from 14 to 8, shown in Figure 7b-d, it can be seen that the recovery performance based on the Piece-wise B-spline and Gaussian Bump methods moves towards under-fitting, and only the recovery spectrum based on the compressive sensing method stays in accord with the original spectrum. Table 2 shows the recovery error of the Triple-peak spectrum with the three spectrum-unfolding methods, with the smallest value in bold. Contrasting in RMSE, the range of the compressive sensing method is [0.001137, 0.005037], which is the lowest of the three methods. The same result is shown when contrasting in MAE, with the lowest range being [0.000116, 0.001917]. Figure 8 describes the Triple-peak spectrum recovery errors with the three different spectrum unfolding methods as the detection channels decrease. As one can see from this figure, the compressive sensing method performs better than the other spectrum unfolding methods. Throughout the two groups of experiments, the results showed that the spectrum unfolding method based on compressive sensing has the best recovery performance of these methods, and it can still achieve comparable accuracy from only 8 spectrometer detection channels as it previously did from 14 detection channels; hence, about 42.86% of the space can be saved. Conclusions The laser facility for researching soft X-rays is a complex system, with limited locations for installing detection channels to obtain measurements, and thus the soft X-ray spectrum can be difficult to recover. In this paper, we proposed a novel recovery method for unfolding the soft X-ray spectrum based on compressive sensing. This method has the following characteristics: (1) spectrum recovery is formulated as a problem of accurate signal recovery from a few measurements (i.e., compressive sensing); and (2) we use an orthogonal basis function, such as the Legendre polynomials, to sparsely represent the spectral signals, and the corresponding coefficients can be obtained by Lasso regression in the sense of the ℓ1 norm. Since the basis atoms are selected adaptively over a large basis dictionary, the under-fitting problem with limited detection channels is solved, and the spectrometer can achieve higher accuracy spectral information with less measured data. In order to prove and demonstrate the performance and robustness of the recovery method based on compressive sensing, we conducted self-inspection numerical tests for recovering two spectra: the Plank spectrum and the Triple-peak spectrum. The results show that comparable accuracy can be achieved from only 8 spectrometer detection channels as was previously achieved from 14 detection channels. This means that the presented approach is capable of recovering a spectrum from limited detection channel data, and it can be used to save more space for other detectors.
11,379.2
2018-11-01T00:00:00.000
[ "Physics", "Engineering" ]
ADA compliance and teaching linguistics online Only 8.8% of faculty have reported receiving formal training for developing ADA (Americans with Disabilities Act) compliant online courses (Gould & Harris, 2019), yet in any given semester, faculty may be required by federal law to make their course accessible for a student who has enrolled with a disability. Linguistics faculty face many of the same challenges (namely time and resources) as other disciplines in implementing ADA federal guidelines. However, there are further obstacles with linguistics-specific topics (such as dialect illustrations, phonology, morphology) that require special attention when devising accessible material for those who are either visually or hearing impaired. Through the exploration of an undergraduate linguistics course (LING 2050: Language of Now), this paper reflects on best practices, suggested modifications, and barriers in developing an ADA compliant online linguistics course, and presents a resource developed by the author aggregating resources that facilitate making a course ADA compliant. think about the benefits of accessibility from a narrow scope, without realizing that accessibility measures create classroom equity for all students. In fact, there are many students in our classrooms who have forms of visual and hearing impairments that are not covered under the ADA. Specifically, 1 in 12 men are colorblind, 75% of adults use some sort of vision correction, 60% of Americans are far sighted (i.e., will have trouble reading), and 1 in 4 college students have hearing loss or hidden hearing loss that impacts their ability to discern speech (Caswell, 2015; Kian, 2020; Le Prell, Hensley, Campbell, Hall & Guire, 2012; Liberman, Epstein, Cleveland, Wang, & Maison, 2016; National Eye Institute, 2019). Apart from disabilities, accessibility features also benefit non-disabled students and second language learners. Research has shown that implementing ADA best practices, such as closed-captioning and transcripts, resulted in increased student engagement and increased retention and comprehension of course content (Kent, Ellis, Peaty, Latter & Locke, 2017; Markham, 2008; Rowland, 2007). When we focus on accessibility, we create an inclusive course experience that assists every student. 3. LING 2050: Language of Now. For this project, I used an undergraduate linguistics course called LING 2050: The Language of Now. LING 2050 is offered online every semester as a core curriculum course, thus students in this course come from many different majors. This course serves as a simplified introduction to linguistics, with a focus on sociolinguistic analysis and theories. Lectures are pre-recorded and the class is offered asynchronously. The course does not require a textbook, but instead relies on a variety of outside and self-generated instructional materials to teach. A summary of the instructional material diversity is provided in Table 1. 4. Implementing ADA best practices. Through this experience, I learned that many of the accessibility changes faculty need to make (regardless of the linguistic topic being taught) are simple (see Table 2 for a summary of suggested simple fixes). By "simple", I mean they require a shallow learning curve. Changes such as these include enlarging font sizes, changing font type, adding alternative text to images, and replacing links with descriptive text (i.e., embedding website links into the page's text rather than providing complicated web addresses).
Additionally, most of the changes implemented are long-term fixes, meaning once made, the changes will not have to be revisited each time the course is taught. This is a benefit for courses that are offered regularly where the instructional material doesn't change. Implementing ADA best practices does take a great deal of time, so it is beneficial to be reminded that one's effort creates permanent solutions. This course utilized a wide variety of instructional materials (see Table 1), thus implementing ADA strategies required becoming familiar with ADA best practices for each of these items. For example, students in this course learn about the word "selfie" through the "2013: The Year of the Selfie" infographic (see accessible version: https://thada.org/infographics/). There were two accessibility issues with this piece. First, by nature, an infographic assumes a student can see, which excludes visually impaired students. Secondly, this infographic was a .jpeg (a picture file), which meant a visually impaired student could not rely on a screen reader to read the text. Online resources discussing how to make infographics accessible required expertise in HTML and CSS,1 so alternatively, I provided a typed transcription of the infographic in a Word document alongside the image in the course. This would allow a screen reader to read the text and was a feasible alternative to learning markup languages. The application of ADA features also helped improve the linguistic focus of this course. For example, in a lecture on affixation, the automated closed-captioning combined the morpheme -ing with the previous word, rather than recognizing it as its own unit of speech in the context of the lecture (see Figure 1). Figure 1. Automated closed-captioning versus edited closed-captioning Caption errors like this regularly occurred with morphemes, phonemes, accent illustrations, and words from foreign languages. Editing the closed-captioning of my recorded lectures, rather than relying entirely on automated closed-captioning, allowed me to ensure that linguistics terms and concepts were correctly transcribed and communicated. 5. Barriers. There were three primary barriers in creating an accessible linguistics course. The first was the time commitment, the second was the difficulty of finding resources, and the third pertained to issues specific to linguistics. 5.1. TIME. While I argue that many of the accessibility changes needed in courses are simple, one-time fixes, they are not necessarily easy to implement. Table 3 reflects the approximate time required for applying ADA changes to one week's worth of primary instructional materials. For example, adding closed-captioning or editing automated closed-captioning on a 20-minute lecture can take several hours, and there were 1-2 lectures in each week of this 16-week course. Additionally, increasing the font size in the LING 2050 course shell was a simple accessibility fix, but the course had 147 pages, and each one of them required individual editing. This is because there is no functional global editing feature in Canvas that allows faculty to make a one-time universal change. Canvas does provide a built-in accessibility checker; however, this also had to be run for each individual page (i.e., 147 times). The implementation in Canvas, or other LMS providers, of features that facilitate making the course accessible would require petitioning from multiple universities or professional societies.
The process of implementing ADA best practices into a course shell would be more time efficient for faculty if universities could lead the conversations with LMS providers on these needs. Table 3. Weekly time commitment for applying ADA changes to primary instructional materials 5.2. LACK OF UNIFIED RESOURCES. There were no one-stop resources that could assist me in this process. I initially relied on institutional trainings and staff to guide me in the right direction. Their knowledge base was extremely helpful, but they could not address some of the unique challenges I was facing, especially pertaining to linguistics-related topics (see 5.3 Challenges for the Field of Linguistics). I turned my attention to online resources looking for answers that pertained to not only accessibility, but accessibility as it related to every piece of instructional material used in the course. This required combing through hundreds of websites. 5.3. CHALLENGES FOR THE FIELD OF LINGUISTICS. There were many complications in effectively implementing ADA-compliant practices for teaching linguistics. While I was able to resolve certain issues pertaining to morphemes, phonemes, and dialects in the closed-captioning, these same topics created unavoidable challenges outside of recorded lectures. For example, when I illustrated the phoneme [p] in a written lecture, I discovered that assistive screen readers would pronounce [p] as [pi]. I also had this challenge with screen readers and some bound morphemes, such as "tri-" (e.g., "trifold"), which would be pronounced as [tɹi]. Regarding dialects, it was complicated to illustrate dialect variations effectively in written text and in audio form (e.g., YouTube videos). Screen readers would often misread the text, which in turn yielded an unsuccessful demonstration of the dialect, and the majority of YouTube videos didn't provide accurate closed-captioning. In these cases, the only solutions are to either produce the closed-captioning and submit it to YouTube or create a transcript of the video to provide to students. Both alternatives require a great deal of time. 6. Good faith effort. The endeavor to create an ADA compliant online linguistics course proved to be useful but not entirely successful. On one hand, the ADA best practices I was able to implement did improve the accessibility and inclusiveness of the course, but this experience also showed that it is currently impossible for one faculty member to ensure every aspect of a course is accessible (see section 5. Barriers). Therefore, the current goal cannot be full accessibility, but rather putting forth a good faith effort, by making accessible what can be made accessible, and by accepting the inaccessibility that is outside the control of the faculty member. To assist other faculty members in their good faith efforts, and to work towards a unification of resources, I have developed the Teaching Headquarters for the Americans with Disabilities Act (THADA),2 to provide faculty with self-training and guidance while making their courses more accessible and inclusive for every student. However, it needs to be clear that, without serious institutional support putting pressure on LMS providers to make ADA implementation easier, creating an ADA compliant course is not a feasible responsibility for a faculty member. Large universities especially can support this by negotiating with vendors.
Institutions should also consider offering faculty course release time to make these changes in their courses or seek outside vendors to assist faculty with the time-consuming practices (e.g., closed-captioning).
2,234.4
2021-10-12T00:00:00.000
[ "Linguistics", "Education" ]
In-Situ Grown NiMn2O4/GO Nanocomposite Material on Nickel Foam Surface by Microwave-Assisted Hydrothermal Method and Used as Supercapacitor Electrode The NiMn2O4/graphene oxide (GO) nanocomposite material was in situ grown on the surface of a nickel foam 3D skeleton by combining the solvent method with the microwave-assisted hydrothermal method and annealing; then, its performance was investigated as a superior supercapacitor electrode material. When nickel foam was soaked in GO aqueous solution or treated in nickel ion and manganese ion solution by the microwave-assisted hydrothermal method and annealing, a gauze-like GO film or flower-spherical NiMn2O4 was formed on the nickel foam surface. If the two processes were combined in a different order, the final products on the nickel surface had a remarkably different morphology and phase structure. When the GO film was formed first, the final products on the nickel surface were a composite of NiO and Mn3O4, while the NiMn2O4/GO nanocomposite material could be obtained if NiMn2O4 was formed first (then immersed in 2.5 mg/L GO solution). In a 6 M KOH solution, the specific capacitance of the latter reached 700 F/g at 1 A/g, which was superior to that of the former (only 35 F/g). However, the latter's specific capacitance was still inferior to that of in-situ grown NiMn2O4 on nickel foam (802 F/g). Though the gauze-like GO film, almost covering the preformed flower-spherical NiMn2O4, can also contribute a certain specific capacitance, it also restricted the electrolyte diffusion and contact with NiMn2O4, accounting for the performance decrease of the NiMn2O4/GO nanocomposite. A convenient method is thus provided to fabricate nanocomposites of carbon and double metal oxides. Introduction Increasingly serious environmental problems and energy demand encourage people to study safe and efficient energy. As a new type of green energy storage device, the supercapacitor has a broad application prospect and is worth further study [1][2][3]. The performance and efficiency of supercapacitors depend directly on the electrode materials [4][5][6]. Metal oxides have been widely used as electrode materials in supercapacitors because of their high theoretical specific capacity, especially multi-metal oxides, which attract more researchers' attention because of the synergistic effect of multivalent states and multi-metal ions [7]. For example, Li et al. synthesized hierarchical MnCo2O4 nanosheets by using a two-step hydrothermal method and post-annealing treatment [8]. Their porous structure and large specific surface area have a positive effect on electrochemical activity and enhance electron diffusion. Chen et al. prepared nickel-cobalt hydroxide by electrochemical deposition on a nickel foam substrate and then annealed it at 300 °C to transform the coating into porous NiCo2O4 nanosheets, which can be directly used as binderless electrodes [9]. The high specific capacitance of the electrode is 1734.9 and 1201.8 F/g at current densities of 2 A/g and 50 A/g, respectively, indicating good rate performance. After 3500 cycles at 30 A/g, the capacitance only decreased by about 12.7%, showing good cycle stability.
Nickel manganate (NiMn2O4) is a kind of AB2O4 spinel transition metal oxide with a stable structure and easy preparation. It is the most widely used material in negative temperature coefficient thermistors [10,11]. However, due to its relatively poor conductivity and low actual specific capacitance, NiMn2O4 is still very limited in its application as an electrode material for supercapacitors, and much research is still needed [5,12]. At present, a common means of improving the performance of metal oxide electrode materials is to compound them with carbon materials, which can solve the problem of the poor conductivity of metal oxides and facilitate electron transport [13]. Among the carbon materials, graphene, a two-dimensional carbon material, has many advantages, such as high electrical conductivity, good flexibility, and high mechanical strength, and is an ideal choice for preparing composite electrode materials. In this work, NiMn2O4/GO composites were in situ grown on the surface of a 3D-skeleton nickel foam by the microwave hydrothermal method and can be used as binder-free supercapacitor electrodes. During synthesis, the procedure was conducted according to two different processes. One process first deposits a layer of GO on the surface of the nickel foam 3D skeleton, and then NiMn2O4 is grown on it. The other process first grows NiMn2O4 on the surface of the nickel foam, which is then coated with GO. The effect of the concentration of the GO aqueous solution on the performance of the obtained composite materials was studied. At 1 A/g, the specific capacitance of the nanocomposite of NiO and Mn3O4 (after immersing in 2.5 mg/L GO solution) was only 35 F/g, while the specific capacitance of the NiMn2O4/GO nanocomposite material reached 700 F/g. The Synthesis of Nanocomposite NiMn2O4/graphene oxide (GO) composite material was prepared according to the two different processes, which are illustrated in Figure 1.
The first detailed process was as follows: (1) A total of 30 mL of GO aqueous solutions with different concentrations (0, 2.5, 5, and 7.5 mg/mL) were prepared and poured into Petri dishes, respectively. (2) The pre-treated nickel foam was soaked in the Petri dish for 1.5 h, washed gently in deionized water several times, and then put into a drying oven at 60 °C for 12 h to obtain graphene-coated nickel foam. (3) Accurately weighed 0.5 mmol nickel nitrate hexahydrate, 3 mmol ammonium fluoride, and 7.5 mmol urea were all dissolved in 30 mL deionized water and magnetically stirred for 15 min to obtain a clear, light-green solution. Then, 1 mmol potassium permanganate was added into the above solution, which was continuously magnetically stirred for 30 min. Finally, a purplish-red solution was obtained. (4) The treated nickel foam was put into a 100 mL microwave reactor containing the purplish-red solution and then placed into the microwave synthesis instrument. The reactor was heated to 140 °C for 3 h and then cooled to room temperature. (5) The nickel foam was taken out, ultrasonicated with deionized water for 5 min and with anhydrous ethanol several times, and then dried in a drying oven at 60 °C for 12 h. (6) The dried nickel foam was annealed at 450 °C for 2 h in a tube furnace with a heating rate of 10 °C/min in an N2 atmosphere and then cooled to room temperature in the furnace.
The second detailed process was as follows: (1) Accurately weighed 0.5 mmol nickel nitrate hexahydrate, 3 mmol ammonium fluoride, and 7.5 mmol urea were all dissolved in 30 mL deionized water and magnetically stirred for 15 min to obtain a clear, light-green solution. Then, 1 mmol potassium permanganate was added into the above solution and magnetically stirred for 30 min. A purplish-red solution was obtained. (2) The pre-treated nickel foam was immersed for 30 min in a 100 mL microwave container holding the prepared solution. The container was put into the microwave synthesis instrument, heated to 140 °C for 3 h with the set program, and then cooled to room temperature. (3) The nickel foam was removed, ultrasonically cleaned for 5 min with deionized water and several times with anhydrous ethanol, and then dried in a drying oven at 60 °C for 12 h. (4) The dried nickel foam was heated to 450 °C at a heating rate of 10 °C/min in a tube furnace in an N2 atmosphere, held for 2 h, and then cooled to room temperature. Nickel foam with NiMn2O4 on the surface was thus obtained. (5) The treated nickel foam was put in a Petri dish containing 30 mL of GO aqueous solution with different concentrations and immersed for 1.5 h. (6) The soaked nickel foam was gently washed with deionized water several times and then dried in a blast oven at 60 °C for 12 h. All the reagents (see Table 1) were of analytical grade and used without further purification. Electrochemical Tests Electrochemical tests including cyclic voltammetry (CV) and galvanostatic charge/discharge (GCD) were conducted on a VMP3 (EG&G) electrochemical workstation in 6 M KOH solution using a three-electrode system, i.e., the treated electrode as the working electrode, a platinum net as the counter electrode, and a saturated calomel electrode (SCE) as the reference electrode. CV curves were recorded in the potential range of −0.15 V to 0.55 V at different scan rates (10, 20, 50, and 100 mV/s). The galvanostatic charge-discharge (GCD) was performed at a current density of 1 A/g between about −0.15 and 0.55 V. The specific capacitance calculation formula based on the GCD curve is as follows [14]:

C_m = (i·t)/(m·ΔV) or C_m = (i·t)/(S·ΔV), (1)

where C_m represents the specific capacitance of the electrode material (in F/g or F/cm2), i is the discharge current (in A), t represents the discharge time (in s), m is the mass of the active substance (in g), S is the electrode area (in cm2), and ΔV represents the discharge voltage window (in V). The capacity of the NiMn2O4 is calculated according to the mass change of the nickel foam before and after being coated with NiMn2O4.
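As a quick numerical check of this relation, a small helper (a hypothetical function with illustrative example values, not the paper's measured data) can evaluate the gravimetric or areal capacitance from a discharge segment:

```python
def specific_capacitance(i_amps, t_discharge_s, delta_v, mass_g=None, area_cm2=None):
    """Specific capacitance from a galvanostatic discharge: C = i*t/(m*dV) or i*t/(S*dV).

    Pass either the active-material mass (result in F/g) or the electrode area (F/cm^2).
    """
    if mass_g is not None:
        return i_amps * t_discharge_s / (mass_g * delta_v)      # gravimetric, F/g
    if area_cm2 is not None:
        return i_amps * t_discharge_s / (area_cm2 * delta_v)    # areal, F/cm^2
    raise ValueError("provide mass_g or area_cm2")

# Illustrative numbers only: i/m = 1 A/g (1 mA on 1 mg), a 490 s discharge over 0.7 V
# gives about 700 F/g, consistent in scale with the values quoted in this work.
print(specific_capacitance(i_amps=0.001, t_discharge_s=490, delta_v=0.7, mass_g=0.001))
```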
The Characteristic of Composite Material Fabricated According to the First Process Figure 2 illustrates the FE-SEM images of nickel foam before and after being coated with GO by the first process. As shown in Figure 2a,b (a: pre-treated nickel foam; b: after soaking in 2.5 mg/L GO), compared with the pretreated nickel foam, a wrinkled film appears to have formed after immersion in the GO-containing solution, which suggests that the 3D skeleton surface of the nickel foam was successfully coated with GO. From Figure 2c, a uniform and tight film was formed on the surface of the nickel foam after the microwave hydrothermal treatment and annealing. After the nickel foam was soaked in solutions with different contents of GO (d-f), deposits were clearly observed. Meanwhile, the higher the concentration of GO, the thicker the obtained film and the more seriously the film peeled off. Figure 3 illustrates the XRD patterns of the microwave hydrothermally treated and annealed nickel foam with and without soaking in 2.5 mg/L GO solution. As shown in Figure 3a, for the nickel foam without immersion in GO solution, the diffraction peaks occurred at 21.26° and at positions corresponding to the (220), (311), (222), (400), (511), and (440) planes of standard cubic NiMn2O4 (JCPDS#01-1110). The result indicated that NiMn2O4 was formed on the surface of the microwave hydrothermally treated and annealed nickel foam without soaking in GO solution. But for the nickel foam immersed in 2.5 mg/L GO solution, the diffraction pattern was composed of the characteristic peaks of NiO (JCPDS#47-1049) and Mn3O4 (JCPDS#80-0382). The preferentially formed GO film on the nickel foam surface may participate in the redox reaction during the microwave hydrothermal process, which led to the phase change of the microwave hydrothermal products. As a result, NiMn2O4 was not obtained [15,16]. The CV curves of the obtained samples measured at different scan rates (10, 20, 50, and 100 mV/s, respectively) are summarized in Figure 4.
From the CV curves, for all the obtained samples, the current increased gradually with the increase in scan rate, while the curve shape changed little, indicating that the electrochemical behavior of the samples is only slightly affected by polarization [17]. From Figure 4a, an obvious redox peak was observed because of the existence of NiMn2O4 on the surface of the nickel foam, which exhibited pseudocapacitance behavior. In Figure 4b–d, the CV curves of the obtained samples were obviously different from that of Figure 4a, with the redox peaks changing greatly. It can be found that, at the same scan rate, the absolute integral area of the CV curve of the samples immersed in GO aqueous solution was much smaller than that of pure NiMn2O4 (Figure 4a), meaning that the pre-formed GO coating did not improve the specific capacitance. Since the nickel foam was first immersed in the GO aqueous solution, its surface was covered by a GO film (Figure 2b), which may affect the formation of NiMn2O4 during the subsequent microwave hydrothermal and annealing process, as confirmed by the result of Figure 3b. Thus, the change in the CV curves is related to the formed GO film and the resulting oxides. Figure 5a illustrates the GCD curves of the samples obtained according to the first process at 1 A/g in the potential range of −0.15~0.5 V.
On the one hand, the GCD curves of the annealed nickel foam treated with (2.5, 5.0, and 7.5 mg/L) GO solution present a nearly symmetrical shape, and the curve shape is basically similar at different current densities, showing that these electrodes have high coulombic efficiency and excellent reversibility. On the other hand, it is obvious that the charge and discharge times of the sample not immersed in GO aqueous solution were much longer than those of the samples soaked in GO solution, representing pseudocapacitance characteristics. Meanwhile, the specific capacitance calculated with Equation (1) is summarized in Figure 5b. It can be clearly seen that all the samples obtained after immersion in GO solution had a very small specific capacitance, significantly inferior to that of the sample without soaking in GO solution. To be specific, at 1 A/g the electrode impregnated in 2.5 mg/L GO solution has a maximum specific capacitance of 493.7 F/g, higher than that of the electrodes impregnated in 5 mg/L and 7.5 mg/L GO solution. It is worth noting that the electrode that was not soaked in GO solution reached a surprisingly high 802 F/g. This result is consistent with the CV results.
The Performance of NiMn2O4/GO Nanocomposite Material The micro-morphology of the samples prepared according to the second process is shown in Figure 6. From Figure 6a, the surface of the nickel foam that was not immersed in the GO solution was covered by bouquet-like structures assembled from nanosheets, which were NiMn2O4 (as shown in Figure 3a). As shown in Figure 6b–d, it can be clearly seen that the bouquets were covered by a thin gauze-like film, whose thickness increased with increasing GO concentration. Figure 7 illustrates the composition analysis result from EDS of the obtained sample soaked in 2.5 mg/L GO solution. It can be seen that the sample was composed of Ni, Mn, O, and C elements, and these elements were all uniformly distributed on the surface of the nickel foam. Combined with the results of Figure 6, this further confirms that the previously formed NiMn2O4 on the surface of the nickel foam was successfully coated by the gauze-like GO. In order to further determine the elemental composition and chemical valence of the prepared samples, XPS analysis of the sample soaked in 2.5 mg/L GO aqueous solution was performed (see Figure 8). The survey spectrum (Figure 8a) identified Ni, Mn, O, and C element peaks, consistent with the composition. The Ni 2p spectrum (Figure 8b) was well deconvoluted into two Ni2+ fitted peaks at 855.7 eV and 873.5 eV, two Ni3+ fitted peaks at 861.6 eV and 879.9 eV [18], and two shake-up satellites. The fitting results of Mn 2p in Figure 8c indicated the existence of Mn 2p1/2 and Mn 2p3/2. The fitted peaks of Mn 2p1/2 and Mn 2p3/2, located at 640.9 eV, 651.6 eV, 642.5 eV, and 653.0 eV [19], revealed the coexistence of Mn2+ and Mn3+ oxidation states in the product. The O 1s spectrum in Figure 8d could be deconvoluted into lattice oxygen (M-O) at 530.4 eV, metal-O-H at 531.8 eV, and surface oxygen species (H-O-H) at 532.9 eV [13,20]. Figure 8e shows the C 1s spectrum, with three peaks at 284.4 eV, 285.1 eV, and 286.9 eV, corresponding to the C=C, C-O, and C=O bonds [21], respectively. The XPS results further confirm the successful in-situ growth of NiMn2O4/GO nanocomposite material on nickel foam.
Figure 9 displays the CV curves of the samples obtained according to the second process (soaked in 0, 2.5, 5.0, and 7.5 mg/L GO solution) at different scan rates (10, 20, 50, and 100 mV/s). It is easy to see that all CV curves have obvious redox peaks, indicating that all samples exhibit pseudocapacitance behavior. As shown in Figure 9, the CV curves of the GO-coated samples were obviously different from those of the sample without GO coating, the difference lying in the positions of the redox peaks. That is, the redox potentials of the latter occurred earlier than those of the former, indicating that the formed GO coating influenced the electrochemical behavior. In addition, it can be found that the current gradually increased with increasing scan rate, while the curve shape changed little, indicating that the electrochemical behavior of the samples was only slightly affected by polarization. However, for the sample obtained after soaking in 7.5 mg/L GO solution (Figure 9d), the redox peaks in the CV curve were deformed, illustrating that the electrochemical behavior of the sample was more susceptible to polarization when the GO coating was too thick.
Figure 10a displays the GCD curves of the obtained sample after being soaked in 2.5 mg/L GO aqueous solution at a current density of 1 A/g between −0.2 and 0.5 V. Charge–discharge platforms occurred on all the GCD curves, representing the characteristics of pseudocapacitors. From the point of view of charge and discharge time, the time of the sample without GO film was longer than that of the samples having GO coatings. The corresponding specific capacitance calculated with Equation (1) from the GCD curves at different current densities is shown in Figure 10b. At current densities of 1–10 A/g, there is no significant difference in the specific capacitance of the electrodes impregnated in 2.5, 5, and 7.5 mg/L GO solution, especially at high current densities. Obviously, the specific capacitance of the GO-coated samples differed little among themselves and was smaller than that of the sample without GO coating. Combined with the FE-SEM analysis results, it can be seen that the preformed flower-spherical NiMn2O4 was almost completely covered by the gauze-like GO film; the larger the GO concentration, the thicker the gauze. Although the GO film can also contribute a certain specific capacitance, the preformed NiMn2O4 was covered by the GO film, blocking electrolyte diffusion and contact. As a result, its specific capacitance was worse.
Conclusions The nanocomposite was grown in situ on the surface of nickel foam by combining the solvent method with microwave-assisted synthesis followed by heating and annealing, applied in different orders. Meanwhile, the morphology, structure, and electrochemical performance of the obtained nanocomposites were investigated. When the nickel foam was first immersed in the GO aqueous solution and then treated in the solution containing nickel nitrate hexahydrate, ammonium fluoride, urea, and potassium permanganate by the microwave-assisted synthesis and heating/annealing process, a nanocomposite of NiO and Mn3O4 was formed in situ on the nickel foam. At 1 A/g, the specific capacitance of this NiO/Mn3O4 nanocomposite (after immersion in 2.5 mg/L GO solution) was only 35 F/g. When the process order was reversed, the NiMn2O4/GO nanocomposite material could be grown directly on the surface of the foamed nickel 3D skeleton. At 1 A/g, the specific capacitance of the NiMn2O4/GO nanocomposite material reached 700 F/g. Overall, this synthesis strategy provides a convenient method for preparing carbon and bimetallic oxide nanocomposites. In addition, this study can open up opportunities for the application of these nanocomposites in supercapacitors. Figure 1. The illustration of the synthesis procedure. Figure 3. XRD patterns of the microwave hydrothermal and annealed nickel foam without (a) and with (b) soaking in 2.5 mg/L GO solution.
Figure 5. (a) GCD curves at 1 A/g and (b) the calculated specific capacitance at different current densities (1, 2, 5, and 10 A/g) of the hydrothermal and annealed nickel foam without and with (2.5, 5.0, 7.5 mg/L) GO solution in 6 M KOH solution. Figure 7. EDS images of the obtained sample after immersing in 2.5 mg/L GO solution. Figure 8. The XPS spectra of the obtained sample after immersing in 2.5 mg/L GO solution: (a) total survey, (b) Ni 2p, (c) Mn 2p, (d) O 1s, (e) C 1s. Figure 10. (a) GCD curves at 1 A/g and (b) the calculated specific capacitance at different current densities (1, 2, 5, and 10 A/g) of the obtained sample without and with GO film in 6 M KOH solution.
8,283.8
2023-09-01T00:00:00.000
[ "Materials Science", "Engineering" ]
Superpixel-Based Attention Graph Neural Network for Semantic Segmentation in Aerial Images : Semantic segmentation is one of the significant tasks in understanding aerial images with high spatial resolution. Recently, Graph Neural Networks (GNNs) and attention mechanisms have achieved excellent performance in semantic segmentation of general images and have been applied to aerial images. In this paper, we propose a novel Superpixel-based Attention Graph Neural Network (SAGNN) for semantic segmentation of high spatial resolution aerial images. In our network, a K-Nearest Neighbor (KNN) graph is constructed for each image, where each node corresponds to a superpixel in the image and is associated with a hidden representation vector. The hidden representation vector is initialized with the appearance feature extracted from the image by a unary Convolutional Neural Network (CNN). Moreover, relying on the attention mechanism and recursive functions, each node can update its hidden representation according to the current state and the incoming information from its neighbors. The final representation of each node is used to predict the semantic class of each superpixel. The attention mechanism enables graph nodes to differentially aggregate neighbor information, which can extract higher-quality features. Furthermore, the superpixels not only save computational resources, but also maintain object boundaries to achieve more accurate predictions. The accuracy of our model on the Potsdam and Vaihingen public datasets exceeds all benchmark approaches, reaching 90.23% and 89.32%, respectively. Introduction With the rapid development in aerial photography technology in recent years, significant improvement has been achieved in the spatial resolution of aerial images. High Spatial Resolution (HSR) aerial images contain a wide variety of objects, including vehicles, roads, farmland, buildings, and so on [1]. As such, research in aerial imagery is of significant value to land monitoring and management [2]. As a basic task of geographic information interpretation, semantic segmentation of HSR aerial images can be applied to practical tasks such as urban planning [3], road extraction [4], and land cover classification [5]. Early image segmentation algorithms (watershed [6], N-Cut [7], Grab cut [8], etc.) mainly segmented an image by extracting its low-level features, and the segmentation results did not contain semantic information. With the development of deep learning, a series of semantic segmentation methods based on Convolutional Neural Networks (CNNs), represented by the Fully Convolutional Network (FCN), have been proposed in succession, and image segmentation has since entered a new stage of semantic segmentation [9]. Deep Convolutional Neural Networks (DCNNs) show great abilities in feature extraction and object representation [10][11][12]. However, convolutional filters can only capture limited local context, while accurate inference of semantic information requires a global perspective of the image and spatial relations between objects. Different from CNNs, Graph Neural Networks (GNNs) can process non-Euclidean structural data, effectively extract spatial features from topologies, and use global context information for inference learning [13,14]. Based on this, subsequent studies attempted to apply GNNs to semantic segmentation tasks [15,16].
However, semantic segmentation on aerial images is a challenging task for three reasons: • In images with high resolution, the scale of foreground objects varies greatly (the car in Figure 1a and the building in (b) are both foreground objects, but the scale difference is great). • The edges of some foreground objects are irregular (e.g., the tree edges in Figure 1c,d). • The background is highly complex and contains a wide variety of features. Existing semantic segmentation methods struggle to deal with the complex context information of aerial images. To address the above challenges, a semantic segmentation method is proposed for aerial images based on a superpixel GNN with an attention mechanism. First, the aerial image is segmented into superpixels. A graph consisting of all these superpixels as its nodes is then built. Finally, edges are constructed by finding spatially neighboring nodes (superpixels). For each node, the image feature vector (i.e., the output of the semantic segmentation CNN) is taken as the initial representation and updated iteratively using recursive functions. The key idea of this dynamic programming approach is that the state of each node is determined by its historical state and the information sent from its neighbors. The aggregation of neighbor information can be differentiated by adding the attention mechanism into the aggregation process. The final state of each node is used to classify each node. The Back-Propagation Through Time (BPTT) algorithm is used to calculate the gradient of the GNN. In summary, the main contributions of this paper are outlined as follows. 1. A GNN-based framework is proposed for semantic segmentation of aerial images. To obtain satisfactory segmentation boundaries and address the irregular object edges in aerial images, superpixels are used as graph nodes to construct the graph structure, so that the GNN can learn representations directly from the superpixel graph. To overcome the limitations of GNNs in extracting features, a CNN is used as a feature extractor to provide good feature vectors for the subsequent learning of the GNN. Our method exploits the complementary advantages of the two neural networks (image features extracted by the CNN and spatial relations provided by the GNN) on the basis of superpixels to achieve satisfactory segmentation results. 2. The GNN model in our framework for semantic segmentation of aerial images is an improved version that introduces an attention mechanism into each node. When the information of neighbor nodes is fused, neighbors are aggregated differently depending on their similarity to the node, so that the GNN's expressive ability is enhanced. For the challenge of large scale variation in aerial images, we increase the receptive field by increasing the number of neighbors of each graph node when constructing the graph and add an attention mechanism when merging neighbors' information. These designs can effectively reduce information fluctuations caused by scale changes and thus deal with the problem of scale variation. Experimental results show that our method achieves advanced performance on the challenging public datasets of Vaihingen and Potsdam. The rest of this article is organized as follows. Section 2 covers the latest progress in semantic segmentation of aerial images in two aspects: semantic segmentation and graph neural networks.
Section 3 describes our proposed SAGNN architecture in detail. Section 4 presents the experiments and result analysis. Finally, the conclusion and future work prospects are given in Section 5. Semantic Segmentation In recent years, deep learning has become a mainstream method for semantic segmentation. Long et al. first proposed the Fully Convolutional Network (FCN), incorporating upsampling convolution layers into the Convolutional Neural Network (CNN) to achieve segmentation of images of arbitrary size [9]. The FCN model has laid a solid foundation for the following semantic segmentation models. The following works [17][18][19][20] aim to implement multiscale feature fusion by expanding the receptive field. For example, DeepLabv1 increases the receptive field through atrous convolution and solves the problem that repeated maximum pooling and subsampling in DCNNs cause resolution degradation [17]. Next, DeepLabv2 [18] and DeepLabv3 [19] use Atrous Spatial Pyramid Pooling (ASPP), which is composed of parallel convolutions with distinct dilation rates, to capture the image's context information at multiple scales. The Pyramid Scene Parsing Network (PSPNet) [20] proposed a Pyramid Pooling Module (PPM) to aggregate contextual information from different regions, thereby improving the ability to obtain global information. Other works [21,22] use an encoder–decoder architecture to optimize object edge details. Semantic segmentation is also a very challenging task for aerial images. In addition to the large scale changes found in most image semantic segmentation datasets [23,24], aerial images have many challenging problems due to their unique characteristics, such as wide gaps between features within the same class, small foreground objects, and imbalance between background and foreground [25]. Michele Volpi and Devis Tuia combined the outputs and features (bottom-up) of a multi-task CNN with conditions encoded by the empty field model (top-down) to optimize the label space [26]. In addition to increasing the diversity of data, the work in [27] also introduces a Channel Attention Mechanism (CAM), which allows the model to better weigh semantic information and spatial location information and to achieve more accurate segmentation. In the Hybrid Multiple Attention Network (HMANet), three attention modules, namely Class Augmented Attention (CAA), Class Channel Attention (CCA), and Region Shuffle Attention (RSA), were proposed to comprehensively capture the feature correlation among space, channel, and category [28]. The latest research proposed the PointFlow Module (PFM): to bridge the semantic gap and address the imbalance between foreground and background at the same time, Li et al. designed the PointFlow Network (PFNet) by adding the PFM to Feature Pyramid Networks (FPNs) [29]. Graph Neural Network There are two main research directions of graph neural networks. One direction is to extend the convolution operation from traditional data (such as images) to graph data. Graph Convolutional Neural Network (GCNN)-based algorithms are mainly divided into two categories: spectral-based and spatial-based. The spectral-based method defines graph convolution as a filter, so the graph convolution operation is considered to remove noise from the graph signal [30].
On the other hand, the spatial-based method interprets graph convolution as an aggregation of feature information from the neighborhood and coarsens the graph into high-level substructures through the interleaved arrangement of graph convolution layers and graph pooling layers [31]. The other direction is to apply the Recurrent Neural Network (RNN) to each node of the graph [32][33][34][35], thus generating the "graph neural network". This GNN is based on recursive operators and can be extended to various graph types [32]. Some subsequent works integrated the attention mechanism [13,36,37], autoencoders [14,38], generative networks [39,40], and other structures into GNNs. With the vigorous development of GNN models, their applications have become more and more extensive in various fields, such as social networks [41], recommendation systems [42], and life sciences [43]. For unstructured data such as images, superpixels can transform images into graph structures, thus allowing image-related tasks to be solved with graph neural networks [44][45][46]. Note that the application of GNNs in the field of computer vision, where semantic segmentation is an important task, has attracted more and more attention, and GNNs exhibit strong performance in semantic segmentation tasks. In the semantic segmentation of 3D point clouds, the work in [47] proposes an end-to-end 3DGNN, in which a K-nearest-neighbor graph is constructed from 2D pixels according to the depth image, so that representations can be learned directly from the 3D point cloud. The work in [48] proposed the EdgeConv layer to improve segmentation accuracy by capturing local features. Each time the EdgeConv features are updated, the K nearest points are re-found according to the distances in the new feature space, so the KNN result differs and the constructed local graph is dynamically updated. Different from the KNN-based graph structure, the work in [49] proposed the concept of superpoints, analogous to superpixels, to represent simple objects. The superpoint graph connected by superpoints performs well in large-scale point cloud semantic segmentation. Similarly, for the semantic analysis of two-dimensional images, the work in [15] also constructs graph structures from superpixels and uses a novel graph Long Short-Term Memory (LSTM) to capture the semantic relationships between superpixels based on local context interactions. The subsequent work [16] proposed a structurally evolved LSTM, which randomly merges graph nodes with high similarity through stacked LSTM layers. Methodology In this section, the proposed superpixel-based graph semantic segmentation model is presented in detail. An overview of the proposed model is introduced, followed by the description of the superpixel-based graph construction method. Finally, the superpixel-based GNN with attention mechanism is presented. Overview of the Graph Structure The graph structure is shown in Figure 2. The input RGB image is first subjected to superpixel segmentation, which can be done off-line. Meanwhile, a stack of convolution layers is used to determine the feature vectors for each RGB image, which are the initial hidden representations of the graph nodes. The graph is built on the superpixel nodes and their spatial connections. More details can be found in Section 3.2. As a result, both semantic information and geometrical information are accessible in this GNN, which consists of three layers.
Then a Multi-Layer Perceptron (MLP) with a softmax layer is shared by all graph nodes. Figure 2 is an illustration of the graph construction. The superpixels obtained by SLIC segmentation of aerial images are used as graph nodes, the feature vectors extracted by convolutional neural networks and image information (RGB, label, and coordinate information) are used as the hidden states of the nodes, and a K-nearest-neighbor graph is constructed according to the spatial relations between superpixels. The blue curve in the graph neural network represents the addition of attention in information aggregation, which allows neighbor information to be aggregated differently. Finally, we obtain the segmentation result of the aerial image through the prediction module. Graph Construction Based on the Simple Linear Iterative Clustering (SLIC) superpixel method put forward by Achanta et al. [50], the superpixel-based graph semantic segmentation model is proposed here. First, the SLIC approach is used to generate a superpixel map. We construct a directed graph based on the superpixel nodes. Each superpixel is regarded as a graph node, and each graph node is connected to its K nearest neighbors via directed edges. The graph can be denoted by G = (V, E, H), where V, E, and H represent the sets of nodes, edges, and hidden states, respectively. It can easily be seen that the graph is directed and asymmetric. Second, a CNN is used to compute the feature map, which exploits semantic information. Finally, the feature vectors of the superpixels are determined according to the feature maps and stored in the corresponding hidden states of the graph nodes. Nodes Determination The SLIC method is employed to derive the superpixel map, in which each superpixel is regarded as a graph node. It is a local clustering method of pixels defined in the 5D space including the (l, a, b) values of the CIELab (Commission International Eclairage Lab) color space and the (u, v) pixel coordinates. In this method, the number of superpixels (nodes) can be specified according to the task and computing power. For smaller-granularity segmentation and higher computing power, more superpixels (nodes) can be used, and vice versa. Node Features and Labels Each graph node includes RGB, superpixel center coordinate, and feature information. As a result, a graph node contains the following elements: R, G, B, x, y, label, and S. Among them, R, G, and B are the average RGB values of all pixels in each superpixel; (x, y) represents the x- and y-coordinates of the center of the superpixel; and S is the feature vector extracted from the convolution feature maps. In subsequent experiments, the improved VGG-16 network, namely Deeplab-Largefov [17], is used as our unary CNN to extract appearance features from aerial images. The fc7 feature maps are upsampled to the size of the original image, and the size of the output feature maps is H × W × C, where H, W, and C are the original height, width, and channel size (1024), respectively. Therefore, the dimension of S is 1024. In this way, the feature, or initial hidden representation, of each node can be written as h_i = (R, G, B, x, y, S). (1) The label of a graph node is the same as the label of the corresponding superpixel node. The label of a superpixel node is obtained by voting among the pixels it contains, and the label with the most votes represents the label of this superpixel node.
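As a concrete illustration of how the node features and labels described above could be assembled, the following Python sketch (not the authors' code; scikit-image's SLIC, NumPy, and a CNN feature map already upsampled to image size are assumed, and all names are illustrative) builds h_i = (R, G, B, x, y, S) and a majority-vote label for each superpixel.

# Minimal sketch of node feature and label construction from superpixels.
import numpy as np
from skimage.segmentation import slic

def build_node_features(image, feat_map, labels_gt, n_segments=2000):
    # image: H x W x 3 RGB array; feat_map: H x W x C CNN features (C = 1024 in the paper);
    # labels_gt: H x W integer ground-truth map.
    seg = slic(image, n_segments=n_segments, start_label=0)   # superpixel id per pixel
    nodes, node_labels = [], []
    for sp in range(seg.max() + 1):
        mask = seg == sp
        ys, xs = np.nonzero(mask)
        rgb = image[mask].mean(axis=0)                         # mean R, G, B over the superpixel
        cx, cy = xs.mean(), ys.mean()                          # superpixel centroid (x, y)
        s = feat_map[mask].mean(axis=0)                        # mean CNN feature vector S
        nodes.append(np.concatenate([rgb, [cx, cy], s]))
        counts = np.bincount(labels_gt[mask].ravel())          # majority vote over pixel labels
        node_labels.append(counts.argmax())
    return seg, np.stack(nodes), np.array(node_labels)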
Edges Determination Each graph node is connected to its K nearest neighbors, which are found in terms of Euclidean distance. When constructing an edge, we add a direction (from the nearest neighbor to the center node), and a directed edge conveys the direction of information flow more clearly. However, the edges between two graph nodes are not necessarily symmetrical: the existence of an edge from node i to node j does not imply the existence of an edge from node j to node i. Algorithm 1 describes the graph construction.
Algorithm 1: Graph Construction.
Input: RGB image
Output: Graph G = (V, E, H)
1: compute the superpixel map from the RGB image by the SLIC method
2: regard each superpixel as a graph node
3: graph node → V
4: compute the feature map by the CNN
5: for each graph node do
6: compute R, G, B
7: obtain x, y from the superpixel center coordinates
8: compute S
9: (R, G, B, x, y, S) → h, h ∈ H
10: obtain the node label by voting over the ground-truth labels of its pixels
11: end for
12: for every two nodes i and j do
13: compute the Euclidean distance d_ij between nodes i and j
14: end for
15: for every node i do
16: find its K nearest neighbors
17: establish directed edges from those K nearest neighbors to node i
18: end for
Superpixel-Based Attention Graph Neural Network It can be seen from the above that each graph node has K neighbors, which may have unequal impacts on that node. More attention should be paid to the neighbors that are closer to, or have the same label as, that node; in other words, these edges should have greater weight [13,37]. As such, we propose the Superpixel-based Attention Graph Neural Network (SAGNN). The overview of SAGNN is shown in Figure 2, and more details are given below. For each node, the propagation process is written as
m_i^t = Σ_{j ∈ N_i} α_ij^t F_1(h_j^t), (2)
h_i^{t+1} = F_2(h_i^t, m_i^t), (3)
where N_i is the set of K nearest neighbors of node i; t ∈ {0, 1, 2} corresponds to graphs G^(0), G^(1), and G^(2), respectively; h_j^t is the current hidden state of node j; F_1 is a Multi-Layer Perceptron (MLP); m_i^t is a vector indicating the aggregation of messages that node i receives from its neighbors N_i; α_ij^t is the attention parameter between node i and node j; and F_2 is a vanilla RNN. At each time step, each node collects information from its neighbors by (2) and then fuses its hidden state and the neighbors' information by (3). After that, one obtains the new hidden state h_i^{t+1} of node i, which is used at the next layer G^(t+1). The attention parameter α_ij^t is obtained by the following equation:
α_ij^t = cos(h_i^t, h_j^t) = (h_i^t · h_j^t) / (||h_i^t|| ||h_j^t||). (4)
α_ij^t represents the correlation between node i and node j, measured by the cosine of the angle between their hidden states, and thus also the similarity between the two nodes. The higher the similarity between them, the more likely they are to have the same label. Therefore, higher weight and more attention should be given to neighbors with higher similarity. Finally, the probability over labels is obtained as
p_i = F_3(h_i^2), (5)
where h_i^2 is the hidden state of node i in graph G^(2), and F_3 is a Multi-Layer Perceptron (MLP) with a softmax layer shared by all nodes. Network parameters are adjusted by the Back-Propagation Through Time (BPTT) algorithm.
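To make the propagation step concrete, the following Python sketch is an illustrative reconstruction rather than the authors' implementation: F_1 and F_2 are simplified to single-layer transforms, the weight matrices and other parameter names are assumptions, and the brute-force neighbor search is only meant for small graphs. It builds directed KNN edges from superpixel centroids and performs one attention-weighted update of the hidden states, following Equations (2)-(4).

# Minimal sketch of one SAGNN propagation step (illustrative, not the authors' code).
import numpy as np

def knn_neighbors(centroids, k=8):
    # Euclidean distances between all superpixel centroids; self-distance excluded.
    d = np.linalg.norm(centroids[:, None, :] - centroids[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    return np.argsort(d, axis=1)[:, :k]              # directed edges: neighbor -> node

def propagate(h, neighbors, W_msg, W_h, W_m, b):
    # h: (n, d) hidden states; W_msg, W_h, W_m: (d, d); b: (d,). All assumed parameters.
    n = h.shape[0]
    h_next = np.empty_like(h)
    for i in range(n):
        nbr = h[neighbors[i]]                        # hidden states of the K neighbors
        # attention alpha_ij: cosine similarity between node i and each neighbor (Eq. (4))
        alpha = (nbr @ h[i]) / (np.linalg.norm(nbr, axis=1) * np.linalg.norm(h[i]) + 1e-8)
        # message aggregation with F_1 as a one-layer MLP (Eq. (2))
        msg = (alpha[:, None] * np.tanh(nbr @ W_msg)).sum(axis=0)
        # vanilla-RNN-style state update as F_2 (Eq. (3))
        h_next[i] = np.tanh(h[i] @ W_h + msg @ W_m + b)
    return h_next

Running propagate three times with shared parameters would correspond to the three-layer structure G^(0), G^(1), G^(2) described above; the final states would then be fed to the shared MLP-with-softmax classifier of Equation (5).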
Datasets The proposed method is evaluated using two public benchmarks provided by the International Society for Photogrammetry and Remote Sensing (ISPRS), namely, the Potsdam dataset and the Vaihingen dataset [51]. Both of these datasets consist of high-resolution True Ortho Photos (TOP), Digital Surface Models (DSM), and ground-truth labels. Potsdam The Potsdam dataset contains 38 high-resolution images (size 6000 × 6000 pixels) with a Ground Sampling Distance (GSD) of 5 cm. The dataset contains 6 classes: (1) impervious surfaces, (2) building, (3) low vegetation, (4) tree, (5) car, and (6) clutter. The dataset provides four channels of NIR (Near-Infrared)-R-G-B information, DSM, and normalized DSM. Note that the DSM is left unused in our experiments. Seventeen images are used for training and 14 images for testing our model. Each image is cut into 600 × 600 pixel patches. The validation set contains 7 images randomly selected from the training set. Vaihingen The Vaihingen dataset consists of 33 high-resolution images (average size 2494 × 2064 pixels) with a Ground Sampling Distance (GSD) of 9 cm. The classes of the dataset are the same as those of the Potsdam dataset. The dataset provides NIR-R-G channels and DSM. Sixteen images are used for training and 17 images for testing our model. Each image is cut into 512 × 512 pixel patches. Evaluation Metrics On these datasets, our method is evaluated in terms of three commonly used metrics: average F1 score, overall accuracy, and Intersection over Union (IoU) [52]. Among them, the F1 score of the foreground object classes is calculated by (6):
F1 = (1 + β²) · (precision · recall) / (β² · precision + recall), (6)
where β represents the equivalence factor between recall and precision and is usually set to 1. Overall Accuracy (OA) and Intersection over Union (IoU) are defined by Formulas (7) and (8), respectively:
OA = (TP + TN) / N, (7)
IoU = TP / (TP + FP + FN), (8)
where N is the total number of pixels, and TN, TP, FN, and FP represent the numbers of true negatives, true positives, false negatives, and false positives, respectively. Implementation Details In the experiments, the SLIC algorithm [50] is used to generate 2000 superpixels for each image. See Section 4.4 for details about the number of superpixels. Subsequently, the average of the feature vectors corresponding to all pixels contained in each superpixel is calculated as the feature vector of the superpixel. Finally, the K nearest neighbors (K = 8 in this experiment) of each superpixel are determined according to the superpixel centers, and the graph structure is constructed. The GNN part is composed of three layers with the same graph structure. The MLP of each node is a single layer used to aggregate neighbor information, and the attention parameter α is calculated in the forward pass. In the training phase, the unary CNN is initialized from the pre-trained VGG network in [17]. The network optimization method is Stochastic Gradient Descent (SGD) with momentum, and the norm of the gradient is clipped so that it does not exceed 10. The initial learning rates of the unary CNN and the GNN are 0.001 and 0.01, respectively, the batch size is 5 images, the momentum is 0.9, and the weight decay is 0.0001. The MSRA method [53] is used to initialize the RNN update functions of our GNN. All experiments were conducted on the PyTorch framework with an NVIDIA GeForce RTX 2080Ti GPU.
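For clarity, the following Python sketch (illustrative only, not the authors' evaluation code; function and variable names are assumptions) computes the metrics of Equations (6)-(8), namely overall accuracy, per-class F1 with β = 1, and per-class IoU, from predicted and ground-truth label maps.

# Minimal sketch of the evaluation metrics in Eqs. (6)-(8).
import numpy as np

def segmentation_metrics(pred, gt, num_classes):
    pred, gt = pred.ravel(), gt.ravel()
    oa = (pred == gt).mean()                                             # Eq. (7): correct pixels / N
    f1, iou = [], []
    for c in range(num_classes):
        tp = np.sum((pred == c) & (gt == c))
        fp = np.sum((pred == c) & (gt != c))
        fn = np.sum((pred != c) & (gt == c))
        precision = tp / (tp + fp + 1e-8)
        recall = tp / (tp + fn + 1e-8)
        f1.append(2 * precision * recall / (precision + recall + 1e-8))  # Eq. (6) with beta = 1
        iou.append(tp / (tp + fp + fn + 1e-8))                           # Eq. (8)
    return oa, np.array(f1), np.array(iou)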
Superpixel Number Superpixels can group pixels in advance according to their appearance similarity and spatial correlation, effectively reducing the number of elements for subsequent processing and helping preserve the edge information of objects. When the semantic labels of superpixels are defined, most of the internal semantic information of a superpixel is consistent and can be used directly as its label. If the pixels within a superpixel have different ground-truth labels, a voting mechanism is adopted to take the label with the largest proportion as the label of the superpixel. However, superpixels may introduce quantization errors in this case. Therefore, the performance of the constructed GNN is evaluated using different numbers of superpixels. Figures 3 and 4 show the experimental results on the Potsdam and Vaihingen datasets at different superpixel numbers, respectively. We can see that the number of superpixels should preferably be greater than 2000 for the Potsdam dataset and greater than 1800 for the Vaihingen dataset. Because the image resolution of the Vaihingen dataset is lower than that of the Potsdam dataset, the Vaihingen dataset needs fewer superpixels to achieve maximum accuracy than the Potsdam dataset. A comprehensive comparison between the results on the two datasets indicates that the network model tends to perform well when more than 2000 superpixels are used. Therefore, in order to balance computational efficiency and prediction accuracy, we used an average of 2000 superpixels for each image throughout the experiments. Comparison with Existing Works Our model was compared with four existing methods, including the benchmark algorithm FCN [9], Spatial propagation CNN (SCNN) [54], RotEqNet [55], and DeepLabV3+ [19]. The experimental results on the Potsdam and Vaihingen test sets are shown in Tables 1 and 2, respectively. In order to directly reflect the segmentation effect, the F1 score is selected as the evaluation metric for each foreground class in Table 1. As shown in Table 1, our SAGNN method not only outperforms the other algorithms in the F1 score of each class, but also performs best in mean F1 score, OA, and MIoU. Similarly, the numerical results of our method on the Vaihingen test set are also excellent (as shown in Table 2). Except for the F1 score of Low Vegetation, our SAGNN achieves the best results in the other 7 metrics. Our method performs most prominently on Building (in Table 1) and Car (in Table 2), which are 1.13 and 1.06 higher than those of the suboptimal algorithm DeepLabV3+, respectively. Regardless of whether the segmented object is large-scale (building) or small-scale (car), our model always achieves good segmentation results, a solid proof that our network is robust to scale changes. Qualitative Comparison The results of the qualitative comparison between SAGNN and the baseline network on the Potsdam and Vaihingen test sets are provided in Figures 5 and 6, respectively. In particular, red dotted boxes are used to mark areas that are inaccurately labeled in Figure 5. As semantic segmentation datasets are manually annotated, there are labeling errors, adding further challenges to the inherently difficult semantic segmentation task. From Figure 5, our method is evidently largely superior to the baseline FCN-based algorithm. The red box in Figure 5a indicates that our model can segment the wall that is not covered by the branches. Similar cases are the black car in Figure 5b, the sunshade in Figure 5c, and the road and the white car in Figure 6c. Even if the ground truth is wrong, our model can still give correct predictions. Similarly, SAGNN's performance is far better than the benchmark on the Vaihingen test set. For example, in the red boxes of Figure 6a,b, the edges are accurately segmented and the objects are correctly classified by the SAGNN method. In conclusion, SAGNN can predict more accurate segmentation maps; it not only obtains more refined boundary information, but also effectively filters out error noise (incorrectly labeled pixels), a proof of the outstanding performance of the GNN and the effectiveness of the model based on superpixels and the attention mechanism.
Figure 7 shows several examples of the segmentation results of the five segmentation algorithms. The upper three rows are the results on the Potsdam test set, and the lower three rows are the results on the Vaihingen test set. The segmentation results of our method are significantly better than those of the other four methods, especially for objects with regular edges (such as Building and Car). However, the segmentation is not as accurate for objects with irregular edges, such as the tree in the first image and the low vegetation in the last image. Ablation Study In SAGNN, three important modules are used on the GNN body: the superpixel module, the attention module, and the (CNN) feature extraction module. Among them, the superpixel module is used to reduce the resolution of the image and retain the boundary information of objects, the attention module is used to focus on similar neighbor information when aggregating neighbors, and the CNN feature extraction module is used to extract feature vectors from the original image. The above is a brief discussion of the contributions of the three modules, on which we conducted ablation studies under different settings. Tables 3 and 4 show the ablation results on the Potsdam and Vaihingen datasets, respectively. As shown in Table 3, when these three modules are used alone, both overall accuracy and average IoU are below the baseline levels. In particular, when the attention module is used alone, the overall model performance is the worst, with an OA of only 82.72% and an MIoU of only 75.16%. However, this result does not mean that the attention module is unimportant. The main functions of the attention mechanism are to strengthen effective information and weaken redundant information. For the graph neural network model, the influence of the attention module on semantic segmentation is indirect; its greater significance is to increase the network's ability to extract effective information. In contrast, improving the edge accuracy of the image (superpixel module) and improving feature quality (CNN module) have direct and effective impacts on the semantic segmentation results, so the model performs better when the attention module is used together with these two modules. In the experiments where two modules are used at the same time, it can be seen that the OA of the "Superpixel + Attention" combination reaches 86.45%, which is 4.02% higher than if the "Attention" module is used alone and 2.44% higher than if the "Superpixel" module is used alone; also significant is the improvement in MIoU brought about by the simultaneous use of the "Superpixel + Attention" combination. Similarly, the OA and MIoU of the "Attention + CNN" combination also achieve better results of 87.68% and 82.19%, respectively. These results show that the attention module depends strongly on the other modules, and that using the other two modules at the same time can greatly improve the performance of our model. In the dual-module experiments, the OA and MIoU of the "Superpixel + CNN" combination are the best, more than one percentage point higher in each metric than either the "Superpixel + Attention" or the "Attention + CNN" combination. These results show that good segmentation requires not only superpixel preprocessing but also high-quality features extracted by the CNN. Finally, when the three modules are applied at the same time, OA and MIoU reach the best values of all experiments. The same situation can be verified in Table 4.
In summary, our method proves effective in optimizing the model from different angles, which brings great benefits to target segmentation. We also conduct ablation experiments on our method in terms of parameters, inference time on the GPU, and computational cost (FLOPs). Table 5 details the quantitative results of the ablation experiments on the Potsdam test set. It can be seen from Table 5 that the addition of the superpixel module saves inference time and computational cost very effectively. Extensive Analysis We list five aerial image semantic segmentation datasets in Table 6. Among them, the dataset with the highest single-image resolution is the Zeebrugges dataset, with a spatial resolution of 5 cm. In order to improve computational efficiency, the method in [26] reduced the spatial resolution to 10 cm. The dataset with the largest number of images is the EvLab-SS dataset, which contains 35 satellite images and 25 aerial images. The Dual Multi-Scale Manifold Ranking (DMSMR) network [56] cuts the images into 640 × 480 pixel patches and then compresses them to 321 × 321 pixels for training. The Zurich Summer dataset is relatively small compared to the other four datasets. CNN-Multiresolution Segmentation (MRS) [57] designed three patch sizes (32 × 32, 64 × 64, 128 × 128) for training. The Potsdam and Vaihingen datasets are commonly used aerial image semantic segmentation datasets. In order to ensure that the image information is relatively complete, we did not reduce the resolution of the images, and cut them into patches of 600 × 600 pixels and 512 × 512 pixels for training. It can be seen from the patch sizes that our method (like the other methods) deals with relatively small patches. For a semantic segmentation model, learning and processing large images is a challenge, and a substantial increase in image resolution will cause exponential growth in parameters. Our graph structure is constructed with superpixels, so it is advantageous for dealing with large images. As the image size reaches the city scale, our model can upgrade the graph neural network to a dynamic evolution network to save computing resources and merge graph nodes in the learning process. Conclusions In this work, a superpixel-based attention graph neural network was proposed for semantic segmentation of aerial images. The GNN was built on superpixel nodes and features extracted from the image, with an attention mechanism introduced into the propagation process. Our SAGNN used both the appearance information of aerial images and the geometrical relationships between superpixels. It was able to capture long-term dependencies in images more effectively and maintain the integrity of semantic information. The comprehensive evaluation on two public datasets for semantic segmentation of aerial images demonstrated that our SAGNN had superior performance, attaining the best results on the evaluation metrics of both datasets. Although the edges of objects in aerial images are irregular, our model was able to segment them accurately. Our model achieved the highest F1 scores regardless of object scale, which showed that it is robust to aerial images with large scale changes. Although our model performed well in the semantic segmentation of aerial images, there are still unresolved problems, such as how to process larger aerial images. As such, a direction of our future work will be to explore how to achieve semantic segmentation of large-size aerial images using a dynamic evolution graph neural network.
Institutional Review Board Statement: Not applicable. Conflicts of Interest: The authors declare no conflict of interest.
Improved Application of Hyperspectral Analysis to Rock Art Panels from El Castillo Cave (Spain)

Rock art is one of the most fragile and relevant cultural phenomena in world history, carried out in shelters or on the walls and ceilings of caves with mineral and organic substances. The fact that it has been preserved until now can be considered fortunate, since both anthropogenic and natural factors can cause its disappearance or deterioration. This is the reason why rock art needs special conservation and protection measures. The emergence of digital technologies has made a wide range of tools and programs available to the community for a more comprehensive documentation of rock art in both 2D and 3D. This paper shows a workflow that makes use of visible and near-infrared hyperspectral technology to manage, monitor and preserve this appreciated cultural heritage. Hyperspectral imaging is proven to be an efficient tool for the recognition of figures, coloring matter, and state of conservation of such valuable art.

Introduction Natural caves constitute an important part of our natural heritage. Those that include art representations are a fundamental part of our historical heritage, and all of them are, or can become, an excellent tourist resource as well as an excellent living laboratory to understand their behavior in different situations. Smart documentation, which collects enough significant complex data, is key to the conservation of any type of heritage to be passed on to future generations. Many factors can potentially affect rock art and its preservation. First, geological and geomorphological risks could lead to gravitational events such as landslides and collapses. Environmental risks directly affect the decorated panels and their parietal support as a result of the exchange of energy with the exterior and can provoke erosion and fragmentation of the support. Flowing water can generate chemical weathering, washing processes or speleothem reconstruction over the walls that can cover the representations. Biological...

The cave is located in Puente Viesgo, Cantabria, Spain (Figure 1).
Throughout its extension, this cavern contains a large number of animal figures, such as aurochs, goats, deer, hinds, reindeer, horses, etc. The panel was first studied by Alcalde del Río, Breuil and Sierra [16] in 1911, and researchers have revisited it and applied different techniques over the last century to study it, such as direct tracing, drawings, analogue photography and traditional digital images [17][18][19][20][21]. It is located in the narrow passage that connects the second and third rooms of the cave, on the wall opposite the so-called sorcerer of El Castillo. Its dimensions are 1.30 meters long by 1.20 meters wide, in which a more or less rectangular shape or ideomorph of a thick and intense red color stands out (Figure 2). As will be shown, hyperspectral remote sensing can help to better identify figures, analyze superimpositions and isolate pigment signals [3,4]. The first study carried out on the panel [16] identified one symbol. The use of hyperspectral technology has allowed documenting one ideomorph, two hinds, one bison, one reindeer, and one headless quadruped.

Overall Workflow The overall workflow of the proposed method is shown in Figure 3. It basically consists of three parts. The first part deals with the rigorous georeferencing of the data in order to convert hyperspectral data into spatial data; the second part deals with data acquisition and pre-processing to obtain calibrated hyperspectral data. Both parts allow us to create what is known as a georeferenced 3D calibrated hyperspectral model; from this model, data analysis and visualization are carried out to obtain both cartography and false color compositions that improve visualization.

The field work for data acquisition took place in 2018. The sensor used to record data was a VNIR Specim V10E (Specim Spectral Imaging Ltd., Oulu, Finland) and, as described in Figure 4, raw data was converted to reflectance and then georeferenced and 3D ortho-corrected. Once the georeferenced 3D calibrated hyperspectral model is obtained, a classification process generates a pigment cartography. An image enhancement process was used in parallel to obtain false color compositions, which were used in the technical process description and pigment interpretation.
Georeferencing Georeferencing is the technique of assigning geographic coordinates to an object; it is used in any mapping procedure and in the development of digital cartographic databases. By georeferencing, the position of a given point on the Earth's surface is precisely located. Rock art needs absolute georeferencing to guarantee the spatial coherence of the data in a time series, which might be acquired in the future to follow up its condition. Every measurement contains error: no measurement is ever exact, and measurements must be rigorously adjusted, with the results undergoing statistical analysis. In the field of geomatics, the integration of global navigation satellite systems (GNSS), topographic total stations (TTS), 3D terrestrial laser scanners (3DTLS), digital metric cameras, and hyperspectral imaging systems allows an accurate, reliable and rapid recording of information. This information can be integrated into a Geographic Information System (GIS), which can be used extensively for management, planning or decision making [22]. Accurate georeferencing allows layers of information to be superimposed and analyzed in conjunction over time [23], thanks to the creation of a common reference frame. A reference frame is the practical realization of a system; it is the materialization of a reference system, i.e., the set of points and their coordinates and the techniques applied in the measurements and the methods used. To create an accurate reference frame within the cave, this work combines different geomatics methods [24]. Error propagation makes it possible to calculate the accuracy of the elements in the interior of the cave. The workflow followed is:
1. GNSS: Outside the cave, the reference frame is created with mutually visible bases, which were measured with a Topcon Hiper SR GNSS (Topcon Corporation, Tokyo, Japan) connected to the active network of GNSS stations in Cantabria to receive corrections via the NTRIP protocol (Networked Transport of RTCM via Internet Protocol). The result is a set of four coordinates, accurate to the centimeter. 2. Microgeodetic network: The GNSS coordinates are used to create a complex network [25,26]. This is carried out to guarantee the quality of the different works carried out, as described by Bjerhammar [27]. First, a free network is adjusted and later attached to the reference frame, obtaining a set of coordinates that constitute the reference frame of the site. The coordinate system was ETRS 1989 UTM Zone 30N (EPSG: 25830). A Topcon GPT-7503 (Topcon Corporation, Tokyo, Japan) total station was used. Its angular accuracy is 3" and its measurement accuracy is ±(2 mm + 2 ppm × distance) mean squared error. 3. Traverse/radiation: The microgeodetic network was adjusted outside the cave. To have accurate coordinates inside the cave, a closed traverse there and back was observed, and later adjusted and compensated (Figure 5). The bases were materialized by means of steel nails, and from them a series of new bases and targets were radiated with a TTS. These were used as references for 3D laser scanning. 4. 3DTLS: A FARO X-130 system (FARO Technologies, Florida, USA) [28] was used to scan the cave. The accuracy of a single point is 2 mm at 25 m with a reflectance of 85%. Around 87% of the data was measured at less than 4.5 m. Once the data was adjusted, a subsampling method was used to obtain a homogeneous point cloud that constitutes the basis of the ground control/check points later used in photogrammetry. The tensions regarding correspondences between scan points showed that 81% of the points were below 3 mm. 5. Photogrammetry: The panels were digitized with a resolution of more than 50 microns. The ground control points were taken from the point cloud, and a standard photogrammetric process as described in [29] was applied. The photogrammetric equipment included a set of Sony A7R Mark II (Sony Corporation, Tokyo, Japan) cameras with a 90 mm macro fixed-focal-length lens, an exposure control procedure and a working distance below 1 m. Each picture covered a surface of 0.3976 × 0.2652 m². This 3D model is used to re-project the hyperspectral data onto it, remove the conical perspective of the data, and project data and results orthogonally.
Hyperspectral Imaging System The spectral camera is the combination of the Specim V10E spectrograph with a monochrome sCMOS camera (CL-30). This generates a hyperspectral image that allows solving colorimetric problems in both scientific and industrial applications (Figure 2). Specim V10E is a spectrographic system that, combined with a monochrome camera, becomes a hyperspectral camera covering a spectral range of 400-1000 nm. It reads a line from the sample and decomposes the spectral information of that line into a 2D image, composed of spectral information on the Y-axis and spatial information on the X-axis. 214 spectral bands were recorded and the spectral resolution was 5.6 nm.

Illumination Sources Caves in general are places with very stable conditions. The cave of El Castillo has an average temperature of 13.78 °C and a total absence of light. This means illuminating the panels was necessary, following the recommendations of the International Council of Museums (ICOM) [30]. For this purpose, four Philips TL5 tubular fluorescent visible lamps supported with infrared and ultraviolet light-emitting diode (LED) lights were used. A FieldSpec Pro FR spectroradiometer (Analytical Spectral Devices, Inc., Colorado, USA) was used to record the spectral signature of the lights in the range of 350-1050 nm (Figure 6).

Pre-Processing (a) Reflectance Calibration Prior to image acquisition, dark (D) and white (W) references were measured to calculate the calibrated reflectance values (I) of the raw sample images (I0), using the following formula: I = (I0 - D) / (W - D). The hyperspectral reflectance image was georeferenced and ortho-corrected by using the georeferenced 3D model obtained from photogrammetry. This process makes it possible to compare the information collected. To cover the working area, two hyperspectral tracks were collected whose pixel size was set to 500 microns; the area covered was approximately 0.6 × 1.25 m, that is, 0.75 m² in a 1312 × 2500 pixel image.
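A minimal sketch of the dark/white reference calibration just described is given below, assuming the raw cube and the reference frames are plain NumPy arrays; the actual acquisition and calibration were performed with the Specim tool chain, so this is only an illustration of the formula.

```python
# Hedged sketch of the reflectance calibration I = (I0 - D) / (W - D); toy data only.
import numpy as np

def calibrate_reflectance(raw, dark, white, eps=1e-6):
    """raw, dark, white: arrays of equal shape (lines, samples, bands)."""
    raw, dark, white = (np.asarray(a, dtype=np.float64) for a in (raw, dark, white))
    return (raw - dark) / np.maximum(white - dark, eps)   # eps guards against division by zero

# Toy example: 4 lines x 5 samples x 3 bands
rng = np.random.default_rng(1)
dark = rng.uniform(90, 110, (4, 5, 3))
white = dark + rng.uniform(800, 900, (4, 5, 3))
raw = dark + 0.4 * (white - dark)                          # a target reflecting roughly 40 %
print(calibrate_reflectance(raw, dark, white).round(2))
```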
The process was run in MeshLab and the mean reprojection error of the adjustment was 312 microns.

Pigment Analysis The processes that have been integrated can be divided into two families: • Pigment analysis techniques, whose aim is to create a cartographic representation of the pigments by classifying calibrated data and generating thematic information. These techniques produce "fully interpreted" information. • Enhanced visualization techniques, whose aim is to create false color compositions to enhance paintings that can hardly be appreciated with the naked eye. These techniques produce "fully uninterpreted" information. The main difference between the two families is the result: the output of the enhanced visualization techniques is images, while pigment analysis yields classified pixels.

Minimum Noise Fraction Transformation The original data had 214 spectral bands; the minimum noise fraction (MNF) transformation determines the dimensionality of the image, segregates the noise from the data and reduces the hardware requirements of the subsequent processing [31]. It also determines the spatial coherence of the eigenimages and defines the cut-off between "signal" and "noise", allowing specific and precise analyses. It produces orthogonal bands ordered by their information content, which means it can also be used to eliminate noise.

Pixel Purity Index The pixel purity index (PPI) aims to locate the purest spectral points of the hyperspectral image. To do this, the method is based on the assumption that the most extreme points in the scatter plots are the best candidates to be used as endmembers. This model offers satisfactory results when the components that reside at a sub-pixel level appear spatially separated. In this situation, the absorption and reflection phenomena of incident electromagnetic radiation can be characterized following a strictly linear pattern. The algorithm [32] proceeds by generating a large number of random N-dimensional vectors called "skewers". Each point of the image is projected onto each skewer, and the points corresponding to the extremes in the direction of a skewer are identified and stored in a list. As the number of skewers increases, the list grows, and the number of times a given pixel is stored in the list also increases. The pixels stored most often are considered to be the final endmembers.

n-D Display It is used to locate, identify and group the purest pixels and the most extreme spectral responses in a data set. The n-D viewer helps to visualize the shape of a data cloud resulting from plotting image data in spectral space (with image bands as axes). The maximum number of bands to be displayed was 54. Usually a spatial subset of minimum noise fraction (MNF) data containing only the purest pixels determined from the pixel purity index (PPI) is used. It is also used to check the separability of classes when generating regions of interest (ROI) as input to supervised classifications.

Spectral Analysis The sampling was carried out using Analytical Spectral Devices' full resolution spectrometer. This spectrometer can acquire data in the 350-2500 nm range, but it is important to note that the hyperspectral imaging system only takes advantage of data between 400 and 1000 nm. The spectral signatures used to train the system were obtained from existing panels in the cave, to train the subsequent mapping methods [4]. Areas without calcitic concretion and biodeterioration were sampled in different areas with ochre pigment, black pigment, and rocky support.
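The pixel purity index procedure described above (random skewers, counting extreme projections) can be sketched in a few lines. This is an illustrative reconstruction of the idea, not the implementation used in the study, and the input data here are random numbers rather than MNF-transformed imagery.

```python
# Rough PPI illustration: count how often each pixel is an extreme projection onto random skewers.
import numpy as np

def pixel_purity_index(pixels, n_skewers=5000, seed=0):
    """pixels: (n_pixels, n_bands) array, e.g. MNF-transformed data flattened over space."""
    rng = np.random.default_rng(seed)
    counts = np.zeros(pixels.shape[0], dtype=int)
    for _ in range(n_skewers):
        skewer = rng.standard_normal(pixels.shape[1])
        proj = pixels @ skewer
        counts[proj.argmin()] += 1        # extreme in one direction of the skewer
        counts[proj.argmax()] += 1        # extreme in the opposite direction
    return counts                          # high counts -> endmember candidates

data = np.random.default_rng(2).random((1000, 10))   # toy stand-in for the image spectra
ppi = pixel_purity_index(data, n_skewers=2000)
print(ppi.argsort()[-5:])                             # indices of the five purest pixels
```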
Spectral Angle Mapper The spectral angle mapper (SAM) is a fully automated method used to compare the spectral signatures of the image with spectral libraries [32]. For the SAM classification to be effective [33], the image data needs to be converted to apparent reflectance. The algorithm determines the similarity between two spectral signatures by calculating the spectral angle between them, treating them as vectors in a spectral space with dimensionality equal to the number of bands. The evaluation was carried out empirically by selecting a sample of pixels from the thematic map obtained (classified image) and comparing the assigned class with the actual class determined from reference data obtained in the field sampling. In this way, the percentage of pixels in each class that have been correctly classified can be estimated, as well as the proportion of errors due to confusion between the different classes.

Mixture Tuned Matched Filtering Mixture-tuned matched filtering (MTMF) is a hybrid method based on both linear mixture theory and signal processing [34]. MTMF results are presented as two sets of images, the infeasibility images and the matched filter images, the latter with values from 0 (no match) to 1 (perfect match). MTMF does not require knowledge of all the endmembers, because it maximizes the response of a known endmember and suppresses the response of the composite unknown classes to match the known signatures [35].

Visualization Improvement Visualization enhancement methods are processed independently in order to create image transformations that decorrelate and rescale the noise in the data (a process known as noise whitening), resulting in transformed data in which the noise has unit variance and there is no band-to-band correlation. Different methods have been applied in order to create the transformed images.

Decorrelation Adjustment Traditionally, digital filters have been used to enhance images. The image analysis techniques used to enhance elements of interest in digital images can be divided into two types: - Those that alter the radiometry of images by increasing their contrast, compressing or stretching the histogram of the images. - Those aimed at eliminating redundant data, or decorrelating the data, which are primarily based on Principal Component Analysis (PCA) [36]. This analysis is very practical when dealing with several spectral intervals, since it allows reducing the dataset to a more manageable number of bands, eliminating redundant information. It is applied to generic binary data. Apart from these traditional decorrelation techniques, two decorrelation stretch techniques have been implemented [37] that are recommended for applications on highly correlated multispectral images, called direct decorrelation stretch (DDS) and intensity conservation DDS (ICDDS). A technique called IHS substitution (IHSS) has also been implemented [38]. Hyperspectral bands are usually highly correlated, so they cannot be displayed by independent contrast stretching in false color compositions as common RGB images, because the colors can appear undersaturated. To solve this problem, DDS increases color saturation by reducing the achromatic component of the RGB image. ICDDS preserves the original image intensity more than DDS. The IHSS technique has been applied by making two RGB-IHS transformations.
In this case, the hue is unaltered and the saturation of a pixel is inversely proportional to its intensity value, because the product of the saturation and intensity values equals a constant. The level of degradation of chromatic information is important when implementing saturation stretching algorithms.

Principal Component Analysis PCA is a statistical technique that transforms data from multivariate radiance bands, which are often highly correlated (i.e., visually and numerically similar). It is an image space transformation designed to eliminate this spectral redundancy. The PCA transform is optimal in the sense that, of all the possible transformations, it chooses the particular matrix that diagonalizes the covariance matrix of the original multispectral image. PCA is very practical when dealing with several spectral ranges [39], since it allows reducing the dataset to a more manageable number of bands, eliminating redundant information. The Karhunen-Loève [40] transformation has been implemented in the IDL language.

Minimum Noise Fraction Transformation The minimum noise fraction (MNF) transformation was developed to overcome PCA's inability to reliably separate signal and noise components in multispectral images. Instead of maximizing the variance of the data, the MNF maximizes the noise content of each component, so that the maximum signal-to-noise ratio of each component is obtained when the transformation is inverted. The MNF transformation allows the removal of noise from the image [31], thus determining the actual dimensionality of the data and reducing computational requirements. It is a linear transformation consisting of the following separate principal component analysis rotations: - The first rotation uses the principal components of the noise covariance matrix to decorrelate and rescale the noise in the data, resulting in transformed data in which there is no band-to-band correlation. - The second rotation uses the principal components derived from the original image data after the noise has been whitened by the first rotation and rescaled by the standard deviation of the noise. Since there will be more spectral processing later, the inherent dimensionality of the data is determined by examining the final eigenvalues and the associated images.

Independent Component Analysis The ICA transformation is used as a tool for blind source separation. The fundamental objective of independent component analysis (ICA) is to provide a method for finding a linear representation of non-Gaussian data so that the components are statistically independent, or as independent as possible [41,42]. Such a representation makes it possible to obtain the fundamental structure of the data in many applications, including feature extraction and signal separation. Compared to principal component analysis, ICA offers some unique advantages: - PCA is an orthogonal decomposition. It is based on the analysis of the covariance matrix, which relies on a Gaussian distribution assumption. ICA is based on the assumption of non-Gaussian distributions from independent sources. - PCA uses only second-order statistics, while ICA uses higher-order statistics. Higher-order statistics are a more robust statistical assumption, revealing interesting features in generally non-Gaussian hyperspectral datasets.
If the characteristic of interest (such as an anomaly) occupies only a small part of all the pixels, its contribution to the covariance matrix of the entire image is negligible and it tends to end up in the noise bands; in ICA analysis, such characteristics of interest can be distinguished from the noise bands.

Results and Discussion The studied panel is quite complex, since it is not in a very good state of preservation. Remains of red and black color, covered by a thin calcite layer, can be appreciated.

Working Area This area is located in a place named the tunnel, on the left wall. There is a single small protrusion in what would be the roof. The motifs were made on limestone covered by a thin calcite layer. It consists of a tectiform or ideomorph of thick and intense red color, in a rectangular shape. The interior is very subdivided or compartmentalized. Around it there are some black pigment remains, among which no figure has been identified. The technical characterization tells us that this is a flat ink drawing, created using iron oxide (Figure 7). Once the workflow is applied, some conclusions can be obtained: 1. ICA and MNF analysis allow the isolation and identification of areas affected by calcite coating, to differing degrees, due to the decrease of signal intensity (Figure 8). 2. Different mineralogical compositions provide different spectral signals. In this case, areas of ochre and black have been distinguished under the calcite. 3. Recovery of the pigments, even below the calcite layer: the limits of the figure are clearly defined, and the motif can be formally reconstructed thanks to the 2nd PCA component (Figure 9); similar results can be obtained with the MNF and ICA transformations. 4. Extraction of underlying pigments under the calcitic crust and the ochre pigment helps reconstruct the painting sequence: in the image on the left you can see the legs of a goat and on the right the head of a female deer (Figure 10). Classified data has been isolated and used to create the final tracing (Figure 11). After the assignment phase of the classification, the reliability of the results obtained must be assessed. This gives us an idea of the level of confidence we can have in the classification and allows checking whether the objectives of the analysis have been met.
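The next paragraph reports the overall accuracies obtained for the SAM and MTMF classifications. As a hedged illustration of what those computations involve, the sketch below classifies a toy cube by spectral angle and then derives the overall accuracy as the sum of the confusion-matrix diagonal divided by the total number of checked pixels; the library spectra, threshold and reference map are invented for the example and are not the paper's data or code.

```python
# Illustrative SAM classification of a toy cube plus an overall-accuracy check (invented data).
import numpy as np

def spectral_angle(pixel, reference):
    cosang = np.dot(pixel, reference) / (np.linalg.norm(pixel) * np.linalg.norm(reference))
    return np.arccos(np.clip(cosang, -1.0, 1.0))            # angle in radians

def sam_classify(cube, library, max_angle=0.15):
    """cube: (rows, cols, bands); library: (classes, bands). Returns a class map (-1 = unclassified)."""
    flat = cube.reshape(-1, cube.shape[-1])
    angles = np.array([[spectral_angle(px, ref) for ref in library] for px in flat])
    labels = angles.argmin(axis=1)
    labels[angles.min(axis=1) > max_angle] = -1
    return labels.reshape(cube.shape[:2])

rng = np.random.default_rng(0)
library = rng.random((3, 20))                                # e.g. ochre pigment, black pigment, rock support
truth = rng.integers(0, 3, (16, 16))                         # invented reference map
cube = library[truth] + 0.02 * rng.standard_normal((16, 16, 20))
predicted = sam_classify(cube, library)

# Overall accuracy = sum of the confusion-matrix diagonal / total number of checked pixels.
confusion = np.zeros((3, 3), dtype=int)
for t, p in zip(truth.ravel(), predicted.ravel()):
    if p >= 0:
        confusion[t, p] += 1
print(f"overall accuracy: {np.trace(confusion) / truth.size:.2%}")
```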
The overall reliability of the classification was obtained by dividing the sum of the main diagonal of the confusion matrix by the total number of pixels. In the present paper, a comparison of the overall accuracy results shows that the SAM classification performed the best, with 89.075% overall classification accuracy (Table 1), compared to 83.71% for the MTMF approach, so it was decided to keep the former results. The worst results occur in the class corresponding to thin calcite, as it is sometimes confused with the elements that underlie it. In our case, a comparison of the overall accuracy results shows that SAM classification performed the best with 75.00% overall classification accuracy, compared to 60.71% of the MTMF approach.

Comparison with the Known The only documentation of the present motif is that made by Breuil [15] (Figure 12). Around it there are some black remains, among which some researchers such as E. Ripoll [16] could not identify any figure. It is also a rectangular shape, but apparently does not have the upper appendix. According to Professor Ripoll, this ideomorphic figure is apparently painted over a previous figure, since in the lower right part of it we can see some black painted legs of a quadruped arranged towards the left. With the application of the proposed hyperspectral methodology, we have been able to verify that the tectiform described by Breuil is correct, but there are a good number of figures that are subordinated to and superimposed on it. We can clearly distinguish the legs that Ripoll already intuited but, in reality, there is a large, almost complete hind figure (Figure 11), of great size, oriented to the right and painted in black. The raised head with the ears can be clearly seen. The neck must have been filled with flat black ink and the line of the back extends below the sign up to the rump and legs. The hands and the chest are visible to the eye, bordering on the right side of the sign.
But on the lower part of the ideomorph there are other lines that do not correspond to the deer. From left to right we can see a small bison head painted in red ochre and facing left. On the belly of the large deer, a quadruped whose head is covered by the red sign can be seen, which has a high density of pigment in this area. We can see the hindquarters, and both the back line and the ventral line are perfectly painted in black. Just below the previous representation we see a large, very elongated hind head, also painted in black and facing right. One ear stands out clearly. On the right, in a slightly lower position, we see the front part of a large reindeer figure painted in black and facing right. The taxonomic attribution is made on the basis of the beard and the horns, which are clearly distinguishable. In this panel all the figures painted in black are framed in Phase 3 (Gravettian), while the large sign is from Phase 8 (final Magdalenian) and the red bison we classify in Phase 6 (lower Magdalenian) of the chrono-chromatic stratigraphy of the cave of El Castillo [20]. Apparently, this ideomorphic figure is painted over a previous figure, since on the lower right side of it there are some black painted legs of a quadruped arranged towards the left (Figure 11). In the interior of the tectiform we have discovered a deer head facing right. Hyperspectral analysis has clearly shown that it is in a roaring attitude, with its head raised and its mouth open (in black). In addition, other figures have appeared, such as the head of a doe facing left, a reindeer, and a bison head, among others (Figure 13). The traditional drawing of the panel [15] is shown in Figure 12.
If we compare it with the result of the SAM classification displayed in Figure 13, the ochre pigments of the ideomorph symbol are shown in pink and, in blue tones, the dark pigments corresponding to the other animals. Traditionally [15], only one symbol was known but, using hyperspectral technology, one ideomorph, two hinds, one bison, one reindeer, and one headless quadruped have now been documented [3].

Conclusions The presented methodology allows us to extend our knowledge of rock art and its documentation. The cave has been studied by different researchers who used different techniques throughout history since it was discovered in 1903. The reviewing of the rock art panels has produced an increase of six motifs where, previously, only one had been documented. Nowadays, recording methodologies involving rock art have to be non-intrusive to prevent the panels from being damaged when the data is recorded. The present research shows that integrating geomatics methods is key to reaching an accurate, global, comprehensive and synoptic vision of rock art, and it guarantees the work can be repeated over time. It makes it possible to obtain information about non-visible regions of the electromagnetic spectrum. The proposed workflow makes the documentation process more sustainable for the cave, because it reduces the time spent inside it, and it avoids the use of targets, increasing the accuracy of control points by up to 2 mm; what is more important still, it is possible to have control points throughout the panel, thus improving the robustness of the model. Most figures located inside cavities are subject to natural and artificial processes that cause the loss of coloring matter or the erosion of engraved surfaces, so the motifs can often be read with little definition. Applying the present methodology has made it possible to precisely define the original morphology of some painted motifs in different colours (and probably of different chemical composition). It is possible to accurately define the contours of the figures, to precisely recognize anatomical parts or areas of specific figures and to obtain images that represent a highly reliable reconstruction of the original painting. In this way, figures that are difficult to see can be "reconstructed" and thus allow precise formal studies, or even serve as an efficient support for the production of facsimiles. In this case study, four topics of interest were presented: (a) recognition of figures, (b) characterization of the coloring matter, (c) study of the state of conservation, and (d) analysis of the technical process. The results show it is possible to clearly differentiate mineralogical compositions and coloring matter; something that is of great importance and must be studied in depth in the future. The main tool provided by hyperspectral data is the possibility of documenting the conservation of a motif, discriminating veiled areas, leached areas, flaking and any other taphonomic action to which the figure is subjected. These processes can deteriorate the coloring matter, hindering the interpretation of the motifs due to the decrease in signal strength. It has also been possible to accurately reveal the morphology of motifs painted in different colors (and, therefore, using different chemical compositions), making it possible to create a cartography of each figure and of the conservation problems associated with them.
In rock art studies, which are usually conditioned by the state of conservation of the figures, reading superimpositions between strokes or figures is one of the fundamental problems. The method presented has made it possible to recognize overlaps of different pigments even below the calcite layer. PPI analysis and SAM classifications have made it possible to obtain a map with fully interpreted classified information, where each pixel has been assigned a thematic value. This makes it possible to read the line composition of the motifs, allowing us to reconstruct the overlaying of lines in relation to the coloring matter. The presented methodology can generate rigorous documentation regarding the conservation of a motif, and the mapping of veiled and leached areas, to create a reliable thematic cartography. It can therefore be concluded that hyperspectral remote sensing has a high potential in rock art applications when technical analysis, management and conservation of cultural heritage studies are performed.
Wireless strain sensing system for assessing condition of civil infrastructure facilities

Wireless sensors and sensor networks are emerging as substitutes for traditional structural monitoring systems. Their benefit lies in a lower cost of installation because extensive wiring is no longer required between sensors and the data acquisition system. Studies carried out to evaluate performance of wireless strain measurement units are described in this paper. An example is given of a wireless system used for measuring behaviour of a railway bridge, and comparison with traditional systems is made.

Introduction Our daily lives are becoming more and more dependent on civil infrastructure, including bridges, buildings, pipelines, offshore structures, etc. It is very important to monitor the condition of such structures, as the monitoring activity enables their proper maintenance. The evaluation of structural condition is highly important after natural hazards such as earthquakes, and after man-made disasters such as terrorist attacks. The condition of such structures has to be evaluated without delay, and they have to be repaired at once to minimize the impact of the disaster and to facilitate recovery of the affected local population and wider community. Tragic disasters, such as the collapse of bridges or residential buildings, often result in a large number of casualties, and generate social and economic problems. Structural health monitoring (SHM) is an emerging field in civil engineering offering various possibilities for the continuous and periodic assessment of the safety and integrity of civil infrastructure. Damage detection strategies can ultimately reduce total costs, i.e. costs incurred over the entire life cycle of structures. In general terms, damage can be defined as changes introduced into a system that adversely affect its performance. Most structural health monitoring methodologies require direct measurement of input excitation for the monitoring to be effective. Methods based on ambient vibrations have gained considerable importance in the field of structural health monitoring and in detecting the level of damage. An additional challenge arises from the fact that damage in structures is an intrinsically local phenomenon. Sensors close to a damaged site are expected to react better to damage compared to more distant sensors. Therefore, sensors must be densely distributed throughout the structure to effectively detect the place of damage at a point within the structure. It is quite challenging to use traditional wired sensors to implement such a structural health monitoring system based on a dense array of sensors because of the difficulties in deploying and maintaining the associated wiring. In addition, managing such a large amount of data is complex and is not cost-effective. Recent achievements in the development of wireless sensors have enabled implementation of the SHM procedure using a dense array of sensors. Such dense arrays of low-cost wireless sensors can improve the quality of structural health monitoring quite significantly, as these sensors enable wireless communication. Such wireless sensors provide a great quantity of data that can further be utilised by structural health monitoring algorithms to detect, locate, and assess structural damage caused by severe loading events. The information obtained from densely instrumented structures is expected to provide a better insight into the physical state of structural systems. Studies carried out to evaluate performance of wireless
sensing units for strain measurements, and development of the associated real-time data-acquisition software, are described in detail in this paper. The wireless sensor system was successfully deployed at a railway bridge site, where various structural performance parameters were evaluated, and comparisons were made with parameters obtained by numerical simulations.

Wireless sensor networks - system overview Wireless Sensor Networks (WSNs) can be described as a class of information technology infrastructure where computations are embedded into the real-life physical world. Wireless sensor networks consist of a large number of spatially distributed devices known as smart sensors, which are characterized by computing and sensing capabilities. Wireless sensor networks are used for monitoring the condition of structures, environmental influences, traffic, manufacturing and plant automation [1,2]. They consist of appropriately distributed and wirelessly enabled embedded devices in which a variety of electronic sensors can be used. Each node in a wireless sensor network is equipped with one or more sensors and with a microcontroller, wireless transceiver, and energy source. The microcontroller works together with the electronic sensors and the transceiver, so that an efficient system is formed for relaying small amounts of important data with minimum power consumption. When deployed in the field, the microcontroller automatically initiates communication with every other node in range, thus creating an ad hoc mesh network for relaying information to and from the gateway node. This eliminates the need for costly and complex wiring between systems, and also provides the flexibility of mesh networking algorithms to transport information from node to node. This allows nodes to be deployed in almost any location and offers the flexibility of monitoring a large number of structures, which greatly increases the possibility of using appropriate structural health monitoring solutions [3][4][5]. The wireless sensor network has been developed to address limitations of the existing structural health monitoring techniques, which rely on either periodic visual inspections or expensive wired data acquisition systems. With the wireless sensor network platform, users can easily monitor the condition of structures or the environment using reliable, battery-powered measurement nodes that satisfy industrial ratings, and local analysis and inspection requirements. The wireless sensor network system by National Instruments (NI), which was used in this research, is based on the IEEE 802.15.4 wireless mesh network. The wireless sensor network consists of three main components: nodes, gateways and software. The spatially distributed measurement nodes interface with sensors to monitor structures. The acquired data is transmitted wirelessly to the gateway, which can operate independently or is connected to a host system where the user can collect, process, analyse and present measurement data using appropriate software. Routers are a special type of measurement node that can be used to extend the distance and reliability of the wireless sensor network.
Wireless sensor network gateway In a wireless sensor network system, the gateway acts as the network coordinator in charge of node authentication and message buffering. The gateway collects measurement data obtained from nodes and sends them to the company network, where the user can collect, process, analyse, and present the measurement data using a variety of software. The user can use multiple gateways, each communicating on a different, non-overlapping, software-selectable wireless channel. The NI 9791 programmable gateway, which is used in this research (Figure 1), operates by running deployed LabVIEW real-time applications. The NI WSN-9791 ethernet gateway is a pass-through device that must be connected to a host system. This gateway has a 2.4 GHz IEEE 802.15.4 radio to collect measurement data from the sensor network.

Wireless sensor network measurement nodes The following properties are highly significant for wireless sensor network measurement nodes: direct sensor connectivity, reliable communication, and ratings suitable for use in industry. As the nodes are programmable, LabVIEW can be used to customize node behaviour and, by adding intelligence, perform local analysis and control. The NI WSN-3214 strain measurement node (Figure 2), which brings waveform acquisition capabilities to the wireless sensor network product line, is ideal for wireless structural health monitoring applications. The node features four analogue channels that support quarter-, half- and full-bridge completion, as well as two digital I/O channels for event detection and programmatic control. A 2.4 GHz radio is used to wirelessly transmit data to the wireless sensor network gateway. The wireless sensor network node can also be configured as a mesh router to increase network distance and connect more nodes in the wireless sensor network system. The node provides 2.5 V excitation and supports 350 Ω and 1 kΩ strain gauges. When a node powers up, it scans for available networks, locates either a gateway or a router node, and attempts to join. When the node joins the network, it downloads the latest configuration from the gateway and begins its normal operation of acquiring measurement data, controlling digital input/output, and transmitting data back to the gateway for processing, alarming, and visualization.

Development of strain measurement program Studies were carried out towards development of software for wireless data acquisition in the scope of the wireless sensor network, using the LabVIEW interface. National Instruments offers its Measurement & Automation Explorer (MAX), a graphical user interface, to configure the wireless sensor network. The Measurement & Automation Explorer is usually installed with one of the NI application development environments such as LabVIEW or Measurement Studio, or with one of the NI hardware product drivers. It is used for configuring the measurement nodes and the gateway, and the connection status of the wireless sensor network is verified in it. Configuration settings such as the addition, removal and modification of wireless sensor network nodes are also performed in the explorer. Applications developed in LabVIEW are encrypted with a unique ID and password. The login & configuration section consists of a user-friendly login with information regarding the authentication details, and the path where the acquired data file is to be stored. After entering the main login screen, the user has to provide the path where the configuration and test data will be saved for further reference. After setting the path, the node status window pops up, displaying the battery state, link quality, network mode, etc., as shown in Figure 3. The user interface of the developed program has login & configuration options, display settings, and options for real-time data acquisition and display, networking, and post-processing (Figure 4). The configuration setup helps in incorporating the configuration of the sensors by reducing the operation time. The configuration icon in the menu screen is used for the node configuration settings. The start icon is used to start the test. The online monitor icon is used to view the data acquired from the wireless sensor network node when
the test is being carried out. The user icon is used to add a new user, or to remove an existing user. The offline viewer is used to view the data for analysis after the test is completed. The help icon is used to help the user find out more about the measurement system and parameter settings. The offset button is used to initialize the strain value. The node status button is used to view the status of the node. The developed program also enables selection of various configurations for strain measurement. The excitation voltage, measurement range, and bridge configuration settings are also incorporated for each channel and can easily be selected. Each node of the wireless sensor network is configured according to the activation period of the node. The scheduling and data logging of the node is also incorporated. Each channel can be configured to acquire data through a different bridge configuration depending upon the requirements, i.e. quarter bridge, half bridge, or full bridge. Then parameters such as the node, channel number, bridge type, gage factor, Poisson ratio, sampling rate, and waveform interval are assigned. At that point the settings have to be saved (Figure 5). The developed program also has options for offline analysis of recorded data at any point in time. The offline viewer icon is used to view the test data after the test is stopped. Topology plays an important role in the wireless sensor network. Several types of topology can be differentiated in the wireless sensor network system. The topology configuration helps in deciding which node will act as a router or as an end node. From the topological standpoint, each node can act as a router and forward the data to the gateway node. Some of the topologies supported in the program are egalitarian topologies (peer-to-peer, or point-to-point), and star-, tree-, and mesh-based topologies.
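As a point of reference for the bridge-configuration settings mentioned above (bridge type and gage factor), the sketch below shows the standard quarter-bridge conversion from a measured voltage ratio to strain. The gage factor and voltage ratio are illustrative values, not settings read from the NI hardware or its driver.

```python
# Hedged sketch of the textbook quarter-bridge strain equation; illustrative values only.
def quarter_bridge_strain(v_ratio, gage_factor=2.0):
    """v_ratio: (V_measured - V_offset) / V_excitation for a quarter-bridge circuit."""
    return -4.0 * v_ratio / (gage_factor * (1.0 + 2.0 * v_ratio))

# Example: a 0.05 mV/V shift with a gage factor of 2.0 gives about -100 microstrain
# (the sign follows the usual bridge convention).
v_ratio = 0.05e-3
print(round(quarter_bridge_strain(v_ratio) * 1e6, 2), "microstrain")
```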
Performance evaluation for measurement nodes and developed program The performance of the measurement nodes and the developed program needs to be evaluated under the real time testing environment. Laboratory level calibration studies were therefore carried out using the wireless sensor network for known loadings. For this purpose, a cantilever was instrumented with strain gages along the longitudinal and transverse directions. These strain gages were connected to the wireless sensor node in various bridge configurations. The cantilever beam was subjected to known loads, and strains were measured through the wireless nodes (Figure 6). The measured strains were transmitted wirelessly to the gateway, and the data were viewed in real time on the host personal computer. Hence the software was able to collect the data from wireless sensors through the gateway, and also to plot the data in real time (Figure 7). Thus the application developed for this purpose was able to receive the data from NI measurement nodes wirelessly, and to present the acquired data in real time on the host PC. Additional studies were carried out to evaluate performance of measurement nodes. The strain gages placed on the cantilever beam were connected to a standard conventional data logger in the same configuration. Known loads were applied to the cantilever and the strain was measured at the data logger. The measured strains were compared with the strains measured via wireless nodes. A good correspondence of these strain values was established, with the variation in measured strain of less than 2 %. Hence the measurement nodes were found to be suitable for structural health monitoring applications.

Bridge monitoring using wireless strain sensing system Performance of the wireless sensor system was evaluated in laboratory conditions. Responses measured from wireless sensors were compared with those of conventional sensors. The responses were matching well and hence the wireless sensor system was used for assessing condition of a railway bridge. The condition assessment was carried out on the pre-stressed concrete bridge consisting of simply supported spans, as shown in Figure 8. The middle span, measuring 12.9 m in length and almost 1.68 m in depth, was selected for this evaluation. The bridge span consists of two prestressed girders. Each girder is an I-section with seven tendons for effective pre-stressing. Strain gages were placed at five positions, i.e. at quarter spans, at the mid-span, and at supports (Figure 9). The strain gages were connected to the wireless nodes (Figure 10). In addition to the wireless system, a conventional system was also used to enable comparison with the wireless system. The test train formation consisted of two front WAG5 locos, BoxN wagons loaded with iron ore, a BV-cabin, and a rear loco.
Since it is very difficult to evaluate dynamic bending moments that are induced in a bridge due to movement of trains, static tests were conducted to evaluate calibration factors for converting the measured dynamic strains into bending moments. Three static tests were conducted by positioning the loco and heavily loaded wagons at predetermined locations within the span. Calibration factors were obtained by correlating the maximum static strain values measured at various sections with the corresponding theoretical bending moment. The average calibration factor of 12.85 obtained from the static tests was used in the dynamic tests to evaluate structural parameters (Table 1). Dynamic tests for ambient running conditions were performed on the bridge, and the corresponding strain responses were acquired. Measured strain and calibration factors were used to determine bending moments on the girder for dynamic loads. Parameters evaluated for the mid-span location are given in Table 2. A good agreement between the wireless system and conventional system results was established. The system deployed at the site was used for getting responses from the bridge during the passage of trains. A typical variation of the bending strain response measured via a wireless sensor during passage of a train (load case 1) is shown in Figure 11.
Conclusion
This paper briefly describes studies carried out to evaluate the performance of wireless NI measurement nodes. An application was developed in LabVIEW for acquiring real-time data from measurement nodes in wireless mode. The developed application was tested in the laboratory and was found to be working satisfactorily. It enables real-time plotting of measured data, as well as subsequent processing of these data. A comparison was made with the strain measured via a conventional data logger, and a good correspondence was established between these strain values. After that, the wireless nodes were deployed at a railway bridge site, along with a conventional system. Responses were measured during the passage of trains and structural parameters were evaluated. It was observed that the structural parameters evaluated from the wireless measurements agreed well with those obtained from the conventional system.
Figure 1. Wireless sensor network gateway
Figure 2. Typical wireless sensor network measurement node
3. Development of strain measurement program
Studies were carried out towards the development of software for wireless data acquisition in the scope of the wireless sensor network, using the LabVIEW interface. National Instruments offers its Measurement & Automation Explorer (MAX), a graphical user interface, to configure the wireless sensor network. This measurement & automation explorer is usually installed with one of the NI application development environments such as LabVIEW or Measurement Studio, or with one of the NI hardware product drivers. The measurement & automation explorer is used for configuring the measurement nodes and the gateway. The connection status of the wireless sensor network is verified in the measurement & automation explorer. The configuration settings like addition, removal and modification of wireless sensor network nodes are performed in the explorer. Applications developed in LabVIEW are protected with a unique ID and password. The login & configuration section consists of a user-friendly login with information regarding the authentication details and the path where the acquired data file is stored. After entering the main login screen, the user has to provide the path where the configuration and test data will be saved for further reference. After setting the path, the node status window pops up, and there the battery state, link quality, network mode, etc. are displayed as shown in Figure 3. The user interface of the developed program has login & configuration options, display settings, and options for real-time data acquisition and display, networking, and post processing (Figure 4). The configuration setup helps in incorporating the configuration of the sensors by reducing the operation time. The configuration icon in the menu screen is used for the node configuration settings. The start icon is used to start the test. The online monitor icon is used to view the data acquired from the wireless sensor network node when the test is being carried out.
Figure 5. Wireless sensor network configuration settings
Figure 6. Strain gages connected to wireless sensor network (WSN) measurement nodes
Figure 7. Real time data plot at Host PC
Figure 8. Railway bridge site for deployment of wireless strain sensing system
Figure 10. Wireless sensor node deployed at bridge site
4,002.6
2016-04-18T00:00:00.000
[ "Computer Science" ]
A novel mutual dependence measure in structure learning
Mutual dependence between features plays an important role in the formulation of classifiers, clustering and other machine intelligent techniques. In this study, a novel measure of mutual information known as integration to segregation (I2S), explaining the relationship between two features, is proposed. Some important characteristics of the proposed measure were investigated and its performance in terms of class imbalance measures was compared. It was shown that I2S possesses characteristics that are useful in controlling overfitting problems. In structure learning techniques such as Bayesian belief networks, conventional measures of the dependency relationship cope with the overfitting problem by restricting the number of parents for a node; however, this is still not satisfactory because overfitting is not completely eliminated. In contrast, I2S is capable of significantly maximizing the discriminant function with a better control of overfitting in the formulation of structure learning.
INTRODUCTION
Various computational techniques have produced large amounts of data dealing with multifarious complexities and noticeable heterogeneity, yielding uncertainties and risks. Machine learning and data mining techniques have enabled researchers to extract useful patterns out of a large dataset. Classification is a notable and impressive technique in machine learning and data mining. A classifier can be defined as a function that assigns a class label to objects described by a set of attributes. The dataset of attributes X contains N labelled instances, with the objective of correctly predicting the class label of a new data instance in the learning phase of a classifier. Among many of the classification systems introduced, a Bayesian belief network (BBN) is considered a robust technique by virtue of its ability to decompose complex probabilistic models into brief and tractable elements (Jensen & Neilson, 2007). The data mining community has extensively used it in knowledge discovery tools due to its solid statistical foundation and the capability for inference (Cooper & Herskovits, 1992; Chen et al., 2008; Etminani et al., 2010; Carvalho et al., 2011). The BBN is a strong probabilistic model for knowledge representation. A BBN is represented by a directed acyclic graph (DAG) encoding a set of conditional probability distributions for each stochastic node of the DAG, whereas each arc between two nodes represents the direction of inference or induction. A node (child) that is directly pointed to by another node (parent) receives inference from its parent node(s), while the parent node obtains induction from the child node in terms of probabilistic distribution. These concepts of inference and induction are helpful in formulating BBN classifiers.
The mutual dependence and correlation between two attributes of a dataset is a key problem in the sphere of structure learning. Numerous pairwise measures have been introduced explaining a particular or general relationship (Gibbons & Subhabrata, 2003; Wasserman, 2007; Corder & Foreman, 2009; Bagdonavicius et al., 2011). However, it has been described that correlation and dependence are intrinsically different phenomena. Although wide application of correlation in various domains of interest has been reported, a careful examination of correlation measures highlights two problems in structure learning. The first issue is related to its incapability of describing the nonlinear structure between the random variables. It has been pointed out that two uncorrelated variables do not suggest their independence from each other (Grimmett & Stirzaker, 2001). The second problem is the inability to provide circumscribed knowledge about the underlying true dependence nature (Grimmett & Stirzaker, 2001). Thus arises the dictum that "correlation does not imply causation", emphasizing that correlation is not well suited in classification problems for the sake of establishing causal relationships between variables (Aldrich, 1995). Jensen & Neilson (2007) elaborated two important characteristics for scoring functions used in the belief network: (a) the ability of any scoring metric to balance the accuracy of a structure keeping in view the structure complexity, and (b) the computational tractability of any scoring function (metric). Bayesian information criterion (BIC) (Schwarz, 1978), Bayesian Dirichlet equivalence uniform (BDeu) (Buntine, 1991), Akaike information criterion (AIC) (Akaike, 1974), entropy and minimum description length (MDL) (Lam & Bacchus, 1994; Suzuki, 1996), and factorized conditional log-likelihood (fCLL) (Carvalho et al., 2011) have been reported to satisfy these characteristics. Among these scoring functions, BIC, AIC, BDeu and MDL are based on the log-likelihood (LL) as given below, where G denotes the directed acyclic graph given the dataset D, and the counters n, q_i and r_i represent the number of cases, the number of distinct states of a feature variable, and the number of distinct states of a parent of the i-th feature variable. The log-likelihood tends to increase its value as the number of features increases. This phenomenon occurs because every added edge tends to contribute to the resultant log-likelihood of the final structure. This process can be controlled considerably by means of introducing some penalty factor, or otherwise by restricting the number of parents for every node in the graph. AIC and BIC are usually applied under the hypothesis that regression orders k and i are identical. This assumption brings extra computation and also yields erroneous estimation in theoretical information measures in structure learning (Yang et al., 2013). Yang & Lee (2012) demonstrated the linear impact of improvement in model quality within the scope of exercising the BIC function score in K2 (Cooper & Herskovits, 1992). However, it is arguable that there must be an intelligent heuristic to sharply extrapolate the optimized size of the training data. We are of the view that an optimized solution can be achieved by exploiting various intelligent algorithms for tree and graph.
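The log-likelihood expression referred to above is elided in the extracted text. For reference, the standard decomposable form on which BIC, AIC, BDeu and MDL-type scores are built is shown below in the usual convention (which differs slightly from the description above): the outer sum runs over the n feature variables, q_i denotes the number of parent configurations of the i-th variable, r_i its number of states, and N_ijk the number of cases in D in which the i-th variable takes its k-th state while its parents take their j-th configuration. This is the conventional textbook formula, reproduced as an assumption rather than quoted from the paper:

$$
LL(G \mid D) \;=\; \sum_{i=1}^{n}\sum_{j=1}^{q_i}\sum_{k=1}^{r_i} N_{ijk}\,\log\frac{N_{ijk}}{N_{ij}},
\qquad N_{ij} \;=\; \sum_{k=1}^{r_i} N_{ijk}.
$$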
METHODS AND MATERIALS
In the previous section, a brief notion of the decomposability of various scoring measures into a frequency counting problem in structure learning was given. The frequency counting problem thus defined leads to a deficiency in correctly identifying discriminative approaches when defining a sink node. An improved measure of approximation based on joint and marginal probability is proposed while establishing the following hypothesis. Hypothesis H1: I2S is a tractable approximation to correctly identify the topology between a pair of nodes in a DAG for structure learning. It details the relationship between two features such that the states of a dependent feature can be explained as a result of the states of the independent feature. It is essential to point out the following two assumptions for defining I2S mathematically. The first is the discrete nature of the dataset features. The second assumption is that each case of the dataset holds an independent probabilistic nature. I2S can be expressed by means of Definitions 1 and 2 as given below. Definition 1: Given two features F1 and F2, I2S can be expressed mathematically, where conditional probability (CP) is a function of joint probability (JP) and marginal probability (MP). The terms m and n denote the vector lengths of the 1st feature F1 and the 2nd feature F2, respectively. There are four terms involved in the mathematical equation of I2S. This characteristic is most important to correctly identify the true order of the two nodes in structure learning for decision making.
I2S Network: It has been reported that the greedy approach is more popular in the application of building and learning belief networks (Carvalho et al., 2011). Moreover, it has been described that K2 (Cooper & Herskovits, 1992) is one of the most optimized searching algorithms for Bayesian networks. In the K2 algorithm, the ordering of the features is known a priori, which helps in the selection of the most suitable set of parents for each feature. Its input parameters are a set of nodes sorted topologically. Every node in this set is scanned, while the previous nodes are added repeatedly until the resulting score, given by the joint probability of the data and the network structure, no longer increases. Some notation, in light of well-known and relevant concepts of discrete belief networks, was introduced, and these concepts were formulated into a structure learner devised on the basis of I2S. I2S is a measure defined to quantify the dependency (explanation) of one feature on another feature. It is a direct measurement of a cardinal relationship, in the sense that if any distinct value of feature 2 is addressed by only a single value of feature 1, then this will increase the value of I2S, where I2S is normalized between 0 and 1. It is described formally as Î: I2S(F1, F2). The notation Î will be useful in defining the value of I2S from the 1st feature (F1) to the 2nd feature (F2). For a dataset D, a pairwise matrix of Î can be defined.
I2S Network Classifiers: An ordered list of the features was developed using I2S. Let M be a matrix in which each element corresponds to the measurement of I2S from the i-th feature to the j-th feature. A sorted list of this matrix is then defined, whose sorting criterion results in an ordered list of features. The I2S-based network classifier is a network over X = (X1, X2, X3, …, Xn, C), where feature C is considered as the class; hence the goal is to classify the
instances (X1, X2, X3, …, Xn) in terms of the distinct states of the class. Usually, in the literature, it is common to restrict attention to the augmented naive Bayes classifier for the sake of computational efficiency (Carvalho et al., 2011), where the class feature is placed at the top of the graph with null parents. This relaxation is based on the assumption that the goal is to retrieve the best possible structure, which truly represents the underlying dataset. All of the query variables that have a parent node within a DAG must have various instances of unique states, where C represents the unique values of the class feature. We shall introduce notation related to non-augmented naive Bayes models. Let the i-th parent variable of any feature possess distinct values denoted as m_ij, where j is the number of unique values that the i-th parent holds. Hence the possible number of configurations of the parent set of any feature can be described accordingly; this description is useful in defining the conditional probability table (CPT). The generation of the CPT turns the network into a classifier. A given instance of data can be tested against this conditional probability table for its inference or induction.
RESULTS
This section presents the results with their empirical validation in detail. The performance of the proposed measure used in the introduced classifiers is measured by accuracy, which is a function of the true positive (TP) rate and false positive (FP) rate. Experimentation was performed on 29 datasets obtained from UCI (Blake & Merz, 1998) and preprocessed into the weka (Hall et al., 2009) arff file format. No further preprocessing was done on these datasets, except for five datasets marked by (*) (Table 1), in which the class feature was placed as the last attribute (this is a mandatory requirement of weka). The flags dataset was a class-less dataset, so the feature 'religion' was fixed as its class attribute. All of these datasets contain nominal, continuous and discrete features, while some datasets also contain missing cases, which were ignored by default in weka. It is evident from Table 1 that the collection is versatile in the number of classes, cases and attribute count, so that no question of bias can be raised. Figure 1, which is a stacked cylindrical graph, compares the classification accuracy of six scoring functions and the introduced measure. Each cylinder is shown in three colours. The blue colour indicates the percentage of datasets in which the performance of I2S was significantly better than the other scoring function. The red colour indicates the number of datasets where the proposed measure delivers neither better nor poorer accuracy in classification. The green colour indicates the number of datasets in which I2S failed to yield better results. A careful examination of Figure 1 shows that the accuracy of I2S was comparably higher in comparison to AIC and entropy, where I2S delivers improved accuracy over 22 and 21 datasets while it does not give better results over 3 and 5 datasets, respectively. The recently introduced scoring function fCLL gives comparatively better accuracy in comparison to the other five scoring functions when competing with I2S.
Apart from the results shown in Figure 1, one may argue that achieving higher accuracy alone may not be so impressive, whereas the percentage improvement in accuracy is more compelling. This motivates the presentation of the results from another perspective in Figure 2, which indicates the percentage of average improvement in accuracy achieved by using the I2S classifier in the K2 searching algorithm. In the case of the entropy measure, the average increase in accuracy was observed to be more than 7.5%, while it was 1.19% in comparison to BDeu. To roughly characterize the computational complexity of the proposed scoring measure, it was noted that the time complexity of I2S was more or less equivalent to that of BDeu and BIC. However, the time complexity of entropy was slightly better than I2S. Moreover, the time complexity for AIC and MDL was significantly better than I2S on many of the datasets.
CONCLUSION AND FUTURE WORK
In classification, structure prediction from Bayesian inference models is a common practice for the purpose of retrieving hidden rules from masses of data. This process broadly consists of two steps. The first step deals with the construction of the best suitable structure from the data, and the second part with the inference from this structure. This study was focused on the first part, which involved the construction of the most suitable network structure. The core part in the design of a BBN classifier is to introduce a discriminant function within the vector space of attributes through the utilization of a priori knowledge. The effectiveness of the Bayesian belief network using greedy heuristics like the K2 searching mechanism has earned it an excellent place in the domain of classification systems. Arguments were presented about various scoring functions including BDeu, AIC, entropy, BIC, MDL and the recently introduced fCLL on the ground of overfitting, while introducing a new dependency measure in the domain of structure learning. Theoretically, the application of mutual information in structure learning is not a novel idea, as it was introduced some six decades ago (Chow & Liu, 1968; Pearl, 1988). In this study a novel decomposable scoring function was introduced for the task of structure learning. The introduced measure, known as integration to segregation, is characterized by the mutual dependence approximated by marginal and joint probability. The novel measure is particularly designed for discriminative learning because it is decomposable and score-equivalent, with the capability of permitting efficient estimation of structure learning. The accuracy merit of I2S was evaluated and compared to common state-of-the-art scoring measures on a reasonably sized set of benchmark datasets obtained from the UCI repository and preprocessed in weka. I2S performed better than generatively-trained Bayesian network classifiers using the K2 searching algorithm and numerous scoring functions. The proposed measure is expected to generate a realistic network, which is likely to tally with the practical thinking of field experts in the domain of knowledge. Although the asymptotic complexity of the proposed measure is almost of the same order as the conventional BIC and BDeu scoring metrics, it is still poorer in computational complexity compared to MDL in particular.
Acknowledgement
We are greatly thankful to the anonymous reviewers who suggested numerous insightful comments during the revision of this article.
Definition 2: CP among all of the states of the 2nd feature; the term denotes the CP of all of the states of the second feature. The factors m/(m-1) and MP_i are used for scaling and normalizing, so that the final value of I2S always lies between 0 and 1. In the forthcoming section, the results of various feature selection techniques are presented and compared with the technique based on the proposed measure I2S. Given a directed acyclic graph (DAG), I2S is sensitive to the order of a sink node and its parent node; a swap will change the value of I2S, a property that is useful in the development of structure learning.
Table 1: Statistical information about the datasets used in this study
3,746.4
2013-09-15T00:00:00.000
[ "Computer Science" ]
Integrated Precise Orbit Determination of Multi-GNSS and Large LEO Constellations
Abstract: Global navigation satellite system (GNSS) orbits are traditionally determined by observation data of ground stations, which usually need even global distribution to ensure adequate observation geometry strength. However, good tracking geometry cannot be achieved for all GNSS satellites due to many factors, such as limited ground stations and the special stationary characteristics of the geostationary Earth orbit (GEO) satellites in the BeiDou constellation. Fortunately, the onboard observations from low earth orbiters (LEO) can be an important supplement to overcome the weakness in tracking geometry. In this contribution, we perform large LEO constellation-augmented multi-GNSS precise orbit determination (POD) based on simulated GNSS observations. Six LEO constellations with different satellite numbers, orbit types, and altitudes, as well as global and regional ground networks, are designed to assess the influence of different tracking configurations on the integrated POD. Then, onboard and ground-based GNSS observations are simulated, without regard to the observations between LEO satellites and ground stations. The results show that compared with ground-based POD, a remarkable accuracy improvement of over 70% can be observed for all GNSS satellites when the entire LEO constellation is introduced. Particularly, BDS GEO satellites can obtain centimeter-level orbits, with the largest accuracy improvement being 98%. Compared with the 60-LEO and 66-LEO schemes, the 96-LEO scheme yields an improvement in orbit accuracy of about 1 cm for GEO satellites and 1 mm for other satellites because of the increase of LEO satellites, but leads to a steep rise in the computational time. In terms of the orbital types, the sun-synchronous-orbiting constellation can yield a better tracking geometry for GNSS satellites and a stronger augmentation than the polar-orbiting constellation. As for the LEO altitude, there are almost no large orbit accuracy differences among the 600, 1000, and 1400 km schemes except for BDS GEO satellites. Furthermore, the GNSS orbit is found to have less dependence on ground stations when incorporating a large number of LEO. The orbit accuracy of the integrated POD with 8 global stations is almost comparable to the result of the integrated POD with a denser global network of 65 stations. In addition, we also present an analysis concerning the integrated POD with a partial LEO constellation. The result demonstrates that introducing part of a LEO constellation can be an effective way to balance the conflict between orbit accuracy and computational efficiency.
Introduction
The precise orbit and clock products of the global navigation satellite system (GNSS) are of great importance for GNSS applications, such as precise point positioning (PPP) and GNSS meteorology. Several earlier studies have investigated LEO-augmented GNSS POD using simulated data, and their initial results demonstrated the significant accuracy improvements of GPS and BDS orbits as a result of the LEO constellation. The aforementioned works evidence the contribution of onboard GNSS observations to the GNSS POD, and shed light on the performance of the integrated POD.
However, these studies mainly focus on the integrated POD with a few LEO satellites, since no onboard GNSS observations from large LEO constellations are available at present. Additionally, the impact of different LEO configurations, such as LEO number, orbital altitude, and orbit type, on integrated POD has not yet been well studied. Motivated by the aforementioned studies, we propose a LEO constellation-augmented multi-GNSS precise orbit determination method and assess the LEO-augmented POD performance with different configurations of ground stations and LEO constellations. Within this paper, we mainly focus on the geometric factors that limit the accuracy of GNSS orbit determination. Six LEO constellations with different orbital altitudes, satellite numbers, and orbit types are designed to evaluate the influence of LEO constellations on the GNSS POD. Several station networks with different station numbers and distributions are employed to discuss the dependency of the integrated POD on ground stations. The paper is structured as follows. Section 2 starts by introducing the GNSS and LEO constellations we used, and gives a detailed description of the sophisticated simulation processing methods used for GNSS observations. Then, the LEO constellation-augmented four-system precise orbit determination algorithm and strategy are introduced in Section 3. Section 4 analyzes LEO-augmented GNSS POD results under different tracking conditions in detail. A discussion of our result is given in Section 5. Finally, conclusions are provided in Section 6.
Constellation Design
With the full operation of the Galileo and BDS constellations, there will be more than 120 GNSS satellites in the sky by 2020. As of August 2019, Galileo had reached its nominal constellation, with only back-up satellites to be launched in the future, while the BDS constellation, which is still under construction, has 19 BDS-3 satellites providing navigation services. Therefore, in order to perform the integrated POD with the whole constellation of the four systems, the orbits of the GPS, GLONASS, Galileo, and BDS satellites have to be simulated based on the nominal constellation configurations [22][23][24][25] of the four systems. The nominal GPS constellation consists of 24 medium earth orbit (MEO) satellites unevenly distributed in six planes. MEO satellites are also employed by GLONASS and Galileo using a 24/3/1 Walker constellation to provide global coverage. In addition, six additional MEO satellites serve as in-orbit spare satellites for the Galileo constellation. Different from the other three constellations, BDS-3 is made up of 24 MEO satellites, 3 geostationary earth orbit (GEO) satellites, and 3 inclined geosynchronous orbit (IGSO) satellites. The three GEO satellites are located at 80° E, 110.5° E, and 140° E, while the IGSO satellites operate in an orbit with an inclination of 55°; the right ascension of the ascending nodes (RAAN) is 118° E. Table 1 presents detailed information on the GNSS constellations. To comprehensively evaluate the impact of LEO configuration on GNSS POD, six LEO constellations with different configurations are designed and simulated. Table 2 lists the LEO constellation parameters. Nearly-polar and sun-synchronous orbits are selected in our study to investigate the influence of the LEO orbit type, which are two typical orbit types for many LEO missions. The nearly-polar orbit we choose has the same orbit inclination, i.e., 84.6 degrees, as the Iridium satellite constellation.
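As an illustration of how a nominal Walker pattern such as the 24/3/1 constellations mentioned above can be laid out, the following sketch generates the plane RAAN and in-plane phasing of a Walker delta constellation T/P/F. It is a generic geometric construction with illustrative parameter values, not the simulation code used in this study.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class SatelliteSlot:
    plane: int
    slot: int
    inclination_deg: float
    raan_deg: float          # right ascension of the ascending node
    mean_anomaly_deg: float

def walker_delta(total: int, planes: int, phasing: int,
                 inclination_deg: float) -> List[SatelliteSlot]:
    """Nominal slots of a Walker delta constellation T/P/F.

    Planes are spaced 360/P deg apart in RAAN, the T/P satellites of a plane
    are spaced 360/(T/P) deg apart in mean anomaly, and consecutive planes are
    phase-shifted by F*360/T deg.
    """
    per_plane = total // planes
    slots = []
    for p in range(planes):
        raan = p * 360.0 / planes
        for s in range(per_plane):
            anomaly = (s * 360.0 / per_plane + p * phasing * 360.0 / total) % 360.0
            slots.append(SatelliteSlot(p, s, inclination_deg, raan, anomaly))
    return slots

if __name__ == "__main__":
    # A Galileo-like 24/3/1 pattern at 56 deg inclination (illustrative values).
    for sat in walker_delta(24, 3, 1, 56.0)[:4]:
        print(sat)
```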
Considering the fact that the computation time rises sharply for the integrated POD, due to the huge amount of observations to be processed and unknown parameters to be recovered, only constellations with 60, 66, and 96 satellites are adopted. The Chinese Hongyan project, which has been announced by the China Aerospace Science and Technology Corporation (CASC) for the purpose of global communication and LEO-augmented positioning, proposes a constellation made up of 60 LEO satellites as its backbone, while the 66-LEO constellation is employed by the Iridium satellite constellation for global coverage. In order to assess the influence of LEO altitude on the LEO-augmented GNSS POD, the 60-LEO constellation is simulated with altitudes of 600 km, 1000 km, and 1400 km. Figure 1 presents a sketch of the designed LEO constellations.
Observations Simulation Configuration
At present, no onboard multi-GNSS observations from large LEO constellations are available, because all the LEO constellations providing augmented navigation services are still under construction. Hence, in order to achieve an augmented four-system POD, all kinds of data should be simulated. Since we mainly focus on the contribution of onboard multi-GNSS observations to the GNSS POD, only ground and onboard multi-GNSS measurements are taken into consideration in the data simulation. In this study, we adopt the same simulation method described in Li et al. [9]. To make the simulated data as close as possible to the real observations, all errors related to the satellite, receiver, and signal path, as well as observation noise, should be considered and calculated in the process of data simulation. The undifferenced code and carrier phase observations for ground stations and LEO satellites can be expressed as follows, where s, g, leo, and j represent GNSS satellites, ground stations, LEO satellites and signal frequency respectively. P and L denote pseudorange and carrier phase observations respectively. ρ_g^s is the distance between the mass center of the satellite and the ground receiver, while ρ_leo^s is the distance between the mass centers of the GNSS and LEO satellites. c is the speed of light in a vacuum. δt_g, δt_leo, and δt^s represent the clock offsets of the ground receiver, onboard receiver, and satellite respectively. I_{g,j}^s and I_{leo,j}^s are the ionospheric delays at frequency j. T_g^s is the tropospheric delay of the ground station. λ_j is the wavelength at frequency j. N_{g,j}^s and N_{leo,j}^s refer to the integer ambiguities of the ground and onboard receiver respectively. (b_{g,j}, b_{leo,j}, b_j^s) and (B_{g,j}, B_{leo,j}, B_j^s) represent the code hardware delays and carrier phase hardware delays of the ground receiver, onboard receiver, and satellites respectively. (ε_{g,j}^s, ε_{leo,j}^s) and (ω_{g,j}^s, ω_{leo,j}^s) are the combination of multipath and noise for code and carrier phase observations.
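The observation model referred to as Equation (1) is not reproduced in the extracted text. Based on the term-by-term description above, the usual undifferenced code and carrier-phase equations for a ground station would take the form below, with the onboard (LEO) equations obtained by replacing the subscript g with leo and omitting the tropospheric term; this is a reconstruction in standard notation and may differ in detail from the authors' original equation:

$$
\begin{aligned}
P_{g,j}^{s} &= \rho_{g}^{s} + c\,(\delta t_{g} - \delta t^{s}) + I_{g,j}^{s} + T_{g}^{s} + \left(b_{g,j} - b_{j}^{s}\right) + \varepsilon_{g,j}^{s},\\
L_{g,j}^{s} &= \rho_{g}^{s} + c\,(\delta t_{g} - \delta t^{s}) - I_{g,j}^{s} + T_{g}^{s} + \lambda_{j}\,N_{g,j}^{s} + \left(B_{g,j} - B_{j}^{s}\right) + \omega_{g,j}^{s}.
\end{aligned}
$$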
The main task of the GNSS observation simulation is to accurately compute the components on the right side of Equation (1) using the existing models, and to guarantee that the simulated GNSS observations reflect a real-world environment as much as possible. Before the observation simulation, the orbits of both GNSS and LEO satellites are first simulated. Then, we performed a standard multi-GNSS PPP using the real GNSS measurements of each ground station to provide a weekly solution of the ground station position, receiver clock offset, the wet component of tropospheric delay, inter-system bias (ISB), and inter-frequency bias (IFB) for the subsequent observation simulation. The satellite-to-receiver distance is calculated based on the position of the station at the signal receiving time and the mass center position of the satellite at the signal transmitting time. The previous PPP processing provides a weekly solution of the ground station position, while the GNSS satellite positions can be obtained from the simulated orbits. The phase center offset (PCO) and phase center variation (PCV) values of satellites and ground stations should be considered using the values from "igs14.atx" [26], though they are not presented in Equation (1). The multi-GNSS receiver clock offsets are simulated using the PPP-derived receiver clock offsets, as well as both ISB and IFB. The multi-GNSS precise clock products from GeoForschungsZentrum (GFZ) are used to provide the values of satellite clock offsets. Given the ongoing development of the BDS and Galileo constellations, the clock offsets of the unoperated satellites are replaced with existing ones, e.g., the clock offset of the BDS C27 satellite is considered the same as that of the C13 satellite. It is noteworthy that both the satellite precise clock products and the PPP-derived receiver clock offsets are generated based on an ionosphere-free (IF) combination, and they usually absorb the IF combination of code hardware delays for the satellite and receiver, respectively, due to the strong coupling between the clock offsets and the code hardware delay. Hence, Equation (1) can be rewritten accordingly. The combination of code hardware delays in the code observation equation can be calculated using known differential code biases (DCB) from the Center for Orbit Determination in Europe (CODE) [27,28]. The ionospheric delay along the direction of the signal propagation at a specific frequency can be modeled using the international GNSS service (IGS) global ionosphere maps (GIM). The dry component of the slant tropospheric delay is computed using the Saastamoinen model [29], as well as the global mapping function (GMF) [30], while the wet component is derived as a parameter of the PPP solution. We set the carrier phase ambiguities of each continuous arc as integer values, and the phase delay was assumed to be a small floating constant. The residuals of the code and carrier phase observations of the PPP process are employed to calculate multipath errors and noise corrections. Moreover, the relativistic correction, phase wind-up correction, and tidal displacements are also taken into consideration in the simulation. In the LEO onboard observation simulation, the calculation method of the components associated with GNSS satellites is the same as that for the ground measurement simulation. Similar to GNSS satellites, the position of the LEO mass center can be acquired from the simulated LEO orbits. The PCO and PCV values of onboard antennas are set to zero.
The receiver clock offsets of ground stations are used with the ISB and IFB values derived from GFZ multi-GNSS bias products to simulate the multi-GNSS clock offsets for each LEO onboard receiver. Different from the ground stations, the signals between the GNSS and LEO satellites only pass through the topside part of the ionosphere. Hence, the ionospheric delay for onboard observations is generated according to its contribution to the total electron content computed from GIM [31]. In terms of observation noise, the standard deviations of the random noise for code and phase observations are set to 1 m and 5 mm respectively. Details of the employed models and simulation standards are described in Li et al. [9].
Integrated POD Method
Based on the simulated GNSS observations, the LEO-augmented four-system POD, also known as the integrated POD, can be performed. The ionosphere-free (IF) combination is adopted to eliminate the ionospheric delay in the dual-frequency code and phase measurements. The linearized observation equation for the IF combinations can be written in terms of the corrections to the initial state vectors; for a LEO satellite the initial state vector is O_leo,0 = (x_leo,0, y_leo,0, z_leo,0, v_leo,x, v_leo,y, v_leo,z, p_leo,1, p_leo,2, …, p_leo,n), where u_g^s and u_leo^s denote the unit vectors of the direction from the ground receiver to the satellite and from the LEO to the satellite respectively. φ^s(t, t_0) and φ_leo(t, t_0) refer to the state-transition matrices for the satellite and the LEO, which transfer the satellite state from the reference epoch t_0 to epoch t. O_0^s and O_leo,0 are the initial state vectors of the GNSS and LEO satellites respectively, which consist of the position and velocity of the satellite at the initial epoch and the dynamics parameters. For GNSS satellites, the estimated dynamics parameters mainly refer to the solar radiation pressure (SRP) parameters, while for LEO satellites, these dynamics parameters are usually made up of SRP parameters, atmosphere drag parameters, and empirical accelerations. M_g^s is the mapping function for tropospheric delay, and Z_g denotes the zenith delay of the tropospheric wet component. N_{g,IF}^s and N_{leo,IF}^s represent the float ambiguities of the ground station and LEO satellites respectively. The other symbols have similar meanings as those described in Equation (2), except for the IF linear combination. In the case of the multi-GNSS integrated POD, the estimated parameters can be divided into satellite-dependent parameters, ground station-dependent parameters, and LEO-dependent parameters, where the superscripts G, R, E, and C represent GPS, GLONASS, Galileo, and BDS satellites respectively.
Processing Strategy
We select about 120 globally-distributed stations from the Multi-GNSS Experiment (MGEX) [1], and 22 global stations from the International GNSS Monitoring and Assessment System (iGMAS) [32], to simulate one week of ground GNSS observations from day of year (DOY) 001-007, 2018. Figure 2 indicates the distribution of the ground stations we used. The onboard GNSS measurements are also simulated based on the LEO constellations described in Section 2.1. The simulated observation interval is set to 30 s for both ground and onboard observations. Table 3 lists the processing strategy of the integrated multi-GNSS POD in detail. Due to the large amount of observations to be processed, we selected an arc length of 24 h and a processing interval of 300 s. The cut-off elevation angle is set to 7° and 1° for ground stations and LEO satellites respectively.
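The explicit form of the ionosphere-free combination adopted in the integrated POD method above is not shown in the extracted text; for dual-frequency measurements at frequencies f_1 and f_2 it is conventionally formed as below (the standard expression, assumed here rather than quoted from the paper):

$$
P_{IF} = \frac{f_1^{2}\,P_1 - f_2^{2}\,P_2}{f_1^{2} - f_2^{2}},
\qquad
L_{IF} = \frac{f_1^{2}\,L_1 - f_2^{2}\,L_2}{f_1^{2} - f_2^{2}}.
$$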
In terms of the force model, GNSS and LEO satellites suffer from different perturbative forces, since they move at different orbital altitudes, especially in the aspect of non-gravitational forces. For GNSS satellites, the solar radiation pressure serves as the primary source of non-gravitational forces due to the thin atmosphere at the GNSS altitude; the atmosphere drag is neglected in the GNSS POD processing. Different from GNSS satellites, as the trajectories of LEO satellites undergo perturbation from both solar radiation pressure and atmosphere drag, the atmosphere drag plays a dominant role in the non-gravitational forces for LEO. All the POD computations are performed in single-thread mode on HP Apollo 2000 machines with a 16-core CPU and 96 GB of memory.
Table 3. Detailed processing strategy for the integrated multi-GNSS POD.
Result Analysis and Discussion
In this section, the performance of the multi-GNSS integrated POD with different LEO constellations and different station network schemes is analyzed. Several impact factors, including LEO numbers, orbital types, and altitude, as well as the numbers and distribution of ground stations, are discussed in detail. The previously simulated orbits are regarded as the true values, and the differences between our estimated orbits and the true orbits are used to assess the accuracy of the integrated POD results.
Integrated POD with Different Numbers of LEO
We first performed the integrated POD using the global network from MGEX. The ground observations from the 120-station MGEX network are processed as a reference. In order to make a fair comparison, only a 65-station network with a good global distribution selected from the MGEX is used to perform the integrated POD (shown as red dots with a black circle edge in Figure 2).
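Throughout the result analysis, orbit accuracy is reported as the 3D RMS of the differences between the estimated orbits and the simulated "true" orbits; a minimal sketch of this metric is given below, with illustrative array sizes and noise levels rather than actual POD output.

```python
import numpy as np

def orbit_3d_rms(estimated_xyz: np.ndarray, true_xyz: np.ndarray) -> float:
    """3D RMS of orbit differences over a set of epochs.

    Both inputs are (n_epochs, 3) arrays of satellite positions in metres;
    the returned value is the root-mean-square of the epoch-wise 3D position error.
    """
    diff = np.asarray(estimated_xyz, float) - np.asarray(true_xyz, float)
    return float(np.sqrt(np.mean(np.sum(diff**2, axis=1))))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    truth = rng.normal(size=(2880, 3)) * 1e7                      # placeholder "true" orbit
    estimate = truth + rng.normal(scale=0.02, size=truth.shape)   # ~2 cm per-axis errors
    print(f"3D RMS = {orbit_3d_rms(estimate, truth):.3f} m")      # ~0.035 m
```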
The onboard observations from 60, 66, and 96 polar-orbiting LEO satellites with altitudes of 1000 km (see Figure 1a,b,d) are processed to investigate the influence of LEO numbers on the integrated POD. Figures 3-6 present the average 3D Root Mean Square (RMS) values of the orbit differences between the integrated POD solution and the true orbits for all satellites from the GPS, GLONASS, Galileo, and BDS constellations respectively. The result shows that by using only the ground multi-GNSS observations, the majority of GNSS satellites can achieve a 3D orbit accuracy of better than 4 cm, which is comparable to the accuracy of the 24-h orbit recovered from real ground tracking measurements. This demonstrates that the error models and simulation strategy are close to the real situation. Table 4 gives the average 3D RMS of the orbit differences for the four systems. The worst orbit quality can be found in BDS GEO, where the average 3D RMS values of the orbit differences even exceed 2 m. This is reasonable, because the stationary and regional coverage features of GEO satellites lead to poor tracking geometry, greatly hampering the orbit determination for GEO satellites. As shown in Figures 3-6, all satellites providing global coverage can achieve a sub-centimeter level orbit with the support of LEO onboard observations, while the orbit accuracy of regional coverage satellites (BDS GEO and IGSO) is at the centimeter level. It can be seen that LEO satellites can make more contributions to GNSS orbits than the same number of ground stations, since they can not only provide high-quality observations, but also improve the geometry diversity. The orbit of GNSS satellites obtains a remarkable accuracy improvement after introducing the LEO observations into the POD processing. The accuracy improvement with respect to the ground-based POD result can reach over 70% for all of the 60-LEO-, 66-LEO-, and 96-LEO-augmented POD schemes (as shown in Table 4). This indicates that the onboard data from large LEO constellations can be an important supplement to the GNSS POD, even under the condition of a global network. We find that the BDS GEO satellites present the largest accuracy improvement when the integrated POD is implemented; the improvement percentage even reaches up to about 98%. This inspiring result evidences the contribution of LEO to improving the tracking geometry of GEO satellites. Furthermore, it can be seen that smaller orbit differences can be achieved when more LEO satellites are introduced into the integrated POD. The 96-LEO solution achieves a slightly better orbit accuracy than the other two LEO-augmented POD solutions.
However, compared with the slight accuracy improvement of the 96-LEO solution, introducing an additional 30 LEO satellites into the processing of the integrated POD leads to a huge burden on the computational efficiency because of the larger number of unknown parameters to be solved. The average computation time of the integrated POD for the 96-LEO scheme is about 38 h, which is almost three times more than that of the 60-LEO and 66-LEO schemes (about 13 h). To assess the impact of the LEO orbital type, two typical LEO orbits, nearly-polar and sun-synchronous orbits, are adopted in this study. Both LEO constellations consist of 60 LEO satellites flying at an altitude of 1000 km (see Figure 1a,b). The ground observations from a global network with 65 stations are used. The results are shown in Figure 7. It can be seen that the sun-synchronous-orbiting constellation presents a slightly stronger enhancement to the GNSS orbits than the polar-orbiting constellation. With the sun-synchronous-orbiting data of 60 satellites, the orbit of the GNSS satellites can achieve an accuracy similar to that of the 96-polar-orbiting scheme, and the orbit accuracy of BDS GEO for the sun-synchronous scheme is even better than that for the 96-polar-orbiting scheme.
As reported by Li et al. [9], the polar-orbiting constellation outperforms the sun-synchronous-orbiting constellation in terms of the convergence time of PPP. However, the situation is reversed when the LEO constellation is employed to enhance the GNSS precise orbit determination. In order to determine the reason for this, we computed the actual number of LEO satellites used for the GNSS POD and calculated an orbit dilution of precision (ODOP) value for every GNSS satellite, using an algorithm similar to the position dilution of precision (PDOP) for ground stations. Assuming one GNSS satellite can be tracked by n ground stations simultaneously, the ODOP of this GNSS satellite can be calculated from the tracking geometry, where (x_n, y_n, z_n) and (x_s, y_s, z_s) are the positions of the ground station and the GNSS satellite respectively, and r_n = (x_n - x_s, y_n - y_s, z_n - z_s) is the distance vector with the direction from the GNSS satellite to the ground station. The ODOP value describes the geometric strength of the observations for GNSS satellites. The better the geometric strength, the smaller the ODOP value. Figure 8 shows the used LEO numbers and ODOP values for C02 (GEO) and G13 (MEO) respectively. The two 60-LEO schemes contribute an almost comparable number of LEO satellites to the orbit determination for BDS GEO C02. The average number of the used sun-synchronous-orbiting LEO satellites is 21.5, which is slightly larger than the polar-orbiting LEO number, i.e., 20.6. In terms of ODOP, the ODOP values of the sun-synchronous-orbiting constellation present a stronger fluctuation with a larger amplitude than those of the polar-orbiting constellation. For BDS GEO satellites, the nearly unchanging tracking geometry is the key limitation of their orbit determination. The main contribution of LEO satellites is to bring fast variation to the tracking geometry for BDS GEO. The faster fluctuation of ODOP indicates that the sun-synchronous-orbiting constellation can bring more variations to the GEO tracking geometry, which can yield more pronounced orbit accuracy improvements for GEO satellites. In the case of satellite G13, the differences between the ODOP variations of the two 60-LEO solutions are small, which leads to a relatively small discrepancy in the corresponding orbit accuracy, i.e., 1 mm.
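The explicit ODOP expression is elided in the extracted text; the sketch below assumes it is evaluated like a standard dilution-of-precision value, from the matrix of unit line-of-sight vectors between one GNSS satellite and the stations or LEO receivers tracking it. This is an assumed analogue of the PDOP computation, not necessarily the authors' exact formula, and the tracker geometry in the example is purely illustrative.

```python
import numpy as np

def odop(sat_xyz: np.ndarray, tracker_xyz: np.ndarray) -> float:
    """DOP-style geometric strength of the tracking of one GNSS satellite.

    sat_xyz     : (3,) position of the GNSS satellite.
    tracker_xyz : (n, 3) positions of the stations/LEO receivers tracking it.
    Builds the design matrix of unit vectors (plus a clock-like column) and
    returns sqrt(trace((A^T A)^-1)); smaller values mean stronger geometry.
    """
    rows = tracker_xyz - sat_xyz                      # vectors satellite -> tracker
    units = rows / np.linalg.norm(rows, axis=1, keepdims=True)
    a = np.hstack([units, np.ones((units.shape[0], 1))])
    q = np.linalg.inv(a.T @ a)
    return float(np.sqrt(np.trace(q)))

if __name__ == "__main__":
    sat = np.array([26_560e3, 0.0, 0.0])              # rough MEO-altitude position
    trackers = np.array([[6_371e3, 0.0, 0.0],         # purely illustrative geometry
                         [0.0, 6_371e3, 0.0],
                         [0.0, -6_371e3, 0.0],
                         [0.0, 0.0, 6_371e3],
                         [0.0, 0.0, -6_371e3]])
    print(round(odop(sat, trackers), 2))
```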
Table 5 lists the results of the integrated POD with LEO satellites at different altitudes. The 60 polar-orbiting LEO satellites at altitudes of 600, 1000, and 1400 km are considered (see Figure 1a). It can be seen that there are almost no orbit accuracy differences among the solutions. The integrated POD with higher LEO satellites presents a slightly higher orbit quality than that with lower LEO satellites, except for the BDS GEO satellites. For BDS GEO, the best orbit accuracy is achieved by the 600 km solution. Given the motionless characteristic of GEO satellites, the LEO satellites orbiting at an altitude of 600 km can contribute more improvements to the tracking geometry for BDS GEO due to their faster motion, which may be the reason for the better performance of the 600-km solution in GEO orbit accuracy. In addition, it should be noted that LEO satellites at low altitude usually suffer from complex SRP modelling. Also, the trajectory of a low-altitude LEO is perturbed by a stronger drag force, which will increase the difficulty of drag force modeling and lead to accelerated orbital decay for the LEO. The difficulty of air drag and SRP modelling may negatively affect the result of the integrated POD. Fortunately, with proper modeling of these forces, such as estimating the drag parameter with a shorter interval, low-altitude LEO satellites can obtain a high-precision orbit result, which has been evidenced in many low-altitude LEO missions, such as Swarm [38].
Integrated POD with Different Ground Network
In order to evaluate the impact of station quantity and distribution on the LEO-augmented POD, fewer global stations are selected to perform the integrated POD with different LEO constellations. We first adopted about 22 global stations from iGMAS (see Figure 2). Typically, the three LEO schemes, i.e., the 60 polar-orbiting LEO, 60 sun-synchronous-orbiting LEO, and 96 polar-orbiting LEO constellations at the altitude of 1000 km, are chosen. Figure 9 illustrates the integrated POD results for GPS, GLONASS, Galileo, and BDS using iGMAS ground observations. For comparison, we also plot the GNSS orbit results determined from the 65 MGEX stations and 60 sun-synchronous-orbiting LEO satellites. The corresponding statistical values are provided in Table 6. With regard to the numbers provided in Table 4, a clear degradation in the orbit accuracy can be recognized for all GNSS satellites compared with the POD result using the MGEX network when only ground iGMAS observations are processed. This is reasonable, because fewer ground stations are employed to recover the GNSS satellite orbits. Similar to the MGEX solution, a significant orbit accuracy improvement for all satellites can be recognized after the incorporation of LEO onboard observations into the GNSS orbit determination. Certainly, BDS GEO satellites exhibit the largest orbit accuracy improvement. The different LEO constellations present a similar performance to that in the MGEX solution. As shown in Table 6, the sun-synchronous-orbiting constellation yields a slightly better performance in LEO-augmented POD compared to the two polar-orbiting constellations.
Figure 9. Average 3D orbit differences RMS of GNSS satellites for the integrated POD using the iGMAS network. P and S denote the nearly-polar and sun-synchronous orbits, respectively.
Meanwhile, it should be noted that although a sparse iGMAS network is employed to perform the integrated POD, the orbit accuracy of all three iGMAS integrated POD schemes is comparable to that of the corresponding MGEX solutions. The performance of the iGMAS and MGEX solutions mainly differs in their estimated BDS GEO orbits. Compared with the MGEX solution, the integrated POD with the iGMAS network provides a relatively worse orbit for the BDS GEO. The result demonstrates that the dependence of GNSS orbit determination on the ground stations can be largely reduced when introducing a large number of LEO satellites into the POD processing. Serving as moving stations, LEO satellites can not only provide a large amount of onboard observations, but also bring evident improvements to the tracking geometry for GNSS satellites. The above results indicate that the onboard observations play an important role in reducing the contribution of ground observations to GNSS orbit determination when introducing a large number of LEO satellites. To further investigate the dependency of the integrated POD on the ground stations, we design three integrated POD schemes with only 8 or 4 ground stations involved. The distribution of the small number of stations we selected is shown in Figure 10. The sun-synchronous-orbiting constellation with 60 LEO satellites is adopted due to its best performance in the previous study. It should be noted that the earth rotation parameters are fixed when we perform the integrated POD with a few stations. As shown in Figure 11, the estimated orbit of the integrated POD with 8 regional stations can achieve a centimeter-level accuracy, which is better than the POD result only using ground MGEX observations (see Table 4). This indicates that with the assistance of 60 LEO satellites, using only observations from regional stations can already obtain a relatively high-quality satellite orbit. A significant reduction in orbit difference can be recognized for all the GNSS satellites when the 8 regional stations are replaced by 8 global stations.
We can find that the orbit accuracy of the integrated POD with 8 global stations is almost in line with the accuracy of the corresponding MGEX and iGMAS integrated POD schemes. The accuracy improvement can be attributed to the better performance of the globally distributed stations in anchoring the whole constellations compared to the regional stations. Slight orbit degradation for GNSS satellites is observed when the number of global stations is reduced to 4. The result shows that the POD of GNSS satellites has less dependence on the number of ground stations and is more sensitive to the distribution of ground stations after the inclusion of the large LEO constellation.

Integrated POD with Partial LEO Constellation

The previous results demonstrate the strong enhancement of the entire LEO constellation to the GNSS orbit estimation. However, it can be seen that the integrated POD suffers from a long computation time when a large number of LEO satellites are introduced. The computation time of the 60-LEO scheme is about 13 h, while for the 96-LEO scheme, it reaches about 38 h. Only moderate accuracy differences are obtained for the 60-, 66-, and 96-LEO constellations. In order to balance the conflict between orbit accuracy and computational efficiency, it is worthwhile to investigate the integrated POD with a partial LEO constellation, and to explore how many LEO satellites can be used as an auxiliary to improve the orbit accuracy of GNSS satellites. In this section, we process the observations from the 65 MGEX stations and a partial LEO constellation. The 60 polar-orbiting LEO constellation and the 60 sun-synchronous-orbiting LEO constellation at an altitude of 1000 km are chosen. Figure 12 shows the average 3D RMS of GNSS orbit differences for the integrated POD as a function of the number of LEOs.
We find that with the addition of only 10 LEO satellites, BDS GEO satellite orbits can already achieve an accuracy of better than 10 cm, while the orbit differences of the other satellites are below 2 cm. The corresponding computation time is less than 2 h (shown in Figure 13). This result indicates that the introduction of a small number of LEO satellites can not only obtain a relatively high-accuracy GNSS orbit, but also limit the computation time, which is very beneficial to GNSS applications with high timeliness requirements. For both constellations, the orbit differences of GNSS satellites decrease monotonically as the number of introduced LEO satellites increases. However, with more LEO satellites included, the improvement of orbit accuracy gradually becomes smaller. A reduction of about 0.5-4 cm can be observed in the orbit differences when the number of LEOs increases from 10 to 20, whereas the additional 10 LEO satellites contribute an accuracy improvement of less than 0.2 cm as the number of LEOs increases from 40 to 50. On the other hand, the increase of LEO satellites introduces a large number of parameters to be estimated, resulting in a steep rise in computational time (shown in Figure 13). For example, the computation time of the integrated POD rises sharply from 9.46 h to 13.42 h when the LEO number increases from 50 to 60, but the addition of these 10 LEO satellites only yields an orbit accuracy improvement of about 1 mm for the majority of GNSS satellites. In addition, it can be seen that the sun-synchronous LEO satellites exhibit a stronger enhancement to the GNSS orbits than those in nearly-polar orbit, no matter how many LEO satellites are introduced.
The accuracy differences between the sun-synchronous LEO scheme and the nearly-polar LEO scheme are very small for the GPS, GLONASS, Galileo, BDS IGSO, and MEO satellites. This is because the relative motion between these satellites and the ground stations already yields good geometric diversity, so that the small differences in tracking geometry contributed by sun-synchronous and nearly-polar LEO satellites have little impact on the orbit determination of these satellites. However, differences of more than 1 cm are visible in the BDS GEO orbits between the two solutions. Evidently, the combination of global stations and sun-synchronous LEO satellites results in better-quality GEO orbits. This is because LEO satellites in sun-synchronous orbit give rise to faster variations of the observation geometry for BDS GEO satellites (see Figure 8).

Discussion

Serving as moving stations, LEO satellites can significantly strengthen the tracking geometric diversity of GNSS satellites and improve the accuracy of GNSS precise orbit determination, particularly in the case of a regional network or a sparse global network. Although many studies have reported the contribution of LEO onboard observations to GNSS POD, few studies have focused on the performance of the integrated POD with a large LEO constellation. Meanwhile, the impact factors of the integrated POD, such as LEO number, orbital height, and orbit type, have rarely been analyzed. In this study, we presented the GNSS orbit solution derived in an integrated processing of the ground network and a large LEO constellation using simulated data. Six LEO constellations are designed to study the performance of the integrated POD with different tracking configurations. The results demonstrate that the orbit accuracy of GNSS satellites can be dramatically improved when a large number of LEO satellites is introduced; the more LEO satellites, the better the orbit accuracy. Notably, BDS GEO orbits present an accuracy of a few centimeters with the recruitment of LEO satellites. Moreover, LEO satellites in sun-synchronous orbit can contribute more to GNSS orbits than nearly polar-orbiting satellites, since they give rise to a stronger observation geometry. Furthermore, the orbit altitude of LEO satellites is found to have no evident impact on the GNSS satellites except for the BDS GEO satellites. In addition, it can be seen that the integrated POD with the entire LEO constellation suffers from a long computational time because of the abundant GNSS observations to be processed and the large number of parameters to be estimated. The long computation time of the integrated POD cannot meet the timeliness requirement of real-time precise positioning. This weakness in terms of computational efficiency can be overcome by introducing only a certain number of LEO satellites. Although our study presents an optimistic integrated POD result due to the simulated data, the impact of introducing LEO constellations to improve GNSS orbit determination is clearly demonstrated. Indeed, not only tracking geometry, but many other factors, such as the attitude model, solar radiation pressure and drag models, and antenna calibrations, constitute key limitations in real-world LEO and GNSS POD. Fortunately, with the efforts of many scientists, the force and observation models have been rigorously refined in recent years, which can reduce the impact of model errors on the GNSS POD as much as possible.
Using the current models, GNSS satellites (except BDS GEO) can achieve centimeter-level orbits [1], though more efforts are needed to further refine the force and observation models. Meanwhile, force model errors do not represent a weakness in observability; they are an issue separate from the tracking geometry that needs to be investigated further. In addition, beyond the factors mentioned above, there are still many issues which need to be taken into consideration when implementing the integrated POD using real GNSS observations. For example, the incorporation of a large number of real multi-GNSS onboard observations may introduce unexpected unmodeled errors into the orbit recovery process.
Meanwhile, unlike ground receivers, the performance of space-borne receivers can be more easily affected by the space environment at the altitudes of about 600-1000 km where LEO satellites fly. As discussed in Xiong et al. [39], large equatorial plasma irregularities can degrade the tracking capability of the Swarm onboard receivers, leading to severe signal loss at low latitudes. This loss of signal for space-borne receivers undoubtedly has negative effects on the integrated POD. Moreover, apart from GNSS data, other types of satellite tracking data are not considered in our study. Once the construction of the navigation augmentation LEO constellation is complete, new ranging observations from LEO satellites to ground stations can be employed. These new types of satellite tracking data are expected to remove potential systematic biases in the estimated orbits and further improve the orbit accuracy of both GNSS and LEO satellites.

Conclusions

This paper is devoted to investigating integrated precise orbit determination with large LEO constellations. We performed LEO-augmented multi-GNSS precise orbit determination in this study. Based on simulated GNSS observations, several integrated POD schemes are implemented to investigate the performance of the integrated POD under different tracking conditions. The potential influence factors of the integrated POD, including LEO satellite number, altitude, and orbit type as well as ground station number and distribution, are analyzed and discussed in detail. The result shows that jointly processing ground observations from the global network and onboard observations from the large LEO constellation can significantly improve the orbit accuracy for GNSS satellites, especially for BDS GEO satellites. The accuracy improvement percentage with respect to the ground-based POD results can reach over 70% for all the integrated POD schemes with 60, 66, and 96 LEO satellites. The largest orbit accuracy improvement, of over 98%, can be recognized for BDS GEO, since the fast motion of LEO satellites brings tremendous variations to the tracking geometry of GEO satellites. By incorporating a large number of LEO satellites, BDS GEO satellites can obtain a centimeter-level orbit. Compared with the 60- and 66-LEO schemes, a slightly better orbit quality is observed in the 96-LEO scheme due to the introduction of more LEO satellites. However, the increase of the involved LEO satellites results in a sharp rise in the computational time of the integrated POD because of more unknown parameters to be solved. We also present the impact of the LEO orbit type on the integrated POD. With the same number of LEO satellites, the sun-synchronous-orbiting constellation presents a stronger enhancement to GNSS orbits than the polar-orbiting constellation. The improvement can be attributed to the more rapid variation of GNSS tracking geometry provided by a sun-synchronous-orbiting constellation. In terms of the LEO altitude, the orbit altitude of LEO satellites is found to have little influence on the enhancement to GNSS orbits except for BDS GEO satellites. Benefiting from the faster motion of lower LEO satellites, the 600 km scheme achieves a better orbit accuracy for BDS GEO than the 1000 km and 1400 km schemes. In order to assess the impact of station number and distribution, global and regional networks with different numbers of stations are employed.
With the inclusion of a large LEO constellation, the integrated POD appears to be more sensitive to the distribution of ground stations than to the station number. This is because, in the integrated POD, the dependence of GNSS orbit determination on ground stations is largely reduced by the onboard GNSS observations when a large LEO constellation is considered. Based on 8 regional stations, the orbit accuracy of the integrated POD with a sun-synchronous-orbiting constellation is at the centimeter level. However, a sub-centimeter accuracy can be recognized for the orbits of GNSS satellites when 8 globally distributed stations are adopted, which is comparable to the result of the integrated POD with a denser global network of 65 stations. Although GNSS orbit estimations can greatly benefit from the incorporation of a full LEO constellation, the integrated POD suffers from a long computation time due to the addition of a large number of LEO satellites. To balance the conflict between orbit accuracy and computational efficiency, the integrated POD with a partial LEO constellation is investigated. The orbit accuracy of GNSS satellites improves gradually as the number of LEO satellites increases: the more LEO satellites, the better the orbit accuracy. However, with an increasing number of LEOs, the accuracy improvement of GNSS orbits becomes smaller. Meanwhile, the increase of LEO satellites results in a steep rise in computational time. Considering both orbit accuracy and computational efficiency, we prefer 40 LEO satellites in a sun-synchronous orbit. With the rapid development of LEO constellations, we expect an improvement of GNSS POD when jointly processing GNSS observation data from reference sites and LEOs. Continued efforts in force modeling, algorithm optimization, and scheme design are required to achieve better performance of the integrated POD.
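As a rough illustration of the accuracy-versus-runtime trade-off behind the choice of 40 LEO satellites, the following sketch picks the smallest candidate constellation whose orbit accuracy stays within a chosen tolerance of the best achievable value. Only the 9.46 h and 13.42 h runtimes are quoted in the text; the remaining numbers and the 5% tolerance are illustrative assumptions, and the function name is hypothetical.

```python
# Illustrative sketch of selecting a LEO count that balances orbit accuracy and runtime.
def choose_leo_count(counts, rms_cm, runtime_h, tolerance=0.05):
    """Return the smallest LEO count whose average orbit RMS is within `tolerance`
    (relative) of the best RMS over all candidates, with its RMS and runtime."""
    best = min(rms_cm)
    for n, rms, t in zip(counts, rms_cm, runtime_h):
        if rms <= best * (1.0 + tolerance):
            return n, rms, t
    return counts[-1], rms_cm[-1], runtime_h[-1]

counts    = [10, 20, 30, 40, 50, 60]
rms_cm    = [4.0, 2.5, 2.1, 1.85, 1.82, 1.80]  # hypothetical averages, not from the paper
runtime_h = [1.8, 3.5, 5.4, 7.4, 9.46, 13.42]  # 9.46 h and 13.42 h are quoted in the text
print(choose_leo_count(counts, rms_cm, runtime_h))  # picks 40 under these assumptions
```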
Annotation-Efficient Deep Learning Model for Pancreatic Cancer Diagnosis and Classification Using CT Images: A Retrospective Diagnostic Study

Simple Summary

In computer-assisted diagnostics for pancreatic cancer, attributes featuring irregular contours and indistinct boundaries on CT images present challenges in acquiring high-quality annotations. In response to this issue, we have devised an innovative self-supervised learning algorithm, engineered to enhance the differentiation of malignant and benign lesions. This innovation obviates the necessity for radiologist intervention, thus facilitating the precise classification of pancreatic cancer. By employing a pseudo-lesion segmentation self-supervised learning model, which capitalizes on automatically generated high-quality training data, we have managed to significantly elevate the performance of both convolutional neural network-based and transformer-based deep learning models.

Abstract

The aim of this study was to develop a novel deep learning (DL) model without requiring large annotated training datasets for detecting pancreatic cancer (PC) using computed tomography (CT) images. This retrospective diagnostic study was conducted using CT images collected between 2004 and 2019 from 4287 patients diagnosed with PC. We proposed a self-supervised learning algorithm (pseudo-lesion segmentation (PS)) for PC classification, which was trained with and without PS and validated on randomly divided training and validation sets. We further performed cross-racial external validation using open-access CT images from 361 patients. For internal validation, the accuracy and sensitivity for PC classification were 94.3% (92.8–95.4%) and 92.5% (90.0–94.4%), and 95.7% (94.5–96.7%) and 99.3% (98.4–99.7%), for the convolutional neural network (CNN) and transformer-based DL models (both with PS), respectively. Implementing PS on a small-sized training dataset (randomly sampled 10%) increased accuracy by 20.5% and sensitivity by 37.0%. For external validation, the accuracy and sensitivity were 82.5% (78.3–86.1%) and 81.7% (77.3–85.4%), and 87.8% (84.0–90.8%) and 86.5% (82.3–89.8%), for the CNN and transformer-based DL models (both with PS), respectively. PS self-supervised learning can increase DL-based PC classification performance, reliability, and robustness of the model for unseen, and even small, datasets. The proposed DL model is potentially useful for PC diagnosis.

Introduction

Pancreatic cancer (PC) is a highly fatal and malignant disease with a dismal prognosis. Despite recent advancements in surgical techniques, chemotherapy, and radiation therapy, the 5-year survival rate remains approximately 11% [1]. Notably, PC has become a common cause of cancer mortality. High mortality primarily results from advanced-stage cancer with metastatic disease at diagnosis [2]. The only hope of long-term survival in PC is if curative resection can be performed. However, PC is asymptomatic until the disease progresses to an advanced stage and, at diagnosis, only about 20% of cases are eligible for surgical resection [3][4][5][6]. Medical imaging plays several important roles in PC screening and early detection, preoperative evaluation and staging, differential diagnosis, follow-up, and treatment evaluation. A computed tomography (CT) scan is the most widely used imaging examination for the detection and staging of pancreatic carcinoma, given that its sensitivity ranges from 76% to 96%; notably, the sensitivity for larger tumors is higher than that for smaller tumors [7][8][9].
Generally, PC is characterized by abundant fibrous stroma and hypovascularity, which account for the poor enhancement of the tumor compared with that of the surrounding pancreatic parenchyma on CT. These lead to poor diagnostic accuracy and sensitivity of tumor detection using CT. Moreover, at present, there is no standard imaging screening procedure, and the accuracy of PC detection and staging critically depends on the appropriate protocol, post-processing technique, and experience of the radiologist. In other words, detecting PC using only a CT scan is a very challenging task for radiologists, especially concerning small tumors in the early stages. Artificial intelligence, particularly deep learning (DL), has demonstrated great promise for prognosis prediction in medical image analysis [10][11][12]. Major DL algorithms, such as convolutional neural networks (CNNs) [13] and the transformer architecture [14], have shown an impressive ability to extract complex visual information from medical images. However, despite the potential of DL, PC diagnosis using DL systems has not yet been actively investigated. Previous studies have demonstrated that DL could reduce the false diagnosis of PC on CT images as a second reader [15][16][17][18][19]. To achieve high-quality results and accurately generalize across multiple centers, CT equipment, and patient ethnicities, a large number of high-quality annotated training datasets are needed to allow deep networks to learn proper visual information for accurate classification. However, collecting a large volume of correctly annotated medical images for DL system development is a complex and expensive endeavor. Moreover, it is impractical to prepare such a well-curated pancreatic dataset, given that the accurate and early identification of PC on CT scans is still challenging, even for radiologists, because of the irregular contours and ill-defined margins of PC [20,21]. Therefore, it is difficult to achieve high prediction accuracy with a small training dataset for PC classification on CT scans, especially for small-sized tumors. In this study, we proposed a novel self-supervised learning algorithm (pseudo-lesion segmentation [PS]) for PC classification using only CT scans. PS was designed to learn the prior visual representations of pancreatic CT scans in a supervised learning manner, without requiring a radiologist or expert to annotate the ground truth label, to achieve high performance on small training datasets, early stage cancer/small-sized tumors, and cross-ethnicity tests.

Related Work

DL techniques have exhibited promising results in the field of pancreatic cancer diagnosis using CT imaging [22][23][24]. A key contributor to the success of DL is the availability of extensive training data with manual labels supplied by radiologists. However, the scarcity of annotated medical images is a concern due to the requisite expertise of radiologists and the time-consuming nature of the task. In scenarios involving natural image classification, a prevalent approach is to leverage pre-existing visual representations learned from ImageNet, a large dataset of natural images used for classification tasks, employing pretrained weights [11,25]. Nonetheless, the use of ImageNet for learning visual representations in medical image classification is less than optimal because the visual representation learned from the natural image domain might not be suitable for the grayscale medical image domain.
This unsuitability arises due to significant differences in feature distribution, spatial resolution, and output labels between the two domains. Another notable technique for addressing the lack of labeled data is self-supervised learning. Self-supervised learning combines supervised and unsupervised learning approaches to learn semantically useful representations from pretext tasks. These tasks involve learning from unlabeled data by creating labels from the data for downstream tasks. Self-supervised learning enables the utilization of unlabeled domain-specific images by solving pretext tasks such as jigsaw puzzles, colorization tasks, and rotation prediction. This allows for the learning of more relevant feature representations for the image domain in downstream tasks like classification and segmentation [26][27][28]. In the field of medical image analysis, self-supervised learning with contrastive learning methods and image distortion pretext tasks has been employed to enhance the performance of various downstream tasks [29]. For example, Li et al. [30] successfully improved the performance of tumor classification by utilizing the feature representation learned through a pretext task of brain tumor segmentation. However, it is important to note that when tumors are not precisely segmented, the accuracy of tumor classification using the learned features is not guaranteed [30]. Moreover, considering the significant variation in annotation tasks between raters [31], which can lead to different conclusions regarding medical diagnoses [32], it becomes challenging to ensure high-quality annotations. In this study, we created a pseudo-lesion using an undefined atypical shape that mimicked the shape of a tumor. Because the atypical shape is composed of a random combination of a plurality of simple shapes, it can be easily generated, and a variety of complex and differing types of lesions, resembling actual tumors, can be formed. Previously, a pseudo-lesion was created in the form of a simple geometric shape, but this simple shape was very weak in its ability to simulate the complex shape of a real tumor [18]. However, in model observer studies for image quality evaluation, a more realistic tumor or lesion was synthesized and inserted into the CT image. Such realistic tumor shapes were created by synthesizing the actual lesion shapes and organ-specific background textures present in organs such as the breast, liver, or lungs [33,34]. If our proposed self-supervised algorithm learns feature representations using more realistic tumor shapes, tumor classification performance may be improved. However, realistic lesion models that reflect the tumor shapes specific to each organ and the noise characteristics specific to each system are expected to have poor reproducibility, making them difficult to apply to new organs or systems that are not well characterized.

Ethics

All procedures were performed in compliance with the relevant laws and institutional guidelines. The study was approved by the Institutional Review Board of the National Cancer Center (NCC) (2020-0327). The requirement for written informed consent was waived owing to the retrospective nature of the study.

Patient and Data Collection

In this retrospective diagnostic study, we analyzed CT images acquired from the following datasets: the National Information Society Agency (NIA)-funded Medical Big Data Construction Project, the Medical Segmentation Decathlon, and the Cancer Imaging Archive (TCIA).
The NIA-funded project included CT images of PC and normal pancreatic tissues from the NCC and seven general tertiary hospitals in South Korea. PC was defined as histologically or cytologically confirmed pancreatic adenocarcinoma. Benign pancreatic disease included pancreatic cystic lesions and acute or chronic pancreatitis, with a 1-year follow-up period. Images in which no lesions were observed were selected, based on the radiologist's report (a negative or unremarkable pancreas), as the normal pancreas set from participants who underwent a health checkup or treatment for anything other than pancreatic disease. In the NIA-funded project, CT images were obtained in the portal venous phase (70 s after intravenous contrast injection) or pancreatic phase (40 s after contrast injection). In the labelling of the lesions, blood vessels were excluded as much as possible. In cases of pancreatitis, the entire lesion, including the peripancreatic infiltration, was labelled. The Medical Segmentation Decathlon and TCIA datasets consist of portal venous phase CT images with a resolution of 512 × 512 pixels, with varying pixel sizes and slice thicknesses between 1.5 and 2.5 mm, acquired on Philips and Siemens MDCT scanners (120 kVp tube voltage). A flowchart describing the research process is presented in Figure 1.
For algorithm development, we used data collected from the NCC for the training set and validation set, and those from the Medical Segmentation Decathlon and TCIA as the cross-ethnicity external validation set. For the training set and validation set, we utilized the CT images of 4287 patients that were collected between June 2004 and December 2020. All CT scans were carefully reviewed by two experienced radiologists with >5 years of experience in pancreatic imaging. A total of 3010 patients comprised the training set, and 1277 patients comprised the validation set. The CT images of 361 patients from two external sources comprised the cross-ethnicity external validation set. Detailed baseline characteristics are presented in Table S1.

Development of PS Self-Supervised Learning

The self-supervised learning algorithm is a technique for solving limited annotated data scenarios in both the natural and medical imaging domains. We developed a novel self-supervised learning algorithm, PS, to overcome the small-size training dataset problem and improve DL system performance in early stage PC and cross-ethnicity tests. The proposed PS was designed to learn a prior semantic representation (i.e., a pancreas-related visual representation) from the PC classification data itself via the PS task. Unlike previous research [30], which learned the prior knowledge from a dataset in which the annotated lesion regions were defined by radiologists (Figure 2a), our PS learned the representation from an automatically generated annotation dataset without requiring humans to annotate labels (Figure 2b). Notably, the classification accuracy of previous research depends on the correctness of the tumor region annotation defined by humans [30]; therefore, it cannot guarantee classification performance when the tumor region is not precisely segmented. Moreover, considering that annotation tasks are often prone to significant variation between raters [31] and that the variation results in different conclusions regarding medical diagnosis [32], it is difficult to secure high-quality annotations. Nonetheless, with an automatically generated annotated dataset, the correctness of the annotation of our proposed PS can be guaranteed. The PS consisted of three steps.
We first automatically generated an annotation called a pseudo-lesion for prior representation learning via a segmentation task (pretext task) and inserted it into the pancreatic CT scans. The details of the pseudo-lesion generation and insertion are described in Methods S1. An example of the pseudo-lesion-inserted CT images is shown in Figure S1. We created a pseudo-lesion using an undefined atypical shape that mimicked the shape of a tumor. Because the atypical shape is composed of a random combination of a plurality of simple shapes, it can be easily generated, and complex and varying types of lesions, such as actual tumors, can be formed. Subsequently, a DL network was trained to learn the pancreas- and tumor-related visual representation by segmenting the pseudo-lesion regions in the generated dataset. Finally, we fine-tuned the pretrained DL network for PC classification in a supervised learning manner (Figure 2c).

Training of DL Models

We incorporated the proposed PS with state-of-the-art DL models, including a CNN-based model named ShuffleNet V2 [13] and a transformer-based model named Pyramid Vision Transformer (PVT) [14], for PC classification using CT scans. Note that, as demonstrated in Table 1, ShuffleNet V2 and PVT showed the best PC classification performance among the latest CNN-based models and transformer-based models tested, respectively, so they were selected as the baseline models. All DL models were trained using CT images collected from 3010 patients and validated on two datasets: an internal validation set (1277 patients) and an external validation set (361 patients). Moreover, we compared the performance of the DL model with and without PS to evaluate the performance of the proposed PS. Implementation details of the DL models in the experiments are described in the Supplementary Materials (Methods S2 and Table S2). Furthermore, to explore the robustness of the self-supervised learning with PS for small datasets, we performed an experiment involving various training image dataset sizes of 10%, 25%, 50%, 75%, and 100% of the entire training dataset.

Statistical Analysis

The predictive labels with reference to the ground truth labels were depicted as confusion matrices, which were used to calculate the accuracy, sensitivity, specificity, precision, F1 score, and area under the receiver operating characteristic curve (AUC). Furthermore, the Clopper-Pearson method was used to calculate the 95% confidence interval (CI) for accuracy, sensitivity, specificity, and precision. All computations and statistical analyses were performed using the scikit-learn package, version 0.34, in Python, version 3.7 (Python Software Foundation). These tasks were carried out in an environment equipped with an NVIDIA Titan Xp GPU (NVIDIA Corp., Silicon Valley, USA).

State-of-the-Art DL Models for PC Classification

In order to select the state-of-the-art DL model to incorporate with our proposed PS, we performed an experiment to evaluate the performance of each DL classification model, including CNN- and transformer-based architectures, on the validation set of the pancreatic cancer dataset. Table 1 demonstrates that ShuffleNet V2 and PVT achieved the highest accuracy for the CNN- and transformer-based architectures, respectively, with accuracies of 93.6% (92.1-94.8%) and 90.6% (88.8-92.1%).
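Returning to the pseudo-lesion generation and insertion step described above: since the supplementary methods are not reproduced here, the following is only a hedged sketch of how an atypical pseudo-lesion mask could be composed from a few simple shapes and blended into a CT slice to form a pretext (image, mask) pair. The shape counts, radii, intensity shift, and blending weight are illustrative assumptions, not the authors' exact procedure.

```python
# Hedged sketch of pseudo-lesion segmentation (PS) pretext-task data generation:
# build an "atypical" mask as the union of a few randomly offset ellipses and blend
# it into a CT slice; the pair (modified slice, mask) is the pretext training example.
import numpy as np

rng = np.random.default_rng(0)

def make_pseudo_lesion_mask(shape=(512, 512), n_blobs=4, max_radius=25):
    """Union of a few randomly offset ellipses around one seed point."""
    yy, xx = np.mgrid[:shape[0], :shape[1]]
    mask = np.zeros(shape, dtype=bool)
    cy = rng.integers(100, shape[0] - 100)
    cx = rng.integers(100, shape[1] - 100)
    for _ in range(n_blobs):
        oy, ox = rng.integers(-15, 16, size=2)      # random offset of each blob
        ry, rx = rng.integers(8, max_radius, size=2)  # random semi-axes
        mask |= (((yy - cy - oy) / ry) ** 2 + ((xx - cx - ox) / rx) ** 2) <= 1.0
    return mask

def insert_pseudo_lesion(ct_slice, intensity_shift=-40.0, alpha=0.6):
    """Blend a hypo-attenuating pseudo-lesion into a CT slice (values in HU);
    returns the modified slice and the mask used as the segmentation target."""
    mask = make_pseudo_lesion_mask(ct_slice.shape)
    out = ct_slice.astype(np.float64).copy()
    out[mask] += alpha * intensity_shift            # soft intensity blend inside the mask
    return out, mask.astype(np.uint8)
```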
Impact of the PS on PC Classification on the Internal Validation Dataset

We conducted experiments to evaluate the performance of the proposed PS incorporated with the CNN-based and transformer-based DL architectures, i.e., ShuffleNet V2 and PVT, respectively, by comparing DL models with and without PS. As shown in Table 2, the CNN-based DL model with PS achieved a PC classification accuracy of 94.3% (95% CI: 92.8-95.4%), which was 0.7% higher than the accuracy of 93.6% (95% CI: 92.1-94.8%) achieved by the CNN-based model without PS. The CNN-based model with PS demonstrated improved sensitivity, specificity, precision, F1 score, and AUC compared to the model without PS. Additionally, the transformer-based model with PS exhibited even greater enhancements in performance, surpassing the transformer-based model without PS by 5.1% in accuracy, 1.9% in sensitivity, 3.2% in specificity, 15.4% in precision, 0.15 in F1 score, and 0.07 in AUC. From these results, implementing the proposed PS can improve all evaluation metrics on both CNN-based and transformer-based DL models for PC classification. In other words, PS can improve the prediction reliability of the DL models to be more similar to experienced radiologists (ground truth). Furthermore, Figure 3 shows representative CT images overlaid with heat maps produced by the gradient-weighted class activation map (Grad-CAM) [42]. The red and yellow regions on the heat maps represent areas activated by the DL models and have the greatest predictive significance. The results show that incorporating PS with the DL models increases the model's ability to capture the tumor pixel-wise region for CT images with PC and the pancreatic pixel-wise region for CT images of the normal class.
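The accuracy, sensitivity, specificity, precision, F1 score, AUC, and Clopper-Pearson 95% CIs reported here can be computed from a binary confusion matrix along the following lines. This is an illustrative sketch, not the study's exact analysis script; the function and variable names are assumptions.

```python
# Illustrative sketch: classification metrics and exact (Clopper-Pearson) 95% CIs.
import numpy as np
from scipy.stats import beta
from sklearn.metrics import confusion_matrix, roc_auc_score

def clopper_pearson(k, n, alpha=0.05):
    """Exact confidence interval for a binomial proportion k/n."""
    lo = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
    hi = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
    return lo, hi

def summarize(y_true, y_pred, y_score):
    """y_true/y_pred: binary labels; y_score: predicted probability of the positive class."""
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    n = tp + tn + fp + fn
    sens, spec, prec = tp / (tp + fn), tn / (tn + fp), tp / (tp + fp)
    return {
        "accuracy": ((tp + tn) / n, clopper_pearson(tp + tn, n)),
        "sensitivity": (sens, clopper_pearson(tp, tp + fn)),
        "specificity": (spec, clopper_pearson(tn, tn + fp)),
        "precision": (prec, clopper_pearson(tp, tp + fp)),
        "f1": 2 * prec * sens / (prec + sens),
        "auc": roc_auc_score(y_true, y_score),
    }
```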
External Validation Set Classification

A practical DL model should generalize well to unseen datasets of different ethnic groups obtained from different institutions. We explored the robustness of the DL models to the unseen image source by evaluating the DL models that were trained on the internal training set and validated on the external validation set. The external validation set contained CT images from two different sources and with different characteristics (i.e., patient ethnicity) from the internal dataset. The experiment results on the external validation set are summarized in Table 3. The CNN-based model with PS achieved an accuracy of 82.5% (95% CI: 78.3-86.1%) and a sensitivity of 81.7% (95% CI: 77.3-85.4%). In addition, the transformer-based model with PS increased the accuracy by 4.7%, sensitivity by 4.2%, specificity by 4.8%, precision by 0.4%, F1 score by 0.03, and AUC by 0.18, with an accuracy of 87.8% (95% CI: 84.0-90.8%), sensitivity of 86.5% (95% CI: 82.3-89.8%), specificity of 100.0% (95% CI: 90.4-100.0%), F1 score of 0.93, and AUC of 0.80, from the baseline model without PS. These results implied that PS self-supervised learning can enhance the robustness of the DL models to unseen datasets. Table 4 presents the DL models' early stage PC detection performance; early stage PC is challenging to visualize in CT images, even for the radiologist, and its accurate diagnosis can increase the survival rate of patients. The CNN-based model with PS outperformed the model without PS, with an accuracy of 54.0% (95% CI: 44.8-57.8%) for PC stage T1 and 76.9% (95% CI: 74.6-79.0%) for PC stage T2. Furthermore, the transformer-based model with PS achieved an accuracy of 55.3% (95% CI: 48.8-61.8%) for PC stage T1 and 75.2% (95% CI: 72.7-77.6%) for PC stage T2, which were higher than those of the model without PS, which achieved an accuracy of 50.4% (95% CI: 47.0-56.9%) for PC stage T1 and 67.1% (95% CI: 64.6-69.9%) for PC stage T2. As shown in Figure S2, the DL models with PS are more accurately focused on predicting tumor regions than the models without PS. In other words, incorporating PS with the DL models can increase the prediction accuracy and the ability of the model to focus on the tumor regions compared to the model without PS.

Performance Changes Depending on the Size of the Annotated Dataset

To evaluate the robustness of the self-supervised learning with PS for small datasets, we randomly sampled 10%, 25%, 50%, and 75% of the entire training dataset and trained PVT with and without PS using these selected datasets. Figure 4 presents performance changes depending on the size of the annotated dataset. PS shows a remarkable increase in the classification performance of the DL model for small datasets (the 10% and 25% datasets). Specifically, by adopting the PS, when the DL model was trained with only 10% of the dataset, the prediction accuracy and sensitivity improved by 20.5% and 37.0%, respectively. This suggests that the implementation of PS could help overcome the problem of low DL model accuracy in situations with limited dataset availability.
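The dataset-size experiment above can be illustrated with a small sampling sketch. The source does not state whether the 10-75% subsets were drawn at the patient or image level, so the patient-level assumption below, as well as the variable name patient_ids, is the sketch author's.

```python
# Hedged sketch of the dataset-size ablation: draw a random training subset at a
# given fraction (10%, 25%, 50%, 75%), then retrain the model on that subset.
import numpy as np

def sample_training_subset(patient_ids, fraction, seed=42):
    """Randomly keep `fraction` of the training patients, without replacement."""
    rng = np.random.default_rng(seed)
    n = max(1, int(round(fraction * len(patient_ids))))
    return list(rng.choice(np.asarray(patient_ids), size=n, replace=False))

# for frac in (0.10, 0.25, 0.50, 0.75, 1.00):
#     subset = sample_training_subset(all_training_patient_ids, frac)
#     # ...train PVT with and without PS on the images of `subset`...
```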
Discussion

Several challenges in PC diagnosis exist. For instance, most PCs present as poorly enhanced, ill-defined masses with indistinct borders from the surrounding tissues on CT [20]. Occasionally, there are no apparent lesions, and only pancreatic duct dilatation, distal pancreatic atrophy, an abnormal pancreatic contour, and ductal interruption can be observed [43]. Therefore, radiologists' expertise and experience in centers dealing with large numbers of PC cases affect the accuracy of their interpretations [44]. As such, it is challenging to accurately segment PC in CT images, making it extremely difficult to build a large amount of high-quality annotated datasets. This acts as a major barrier to developing a DL-based model for PC diagnosis. In this study, we successfully developed a novel self-supervised learning algorithm (PS), which enhances the PC classification performance of the DL models and generalizes well on new image sources of different patient ethnicities acquired from multiple centers. The tumor location predicted by the DL models with our PS algorithm showed better correspondence with radiologist labeling than the DL models without PS. This supports the potential usefulness of the DL model with PS, particularly in pre-referral centers or by less experienced radiologists involved in PC diagnosis. Furthermore, the proposed PS demonstrated promising classification performance, even with small annotated training datasets. Compared to the performance of the DL models alone, the DL models with the proposed PS trained with 10% of the dataset showed 20.5% and 37.0% enhanced accuracy and sensitivity, respectively, which means that we are also able to build a successful model with small datasets with this technique. In addition, the DL models with PS self-supervised learning demonstrated the feasibility of detecting early stage PC by outperforming the DL models without PS. Generally, patients with an early T1/T2 stage have a better prognosis compared to those with a late stage [45]. Therefore, the prompt detection of early stage PC is imperative for early interventions and improved prognosis.
However, tumors <2 cm are often unremarkable on CT and approximately 40% are undetected at diagnosis [46], with a reported sensitivity as low as 58-77% [47]. We found that the DL model with the proposed PS achieved results comparable to those of previously reported interpretations by experienced radiologists and was superior to other learning models in both T1 and T2 tumors, suggesting that it can reduce overlooked or missed diagnoses of early stage PC, potentially resulting in improved patient outcomes. This work demonstrates the improved robustness of DL models for new image sources of different ethnicities obtained from multiple centers. Compared to the performance of DL models without PS, the DL models with PS achieved higher performance on external validation, which is the combination of CT images from two different open-source datasets from the United States. The lower accuracy and sensitivity of the DL models with PS in the external validation set compared with the internal validation set may be attributable to differences in race/ethnicity and to the diverse scanners and settings. The participants in our internal dataset were entirely Asian patients, and the external dataset consisted of two different open-source datasets from the United States. The pancreatic content is one of the major factors influencing race/ethnicity differences [16,48,49]. Furthermore, diverse scanners at different institutions may also decrease sensitivity and accuracy. However, since the pancreatic CT protocol is usually recommended for diagnosing pancreatic disease, this difference between centers could be minimized. Rather, CT images with multicenter technical variations from a large number of patients reflect a real clinical practice situation well, suggesting the potential generalizability of our PS model to real-world clinical practice. Additionally, the external performance showed only a modest decrease compared with previous DL algorithms, despite ethnic and technological differences [50]. Our study has some limitations. First, without further cancer prediction results from radiologists of variable experience levels, the model's ability to reduce the number of overlooked lesions could not be substantiated. Second, the training dataset was collected from seven general tertiary hospitals in only one country (Republic of Korea); thus, there was little ethnic variation. Therefore, the proposed DL method, when tested on the external validation set acquired from a different distribution in terms of instrument settings and patient characteristics, such as age and ethnicity, could not achieve accuracy as high as when tested with the internal validation set. Thus, to increase the practical feasibility of the proposed method, we plan to develop a method that increases the robustness of the model when testing with an external dataset.

Conclusions

In conclusion, we developed a DL-based automatic classification algorithm that increases the performance of state-of-the-art DL algorithms and outperforms other DL algorithms in multiclass, binary, and early stage cancer classifications. Moreover, we demonstrated that our proposed method could potentially increase the robustness of the model when trained with a small dataset. Furthermore, the proposed PS self-supervised learning enhances the ability of the model to classify PC from outside image sources or different scanners.
Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/cancers15133392/s1, Method S1: Current solutions in limited annotated data scenarios (including self-supervised learning); Method S2: Pretext task training dataset generation; Method S3: Deep learning model implementation/training/validation; Table S1: Demographic and clinical characteristics; Table S2: The network configuration for the DL model; Figure S1: Example of the segmentation pretext task input images (pancreatic cancer computed tomography [CT] scans); Figure S2: Comparison of representative computed tomography images with heat map overlay for tumor regions and pancreatic regions of deep learning models with and without pseudo-lesion segmentation of early stage pancreatic cancer. Reference [51] is found in the Supplementary Materials.

Institutional Review Board Statement: All procedures were performed in compliance with the relevant laws and institutional guidelines. The study was approved by the Institutional Review Board of the National Cancer Center (NCC) (approval number 2020-0327) on 8 November 2022.

Informed Consent Statement: The requirement for written informed consent was waived owing to the retrospective nature of the study.

Data Availability Statement: The datasets generated and/or analyzed during the current study are available at the NIA-funded Medical Big Data Construction Project (https://aihub.or.kr/, accessed on 25 May 2023), the Medical Segmentation Decathlon (http://medicaldecathlon.com/), and TCIA (https://cancerimagingarchive.net/). All computer code used for modeling and data analysis is stored at https://github.com/Thanaporn09/Cancer_self_transformer.git. The data generated in this study can be made accessible upon a reasonable request directed to the corresponding authors from each respective dataset repository.
A Discrete-Time Fractional-Order Flocking Control Algorithm of Multi-Agent Systems

In this paper, a discrete-time fractional flocking control algorithm of multi-agent systems is put forward to address the slow convergence issue of multi-agent systems. Firstly, by introducing Grünwald-Letnikov (G-L) fractional derivatives, the algorithm allows agents to utilize historical information when updating their states. Secondly, based on the Lyapunov stability theory, the convergence of the algorithm is proven. Finally, simulations are conducted to verify the effectiveness of the proposed algorithm. Comparisons are made between the proposed algorithm and other methods. The results show that the proposed algorithm can effectively improve the convergence speed of multi-agent systems.

Introduction

A multi-agent system (MAS) consists of multiple autonomous and perceptually-abled agents that interact, communicate, and compete with each other based on specific rules to solve complex problems and accomplish tasks collectively. MASs span various domains, including robotics, network science, sociology, and economics, and have been widely applied in practical scenarios such as consensus control [1], logistics management, and decision support [2]. Flocking control of multi-agent systems has become a research hotspot in recent years. It refers to emergent collective behaviors in a swarm system where agents interact locally and achieve stability, order, and other specific goals [3]. Flocking control originates from the study of natural biological swarms which exhibit intelligent characteristics, such as fish schools, bird flocks, and insect colonies [4][5][6]. In 1987, Reynolds [7] introduced the Boids model, a discrete-time and space-based multi-agent system used to describe the behavioral interaction in bird-like animal swarms, such as seagulls or flocks of pigeons. The Boids model proposes three basic principles for agent behavior: separation, alignment, and cohesion, which enable collective synergy. On the basis of the Boids model, Vicsek et al. [8] focused only on the alignment principle and proposed the well-known Vicsek model. In this model, each particle adjusts its movement direction by perceiving and responding to the positions and orientations of surrounding particles. Based on these models, flocking control in multi-agent systems has been extensively studied. Olfati et al. formulated the theoretical framework for swarm control in multi-agent systems based on Reynolds' three principles, providing a theoretical analysis of the flocking phenomenon from the perspective of nonlinear control theory [9]. In 2019, Jia et al.
[10] introduced a hierarchical mechanism to the classical Vicsek model, proposing a hierarchical flocking model to explain the collaborative behaviors of specific agents. It is worth noting that the existing research on swarm control in multi-agent systems mainly focuses on integer-order dynamics, such as first-order dynamics [11,12] and second-order dynamics [13,14], while studies based on fractional-order dynamics are relatively scarce. Fractional calculus has been widely applied in the natural sciences and engineering, for example in linear and nonlinear dynamics, intelligent algorithm optimization, and characterizing the internal structure of complex functions. Fractional calculus is a good approach for modeling complex physical systems [15,16] and improving system convergence rates [17,18]. Due to the "memory" property of fractional calculus, many complex phenomena in nature that integer-order dynamics cannot explain can be naturally described by the cooperative behaviors of agents with fractional-order dynamics, such as the foraging of microorganisms and the collective movement of bacteria [19]. Therefore, it is meaningful to investigate flocking control in multi-agent systems based on fractional-order dynamics. In addition, discrete-time systems are a natural setting for this investigation: continuous-time systems must be discretized before they can be implemented, because exact closed-form solutions are generally unavailable.
Inspired by the above results, this paper discusses the flocking control of discrete-time multi-agent systems with fractional dynamics. The main contributions are summarized as follows.
1. A discrete-time multi-agent flocking control algorithm was derived based on Grünwald-Letnikov (G-L) fractional derivatives. Compared with existing flocking methods in which only integer-order dynamics are considered, our algorithm allows agents to use historical information, which means that the current states of the agents depend on both recent and historical values. Thus, our method conforms more closely with the reality that individuals in nature always exhibit a time-dependent memory effect;
2. Compared with existing research [18] in which only the leaderless condition is taken into account, this paper investigates the fractional-order flocking control of multi-agent systems under the leader-following strategy. Based on the Lyapunov stability theory, the convergence of this algorithm is proven. Experimental results demonstrate that the proposed algorithm achieves consensus among agents and effectively improves the convergence rate.
This paper is organized as follows. In Section 2, some notations and basic definitions are given. In Section 3, the model formulation for the discrete-time flocking control algorithm of multi-agent systems is proposed, and its convergence is proven. In Section 4, simulations are given to verify our results. Finally, some conclusions are presented in Section 5.
Preliminaries
In this section, basic graph theory and the definition of the G-L fractional derivative are given.
Graph Theory
Let x_i(t), v_i(t), and u_i(t) ∈ R^2 be the position vector, velocity vector, and control input of agent i at time t, respectively. The topology of the agents is described by an undirected graph G = (V, E), where V = {1, 2, ..., N} denotes the N agents and E = {(i, j) ∈ V × V, j ∈ N_i(t)} indicates that there is a communication link between agent i and agent j. N_i(t) is the set of neighbors of agent i at time t. Let r denote the communication radius; then N_i(t) = {j ∈ V : ||x_j(t) − x_i(t)|| < r, j ≠ i}. In order to avoid collisions among agents, the agents need to keep a distance between the safety distance r_s and the communication radius r; therefore, the expected distance d between every two neighboring agents satisfies r_s < d < r, and the configuration in which ||x_j(t) − x_i(t)|| = d for all j ∈ N_i(t) (Equation (1)) is called the α-lattices system [9]. However, due to the interaction between individuals, the system cannot reach this ideal configuration and eventually evolves into the α-lattices-like system with error δ, which is shown in Figure 1.
Fractional Derivative
Fractional calculus, as a branch of mathematics, has evolved over several centuries, resulting in varying definitions. In this paper, we employ the Grünwald-Letnikov fractional derivative, which is defined as follows [20]:
D^α(x(t)) = lim_{h→0} (1/h^α) Σ_{k≥0} (−1)^k [Γ(α+1)/(Γ(k+1)Γ(α−k+1))] x(t − kh),   (2)
where h is a positive real number and k ≥ 0 is an integer. α ∈ (0, 1] is the fractional order, and Γ(·) is the Gamma function with Γ(x) = ∫_0^{+∞} t^(x−1) e^(−t) dt. D^α(x(t)) is called the G-L fractional derivative of x(t) with order α. In a discrete-time implementation, Equation (2) is given by
D^α(x(t)) ≈ (1/T^α) Σ_{j=0}^{m} (−1)^j [Γ(α+1)/(Γ(j+1)Γ(α−j+1))] x(t − jT),   (3)
where t = eT, e is a positive integer, T is the sampling period, and m is the truncation order.
Remark 1. Equations (2) and (3) demonstrate a vital characteristic of fractional differentiation: it comprises an infinite series of terms, unlike integer differentiation, which consists of a finite series. Consequently, integer differentiation is referred to as a local operator, as it relies solely on the function's value at a single point in time and its finite derivative. In contrast, fractional differentiation incorporates historical information throughout the evolutionary process, metaphorically representing the "memory" of all past events. It can be recognized as a non-local operator with memory effects. This particular property is crucial in accurately depicting the dynamic behavior observed in numerous natural, physical, and engineering systems.
Remark 2. t = eT, where e is a positive integer, is applied in all the following equations. Thus, all times t in the rest of this paper are in the discrete-time domain.
The Proposed Flocking Control Method
In this section, the proposed flocking control of multi-agent systems based on the G-L fractional derivative is given. Furthermore, the convergence of the algorithm is analyzed by using the Lyapunov stability theory.
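As a minimal illustration of the truncated G-L difference in Equation (3), the Python sketch below computes the signed binomial weights with a simple recurrence and applies them to a short history of samples. The function names and the newest-first history layout are my own conventions, not the paper's.

```python
import numpy as np


def gl_weights(alpha: float, m: int) -> np.ndarray:
    """Signed Grunwald-Letnikov weights c_j = (-1)^j * C(alpha, j), j = 0..m."""
    w = np.empty(m + 1)
    w[0] = 1.0
    for j in range(1, m + 1):
        # Recurrence c_j = c_{j-1} * (j - 1 - alpha) / j avoids Gamma evaluations.
        w[j] = w[j - 1] * (j - 1 - alpha) / j
    return w


def gl_derivative(history: np.ndarray, alpha: float, T: float) -> np.ndarray:
    """Truncated G-L derivative at the newest sample.

    history has shape (m + 1, d) and stores x(t), x(t - T), ..., x(t - m*T),
    newest first; each row may be a 2-D position or velocity vector.
    """
    m = history.shape[0] - 1
    w = gl_weights(alpha, m)
    return w @ history / T ** alpha


# Example: the fractional derivative of a constant signal is nonzero for alpha < 1,
# which is one way the "memory" of past values shows up.
hist = np.ones((4, 2))
print(gl_derivative(hist, alpha=0.8, T=0.02))
```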
Dynamics Model of Multi-Agent Systems
We consider N agents in two-dimensional Euclidean space. In the general flocking control models [21][22][23], agents always update their states by integer-order dynamics, that is,
x_i(t + T) = x_i(t) + v_i(t)T,  v_i(t + T) = v_i(t) + u_i(t)T,   (4)
where T is the time step. Now, let us examine the updating of the agents' speed. v_i(t + T) = v_i(t) + u_i(t)T can be written as
(v_i(t + T) − v_i(t)) / T = u_i(t).   (5)
Equation (5) is equivalent to D^1(v_i(t + T)) = u_i(t) (6). The above equation suggests that in traditional integer-order flocking control, only current information is used to update the agents' states. To apply historical information to boost the performance of swarm flocking, we replace the plain derivative with the G-L fractional-order derivative with fractional order α ∈ (0, 1], as defined in Section 2.2, and obtain the following relation:
D^α(v_i(t + T)) = u_i(t),   (7)
where D^1(x(t)) denotes the first-order derivative of x(t). Without loss of generality, the first four terms of the fractional derivative, as given by Equation (3), are considered; then we obtain
v_i(t + T) = T^α u_i(t) + α v_i(t) + (α(1 − α)/2) v_i(t − T) + (α(1 − α)(2 − α)/6) v_i(t − 2T).   (8)
Equation (8) reveals that the speed of the agents, based on fractional-order dynamics at time t, is a comprehensive result of historical information from the previous four moments. This contrasts with integer-order flocking control, which relies solely on the information of the current state. Importantly, when α = 1, the fractional-order flocking control algorithm regresses to the traditional integer-order flocking control.
In order to make all agents move in the desired direction, a virtual leader is added, whose dynamics are designed as
x_0(t + T) = x_0(t) + v_0(t)T,  v_0(t + T) = v_0(t),   (9)
where x_0(t) and v_0(t) represent the position vector and speed vector of the virtual leader at time t.
Control Protocol of the Agents
Now, we consider the control protocol of the agents. Assume that the control input of agent i at time t is
u_i(t) = f_i^g(t) + f_i^v(t) + f_i^γ(t),   (10)
where f_i^g(t) is the relative distance control term, which is used to achieve aggregation and separation. Its definition relies on the σ-norm ||z||_σ = (1/ε)(√(1 + ε||z||^2) − 1) with parameter ε > 0 and on an action function whose parameters a, b, and c satisfy 0 < a < b and c = |a − b| / √(4ab). The gradient of the σ-norm is ∇||z||_σ = z / (1 + ε||z||_σ). The term f_i^v(t) is used to control the agents to achieve speed alignment and is expressed as f_i^v(t) = Σ_{j ∈ N_i(t)} a_ij(t)(v_j(t) − v_i(t)). The term f_i^γ(t) is used for the agents to follow the virtual leader and is defined as f_i^γ(t) = −c_1(x_i(t) − x_0(t)) − c_2(v_i(t) − v_0(t)), where c_1 and c_2 are positive real numbers representing the feedback parameters.
Assumption 1. The initial states of the multi-agent systems are connected, which means that the undirected graph G(0) is connected.
Assumption 2. There are no collisions between agents in the initial state of the system.
Assumption 3. All the agents can simultaneously receive instructions from the virtual leader.
Theorem 1. Suppose that Assumptions 1-3 hold. For the multi-agent system described by Equations (8) and (9) with the control input in (10), if the sampling period is small enough such that T → 0, then we can obtain: 1. The system will be asymptotically stable, and the agents' positions will eventually tend to lattices; 2. The speed of all agents will tend towards that of the virtual leader; 3. There will be no collisions among the agents.
Proof of Theorem 1. Here, we use the same technique as described in the stability analysis of flocking in [9,24] to prove the stability of our proposed model.
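To make the memory terms concrete, here is a small Python sketch of the speed update. It assumes Equation (8) arises from solving the truncated relation D^α(v_i(t + T)) = u_i(t) for v_i(t + T), as reconstructed above; if the paper arranges the terms differently, the weights would need to be adapted. Function names and example values are illustrative.

```python
import numpy as np


def gl_weights(alpha: float, m: int) -> np.ndarray:
    """Signed G-L weights (-1)^j * C(alpha, j) for j = 0..m."""
    w = np.empty(m + 1)
    w[0] = 1.0
    for j in range(1, m + 1):
        w[j] = w[j - 1] * (j - 1 - alpha) / j
    return w


def fractional_speed_update(v_hist, u, alpha=0.8, T=0.02, m=3):
    """One step of the truncated fractional-order speed update.

    v_hist stores v_i(t), v_i(t - T), v_i(t - 2T), ... (newest first).
    Solving the truncated relation D^alpha v_i(t + T) = u_i(t) gives
        v_i(t + T) = T**alpha * u_i(t) - sum_{j=1}^{m} w_j * v_i(t + T - j*T).
    With alpha = 1 the memory terms vanish and the update reduces to
    v_i(t + T) = v_i(t) + T * u_i(t), i.e., the integer-order case.
    """
    w = gl_weights(alpha, m)
    hist = np.asarray(v_hist)[:m]                       # at most m past velocities
    memory = np.tensordot(w[1:1 + hist.shape[0]], hist, axes=1)
    return T ** alpha * np.asarray(u) - memory


# Example: one agent with a short velocity history and a small control input.
v_history = [np.array([0.4, 0.1]), np.array([0.3, 0.2]), np.array([0.2, 0.2])]
print(fractional_speed_update(v_history, u=np.array([0.0, 0.1])))
```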
The total energy of a multi-agent system is composed of the total potential energy and the relative kinetic energy. According to Equations (10)-(14), the total energy of the system at time t is denoted Q(t), where U_i(t) is the total potential energy at time t [24] and x_ij(t) = x_j(t) − x_i(t). For simplicity, let ṽ_i(t) = v_i(t) − v_0(t) denote the speed error of agent i relative to the virtual leader; the energy function can then be written in terms of U_i(t) and ṽ_i(t). From the definition of u_i(t), invoking Equations (10)-(14), the energy difference Q(t + T) − Q(t) can be expressed through the matrix (L(t) + c_2 I_N) ⊗ I_2 acting on ṽ(t) = [ṽ_1(t), ṽ_2(t), ..., ṽ_N(t)]^T, where L(t) is the Laplacian matrix of the system at time t and ⊗ denotes the Kronecker product. Since L(t) is a positive semi-definite matrix, L(t) + c_2 I_N is a positive definite matrix. Then we obtain Q(t + T) − Q(t) < 0, which means Q(t) is decreasing. Therefore, Q(t) < Q(0) for any t > 0, and the system will be asymptotically stable. Assume that agents i and j collide during t_c ∈ [t_m, t_n]. Then U_i(t) and Q_i(t) will increase, which contradicts the fact that Q(t) is decreasing. Therefore, the agents will not collide. Using the same method, the agents will not collide in each period [t_k, t_{k+1}]. This completes the proof.
Flow Chart of Our Proposed Algorithm
Figure 2 illustrates the procedure of the discrete-time fractional-order flocking control algorithm of multi-agent systems. First, the agents' speeds and positions are initialized. Next, the interaction force on each agent is calculated using Equations (10)-(14). After this, the agent updates its fractional speed based on the interaction force and Equation (8). Finally, the agent's position is updated with the newly computed speed.
Simulation Results
In this section, some numerical simulations based on MATLAB are provided to illustrate the effectiveness of our proposed fractional-order flocking control algorithm.
Tests of the Fractional-Order Flocking Control Algorithm
The current simulation investigates the speed and position changes of agents based on G-L fractional-order dynamics. The multi-agent system is composed of 100 agents, and the initial locations and direction angles are randomly generated in a [0, 100] × [0, 100] area and in [−π, π], respectively, as shown in Figure 3a. The red arrows represent the directions of the agents. The communication radius is r = 6, the corresponding expected distance between every two agents is d = 5, the feedback parameters are c_1 = 0.2 and c_2 = 0.5, and the sampling period is T = 0.02. The initial position of the virtual leader is (25, 25), and its speed is constant at (0.5, 0.5). The fractional order is α = 0.8.
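For readers who want to reproduce the procedure in Figure 2, the following Python sketch shows one possible simulation loop under simplifying assumptions: the control input here uses only neighbor speed alignment plus the leader feedback term (the gradient-based distance term of Equations (10)-(14) is omitted), and the speed update assumes the truncated G-L relation discussed above. Parameter values mirror those listed in this section; everything else (names, random seeding) is illustrative, not the authors' code.

```python
import numpy as np

# Parameters mirroring the simulation section (illustrative reimplementation).
N, r = 100, 6.0
c1, c2, T, alpha, m = 0.2, 0.5, 0.02, 0.8, 3
rng = np.random.default_rng(0)

x = rng.uniform(0, 100, size=(N, 2))                        # initial positions
theta = rng.uniform(-np.pi, np.pi, size=N)
v = 0.5 * np.column_stack([np.cos(theta), np.sin(theta)])   # initial velocities
x0, v0 = np.array([25.0, 25.0]), np.array([0.5, 0.5])       # virtual leader

v_hist = [v.copy() for _ in range(m)]                        # newest first


def gl_weights(alpha, m):
    w = np.empty(m + 1)
    w[0] = 1.0
    for j in range(1, m + 1):
        w[j] = w[j - 1] * (j - 1 - alpha) / j
    return w


w = gl_weights(alpha, m)

for step in range(300):
    # Simplified control input: neighbor speed alignment plus leader feedback.
    # (A stand-in for Equations (10)-(14); the distance/gradient term is omitted.)
    u = np.zeros_like(v)
    for i in range(N):
        nbrs = np.linalg.norm(x - x[i], axis=1) < r
        nbrs[i] = False
        if nbrs.any():
            u[i] += (v[nbrs] - v[i]).mean(axis=0)
        u[i] += -c1 * (x[i] - x0) - c2 * (v[i] - v0)

    # Truncated G-L speed update, then position update.
    memory = sum(w[j + 1] * v_hist[j] for j in range(m))
    v_new = T ** alpha * u - memory
    x = x + v_new * T
    x0 = x0 + v0 * T
    v_hist = [v_new] + v_hist[:-1]
    v = v_new
```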
Figure 3 illustrates the status of the multi-agent system based on our method at different time intervals. Figure 3a shows the initial state of the system, which is completely disordered. Figure 3b,c present the agents' states after 150 and 300 iterations, respectively. Observably, with increasing iterations, the system transitions from disorder to order, and the speed of the agents gradually approaches that of the virtual leader. The system's final state is shown in Figure 3d, where all agents converge to a uniform state, exhibiting a lattice-like structure (verifying the first condition in Theorem 1). The motion trajectories of the agents are shown in Figure 4. It can be seen that, at first, the whole system is disordered. With an increase in iterations, the degree of order of the system is improved through interactions between agents; in the end, the system reaches convergence, and all the agents move towards the upper-right direction. We define the speed error between the agents and the virtual leader as e_i^x(t) = v_i^x(t) − v_0^x(t) and e_i^y(t) = v_i^y(t) − v_0^y(t), where e_i^x(t) and e_i^y(t) are the speed errors in the direction of the X-axis and Y-axis at time t, respectively, and v_i^x(t), v_0^x(t) and v_i^y(t), v_0^y(t) are the speed components of agent i and of the virtual leader in the direction of the X-axis and Y-axis at time t, respectively. Then, the global error between the agents and the virtual leader can be expressed as e_i(t) = √((e_i^x(t))^2 + (e_i^y(t))^2). The speed error between the agents and the virtual leader based on fractional-order flocking control is shown in Figure 5. Figure 5a is the speed error in the direction of the X-axis, and Figure 5b is the speed error in the direction of the Y-axis. Figure 5c shows the global error between the agents and the virtual leader. We can see that at the initial time, the speed error between the agents and the virtual leader is significant because the initial speeds of the agents are random. Under the control of the fractional-order flocking algorithm, the agents' speeds gradually approach the leader's speed, and finally the speed error tends to 0. At this time, the speeds of the agents and the virtual leader are consistent (verifying the second condition in Theorem 1). Above all, the multi-agent system based on the fractional-order flocking control algorithm proposed in this paper can achieve flocking effectively. The speed of all agents gradually tends to that of the virtual leader over time. In addition, the distance between agents is always greater than 0, and thus Theorem 1 is verified.
Performance Test
In order to further illustrate the advantages of the proposed fractional-order flocking control method, comparisons were made between this method and two other commonly used flocking control algorithms: flocking control based on integer-order dynamics [9] and betweenness centrality with the influence degree (BCID) [25]. The selected system consists of 200 agents in a [0, 50] × [0, 50] area. To avoid randomness and to obtain general results, 50 simulations are conducted, each with a maximum of 300 iterations. Other parameters were the same as those in Section 4.1. The states of the agents can be quantitatively expressed by the velocity direction order parameter Φ, defined as Φ(t) = (1/N) ‖ Σ_{i=1}^{N} v_i(t)/‖v_i(t)‖ ‖. Φ describes the degree of order of the agents' movements. When Φ = 0, all agents in the system move in random directions and the system is completely disordered. When Φ = 1, all agents move in the same direction, and the system is fully ordered. It was found that the system shows apparent order when Φ reaches 0.9 [26].
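The displayed formula for Φ did not survive extraction here, so the sketch below uses the standard Vicsek-style definition, which matches the description that Φ = 0 corresponds to random headings and Φ = 1 to full alignment; the speed-error helper follows the e_i(t) definition given above. Function names are illustrative.

```python
import numpy as np


def velocity_order_parameter(velocities: np.ndarray) -> float:
    """Vicsek-style direction order parameter Phi in [0, 1].

    velocities: array of shape (N, 2). Phi = 1 when all agents move in the
    same direction and Phi is close to 0 for random headings.
    """
    speeds = np.linalg.norm(velocities, axis=1, keepdims=True)
    headings = velocities / np.clip(speeds, 1e-12, None)   # unit direction vectors
    return float(np.linalg.norm(headings.mean(axis=0)))


def speed_errors(velocities: np.ndarray, v_leader: np.ndarray) -> np.ndarray:
    """Per-agent global speed error e_i(t) = ||v_i(t) - v_0(t)||."""
    return np.linalg.norm(velocities - v_leader, axis=1)


# Example: nearly aligned agents give Phi close to 1 and small errors.
v = np.tile([0.5, 0.5], (200, 1)) + 0.01 * np.random.default_rng(1).normal(size=(200, 2))
print(velocity_order_parameter(v), speed_errors(v, np.array([0.5, 0.5])).mean())
```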
The results are shown in Figure 6. The initial state of the system is strongly disordered; therefore, the degree of order for all three methods is poor in the beginning. With the increase in iterations, the speeds of the agents tend towards the virtual leader's speed, and consequently the system gradually becomes ordered. Note that when there are fewer than 120 iterations, the convergence rate of our proposed fractional-order flocking control algorithm is no better than that of the other two algorithms. This is because the agents with fractional-order dynamics utilize historical information to update their states; however, the order of the system at each moment in this period (fewer than 120 iterations) is relatively poor, and the use of such poor information reduces the order of the system. However, as the order of the system gradually increases, "high-quality" historical information (whose degree of order is relatively high) accelerates the convergence rate of the system. Therefore, when the number of iterations reaches 120, the convergence rate of the multi-agent system based on the fractional-order flocking control algorithm is significantly higher than that of the integer-order method and the BCID method, and the consistency of the system is eventually realized much faster.
Conclusions
A discrete-time flocking control algorithm for multi-agent systems was proposed by introducing a G-L fractional derivative into the speed updating process of agents, which enables agents to utilize historical information. The convergence of this algorithm is proven. The simulation results demonstrate that multi-agent systems can achieve flocking based on this algorithm. Furthermore, our proposed algorithm can effectively improve the convergence rate of multi-agent systems. In future work, we will focus on applying this algorithm to multi-vehicle and multi-robot systems.
Figure 2. Flow chart of our proposed flocking algorithm. Figure 3. The status of the agents. Figure 4. The trajectories of the agents. Figure 5. The speed error between the agents and the virtual leader. Figure 6. The convergence rate comparison of the algorithms.
4,368
2024-01-27T00:00:00.000
[ "Engineering", "Computer Science" ]
Characterizing and Quenching Autofluorescence in Fixed Mouse Adrenal Cortex Tissue Tissue autofluorescence of fixed tissue sections is a major concern of fluorescence microscopy. The adrenal cortex emits intense intrinsic fluorescence that interferes with signals from fluorescent labels, resulting in poor-quality images and complicating data analysis. We used confocal scanning laser microscopy imaging and lambda scanning to characterize the mouse adrenal cortex autofluorescence. We evaluated the efficacy of tissue treatment methods in reducing the intensity of the observed autofluorescence, such as trypan blue, copper sulfate, ammonia/ethanol, Sudan Black B, TrueVIEWTM Autofluorescence Quenching Kit, MaxBlockTM Autofluorescence Reducing Reagent Kit, and TrueBlackTM Lipofuscin Autofluorescence Quencher. Quantitative analysis demonstrated autofluorescence reduction by 12–95%, depending on the tissue treatment method and excitation wavelength. TrueBlackTM Lipofuscin Autofluorescence Quencher and MaxBlockTM Autofluorescence Reducing Reagent Kit were the most effective treatments, reducing the autofluorescence intensity by 89–93% and 90–95%, respectively. The treatment with TrueBlackTM Lipofuscin Autofluorescence Quencher preserved the specific fluorescence signals and tissue integrity, allowing reliable detection of fluorescent labels in the adrenal cortex tissue. This study demonstrates a feasible, easy-to-perform, and cost-effective method to quench tissue autofluorescence and improve the signal-to-noise ratio in adrenal tissue sections for fluorescence microscopy. Introduction Fluorescence microscopy is an imaging technique that allows the excitation of fluorescent molecules and the detection of the emitted signal over a wide range of wavelengths [1]. Fluorescence microscopy has several advantages over other types of microscopy, including the ability to selectively visualize one or more target molecules in the studied material with high sensitivity and signal-to-noise ratio [2,3]. Advances in fluorescent microscopy have transformed our understanding of biological processes. However, despite the development of numerous new methods for tissue processing and fluorophore imaging, the signal-tonoise ratio remains a recurrent issue in clinical and experimental investigations that use fluorescence microscopy for diagnostic and research applications. One of the primary sources of noise in fluorescence microscopy is autofluorescence (AF). Autofluorescence is the endogenous and exogenous fluorescence that occurs in cells and tissues across a broad range of excitation and emission wavelengths and is unrelated to the specific signal obtained during a fluorescence-based assay [4][5][6]. Exogenous AF results from chemically modified molecules due to tissue processing and fixation procedures [7]. The endogenous AF originates from naturally fluorescent intracellular molecules, such as flavins, flavoproteins, and lipofuscin-like substances [5,8]. Moreover, red blood cells and extracellular tissue 2 of 17 components, mainly collagen and elastin, are common causes of AF [5,6,9,10]. AF can obscure or interfere with the signal from labeled cells and tissue sections [10,11], complicating sample examination and interpretation of results, particularly in quantitative studies. The number of naturally fluorescent molecules varies in different tissue types, making some tissues more autofluorescent than others [12,13]. Adrenal glands are endocrine organs characterized by high lipid content [14,15]. 
Cells of the adrenal cortex accumulate large amounts of lipid droplets, which serve as storage for cholesterol esters, the precursors of steroid hormones [16][17][18]. Some lipids exhibit AF and complicate the use of fluorescence microscopy in adrenal tissue [19,20]. In addition, the adrenal cortex of various species is rich in autofluorescent pigments. These cortical pigments are typically lipofuscin depositions [15,[21][22][23][24]. Lipofuscin, also known as the age pigment, is a yellow-brownish lipid pigment that accumulates in the lysosomes of the cells as they age. The most characteristic feature of lipofuscin is AF. Due to its broad excitation and emission spectra, the lipofuscin AF spectrum overlaps with those of commonly used fluorochromes [4,25,26]. Therefore, the large amounts of fluorescent molecules in adrenal tissue with broad excitation and emission wavelengths are problematic for fluorescence microscopy. Reducing adrenal tissue AF is necessary to distinguish specific labels from AF and improve the signal-to-noise ratio. Several strategies have been applied to diminish AF of fixed tissues, such as photobleaching, chemical treatments, dyes that stain specific tissue components, and combinations of these treatments [27][28][29]. However, AF reduction efficiency differs depending on tissue type and processing method, and, to date, no general formula for quenching AF in various tissue types is currently available [11,29]. A systematic study to analyze AF reduction methods in mouse adrenal cortex tissue has yet to be performed. In the current study, we characterized the AF in PFA-fixed mouse adrenal cortex tissue sections. We addressed the difficulties of using fluorescent microscopy caused by the observed AF. We compared the effect of several reported treatments for AF reduction on the AF profile of mouse adrenal cortex, such as trypan blue, copper sulfate, ammonia/ethanol, Sudan Black B, TrueVIEW TM Autofluorescence Quenching Kit, MaxBlock TM Autofluorescence Reducing Reagent Kit, and TrueBlack TM Lipofuscin Autofluorescence Quencher. We further evaluated the subsequent tissue immunofluorescence and enhanced green fluorescent protein (EGFP) detection using confocal laser scanning microscopy. Our results show that using TrueBlack TM Lipofuscin Autofluorescence Quencher is the best approach to quench adrenal tissue AF while allowing the detection of target fluorescent labels. Evaluation of Autofluorescence in Mouse Adrenal Tissue We analyzed the autofluorescence spectrum of the PFA-fixed mouse adrenal cortex tissue sections using the Olympus FluoView™ FV3000 confocal microscope in lambda scan mode. We collected the emission spectra of the adrenal cortex at 405 nm and 488 nm excitation wavelengths. The normalized emission intensity showed a broad AF emission at 405 nm and 488 nm excitations, with a central emission peak between 475-485 nm and 545-555 nm, respectively (Figure 1a). The intensity of AF was higher at 405 nm excitation compared to 488 nm (Figure 1c,d). Confocal laser scanning microscope (CLSM) images at 488 nm excitation wavelength showed a bright green AF across the adrenal cortex. The observed AF was brighter in the zona fasciculata of the adrenal cortex than in the zona reticularis and adrenal capsule. The AF originated from the cortical cells rather than extracellular tissue components. The intracellular autofluorescent molecules were widespread throughout the cytoplasm and less in the nuclei of the cortical cells ( Figure 2a). 
Figure 1. Autofluorescence (AF) emission of untreated PFA-fixed mouse adrenal cortex tissue sections and sections treated with AF reduction methods. Emission line graph of normalized AF emission from untreated adrenal cortex at 405 nm and 488 nm excitation wavelengths (a). The interference of adrenal cortex AF spectrum with the spectra of commonly used fluorescent proteins and dyes (b). AF emission intensity of untreated adrenal cortex and tissue sections treated with AF reduction methods at 405 nm excitation (c) and 488 nm excitation (d). The means of maximum AF intensity from untreated and treated adrenal cortex sections at 405 nm excitation (e) and 488 nm excitation (f). ANOVA results are indicated by letters in column superscript. Treatments sharing one letter are not significantly different. Treatments not sharing a letter are significantly different (p < 0.05). Bars denote the SD. The AF emission of the adrenal cortex was acquired in the λ-spectral mode using the confocal microscope Olympus FluoView™ FV3000. Spectra of fluorescent proteins and dyes in (b) were obtained from the database of fluorescent dyes (www.fluorophores.tugraz.at, accessed on 26 November 2022).
Figure 2. Reducing green wavelength autofluorescence in PFA-fixed mouse adrenal cortex tissue sections using various AF reduction methods. The effect of different tissue treatments on the adrenal cortex green wavelength autofluorescence (a), and tissue staining and integrity (b). Images were acquired with Olympus FluoView™ FV3000 using 488 nm excitation and a 500-600 nm detection range. Transmitted images were taken simultaneously at 488 nm excitation.
To assess if the observed AF might complicate the detection of specific fluorescent labels in the adrenal cortex, we compared the AF spectra at 405 nm and 488 nm excitation with those of commonly used fluorophores obtained from the database of fluorescent dyes (www.fluorophores.tugraz.at, accessed on 26 November 2022). The wide spectrum of AF at 405 nm excitation interferes with the emission of 4′,6-diamidino-2-phenylindole (DAPI), enhanced blue fluorescent protein (EBFP), and enhanced cyan fluorescent protein (ECFP), usually excited at 405 nm wavelength (Figure 1b). The AF spectrum at 488 nm excitation was shown to interfere with the emission of enhanced green fluorescent protein (EGFP), Alexa Fluor 430, and Alexa Fluor 514, usually excited at 488 nm wavelength (Figure 1b). Therefore, the broad adrenal cortex AF interferes with detecting and quantifying several fluorescent labels in the blue and green channels of CLSM.
The Effect of Tissue Treatments on Adrenal Cortex Autofluorescence
In order to reduce the AF of the mouse adrenal cortex tissue sections, we tested the efficacy of seven treatments previously described to decrease AF in multiple cell and tissue types (Table 1). The lambda scan showed that all the applied treatments altered the AF profile at both 405 nm and 488 nm excitations. Trypan blue (TRB) treatment decreased the maximum intensity of AF at 405 nm excitation by 12% ± 2% (SE) (Figure 1c,e). At 488 nm excitation, TRB did not reduce AF intensity but shifted the AF emission to longer wavelengths (Figures 1d,f and 2a). Copper(II) sulfate (CuSO4), ammonia/ethanol (NH3), and TrueVIEW™ Autofluorescence Quenching Kit (TrueVIEW) reduced the AF maximum intensity by 68% ± 0.8% (SE), 70% ± 2% (SE), and 70% ± 3% (SE), and by 52% ± 1% (SE), 65% ± 2% (SE), and 62% ± 2% (SE) at 405 nm and 488 nm excitations, respectively. These treatments did not shift the AF emission, and we still observed a central peak of AF emission similar to untreated tissue sections at 405 nm and 488 nm excitations (Figure 1c-f). Figure 2a shows that CuSO4, NH3, and TrueVIEW reduced the overall background AF. NH3 was the most effective among these treatments in reducing the green wavelength AF. However, it did not eliminate the adrenal cortex AF, and autofluorescent granules were still observed after treatment with NH3. Sudan Black B (SBB), TrueBlack™ Lipofuscin Autofluorescence Quencher (TrueBlack), and MaxBlock™ Autofluorescence Reducing Reagent (MaxBlock) further reduced the AF maximum intensity by 88% ± 0.3% (SE), 93% ± 0.1% (SE), and 95% ± 0.03% (SE), and by 82% ± 0.7% (SE), 89% ± 0.04% (SE), and 90% ± 0.07% (SE) at 405 nm and 488 nm excitations, respectively (Figure 1e,f). Emission line graphs showed no central peak of AF emission after treatment with TrueBlack or MaxBlock (Figure 1c,d). SBB reduced AF from the tissue regions showing intense dark staining with SBB. However, AF was still observed in the less stained tissue regions (Figure 2).
In contrast, both TrueBlack and MaxBlock reduced the overall AF from the entire adrenal cortex and produced a more homogeneous background. Both treatments mainly stained the cytoplasm of the cortical cells and reduced the cytoplasmic AF to levels lower than the nucleic AF. This staining pattern resulted in slightly brighter nuclei than cytoplasm in the adrenal cortex treated with TrueBlack and MaxBlock (Figure 2a).
Reduction in Autofluorescence from Pigment Accumulations in the Mouse Adrenal Tissue
The autofluorescent lipofuscin accumulates in the adrenal tissue as mice age. The presence of pigment-laden cells containing large amounts of autofluorescent lipoid pigments further complicates the elimination of AF from aged mice's adrenal tissue. We examined the AF of adrenal tissue sections from aged mice. In addition to the high intracellular AF, we observed several irregularly shaped granules with high-density fluorescence across all observed channels. These granules varied in size and were scattered in the adrenal cortex and the corticomedullary junction (Figure 3). We treated the adrenal tissue sections from aged mice with TrueBlack or MaxBlock, which showed the highest efficacy in decreasing AF of the adrenal cortex from younger mice (Figures 1c-f and 2a). CLSM images showed that both treatments quenched the fluorescence from the autofluorescent accumulations. The reduction of this intense AF did not require increased incubation time or working concentration. We noticed a more intense dark staining of these granules and the cortical cells compared to the cells from younger mice (Figure 2b). TrueBlack and MaxBlock specifically stained the cortical cells' cytoplasm and, to a lesser extent, the nuclei and adrenal medullary cells (Figure 3), similar to findings from younger mice's adrenal tissue sections (Figure 2b).
The Effect of TrueBlack and MaxBlock Treatments on the Detection of Fluorescent Labels in the Mouse Adrenal Cortex
AF is problematic for fluorescence-based assays in tissue sections.
Intrinsic fluorescence interferes with or even masks the specific signals from fluorescent labels. Reducing tissue AF without affecting fluorescent tags is necessary to obtain valid data. To evaluate the applicability of AF treatments in immunofluorescence (IF), we treated adrenal tissue sections with TrueBlack or MaxBlock before (pre-treatment) or after applying the antibodies (post-treatment) for indirect IF. We immunostained the 21-hydroxylase typically expressed in the mouse adrenal cortex and used secondary antibodies conjugated to Alexa Fluor 594 to visualize the 21-hydroxylase staining. We detected the fluorescence signals in the 570-670 nm range at 561 nm excitation and used the 500-540 nm detection range at 488 nm excitation to evaluate the efficacy and stability of AF reduction treatments throughout the immunostaining procedure. CLSM images obtained using the same acquisition settings showed that Alexa Fluor 594 signals were detectable in adrenal tissue sections pre-treated with TrueBlack or MaxBlock (Figure 4a,e). In contrast, the post-treatment of immunostained sections with either TrueBlack or MaxBlock masked most of the fluorescence signals from the conjugated antibodies. Post-treatment with MaxBlock had the most negative effect on the IF in the adrenal cortex (Figure 4c,g). We stained tissue sections with DAPI after AF treatments. The treatment of adrenal tissue with TrueBlack or MaxBlock did not mask the fluorescent signals from DAPI in the adrenal cortex. However, DAPI fluorescence was slightly brighter in sections treated with TrueBlack than in those treated with MaxBlock (Figure 4a,c,e,g). These findings suggest that, in comparison to MaxBlock, TrueBlack treatment has a less adverse effect on the specific fluorescence in IF. Nevertheless, both treatments quenched the adrenal cortex AF at 488 nm excitation when applied before or after immunostaining. Lastly, we evaluated the effect of TrueBlack treatment on the native fluorescence of enhanced green fluorescent protein (EGFP). We applied TrueBlack to PFA-fixed frozen adrenal tissue sections from mice injected with recombinant adeno-associated virus vectors carrying the EGFP gene (rAAV-EGFP). The treatment of tissue sections with TrueBlack did not quench the fluorescence of EGFP. We detected EGFP native fluorescence in a number of cells stained with TrueBlack (Figure 5a,b). We enhanced the EGFP fluorescence by indirect immunostaining after TrueBlack treatment and applied secondary antibodies conjugated to Alexa Fluor 488. We detected the fluorescence of stained EGFP in various cells stained with TrueBlack. In most cells, the fluorescence intensity was higher in the cells' nuclei than in the cytoplasm (Figure 5c,d). Altogether, these results suggest that treatment with TrueBlack eliminates AF of the adrenal cortex while having a minimal effect on the fluorescence of fluorophore-conjugated antibodies and EGFP.
Discussion
In this study, we demonstrated that the mouse adrenal cortex emits intense AF in the commonly used channels in CLSM. Adrenal tissue AF was widespread in the cortical cells and had broad excitation and emission spectra. The observed AF spectrum interferes with the spectra of many fluorescent proteins and dyes commonly used in fluorescence microscopy (Figures 1a-d and 2a), complicating the detection of fluorescent labels and possibly leading to false positive results. The adrenal cortex AF was brighter at 405 nm and 488 nm excitations compared to longer excitation wavelengths, as shown in Figure 3. Therefore, we used 405 nm and 488 nm excitations to evaluate the efficiency of different tissue treatments against adrenal cortex AF. Several previously described AF-reducing agents decreased the AF of the adrenal cortex. MaxBlock and TrueBlack were the most effective treatments for quenching the intracellular AF at different excitation wavelengths.
Other treatment methods, including SBB, NH 3 , CuSO 4 , and TrueVIEW, reduced the AF to a certain extent but did not eliminate the AF of the adrenal cortex (Figures 1e,f and 2a). Moreover, TrueBlack and MaxBlock treatments quenched AF of pigment accumulations, exhibiting bright AF across multiple CLSM channels (Figure 3). The pre-treatment of adrenal tissue sections with TrueBlack for IF masked the intrinsic AF but not the specific fluorescent signals from the immunolabels (Figure 4a). Similarly, treatment with TrueBlack did not interfere with detecting EGFP in adrenal tissue sections (Figure 5a,c). The adrenal cortex AF originated mainly from the cortical cells' cytoplasm ( Figure 2a). Cortical cells are known to accumulate lipid droplets containing cholesterol esters for steroid biosynthesis [14,[16][17][18]. Some lipid compounds exhibit AF depending on their molecular properties [30]. In addition, the adrenal cortices are rich with intracytoplasmic lipofuscin, which builds up in the lysosomes of cells as they age [15,24]. AF is a distinctive feature and is regarded as a reliable marker of lipofuscin [31][32][33]. Lipofuscin fluorescence emission spectra show considerable heterogeneity due to differences in chemical composition [34]. Hence, the lipofuscin and high lipid content are possible causes of the broad intracellular AF observed in the mouse adrenal cortex. In addition to the endogenous AF sources, fixation with paraformaldehyde can contribute to the AF observed in the adrenal cortex. Crosslinking fixatives such as formaldehyde and glutaraldehyde are major sources of exogenous AF [6,10,27]. During tissue fixation, aldehydes react with the amine groups of proteins and amino acids to form fluorescent complexes known as Schiff bases [7,35]. The reduction in adrenal cortex AF with TrueBlack TM Lipofuscin Autofluorescence Quencher (TrueBlack) and SBB further suggests that lipofuscin and lipids are potential AF sources. TrueBlack is a commercial AF quenching reagent that reduces tissue sections AF from lipofuscin and less efficiently from other sources like red blood cells and extracellular components. TrueBlack has been used for AF reduction in a wide range of human and mouse tissue types, such as the brain [36][37][38][39], retina [40,41], heart [42,43], lung [44,45], and liver tissue [46,47]. SBB is a superlipophilic diazo dye used for staining a wide variety of lipids [48,49] and some proteins [50]. SBB shows a high affinity to lipofuscin in frozen and paraffin-embedded tissue sections [51,52]. Therefore, it has been used to reduce AF from lipids, lipofuscin, and lipofuscin-like substances in various tissue types [4,6,11,13,27,29,[53][54][55]. The proposed mechanism is that SBB masks the autofluorescent structures without chemically interacting with the components of these structures [4,29,53]. Similar to TrueBlack and SBB, MaxBlock TM Autofluorescence Reducing Kit (MaxBlock) primarily stained the cytoplasm of cortical cells (Figure 2b), quenching the AF of the adrenal cortex. MaxBlock is a commercialized AF treatment designed to reduce background AF on paraffin-embedded and frozen tissue sections. It has been used to diminish AF in several tissues, such as the liver [56], heart [57], lung [58,59], pancreas [60], and skin [61] tissues. Although SBB is a more cost-effective AF treatment than TrueBlack and MaxBlock, SBB preparation is laborious and requires longer incubation time to reduce AF of the adrenal cortex. 
TrueBlack and MaxBlock are ready-to-use reagents that showed more efficiency in reducing AF of the adrenal cortex, resulting in a more homogeneous nonfluorescent background (Figures 1c-f and 2a). Moreover, SBB introduces some AF in the red and far-red channels. SBB is incompatible with antifading agents that preserve the fluorescent labels for long-term storage and analysis [53] and may have adverse effects on the fluorescence of specific labels in tissue IF [4,27]. The commercial TrueVIEW Autofluorescence Quenching Kit (TrueVIEW) reduces tissue AF via treatment with an aqueous solution of nonfluorescent, hydrophilic molecules. These negatively charged molecules bind electrostatically and diminish AF from nonlipofuscin sources such as red blood cells, collagen, elastin, and aldehyde fixation [62]. TrueVIEW has been used to reduce AF in various tissues [63][64][65][66]. TrueView reduced the AF of adrenal cortex cells (Figures 1c-f and 2a), suggesting that lipofuscin and other hydrophobic molecules are not the only sources of AF in the adrenal cortex. Copper(II) sulfate (CuSO 4 ) was reported to reduce tissue AF from lipofuscin [4,67], red blood cells [68,69], and other sources [28,[69][70][71]. The chemical mechanism of Cu 2+ quenching of AF is not precise. It is suggested that Cu 2+ acts as an electron scavenger that receives electrons from the autofluorescent molecules by collisional contact and circumvent the fluorescence emission [72]. In our study, CuSO 4 treatment did not eliminate the adrenal cortex AF (Figure 2a), similar to previous reports in some tissue types [27,73]. CuSO 4 may also have a negative effect on IF signals when used at high concentrations [4,27]. However, we did not investigate the impact of this treatment on IF in adrenal tissue. NH 3 reduces tissue AF by dissolving negatively charged lipid derivatives and phenols, hydrolyzing weak esters, and deactivating pH-sensitive fluorochromes [6,29]. In previous reports, NH 3 reduced AF in bone marrow, kidney, and placenta tissue sections [6,13,28]. Meanwhile, it failed to quench AF in the brain [29], liver, and pancreas tissue [13]. NH 3 was ineffective against the AF of lipofuscin granules in the myocardial tissue [6]. In our study, NH 3 reduced the general background AF but did not eliminate the AF of fluorescent granules in the adrenal cortex (Figures 1c-f and 2a). Trypan Blue (TRB) diffuses into permeabilized cells and distributes uniformly in the cell nucleus and cytoplasm. If used in optimized concentration, TRB reduces AF when dye molecules are at a proper orientation and distance to autofluorescent molecules or when bound to autofluorescent molecules [74]. Contrary to previous reports [28,74,75], TRB did not diminish the AF of the adrenal cortex (Figures 1c-f and 2a), possibly due to the use of a suboptimal concentration or the omission of permeabilization step in order to process all the tissue sections equally before applying the AF treatments. However, consistent with the previous reports [74,75], TRB shifted the AF spectrum of the adrenal cortex to longer wavelengths. We did not further optimize the TRB treatment protocol as the induced AF in the longer wavelengths makes TRB a less suitable option than other tested treatments. The lipofuscin accumulates in the lysosomes of aged cells because it cannot be eliminated by degradation or exocytosis [76]. Adrenal cortices of aged mice and rats accumulate large amounts of lipofuscin [15,24]. 
Excessive lipofuscin accumulations can result from the degeneration of cortical cells, dietary and steroid imbalances, and administration of some exogenous chemicals [14]. In addition, mouse adrenal tissue may contain pigmentladen cells. These pigmented cells are usually scattered in the adrenal cortex or at the corticomedullary junction. They are thought to be a consequence of the regression of the transient cortical X-zone [14,77]. The pigment-laden cells cluster as the mouse ages and can coalesce to form multinucleated giant cells. The pigmented cells from animals of different ages show similar histochemical staining properties and exhibit orange-yellow primary fluorescence [78]. Thus, the presence of these highly autofluorescent aggregates with various sizes, pigment concentrations, and localizations decreases the signal-to-noise ratio and complicates the interpretation of fluorescence microscopy results. We tested whether TrueBlack or MaxBlock, which eliminated the AF from the smaller intracellular autofluorescent structures, may reduce the AF from pigment accumulations. Both TrueBlack and MaxBlock quenched the AF from the bright autofluorescent accumulations in the adrenal cortex and in the corticomedullary junction ( Figure 3). The adrenal cortex and the pigment accumulations showed more intense cytoplasmic staining with both TrueBlack and MaxBlock than in younger mice (Figure 2b). The staining intensity is possibly proportional to intracellular lipofuscin content, which increases as the mice age. These findings highlight the effectiveness of both TrueBlack and MaxBlock in quenching the AF from dense AF aggregates in the mouse adrenal glands of different ages that might be challenging to eliminate using other AF treatments. The major limitation of tissue AF reduction treatments is their effect on the specific fluorophores used to visualize target molecules. Several AF quenching methods may quench assay-specific signals [4,27,28,74]. TrueBlack and MaxBlock have been used to treat AF in various tissue types. However, in most tissues, AF originates from specific intracellular or extracellular tissue components that have a defined localization within the tissue, and masking this AF with non-fluorescent dyes does not affect the IF signals from other parts of the tissue. In contrast, autofluorescent molecules are widespread across the adrenal cortex, with high density in the cytoplasm of most cells. This wide distribution causes intense dark staining across the adrenal cortex after treatment with AF-reducing dyes (Figure 2b). We investigated the effect of the staining with TrueBlack and MaxBlock on IF signals from the cortical cells. The pre-treatment of adrenal tissue sections with TrueBlack before IF had a mild effect on the fluorescence of conjugated antibodies (Figure 4a). Conversely, posttreatment with MaxBlock had the most adverse impact on IF signals (Figure 4g). We also tested if TrueBlack treatment interferes with the fluorescence of enhanced green fluorescence protein (EGFP). EGFP is a versatile biological marker for visualizing protein localization, monitoring transgenic expression, and tracking specific cell types in the adrenal cortex. The broad excitation and emission of AF in the adrenal cortex interfere with detecting EGFP (Figure 1b). We visualized the fluorescence of both native and immunolabeled EGFP in cortical cells stained with TrueBlack in fixed frozen adrenal tissue sections ( Figure 5). 
These findings suggest that TrueBlack treatment is compatible with IF and EGFP fluorescence in adrenal tissue sections, as it diminishes tissue AF without masking the fluorescent signals from fluorescent labels. Lastly, it is worth noting that the signal of fluorescent labels from treated adrenal cortical cells may be inversely related to the staining intensity of these cells with AFreducing dyes. The concentration of the dye for AF reduction, incubation time, and tissue content of autofluorescent materials possibly determine the staining intensity of treated tissue sections and, hence, the interference with the detection of specific fluorophores. A significant advantage of TrueBlack over other ready-to-use reagents is the ability to optimize both working concentration and incubation time. TrueBlack is provided as a 20X solution that can be diluted in ethanol according to the required dilution, usually 1X, according to the manufacturer's instructions. However, TrueBlack has been used in different concentrations [10,79,80] and incubation times [41,81,82] for AF reduction. The ability to optimize TrueBlack treatment is helpful when treating tissues rich with autofluorescent material, such as the adrenal glands, to avoid IF signal masking by intensive dark tissue staining. As mentioned before, the lipofuscin levels in the adrenal cortex can vary with age, diet, hormonal imbalances, and other factors. We recommend testing different concentrations and incubation times with TrueBlack to reach the best balance between tissue AF reduction and target fluorophore visualization and to achieve the required signal-to-noise ratio. Animals The animals were housed and utilized for experimental procedures in compliance with Directive 2010/63/EU and the recommendations of the local Bioethical Committee of Lomonosov Moscow State University. Archival 4% PFA-fixed frozen tissue sections from aged mice and mice injected with recombinant AAV vector carrying the gene of enhanced green fluorescent protein (rAAV-EGFP) were previously prepared similarly and stored in the dark at −20 • C till usage. Tissue Treatment for Reducing Autofluorescence Frozen mouse adrenal tissue sections were thawed for 30 min at RT and washed with PBS two times, 10 min each, to remove O.C.T. The tissue sections were then treated separately with tissue AF treatment methods (Table 1) at room temperature. Sudan Black B (SBB, Dia-m, Moscow, Russia) was prepared as 0.1% (W/V) in 70% ethanol, as described previously [27]. Sections were incubated with the SBB solution sealed airtight in the dark for 20 min, and then dipped briefly in 70% ethanol once before washing with PBS. A solution of 10 mM copper(II) sulfate (CuSO 4 ) in 50 mM ammonia acetate, pH 5, was prepared and applied to sections for 90 min [27]. Ammonia/ethanol (NH 3 ) was prepared as 0.25% (V/V) ammonia (PanReac AppliChem ITW Reagents, Barcelona, Spain) in 70% ethanol and applied to tissue sections for 1 h [29]. A fresh 0.05% (W/V) trypan blue (Paneko, Moscow, Russia) in PBS solution was prepared and applied to slides for 15 min [28] Autofluorescence Emission Spectra and Images Acquisition For each treatment and control group, the AF emission spectra were acquired in a λspectral mode from the adrenal cortex tissue sections (n = 4) using the confocal microscope Olympus FluoView™ FV3000 (Olympus, Tokyo, Japan) with a UPLXAPO40X, 0.95 NA dry objective (Olympus) using the following settings: OneWay capture; 512 × 512 pixel format; Airy disk, 1 AU; line averaging, 2. 
Tissue sections were excited with a 405 nm laser diode (OBIS 405 nm LX 50 mW, Coherent, Singapore) and a 488 nm laser diode (OBIS 488 nm LS 20 mW, Coherent, Singapore) at 100% laser power while using the excitation dichroic mirror (ExDM) BS10/90. ExDM BS10/90 allows approximately 10% of the power of the selected laser to pass through the mirror, and 90% of the emitted light back through. Emission data were collected using the gallium arsenide phosphide (GaAsP) photomultiplier tube (PMT) high-sensitivity spectral detector. The PMT settings were as follows: detector voltage, 650 V; gain, 1; offset, 3. We used detection ranges of 415-745 nm and 515-745 nm at 405 nm and 488 nm excitations, respectively. The detection bandwidth and the detection step size were 10 nm. The Olympus FluoView TM FV3000 'Series analysis' tool was used to analyze the emission data [84], and the average intensity values were exported for further analysis. Images demonstrating AF levels in the green wavelengths and transmitted images were acquired at excitation of 488 nm and detection range 500-600 nm, with a UPLXAPO40X, 0.95 NA Dry objective using the following settings: OneWay capture; 2048 × 2048 pixel format; Airy disk, 1 AU; line averaging, 5. The images were exported without enhancements or manipulations. Immunofluorescence Adrenal tissue sections were thawed at RT for 30 min and washed with PBS two times, 10 min each, to remove O.C.T. After that, the sections were permeabilized with PBS containing 0.1% Triton X-100 (AppliChem GmbH, Darmstadt, Germany) for 10 min, and then blocked with PBS containing 10% goat serum (Abcam, Cambridge, UK) for 2 h at RT in a humidified chamber. The tissue sections were incubated with the primary antibodies diluted in PBS containing 1% BSA (Dia-m, Moscow, Russia), 0.25% Triton X-100, and 0.25% Tween-20 (Bio-Rad, Hercules, CA, USA) for 16 h at 4 • C. After incubation, the sections were washed twice with PBS containing 0.1% Triton X-100 and Tween-20 for 10 min each. The tissue sections were incubated with the secondary antibodies diluted in PBS containing 1% BSA, 0.25% Triton X-100, and 0.25% Tween-20 for 2 h at RT in a humidified chamber, and then washed two times with PBS containing 0.1% Triton X-100 and Tween-20, 10 min each. Tissue sections were stained with 0.5 µg/mL 4 ,6-diamidino-2-phenylindole (DAPI) for 10 min, washed with PBS for 10 min, mounted onto coverslips with polyvinyl alcohol mounting medium with DABCO TM antifading, dried overnight, and stored in the dark at 4 • C until examination. Images of IF were acquired with the confocal microscope Olympus FluoView™ FV3000 UPLXAPO60XO, 1.42 NA Oil Immersion Objective. TrueBlack and MaxBlock AF reduction treatments were applied either after the blocking step (pre-treatment) or after the incubation with secondary antibodies (post-treatment). For pre-treated sections, Triton X-100 and Tween-20 were excluded from all the solutions after applying TrueBlack or MaxBlock without changing the buffers' other ingredients. Data Analysis The average intensity values at 405 nm and 488 nm in λ-spectral mode were imported into GraphPad Prism software version 8.0.1. (GraphPad Software, Inc., San Diego, CA, USA) In order to examine the adrenal cortex AF spectral shape, the emission data for untreated control were normalized, and the mean normalized intensity and the standard deviation were plotted. 
The mean normalized intensities of untreated control at 405 nm and 488 nm excitation were compared with emission spectra of fluorescent proteins and synthetic fluorescent dyes publicly available in the database of fluorescent dyes (www.fluorophores.tugraz.at, accessed on 26 November 2022) to assess the degree of AF interference with the commonly used fluorophores in fluorescence microscopy. After each AF treatment, the means of average emission intensity values were plotted at 405 nm and 488 nm excitations and visually examined for differences in emission intensity and spectrum shape. In addition, the mean, standard deviation, and standard error for the maximum emission intensities at 405 nm and 488 nm excitations were calculated for each AF treatment. Maximum intensities were compared using one-way ANOVA with Tukey's multiple comparisons test for pairwise comparisons of treatments. Statistical significance was indicated by letters in superscript. Treatments that share the same letter are not different from each other, while treatments not sharing a letter are significantly different. The percentage difference of maximum intensity at 405 nm and 488 nm excitation between each treatment and the untreated control with the standard error of the mean was calculated to evaluate each treatment's efficiency in reducing mouse adrenal cortex tissue AF. Conclusions In this study, we assessed the characteristics of adrenal cortex tissue AF and examined several treatments for diminishing the observed AF. We found TrueBlack to be efficacious in quenching AF in the mouse adrenal cortex while preserving the signals of specific fluorescent labels. This study provides a practical method for identifying and eliminating AF during fluorescence-based assays in mouse adrenal tissue sections.
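The Data Analysis workflow described above (normalizing the control spectra, comparing maximum emission intensities, and expressing each treatment as a percentage difference from the untreated control) lends itself to a short script. The sketch below is a minimal illustration under assumed conditions: the per-section average spectra are taken to be available as NumPy arrays, and the random placeholder data and array names are ours, not from the study; Tukey's pairwise comparisons could be added afterwards, for example with statsmodels' pairwise_tukeyhsd.

```python
# Minimal sketch of the spectral analysis described above (assumed data layout:
# one average emission spectrum per tissue section, rows = sections, cols = 10 nm bins).
import numpy as np
from scipy import stats

wavelengths = np.arange(415, 746, 10)          # detection range at 405 nm excitation
control = np.random.rand(4, wavelengths.size)  # placeholder for n = 4 untreated sections
treated = np.random.rand(4, wavelengths.size)  # placeholder for one AF-reduction treatment

# Normalize each control spectrum to its own maximum; mean and SD describe spectrum shape
control_norm = control / control.max(axis=1, keepdims=True)
mean_norm, sd_norm = control_norm.mean(axis=0), control_norm.std(axis=0, ddof=1)

# Maximum emission intensity per section and percentage difference vs. control
max_ctrl, max_trt = control.max(axis=1), treated.max(axis=1)
pct_diff = 100.0 * (max_trt.mean() - max_ctrl.mean()) / max_ctrl.mean()
sem = 100.0 * max_trt.std(ddof=1) / np.sqrt(max_trt.size) / max_ctrl.mean()

# One-way ANOVA across groups (Tukey's HSD pairwise comparisons could follow)
f_stat, p_value = stats.f_oneway(max_ctrl, max_trt)
print(f"max-intensity change vs. control: {pct_diff:.1f}% (SEM {sem:.1f}%), ANOVA p = {p_value:.3g}")
```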
9,042.6
2023-02-01T00:00:00.000
[ "Biology" ]
Physical properties of High Arctic tropospheric particles during winter Introduction The climate of the Arctic troposphere is known to be sensitive to change (Serreze et al., 2009), but a detailed understanding of the infrared radiative transfer central to the problem remains limited by the availability of suitable measurements. Experimental progress has historically been impeded by accessibility barriers to the remote North and the harsh environmental conditions. This is particularly true for the High Arctic during winter, when 24 h darkness leads to mean surface temperatures in the vicinity of −40 °C (Lesins et al., 2009b). Passive satellite sensors also encounter difficulties owing to the unique radiative character of the polar regions (Curry et al., 1996), and so the environmental impediments must be overcome if all of the observational gaps are to be filled. An experimental effort to provide comprehensive year-round measurements in the High Arctic has been undertaken by the Canadian Network for the Detection of Atmospheric Change (CANDAC), which established the Polar Environment Atmospheric Research Laboratory (PEARL) at Eureka, Nunavut Territory (80° N, 86° W) in collaboration with Environment Canada (EC). The site is co-located with the Eureka Weather Station on the coast of Ellesmere Island (Fig. 1), and is the northernmost permanent civilian research facility in Canada. A suite of remote-sensing and in-situ instruments was installed by CANDAC and the National Oceanic and Atmospheric Administration (NOAA) Study of Environmental Arctic Change (SEARCH) programme. Measurements are being collected on an ongoing basis, and span from the surface to 100 km in altitude. Several different instruments can characterize tropospheric particles, which are known to play a key role in the Arctic radiative exchange (Curry et al., 1993). We present a climatology of tropospheric particle scattering properties obtained with a lidar and cloud radar at PEARL during the three complete and consecutive winters between 2005 and 2008. March is included as a wintertime month because it is similar climatologically to December through February at Eureka (Lesins et al., 2009a). The lidar and radar are operated continuously, and the data coverage is reasonably uniform across the period of interest. The measurements provide information on particle vertical distributions and sizes that was previously lacking for the winter months. Four different categories of particles are considered: boundary-layer ice crystals, ice clouds, mixed-phase clouds, and aerosols. Each of these can be expected to impact radiative transfer and wintertime climate.
Early studies proposed a role in the radiative exchange for "diamond dust" ice crystals that nucleate in the very cold conditions found at high latitudes during winter (e.g., Curry, 1983), but subsequent measurements indicated they have a negligible impact (Intrieri and Shupe, 2004).Ice precipitation from thin water clouds can be mistaken for diamond dust (Intrieri and Shupe, 2004), as can residual blowing snow lofted from mountainous terrain (Lesins et al., 2009a).Clouds in general play a major role in the radiative transfer (Curry et al., 1996).Aerosols, on the other hand, have a small impact on scattering and visibility (Hoff, 1988;Trivett et al., 1988;Leaitch et al., 1989), but may promote dehydration and so play a key role in the radiative exchange nonetheless (Blanchet and Girard, 1995). Several Arctic campaigns have been conducted to measure particle optical, macro-and microphysical characteristics: the Canadian Arctic Haze Study and Arctic Gas and Air Sampling Program (AGASP; see Leaitch et al., 1989 and references therein), the FIRE Arctic Clouds Experiment (FIRE-ACE; Curry et al., 2000), the Surface Heat Budget of the Arctic Ocean campaign (SHEBA; Uttal et al., 2002), and the Mixed-Phase Arctic Cloud Experiment (MPACE; Verlinde et al., 2007).Such activities are most often conducted during the spring and summer and are typically of short duration, rarely longer than a few months to a year.Multi-year statistical data sets are needed, particularly for particle sizes, shapes and phases, which are directly related to radiative properties (Curry et al., 2000).Of the aircraft campaigns listed above, not one was conducted during the winter months.The SHEBA experiment stands out from previous studies in that it collected a year of data from a ship frozen into the Arctic Ocean.Year-round remote sensing measurements from the North Slope of Alaska -Adjacent Arctic Ocean (NSA-AAO) site near Barrow, Alaska (71.3 • N, 156.6 • W) have also been used to investigate particles (Zhao and Garrett, 2008).The PEARL experiment was designed to build upon these earlier activities, and provides an opportunity to obtain comprehensive long-term data sets in the High Arctic. This paper is structured as follows.Section 2 describes the active remote sensors used: a High Spectral Resolution Lidar and a Millimeter-wave Cloud Radar.Section 3 explains the categorization process and describes the lidar-radar colour ratio and its conversion to particle effective radius using Mie theory.Results are presented in Sect.4, and then discussed in Sect. 5. Distributions of particle sizes, altitude ranges, and depolarization values are reviewed and compared with other measurements.Of particular interest will be Table 4, which summarizes the observed scattering properties of ice, water and aerosol particles, and their vertical distributions.Section 6 reviews the results and discusses future research possibilities. The Zero-altitude PEARL Auxiliary Laboratory The measurements were obtained at the Zero-altitude PEARL Axillary Laboratory (ØPAL), which is co-located with the Eureka Weather Station at 10 m elevation above sea level.ØPAL is one of three PEARL facilities, which include the PEARL Ridge Laboratory and the Surface and Atmospheric Flux, Irradiance and Radiation Extension (SAFIRE).Table 1 lists the ØPAL instruments and their respective capabilities. 
Figure 2 shows ØPAL and highlights the two instruments of interest here: the Arctic High Spectral Resolution Lidar (AHSRL) and the Millimeter-wave Cloud Radar (MMCR). The instruments are housed in climate-controlled shipping containers which are powered by a diesel generating station 215 m to the south. The measurements from each instrument are transmitted from Eureka by satellite link and are archived by CANDAC; data from both the AHSRL and MMCR can be accessed through the University of Wisconsin's lidar group web site (http://lidar.ssec.wisc.edu/). Arctic High Spectral Resolution Lidar The Arctic High Spectral Resolution Lidar (AHSRL) was developed at the University of Wisconsin and is supported at PEARL by NOAA's SEARCH program. It has collected quasi-continuous data at Eureka from August 2005 to present, with occasional down time due to maintenance requirements. The instrument is an Internet appliance designed for unattended operation. Technical specifications for the AHSRL are presented in Table 2. The transmitter consists of a frequency-doubled diode-pumped Nd:YAG laser emitting at a 4 kHz repetition rate and 532 nm wavelength. The laser is seeded and locked using an iodine vapour cell so that the frequency of the light is stable and the line width is narrow. The outgoing beam is circularly polarized and is transmitted at a 4° zenith angle to avoid specular reflections from horizontally-oriented ice crystals. A 40 cm telescope is used by both the transmitter and receiver. The receiver's 45 µrad field of view significantly reduces the background light level and contributions from multiple scattering. Incoming photons are separated according to polarization state, and are filtered using a 0.35 nm bandpass interference filter and an 8 GHz bandpass pressure-tuned etalon. Signal detection is performed using Geiger-mode avalanche photodiodes and photon-counting electronics. Additional technical details are given by Razenkov et al. (2002). The lidar measures the particle backscatter cross-section (β_lidar) and circular depolarization ratio (δ). The depolarization may be used to differentiate between spherical liquid droplets and crystalline particles, which have low and high depolarization values, respectively (e.g., see Intrieri and Shupe, 2004). Appropriate thresholds for the interpretation are established in this study. The AHSRL data have 2.5 s temporal and 7.5 m spatial resolution and can provide volume backscatter cross-section profiles up to an optical depth of approximately 4, beyond which the transmitted beam is too attenuated. In this work, 30 s and 15 m averaged measurements are used. The standard deviation for each average (determined from the intrinsic-resolution data) is employed to filter out data with excessive atmospheric variability or noise. Millimeter-wave Cloud Radar Compared to wind profiling and precipitation surveillance radars (wavelengths from 3 to 600 cm), millimeter wavelength radars have the advantage of increased sensitivity to smaller particles, but the disadvantage of strong attenuation from rainfall. This disadvantage is not relevant during the cold High Arctic winter since the only precipitation is frozen. Snowfall and ice crystals attenuate the radar signal minimally. Water vapor also has negligible impact since the wintertime Arctic atmosphere is relatively dry. The MMCR has been collecting data since August 2005 and is designed for remote operation with an intended lifetime of at least 10 years. It provides information on Doppler velocity, spectral width and radar reflectivity. The latter can be related to the backscattering cross-section of the atmospheric particles, which allows direct comparison with the AHSRL backscatter measurements.
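The step of relating radar reflectivity to a volume backscatter cross-section is usually done with the textbook Rayleigh relation between the reflectivity factor Z and the backscatter per unit volume. The sketch below is only an illustration of that conversion; the formula is the standard Rayleigh expression rather than one quoted in this paper, and the |K|² value for ice is an assumed typical number.

```python
import numpy as np

def reflectivity_to_backscatter(dbz, wavelength_m=8.6e-3, k_squared=0.176):
    """Convert a radar reflectivity factor (dBZ) to a volume backscatter
    cross-section (m^-1 sr^-1) under the Rayleigh approximation.

    Assumptions (ours, for illustration): |K|^2 = 0.176 for ice; dBZ is
    referenced to 1 mm^6 m^-3.
    """
    z_mm6_m3 = 10.0 ** (dbz / 10.0)          # reflectivity factor, mm^6 m^-3
    z_m3 = z_mm6_m3 * 1e-18                  # convert mm^6 to m^6
    eta = (np.pi ** 5 / wavelength_m ** 4) * k_squared * z_m3   # reflectivity, m^-1
    return eta / (4.0 * np.pi)               # per steradian, m^-1 sr^-1

# Example: a -20 dBZ echo at the MMCR wavelength
print(f"{reflectivity_to_backscatter(-20.0):.2e} m^-1 sr^-1")
```

As a rough consistency check, an echo near -50 dBZ maps to a few times 10⁻¹⁵ m⁻¹ sr⁻¹ with these assumptions, of the same order as the MMCR sensitivity limit quoted below.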
Technical specifications for the MMCR are given in Table 3. The general setup of the system is similar to that of the lidar, with a coaxial, vertically pointing transmitter and receiver. The instrument employs a frequency converter which produces 34.86 GHz microwaves from internal 60 MHz waveforms. Pulses are emitted vertically by a 2 m diameter high-gain antenna. The antenna also acts as the receiving apparatus. The measured return signal is converted back to 60 MHz and analyzed by a commercial digital signal processor. The millimeter-wave pulses can be compressed to improve the instrument's sensitivity and power. This has the disadvantage of creating sidelobe artifacts, especially in regions where reflectivity is strong. In order to get the best data product possible, the MMCR cycles through four signal acquisition modes with different pulse widths and pulse encoding. Combination of these modes allows for optimization of the signal through increased sensitivity while accounting for the artifacts. Details are given by Moran et al. (1998). The MMCR measures reflectivity from 90 m to 20 km in altitude, and is sensitive to volume backscatter cross-sections greater than 10⁻¹⁴ m⁻¹ sr⁻¹. Data are recorded with a temporal and spatial resolution of 10 s and 90 m, respectively. For the purposes of this study, the data are interpolated to the same resolutions used by the averaged lidar measurements (30 s and 15 m). Categorization process and statistical analysis Particle observations in the measurement record were visually divided into four categories based upon the identification of structural features in time and height, with some attention to optical properties. Visual inspection was used because it is normally straightforward to identify features "by eye" and also because the elimination of category cross-contamination is required. Scattering events that could not be readily identified were excluded from further analysis. The approach is illustrated below using the example measurement given in Fig. 3. Following particle classification, the statistics of scattering properties in each category are determined and compared with the same statistics for the entire measurement record. Figure 3 shows a sample 24 h measurement on 4-5 March 2007 from the AHSRL and MMCR, selected from the hundreds of such measurements used in this study. The image reveals clouds and aerosols that are well observed throughout their entire vertical extent. Such complete viewing requires the optical depth to be relatively low (≲2), which is generally the case at Eureka during winter. Clouds in the other seasons often have greater liquid water content and so are more optically thick. From 14:00 to 24:00 UTC, ice crystals (high backscatter, high depolarization) were detected in the lowest 0.5 km by both the radar and the lidar. Surface-based ice crystals are frequently observed, and are readily distinguished from the other ice crystal types; see Lesins et al.
(2009a) for some case studies. Ice clouds (high backscatter, high depolarization) with vertically-aligned fall streaks were present in the middle troposphere from approximately 16:00 to 08:00 UTC, and ended by precipitating to the surface. These cirrus-like clouds occur in the same temperature range (approximately 210-250 K) found in the upper half of the midlatitude troposphere. An aerosol event (low backscatter, low depolarization) between 1 and 4 km altitude began at 03:00 UTC, and can be distinguished from ice clouds by the horizontally-aligned striations, or sometimes homogeneous haze-like character. Note that the aerosol event is largely unseen by the radar, as aerosols are relatively weak scatterers in comparison to clouds. A mixed-phase cloud appeared within the aerosol layer at approximately 10:00 UTC and persisted through the end of the measurement. "Mixed-phase" is the term used to describe a geometrically thin water cloud with ice precipitation below. The water component is identified by enhanced lidar backscatter with low depolarization, whereas the ice precipitate has high depolarization and vertically-aligned fall streaks. The altitude of the thin water cloud corresponds to the top of the inversion layer as measured by the 12:00 UTC radiosonde (not shown). Ice clouds were seen again above 4 km from 09:00 to 14:00 UTC, with similar character as before. Noise is evident at the upper altitudes in the depolarization ratio measurement between 23:00 and 03:00 UTC, and to a lesser extent in the backscatter cross-section measured by the lidar. Such noise is removed from further analysis by a filtering process described later. As in the example, scattering events from all measurements in the three complete winters between 2005 and 2008 were visually partitioned into the same four categories: aerosols, mixed-phase clouds, ice clouds and boundary-layer ice crystals. The categories were determined after careful consideration of the entire measurement record. "Boundary layer" here refers to the lowest few kilometers of the troposphere influenced by the thermal inversion. No distinction is made between a cloud and its precipitate. Events identified on each image were visually selected using a mask with 1 km vertical and 1 h time resolution. The low resolution used for the mask was deliberate, and ensures that there is enough vertical distance and time gap between different events to avoid cross-contamination. For example, cases with mixtures of different ice crystal types (e.g., precipitation into boundary-layer ice crystals) were removed from the analysis. Mixtures with aerosol particles, however, were always included. Aerosol events are very common and, more importantly, the presence of aerosols is difficult to ascertain when other atmospheric phenomena (e.g., blowing snow residuals, ice crystal precipitation) are present. Finally, mixed-phase clouds that were observed to fully glaciate were not re-interpreted as boundary-layer ice crystal events. The lidar and radar data were filtered to ensure low atmospheric variability and noise within an averaging volume. The lidar data were used to establish the filtering criterion for both instruments given the lidar's higher intrinsic spatial and temporal resolution. Measurements in an averaging volume with a relative standard deviation in the 532 nm backscatter cross-section greater than 25% were excluded from further analysis.
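The averaging-and-filtering step just described can be summarized in a few lines of array code. The sketch below is a minimal illustration under assumed conditions: the intrinsic-resolution lidar backscatter is taken to be a 2-D time × height array, and the array shapes, placeholder data, and NaN masking are our choices rather than the authors' implementation.

```python
import numpy as np

def average_and_filter(beta, t_block=12, z_block=2, rel_std_max=0.25):
    """Average intrinsic-resolution backscatter (2.5 s x 7.5 m samples) into
    30 s x 15 m volumes (12 x 2 samples) and mask volumes whose relative
    standard deviation exceeds 25%, as described in the text."""
    nt, nz = beta.shape
    nt, nz = nt - nt % t_block, nz - nz % z_block            # trim to whole blocks
    blocks = beta[:nt, :nz].reshape(nt // t_block, t_block, nz // z_block, z_block)
    mean = blocks.mean(axis=(1, 3))
    rel_std = blocks.std(axis=(1, 3)) / mean
    return np.where(rel_std <= rel_std_max, mean, np.nan)    # NaN = excluded volume

# Example with synthetic data: 1 h x 1.5 km of intrinsic-resolution profiles
beta_hi = np.random.lognormal(mean=-14, sigma=0.1, size=(1440, 200))
beta_avg = average_and_filter(beta_hi)
print(beta_avg.shape)   # (120, 100): 30 s x 15 m averaging volumes
```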
The individual high-resolution data points (i.e., averaging volumes) of measurements in each collection were then used to compile two-dimensional histograms of scattering properties. It is from the histograms that the particle statistics are determined. All histograms were normalized so that the total probability is one. Histograms with logarithmic horizontal axes have uniform bin sizes in the logarithmic space so that scattering at the full range of available scales is represented. The colour ratio The ratio of the radar and lidar volume backscatter cross-sections gives the colour ratio, a derived quantity that is a proxy for particle size (see Sect. 3.3). The volume backscatter cross-section (β) is related to the particle backscatter cross-section σ_π(r) and number density n(r) of particles with radius r by β = ∫ σ_π(r) n(r) dr. The mean particle cross-section σ̄_π is defined by σ̄_π = ∫ σ_π(r) n(r) dr / ∫ n(r) dr. Using these, the volume backscatter cross-section can be rewritten as β = N σ̄_π, where N = ∫ n(r) dr is the total number density. The backscatter cross-section depends on wavelength and so is different for the lidar and the radar. Taking the ratio between the two backscatter cross-sections gives the colour ratio as CR = β_radar / β_lidar = σ̄_π,radar / σ̄_π,lidar (1). Equation (1) has no explicit dependence on number density, and so the colour ratio is an average property for particles in a measurement volume. Practical application of Eq. (1) requires that β_radar and β_lidar exceed the minimum thresholds of detection for each instrument. Figure 3 demonstrates that this requirement is not satisfied for aerosols, which are generally invisible to the radar. The radar's insensitivity to very small particles follows from Fig. 3b, which shows a maximum lidar volume backscatter cross-section for aerosols of 10⁻⁵ m⁻¹ sr⁻¹. Using Mie theory (see Sect. 3.3), the colour ratio for the submicron particles in Arctic haze (e.g., Leaitch et al., 1989) is about 10⁻¹², leading to an expected radar volume backscatter cross-section of 10⁻¹⁷ m⁻¹ sr⁻¹. This is three orders of magnitude less than the edge of detectability by the radar. In other words, the aerosol concentrations would need to be at least 1000 times greater than the maximum observed under normal circumstances to be seen by the radar. Since aerosol particles cannot be detected with both instruments, an interpretation in terms of particle size is not possible. Aerosols will, however, continue to be considered in terms of optical scattering properties and their potential for mixing with other scatterer types. Note that although there are similar detectability issues for polar cloud particles with the CloudSat radar and CALIPSO lidar satellite instruments (e.g., Grenier et al., 2009), Fig. 3 suggests that is not the case here. Figure 4 shows the colour ratio for the measurement given in Fig. 3. The colour ratio measurement reveals substantial temporal and spatial structure. For mid-tropospheric ice clouds, fall streaks with relatively large particle sizes (i.e., colour ratios) are apparent. Of particular interest is that the particles are often larger near cloud bottom, an observation that is established statistically in Sect. 5.2.1. Note that the aerosol event that occurred between 2 and 4 km altitude after 03:00 UTC is largely absent. Particle effective radius and interpretation Mie scattering theory can be used to convert the colour ratio to a spherical-particle effective radius r_eff defined by r_eff = ∫ r³ n(r) dr / ∫ r² n(r) dr, where r is the radius and n(r) is the number density. The interpretation of effective radius in terms of actual particle dimensions will be discussed shortly. We employed the algorithms from Mishchenko et al.
(2002) to determine mean particle backscatter cross-sections σ̄_π for distributions of spherical particles characterized by r_eff and an effective variance v_eff given by v_eff = ∫ (r − r_eff)² r² n(r) dr / [r_eff² ∫ r² n(r) dr]. The effective radius and variance are used in our analysis because the results for colour ratio are relatively insensitive to the specific distribution of particles used. For example, although our calculations have assumed a gamma distribution of particles, we have verified that a modified power-law distribution with the same parameters produces similar results. The parameters required for the Mie calculations include the particle refractive index and the wavelength of the incident light. Table 5 gives the choices used in this study. The refractive index n_i is wavelength dependent, and in the case of radar waves has a large imaginary part for water droplets, which implies a strong absorption component. Figure 5 presents the results of the Mie computations. Curves are given for ice and water particles for a selection of v_eff values. The plot allows conversion of colour ratios to effective radii ranging from 1 to more than 100 microns once a choice of particle type and v_eff is made. The particle effective radii discussed hereafter were computed using v_eff = 0.1. This choice of effective variance is relatively narrow and produces colour ratio curves that smoothly interpolate those obtained with monodisperse distributions. The choice of narrow distributions is appropriate given the high spatial and temporal resolutions in use, and the corresponding high degree of variability revealed in the colour ratios of Fig. 4. In the case that the actual distributions have greater effective variance, say 0.3, the maximum systematic error expected in our effective radius estimates is less than +25%. Computations may also be performed for non-spherical particles, but were not pursued here because we have no information on particle habit. Depolarization cannot be used as an unambiguous detector of ice particle shape, and so the incorporation of depolarization into the present analysis is not possible. Furthermore, the range of particle morphologies in ice clouds is varied and rarely pristine (Korolev et al., 1999), and so choosing any one particle type is arbitrary. Instead, we focus on the interpretation of effective radius measurements in terms of observed particle shapes. Mahesh et al. (2001) demonstrated that effective radii determined from infrared remote sensing agree with radii for equivalent volume-to-area (V/A) spheres from sampled ice crystals. In a series of papers, Grenfell et al. (1999), Neshyba et al. (2003) and Grenfell et al. (2005) showed that the equal-V/A sphere diameter is characteristic of the smallest dimension for a variety of realistic particle types. For example, the equal-V/A diameter is comparable to a column particle's width rather than its length. They also argued that the equal-V/A radius is of high importance for radiative transfer. In the comparisons that follow, we will therefore consider our effective radius determination to be associated with the smallest particle dimension. Our approach differs from some others found in the literature, and takes advantage of the unique capabilities of the AHSRL and MMCR in a low optical depth and dry environment. More complicated methods are necessary when the backscatter cross-sections cannot be so directly measured (Donovan et al.
(2001), for example).Alternative derivations of particle sizes can be made in terms of cloud extinction using the Raman lidar technique (e.g., Wang and Sassen, 2002), but much longer integrations are required to achieve an appropriate signal-to-noise level, and so they are not of interest here. 2005-2008 Data set Figure 6 shows two-dimensional histograms of occurrence probability against scattering parameters and altitude for the full data set spanning the wintertime months of December through March of 2005 to 2008.7772 h of measurements over 351 days were used.The distributions contain the signatures of liquid water droplets, aerosols, ice crystals, and particle mixtures.Dissimilar scatterers occupy separate regions in each histogram, as will be shown using the categorized data sets.The full histograms can be used to determine the relative contribution from each scatterer type. Figure 6a shows the probabilities for depolarization against altitude.There are features in the distribution found below 10% and above 20% linear depolarization, which correspond to predominantly liquid and ice scatterers, respectively.Ice scatterers extend from the ground up to at least 8 km in altitude.Liquid scatterers are mostly found near the ground, except for a small population peaking near 2 km altitude with linear depolarization less than 3%.The near-ground low-depolarization events are associated with aerosols, and the linear depolarizations less than 3% represent droplets in thin water clouds, as will be shown in the sections that follow. Figure 6b shows the probabilities for colour ratio (a proxy for particle size) against altitude.There is a trend toward smaller particles (i.e., smaller β radar /β lidar ) with increasing height.Figure 6c shows the probabilities for colour ratio against depolarization.The strong maximum between 15 and 50% linear depolarization is due to ice scatterers, and the low-depolarization "tail" (below 10%) is for mostly liquid scatterers.Notice that ice scatterers (high depolarization) are larger than the liquid scatterers (low depolarization). Figure 7 shows separate histograms for each category in columns: Aerosols, mixed-phase clouds, ice clouds, and boundary-layer ice crystals.Each category is discussed in respective subsections that follow. Aerosols 1199 h of measurements over 103 days were categorized as aerosol scattering.Before analyzing the histograms in Fig. 7, we briefly consider Fig. 8 which provides a histogram for aerosols using all available detections by the lidar.Aerosols are observed to occur mostly below 2 or 3 km in altitude, with depolarization values ranging from 0 to 20 or 30%.The larger depolarization values are due to the presence of ice crystals.Vertical discontinuities in the figure are an artifact of the categorization process, which used 1 km resolution. 
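The two-dimensional occurrence-probability histograms introduced above (normalized so that the total probability is one, with logarithmically uniform bins on the scattering axes) can be built along the following lines. The bin edges and the example quantities are our assumptions for illustration, not the binning actually used in the figures.

```python
import numpy as np

def probability_histogram_2d(x, y, x_edges, y_edges):
    """2-D occurrence-probability histogram: counts normalized so that the
    sum over all bins equals one (not a probability density)."""
    counts, _, _ = np.histogram2d(x, y, bins=[x_edges, y_edges])
    return counts / counts.sum()

# Example: colour ratio (log-spaced bins) against altitude (linear bins)
colour_ratio = 10.0 ** np.random.uniform(-12, -4, 5000)   # placeholder data
altitude_km = np.random.uniform(0, 10, 5000)
cr_edges = np.logspace(-12, -4, 41)     # uniform bin widths in log space
z_edges = np.linspace(0, 10, 41)
hist = probability_histogram_2d(colour_ratio, altitude_km, cr_edges, z_edges)
print(hist.sum())   # 1.0
```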
Figure 7a provides histograms using the subset of aerosol measurements for which both lidar and radar detections are available.Since aerosols are nearly ubiquitous in the Arctic atmosphere, distributions in this column will help to identify aerosol contamination in the ice particle categories.Although aerosols are generally not detected by the radar, the presence of a small quantity of ice crystals can easily elevate the backscatter cross-section above the detection threshold.Ice crystals are relatively large, and radar reflectivity is proportional to the sixth power of individual particle diameters.In the case of particle size distributions, empirical relations show reflectivity is proportional to the 3.8th power of the median volume diameter (Brown and Francis, 1995;Matrosov et al., 2002).In any event, relatively few ice crystals can add substantially to the aerosol backscatter. The top panel of Fig. 7a shows most of the dual lidar-radar aerosol detections occur near ground.This is different from what is seen in Fig. 8, and reflects the fact that ice scatterers are also ubiquitous at the lowest altitudes.Fewer detections were made higher up, and the coarseness of the distribution reflects low-probability statistics.Surface-based aerosols have linear depolarization between 0 and 20%, which is similar to that seen in Fig. 7. Relatively high depolarizations away from the surface indicate a dominant contribution from ice crystals.The middle panel of Fig. 7a indicates that aerosol contamination of the other plots in Fig. 7 can be expected primarily at the surface.Similarly, the lower panel indicates that aerosol contamination for depolarizations below 10% and for colour ratios between 10 −8 and 10 −7 will be an issue.This corresponds to the "tail" region mentioned in the description of Fig. 6.Note that depolarization values larger than 10% in the bottom panel of Fig. 7a are representative of ice scattering, and so would not be considered a contamination in the other columns. Mixed-phase clouds 894 h of measurements over 86 days were used to compile the histograms for mixed-phase clouds shown in Fig. 7b.The distinction between water droplets and ice crystals is evident in the top panel, which shows the depolarization versus altitude.Near-zero values of depolarization extending from 500 m to 3.5 km in altitude are due to droplets in thin water clouds.High linear depolarizations of 20 to 50% are from the frozen condensate, and are found largely below the liquid component heights as would be expected for precipitate.There is a region of intermediate linear depolarization between 3 and 20% which corresponds to the transition region from liquid to ice.The local maximum near ground at 10% depolarization represents a contribution from the aerosols, as established in the top panel of Fig. 7a. 
The vertical distribution of probabilities against colour ratio in the middle panel has horizontal streaks for colour ratios between 10 −9 and 10 −6 which are from the thin water clouds.The dominant scattering maximum is for ice crystals, which at high colour ratio have much larger sizes than the water droplets.Note that the population has constant colour ratio with height, which indicates uniform size.The bottom panel, which presents the probabilities against the depolarization and colour ratios, confirms the size comparison between the two scatterer types.The water droplet population (linear depolarization <3%) has much smaller colour ratio values compared to the ice crystal populations (linear depolarization with the greatest part between 20 and 50%).There is a transition region between the two peaks which can be attributed to the phase change from water to ice. Ice clouds 1424 h of measurements over 134 days were used to compute the histograms for ice clouds given in Fig. 7c.The top panel shows that linear depolarizations range from about 15 to 45%.The mean linear depolarization decreases with height, from approximately 40% at 2 km to 25% at 5 km altitude.Note that horizontal discontinuities at 1 and 2 km altitude in this panel are an artifact of the selection process resolution described earlier. In the middle panel, the mean colour ratio decreases with height.This indicates a decrease of mean ice crystal size with height, which is similar to the depolarization trend.Decreasing colour ratio with height is also evident in the example given in Fig. 4. It is interesting to note that small ice particles are evident in approximately equal quantities between 3 and 8 km altitude.A low colour-ratio contribution from aerosols below 1 km is apparent. The bottom panel gives the probabilities against depolarization and colour ratio.The tail at low depolarization extending from the main population is due to aerosols.Although the upper two panels might suggest that depolarization is a function of particle size, the bottom panel indicates that this is not the case.Small ice crystals tend to have high depolarization (40%-50%), whereas larger particles span a wide range of depolarization values. Separate histograms against depolarization and altitude for small and large ice crystal sub-populations are given in Fig. 9.The low-depolarization detections in Fig. 9a represent a contribution from aerosols.The modal depolarization for small ice particles is fairly constant with altitude, although the range of depolarization values become larger with height.This contrasts with large particles that have decreasing depolarization with height (Fig. 9b).The same trend is evident when particles of all sizes are included (Fig. 7c, top panel).Further comparison of Fig. 9a and b reveals that small ice particles tend to depolarize more than large ones at a given altitude.This result confirms that decreasing depolarization with height cannot be associated with decreasing size.The middle panel shows colour ratios that are in the same range as determined for aerosol layers only (Fig. 7a).The calculations were performed again considering data with depolarization values greater than 25% (i.e., ice crystals only), and a very similar plot was obtained.This confirms that the colour ratio values determined for aerosols (Fig. 
7a) are biased by the ice crystals contained within. The bottom panel shows that there is no trend in depolarization with colour ratio, except at very low depolarization where an aerosol tail is apparent. Scattering properties Comparison of the histograms in Figs. 7 and 8 for the different particle types reveals significant differences in colour ratios, optical properties and vertical distribution. However, inspection of the figures also shows that their superposition effectively reproduces the histograms for the complete data set given in Fig. 6. We can conclude that the most important scatterers above Eureka are well represented by the chosen categories, and that Fig. 6 gives their relative contribution to scattering. Representative values of properties taken from the histograms for each category are given in Table 4, with effective radius estimates deriving from the appropriate curves in Fig. 5. The values represent estimates corresponding to 10% of the peak level or greater in each histogram. The estimates were obtained visually from each panel because interpretation to account for height variations and interference from aerosols is required. The largest particles observed above Eureka are contained in lower-tropospheric ice clouds and the precipitation from mixed-phase clouds. Ice particles in the middle to upper troposphere are somewhat smaller, and boundary-layer ice crystals are smaller yet. Water droplets in mixed-phase clouds are in general smaller than ice particles, as expected. The size of aerosol particles could not be determined because they are not detected by the radar. Table 5. Values for wavelengths (λ) and complex refractive indices (n_i) used in the Mie scattering computations for the lidar and radar. Refractive indices are from Warren and Brandt (2008). Lidar: λ = 532 nm, n_i(ice) = 1.31, n_i(water) = 1.33. Radar: λ = 8.6 mm, n_i(ice) = 1.8 + 0.0003i, n_i(water) = 5 + 2.5i. Water droplets have linear depolarizations less than 3%. Aerosol haze typically has linear depolarization less than 20%, which is different from what is found for liquid droplets alone due to ice content. Ice crystals have linear depolarizations greater than 20%. Water droplets have lower colour ratios than do ice crystals, which indicates the water droplets are smaller. Boundary-layer ice crystals, ice clouds, and ice precipitation from thin water clouds occupy partly overlapping ranges of colour ratio values. Ice crystals precipitating from mixed-phase clouds and lower tropospheric ice clouds generally have greater colour ratios (and therefore sizes) than are observed for boundary-layer ice crystals. Middle tropospheric ice particles have comparable colour ratios to those found in the boundary layer. Ice clouds have depolarization decreasing with altitude (Fig. 7c, top panel). This trend is associated with large particles (Fig. 9b), and contrasts with the nearly constant modal depolarization for small particles (Fig. 9a). The measurements indicate that the large-particle morphology changes with altitude, perhaps in response to particle breaking or sublimation (see, for example, Whiteway et al., 2004). Small particles have greater depolarization than large particles, for unknown reasons.
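Figure 5's conversion from colour ratio to effective radius was computed with full Mie theory (Mishchenko et al., 2002). The sketch below is only a back-of-the-envelope illustration of why the colour ratio is a size proxy: it uses the Rayleigh approximation for the radar (valid here because 2πr/λ_radar ≪ 1 for the observed sizes) and treats the lidar backscatter efficiency as roughly size-independent in the geometric-optics regime. The refractive index comes from Table 5; the constant lidar efficiency is our simplifying assumption, not the paper's method.

```python
import numpy as np

LAMBDA_RADAR = 8.6e-3          # m
M_ICE_RADAR = 1.8 + 0.0003j    # Table 5
QBACK_LIDAR = 1.0              # assumed size-independent lidar backscatter efficiency

def colour_ratio_rayleigh(r):
    """Approximate radar/lidar colour ratio for an ice sphere of radius r (m).

    Radar: Rayleigh backscatter efficiency Q_back = 4 x^4 |(m^2-1)/(m^2+2)|^2.
    Lidar: Q_back taken as a constant of order one (illustrative only).
    The geometric cross-section pi*r^2 cancels in the ratio, so CR ~ x^4 ~ r^4.
    """
    x = 2 * np.pi * r / LAMBDA_RADAR
    k = (M_ICE_RADAR ** 2 - 1) / (M_ICE_RADAR ** 2 + 2)
    qback_radar = 4 * x ** 4 * abs(k) ** 2
    return qback_radar / QBACK_LIDAR

for r_um in (10, 50, 100, 200):
    print(f"r = {r_um:4d} um  ->  colour ratio ~ {colour_ratio_rayleigh(r_um * 1e-6):.1e}")
```

The steep r⁴ dependence is what makes the colour ratio such a sensitive proxy for ice crystal size, and the resulting values (roughly 10⁻⁷ to 10⁻⁴ over 40-220 µm with these assumptions) are of the same order as the ice crystal colour ratios discussed in the histograms.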
Particle effective radii Comparison of our measurements with values from the literature is complicated by differences in measurement techniques.We have provided histograms of occurrence probability for effective radii measured at high spatial and temporal resolution, whereas in-situ measurements typically provide distributions of number concentration against particle size averaged over a long period of time or distance.However, if our assumption of narrow particle size distributions in each measurement volume is correct, a comparison of effective radii measured using the radar and lidar against the range of particle sizes observed with in-situ techniques is appropriate.Where possible, comparisons are made with the smallest dimension of the sampled particles, which is the approach proposed in Sect.3.3. For mixed-phase clouds, Arctic aircraft measurements by Curry et al. (2000) show water droplet sizes that range from about 2 to 47 µm.Frozen precipitation below the liquid stratus ranges from a few tens to several hundred microns in length (widths were not given).Although the measurements were taken during summer between Barrow, Alaska (71.3 • N) and the SHEBA experiment site (76-78 • N), their results are in reasonable agreement with our values at Eureka (80 • N) of 5 to 40 µm and 40 to 220 µm, respectively.The smaller maximum size we measured for ice crystals can be attributed to the sensitivity of our technique to particle widths rather than lengths.Year-round measurements of water droplet and ice particle precipitation with an MMCR at the NSA-AAO site near Barrow yield characteristic radii of 25 to 500 µm from terminal fall speeds (Zhao and Garrett, 2008), and our ice crystal sizes fall within this range. Comparison data for the ice cloud measurements are difficult to find.Wintertime in-situ aircraft campaign data are not available, presumably due to the difficulties of flying experiments in dark conditions.Summertime clouds in the middle troposphere are often of the mixed-phase variety, and are observed in much warmer conditions.High altitude in-situ cirrus measurements in the Arctic are not available.Lawson et al. (2001) show one case of a mid-tropospheric cirrus cloud sampled at Barrow Alaska on 29 July 1998 during FIRE ACE.They reported that small particles were found in clumps with very high local concentrations that are interspersed with regions of larger particles in low concentrations.This is consistent with the structured ice clouds seen at Eureka and illustrated by the example in Fig. 4, and justifies our use of relatively narrow particle size distributions.Their insitu measurements revealed particles ranging from less than 10 to hundreds of microns, which is consistent with our measured values of 25 to 220 µm. A comparison can also be made with mid-latitude cirrus, which are observed at temperatures similar to the wintertime Arctic mid-troposphere.Aircraft sampling of mid-latitude cirrus by Whiteway et al. (2004) revealed particle sizes ranging from less than 10 to a few hundred microns.Radar/lidar inversions by Donovan and van Lammeren (2002) also yield effective particle sizes up to a few hundred microns in radius.They showed that mid-latitude cirrus have effective radius increasing with increasing temperature, which is con-sistent with our observations of decreasing size with increasing height above the temperature inversion.Whiteway et al. 
(2004) showed that particle size variations with height are determined by a competition between the growth and sedimentation of large particles and crystal breakage into smaller particles and evaporation. The preponderance of larger crystals at low altitudes in our observations is therefore likely due to growth and/or sedimentation. Turner (2005) measured crystal sizes in both mixed-phase and pure ice clouds using a ground-based remote-sensing AERI system at the SHEBA site. The effective radii for ice cloud particles ranged between 8 and 100 µm, with a mode radius of about 26 µm. This is at the lower end of what we observed (25-220 µm). Effective radii for ice crystals in mixed-phase clouds were even smaller. This was not considered to be realistic, and was attributed by the authors to differing particle shapes between the two cloud types. They assumed a droxtal/hexagonal-column model in their retrievals; however, ice crystals in Arctic clouds are known to be highly irregular (Korolev et al., 1999). It seems likely that the smaller crystal size they detected in general is due to the particle-shape assumption. In any event, our results suggest that the effective radii of ice crystals in mixed-phase and boundary-layer ice clouds are similar (40-220 µm). Surface-based in-situ measurements of ice crystal sizes for residual blowing snow are not available for the High Arctic. Walden et al. (2003) obtained measurements with a microscope at the South Pole station of many different ice crystal types during winter. They showed that residual blowing snow crystals have a mean effective radius of 11.9 µm. Their technique relied on ice crystals landing on a gridded microscope slide, and the larger particles were found to blow off in high winds. Given that blowing snow requires wind, it seems likely that there is a bias toward smaller particle sizes in this particular result. Measurements at the Mizuho Antarctic station with a snow particle counter, which is only sensitive to particles larger than 25 µm diameter, reveal blowing snow particles as large as 60 µm at 9.6 m altitude, with even larger ones at lower altitudes (Nishimura and Nemoto, 2005). In contrast, the mean effective radius for diamond dust at South Pole Station is 12.2 µm and presumably unbiased. Mahesh et al. (2001) showed excellent agreement between remote sensing estimates of small ice crystals and surface-based in-situ sampling. Boundary-layer ice crystal measurements at Eureka yield effective radii of 15-70 µm, which are more consistent with blowing snow than diamond dust. It is possible that diamond dust crystals in the Arctic are larger due to the higher temperatures (see, for example, the diamond dust images provided by Intrieri and Shupe, 2004), but there is very little data that can be used to properly assess this, and observations are complicated by the difficulty of distinguishing the different ice crystal sources. The fact that boundary-layer ice crystal events extend to about the same height as the surrounding topography supports our contention that residual blowing snow is the source. Boundary-layer ice crystals contribute a significant portion of the overall particle burden above Eureka (Fig. 6). Our measurements suggest that residual blowing snow lofted from mountainous terrain is likely more important to the overall radiative balance than diamond dust. The radiative impact of blowing snow residuals is explored by Lesins et al. (2009a).
In-situ samples of ice crystals in the Arctic are rare, and observations are complicated by contributions to the ice crystal population from the different sources described in this paper.A rigorous study of ice crystals at the surface like that of Walden et al. (2003) does not exist for the High Arctic, and should be considered a priority for future research. Ice crystal altitudes Boundary-layer ice crystals are found predominantly below 750 m altitude, which is comparable to the height of the mountain ridges near Eureka.Lesins et al. (2009a) showed four case studies of topographic blowing snow residuals that share the same vertical distribution.This indicates that blowing snow residuals are the dominant contributor to high optical depth boundary-layer ice crystal populations at Eureka.This result likely extends to other land locations in the rugged High Arctic. Ice clouds are observed throughout the troposphere during winter.At times these ice clouds, which are generated in the same range of temperatures as cirrus clouds at mid-latitudes, can precipitate to the ground. The altitude range for thin water stratus (0.5-3.5 km) is smaller than is observed during other seasons (e.g., Curry et al., 1996).The wintertime range corresponds with the observed variability for wintertime surface thermal inversion layer depths given by Lesins et al. (2009b).This suggests that thin liquid water stratus are connected to the development of wintertime surface inversion layers. It has been known since Wexler (1936) that cold surface temperatures in the Arctic winter are due to radiative cooling by surface snow and ice.More advanced models of the radiative transfer process (e.g., Curry, 1983) showed that in clear air the surface temperatures should be much lower than is observed, which suggests an important role for particles in determining boundary layer temperatures.Curry (1983) proposed a variety of mechanisms that could contribute to the process.These included the radiative impact of diamond dust and liquid condensate, and mixing by turbulence.A role for diamond dust has been discounted by the measurements of Intrieri and Shupe (2004) and Lesins et al. (2009a), and a role for turbulence has yet to be experimentally investigated.Our measurements support the contention of Intrieri and Shupe (2004) that liquid condensate plays an important role.Although the model of Curry (1983) did not produce the kind of thin liquid water clouds discussed here, the water clouds it simulated suggested the same basic mechanism.Thin water cloud dynamics and microphysics are further explored by Shupe et al. (2008). Depolarization The depolarization of aerosol layers is greater than what is found for liquid droplets.Hoff (1988) showed that ice crystals are responsible for the elevated depolarizations in Arctic haze, and we make the same interpretation here. Figure 9 established that the depolarization of small particles in ice clouds was greater than for large particles at a given altitude.High depolarization in contrails, which also contain very small particles, was found by Sassen (1997).Young cirrus were shown to have linear depolarization values in excess of 50%.The reasons for this result was unknown to Sassen (1997), as is the case here. Classification chart The histograms of Fig. 
7 reveal that different scatterer types occupy different regions in particle size-depolarization space.Figure 10 provides a classification chart from the compiled information.The thresholds are approximate, and lead to relatively large regions occupied by particle mixtures.Note that areas with only aerosol particles cannot be interpreted in terms of size or colour ratio because the radar's sensitivity is too low to detect such small particles. The lidar volume backscatter cross-section can be used to isolate locations where aerosol layers dominate.The range in lidar volume backscatter cross-section for aerosols is relatively narrow, indicating that variations in sizes and number densities are small.Lidar backscatter cross-sections β lidar that are smaller than 2×10 −5 m −1 sr −1 are characteristic of aerosol layers and this threshold can be used to distinguish aerosol layers from the mixtures. In Fig. 10, mixed-phase cloud ice precipitation and boundary-layer ice crystals occupy distinct regions.Ice clouds, however, overlap with both.Some differentiation can be made on the basis of altitude, as shown in Table 4: small ice crystals below 2 km altitude are generally classified as boundary-layer ice particles, whereas small ice crystals in ice clouds are found predominantly higher up.Ice crystals originating from mixed-phase clouds and ice clouds are indistinguishable on this basis. Conclusions A combined radar-lidar technique was used to study particle properties in the High Arctic troposphere during winter.Different particle types were compared in terms of depolarization, colour ratio, effective radius, and vertical distribution.Colour ratios and effective radii could not be determined for aerosols because they are not detected by the radar, except in mixtures. Particle effective radii determined using Mie scattering theory are consistent with others found in the literature.Water droplets are small (effective radii of a few tens of microns) while ice particles can be much larger (effective radii up to a few hundred microns).In the boundary layer, mixedphase precipitation and ice cloud snow provide the largest ice crystals whereas residual blowing snow particles lofted from mountain ridges are smallest.Ice cloud crystal sizes have a strong gradient in altitude with the largest particles at the lowest heights.The size ranges for each particle type are summarized in Table 4. Depolarization is highly dependent on the particle type.Particle scattering dominated by aerosols has linear depolarization less than 20%, whereas ice crystals scattering has linear depolarization greater than 20%.Much of the depolarization in aerosol layers likely originates from ice crystals mixed in.Water droplets, in contrast, have linear depolarizations less than 3%.Ice clouds in the middle troposphere have depolarization decreasing with altitude, and this trend is reflected in the large particle sub-population.Small particles in ice clouds have greater depolarization than large ones at any given altitude, and almost constant modal depolarization with height.The measurements indicate that particle morphology changes with altitude. Boundary-layer ice crystals contribute significantly to the overall particle burden above Eureka.Their sizes and observed vertical extent indicate that blowing snow residuals lofted from the surrounding mountainous terrain is a more likely source than nucleation of diamond dust.Lesins et al. 
(2009a) presented case studies that established that these blowing snow residuals can have a significant radiative impact. Given that much of the Arctic is similarly mountainous, the regional impact of blowing snow residuals on the infrared radiative transfer will need to be assessed. Thin water layers associated with mixed-phase clouds are observed from 500 m to 3.5 km altitude, which is the same range as is seen for thermal inversion layer depths (Lesins et al., 2009b). This correlation suggests that mixed-phase clouds are connected to the development of wintertime thermal inversion layers. Radiative transfer will be very sensitive to the vertical distribution of water clouds, and these new data should be taken into account in any future models of Arctic climate. A classification chart was produced which allows for the identification of ice crystals, aerosols and water droplets from a combination of depolarization and colour ratio values. The chart allows a deeper understanding of the particles found above Eureka by associating them with a shape- and size-related parameter. Future efforts are needed to improve our understanding of particle microphysics and optical properties. In-situ measurements of particle morphologies are required to understand the relationship between size, shape and depolarization. This may be partly possible from ground level since each ice crystal type is observed to precipitate to the surface, and no comprehensive study of this kind in the Arctic currently exists. However, there is also a need for a wintertime aircraft campaign for ice particle sampling to resolve some of the depolarization and particle size issues identified here. Such a campaign should be performed in tandem with comparisons between different remote sensing techniques in order to form a comprehensive picture of particulates in the Arctic troposphere.
Boundary-layer ice crystals: 1338 h of measurements over 107 days were used to compute the histograms for boundary-layer ice crystals. Results are shown in Fig. 7d. The top panel shows that ice crystals are observed mostly below 750 m altitude. The depolarization spans a large range, including values too low to indicate solid phase. The low-depolarization values are attributed to the presence of aerosols.
Fig. 1. Polar map of the Arctic. The location of Eureka (80° N, 86° W) is marked with a red dot.
Fig. 2. A photograph of the Zero-altitude PEARL Auxiliary Laboratory (ØPAL) with pointers to the Arctic High Spectral Resolution Lidar (AHSRL) and Millimeter-wave Cloud Radar (MMCR) locations.
Fig. 5. The relationship between effective radius and colour ratio determined using Mie scattering theory for ice and water spheres.
Fig. 6. Average occurrence probability histograms during winter (2005-2008) for (a) the vertical distribution of depolarization, (b) the vertical distribution of the colour ratio, and (c) the colour ratio and depolarization. Detections by both lidar and radar were required.
Fig. 7. Occurrence probability histograms arranged in columns for (a) aerosols, (b) mixed-phase clouds, (c) ice clouds, and (d) boundary-layer ice crystals. Detections by both lidar and radar were required.
Fig. 8. Occurrence probability histogram for aerosols against height and depolarization, using all available lidar detections. The step changes apparent in the figure at 1 km intervals are a result of the event masking process, and cannot be eliminated due to the ubiquity of aerosols in the Arctic atmosphere.
Table 4. Ranges for linear depolarization δ_lin, colour ratio, effective radius r_eff and altitudes z for aerosols, mixed-phase (M-P) cloud water droplets (WD) and ice crystals (IC), ice cloud ice crystals, and boundary-layer (B-L) ice crystals. The ice crystal and water droplet values are from Fig. 7 and the aerosol values are from Fig. 8.
Fig. 10. Classification chart for the different atmospheric particles and their mixtures. The vertical axis shows the linear (left) and circular (right) depolarizations, and the horizontal axis is in terms of the colour ratio β_radar/β_lidar (bottom) and the particle effective radius (top) for ice crystals (IC) and water droplets separately. Effective radii can only be attributed to regions with no aerosol content. The dash-dot line separates the boundary-layer ice crystals on the left from precipitation from thin water clouds on the right. Ice clouds span the full region for ice crystals.
Table 1. Instrument complement and measurement capabilities at ØPAL: CIMEL Sun photometer, aerosol optical depth and column size distribution; star photometer, aerosol optical depth.
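The classification chart (Fig. 10) and the thresholds quoted in the text lend themselves to a simple rule-based scheme. The sketch below encodes only the approximate boundaries stated in the paper (β_lidar < 2×10⁻⁵ m⁻¹ sr⁻¹ for aerosol-dominated volumes; linear depolarization below 3% for water droplets and above 20% for ice; roughly 750 m as the typical boundary-layer ice crystal depth); the exact colour-ratio boundaries of Fig. 10 are not reproduced here, so this is an illustration rather than the authors' operational scheme.

```python
def classify_particles(beta_lidar, delta_lin, colour_ratio=None, altitude_km=None):
    """Rule-of-thumb particle classification using thresholds quoted in the text.

    beta_lidar   : lidar volume backscatter cross-section (m^-1 sr^-1)
    delta_lin    : linear depolarization ratio (0-1)
    colour_ratio : beta_radar / beta_lidar, or None if the radar saw nothing
    altitude_km  : height of the averaging volume (km)
    """
    if colour_ratio is None and beta_lidar < 2e-5:
        return "aerosol layer"                       # weak, lidar-only scattering
    if delta_lin < 0.03:
        return "water droplets (thin liquid cloud)"
    if delta_lin < 0.20:
        return "mixture (aerosol haze with some ice)"
    # delta_lin >= 0.20: ice crystals; use altitude as a rough discriminator
    if altitude_km is not None and altitude_km < 0.75:
        return "boundary-layer ice crystals (likely blowing snow residuals)"
    return "ice cloud or mixed-phase precipitation"

print(classify_particles(1e-5, 0.01, colour_ratio=None, altitude_km=1.5))
print(classify_particles(5e-5, 0.35, colour_ratio=3e-5, altitude_km=0.4))
```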
11,274.2
2009-09-21T00:00:00.000
[ "Environmental Science", "Physics" ]
Preparation and properties of SiC honeycomb ceramics by pressureless sintering technology SiC honeycomb ceramics with parallel channels and macroporous walls were prepared by combining extrusion molding with pressureless sintering technology in the presence of starch. The extrusion molding constructed the honeycomb structure with parallel channels, while the starch formed spherical macropores on the channel walls. The density and bending strength of the SiC honeycomb ceramics decreased with increasing starch content, while the phase compositions did not vary with the starch content. Controlling the starch addition could adjust the pore structures on the channel walls of the SiC honeycomb ceramics. Introduction SiC honeycomb ceramics have attracted considerable attention owing to their high wear resistance, high corrosion resistance, high thermal conductivity, and thermal stability [1][2][3][4][5][6][7][8][9]. At present, SiC honeycomb ceramics are mainly prepared by combining extrusion molding with the recrystallization method [1], although some pore-creation processes such as polymeric precursor and foam template methods have been developed to prepare SiC porous ceramics. The so-called recrystallized SiC (RSiC) honeycomb is a pure porous SiC material that is produced by heating shapes consisting of a mixture of bimodal SiC powders at temperatures exceeding 2300 ℃ in a protective gas atmosphere (Ar). However, the application of RSiC honeycomb is limited by its low mechanical properties (compression strength: 2.5-5 MPa [2]) and high recrystallization temperature. In particular, it is difficult to control the size and distribution of pores in the RSiC honeycomb walls. In our previous research, we reported the preparation and pore structures of SiC honeycomb ceramics with macroporous walls using iron oxide as the pore former [10]. In this paper, we demonstrate a novel and facile method of preparing SiC honeycomb ceramics by combining extrusion molding with pressureless sintering technology in the presence of starch. A honeycomb structure with 70 cells per square inch and a wall thickness of about 400 μm was obtained by extrusion molding, and spherical macropores with a size of 10-30 μm are distributed on the channel walls as a result of pore formation by the starch. Many benefits are expected to arise from the macroporous structure integrated in the honeycomb ceramics. All of the raw materials were mixed by ball milling for 0.5 h and kneaded 3-4 times in a vacuum kneading machine. The kneaded materials were aged for 24 h and extrusion-molded to obtain green bodies with a honeycomb structure and parallel channels. After drying at 120 ℃ for 24 h, the honeycomb green bodies were pressureless-sintered at 2160 ℃ for 1 h in an Ar atmosphere. As a result, SiC ceramics with a honeycomb structure and macroporous walls were prepared. The bulk density of the sintered samples was measured through the conventional water-displacement method. The three-point bending strength of the sintered samples was measured with an electronic universal testing machine (CMT5205) with a span of 30 mm and a cross-head speed of 0.5 mm/min. The dimensions of the test samples were 3 mm × 4 mm × 36 mm, and ten samples were tested for each batch. The phase compositions were analyzed by X-ray diffractometer (XRD, Rigaku D/max-RA). The fracture morphologies of the honeycomb walls were observed by scanning electron microscope (SEM, HITACHI S-4800).
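For the three-point bending test described above, the flexural strength is normally obtained from the peak load with the standard beam formula σ = 3FL/(2bh²); the short sketch below applies it to the stated geometry (30 mm span, 3 mm × 4 mm bars). The formula, the choice of 3 mm as the loading-direction height, and the example load are our assumptions for illustration; the paper does not state which expression the test machine used.

```python
def three_point_bending_strength(peak_load_n, span_mm=30.0, width_mm=4.0, height_mm=3.0):
    """Flexural strength (MPa) from the standard three-point bending formula
    sigma = 3 F L / (2 b h^2), with F in N and dimensions in mm."""
    return 3.0 * peak_load_n * span_mm / (2.0 * width_mm * height_mm ** 2)

# Example: a hypothetical 80 N peak load on a 3 mm x 4 mm x 36 mm bar
print(f"{three_point_bending_strength(80.0):.1f} MPa")   # -> 100.0 MPa
```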
The pore structure characteristics of macropores in the honeycomb walls were evaluated by mercury porosimetry (Poremaster 60-GT, Quantachrome Instruments, USA). Overview of honeycomb ceramics Extrusion molding The additive agents, such as binders, lubricants, plasticizers, dispersing agents and solvents, are crucial for the extrusion molding of SiC honeycomb ceramics. We chose 5 wt% HPMC and 3 wt% PVA as binders, 5 wt% oleic acid as lubricant, 3 wt% glycerol as plasticizer, and 2 wt% polyethylene glycol and 21 wt% water as solvents. As a result, a smooth, flat green body with a honeycomb structure and parallel channels can be obtained after extrusion molding. The green body is dried at 120 ℃ for 24 h to remove the water completely. Sintering Sintering is a vital procedure for the honeycomb ceramics, and the sintering conditions, such as temperature, time, atmosphere and heating rate, determine the microstructure and properties of the honeycomb ceramics. We adopted several heating rates during the sintering stage: a slow heating rate of 5 ℃/min from room temperature to 600 ℃, followed by a heating rate of 10 ℃/min from 600 ℃ to 1000 ℃, which completely removes the additive agents. In the range of 1000-1800 ℃, a rapid heating rate of 50 ℃/min is applied, and the heating rate then changes to 30 ℃/min in the temperature range of 1800-2160 ℃. The sintering time is 1 h at the sintering temperature of 2160 ℃, and the atmosphere is Ar during the whole sintering stage. Figure 1 shows the honeycomb channels and macroporous walls of a SiC honeycomb ceramic after sintering. The resultant SiC ceramics have a honeycomb structure with cell channels and macroporous channel walls. The amount, size and shape of the cell channels are determined by the extrusion molds, while the pore structures of the channel walls, such as porosity, pore size and pore volume, are somewhat different and depend on the pore formation process. Figure 2 shows the sintering behaviors of SiC honeycomb ceramics containing different starch contents. With increasing starch content, the shrinkage ratio of the sintered honeycomb body decreases, the mass loss increases, and the density decreases noticeably. High mass loss indicates high volatilization of the additive agents and the pore-forming agent. When the starch content increases from 12.5 wt% to 20 wt%, the bulk density of the SiC honeycomb ceramics decreases from 2.42 g/cm³ to 2.20 g/cm³. The removal of starch results in many macropores on the walls, which inevitably lowers the density of the channel walls. Figure 3 shows the bending strength of SiC honeycomb ceramics containing different starch contents. The bending strength obviously decreases with increasing starch content. The increase in starch content results in a larger number of macropores in the channel walls, which inevitably deteriorates the mechanical properties of the SiC honeycomb ceramics. From the load-deformation curve of the honeycomb ceramic with 15 wt% starch, it is seen that the curve rises slowly with increasing load, a relatively large peak occurs and is maintained for some time when the load reaches a certain level, and the curve then declines rapidly as the deformation increases further. This indicates that the deformation of the honeycomb ceramics is relatively slow under load, which is attributed to the elasticity of the honeycomb structure. 
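The multi-stage heating schedule above is easy to mistime when reproduced, so a short sketch tallying the ramp durations is given below; room temperature is assumed to be 25 ℃, and cooling is not included.

```python
# Minimal sketch of the heating schedule described above, used only to tally
# how long each ramp takes; temperatures and rates are taken from the text,
# room temperature is assumed to be 25 deg C.
schedule = [
    # (start_T, end_T, rate in deg C per min)
    (25,   600,   5),    # slow ramp: removal of water / low-T organics
    (600,  1000, 10),    # removal of the remaining additive agents
    (1000, 1800, 50),    # rapid ramp
    (1800, 2160, 30),    # final ramp to the sintering temperature
]
hold_min = 60            # 1 h hold at 2160 deg C

ramp_minutes = [(end - start) / rate for start, end, rate in schedule]
for (start, end, rate), t in zip(schedule, ramp_minutes):
    print(f"{start:>5}-{end:<5} deg C at {rate:>2} deg C/min -> {t:6.1f} min")

total_h = (sum(ramp_minutes) + hold_min) / 60
print(f"total heating + hold time: {total_h:.1f} h (cooling not included)")
```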
When the load reaches a high level, the honeycomb structures are broken or damaged, resulting in the decline of the curve. The main crystalline phases do not vary noticeably with the starch content, indicating that the pore-forming agent has no effect on the phase compositions of the SiC honeycomb ceramics. Figure 5 shows SEM images of the surfaces of the channel walls at different starch contents. The micrographs provide evidence of open macropores on the channel walls regardless of the starch content. At 12.5 wt% starch, the macropores are spherical with a size of 10-30 μm. When the starch content increases to 20 wt% and above, the number of macropores increases obviously and the macropores become interconnected, while the pore size changes little with the starch content. These results suggest that the pore structure can be adjusted to some extent by controlling the addition of starch. Figure 6 shows the pore size distributions determined by mercury porosimetry for the channel walls at different starch contents (15 wt%, 17.5 wt% and 20 wt%). The cumulative pore volume is relatively low and the pore size distribution is somewhat wide and flat at 15 wt% starch, and does not change much at 17.5 wt%. This indicates that most of the created macropores in these two samples are still closed or isolated, although the two samples seem to have many open macropores, as shown in Fig. 5. When the starch content increases to 20 wt%, the sample possesses a sharp pore size distribution, and the pores are distributed roughly between 0.08 μm and 0.4 μm, which is much smaller than the pore size (10-40 μm) of RSiC honeycomb walls. This indicates that some small interconnected pores with a regular shape are distributed within the interiors of the large isolated macropores. Figure 7 shows the median pore size and porosity of the channel walls at different starch contents (15 wt%, 17.5 wt% and 20 wt%). The median pore size and porosity of the samples increase with increasing starch content, from 105 nm and 19.8% to 220 nm and 37%, respectively, when the starch content increases from 15 wt% to 20 wt%. Conclusions SiC honeycomb ceramics with macroporous walls were prepared by pressureless sintering technology. The extrusion molding constructed the honeycomb structure with 70 cells per square inch and a wall thickness of about 400 μm, while the starch as pore-forming agent formed spherical macropores with a size of 10-30 μm on the walls. The pore formation of the starch on the channel walls reduced the bulk density and bending strength of the honeycomb ceramics, but did not change the phase compositions. The SiC honeycomb ceramic containing 20 wt% starch had a macrostructure consisting of large spherical macropores and small interconnected pores in their interiors, with a porosity of 37%. The honeycomb ceramics with macroporous walls are promising for wide applications such as filtration, separation, and catalysis.
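For readers less familiar with mercury porosimetry, the pore diameters reported above are derived from intrusion pressure through the Washburn relation D = -4γcosθ/P. The sketch below uses commonly quoted default values for the mercury surface tension and contact angle (0.485 N/m and 140°), which are assumptions rather than instrument settings reported in this work.

```python
import math

# Sketch of the Washburn relation used by mercury porosimeters to convert
# intrusion pressure into an equivalent pore diameter: D = -4*gamma*cos(theta)/P.
# gamma and theta below are commonly used defaults, not values from the paper.
GAMMA = 0.485              # mercury surface tension, N/m
THETA = math.radians(140)  # mercury contact angle

def pore_diameter_um(pressure_mpa: float) -> float:
    """Equivalent pore diameter (um) at a given intrusion pressure (MPa)."""
    p_pa = pressure_mpa * 1e6
    return -4.0 * GAMMA * math.cos(THETA) / p_pa * 1e6

# Example: pressures needed to intrude the ~0.08-0.4 um pores reported
# for the 20 wt% starch sample.
for d_target_um in (0.08, 0.4):
    p_mpa = -4.0 * GAMMA * math.cos(THETA) / (d_target_um * 1e-6) / 1e6
    print(f"d = {d_target_um} um  ->  intrusion pressure ~ {p_mpa:.1f} MPa")
```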
2,221.8
2014-06-10T00:00:00.000
[ "Materials Science", "Agricultural And Food Sciences" ]
Privacy-Aware Early Detection of COVID-19 Through Adversarial Training Early detection of COVID-19 is an ongoing area of research that can help with triage, monitoring and general health assessment of potential patients and may reduce operational strain on hospitals that cope with the coronavirus pandemic. Different machine learning techniques have been used in the literature to detect potential cases of coronavirus using routine clinical data (blood tests, and vital signs measurements). Data breaches and information leakage when using these models can bring reputational damage and cause legal issues for hospitals. In spite of this, protecting healthcare models against leakage of potentially sensitive information is an understudied research area. In this study, two machine learning techniques that aim to predict a patient's COVID-19 status are examined. Using adversarial training, robust deep learning architectures are explored with the aim to protect attributes related to demographic information about the patients. The two models examined in this work are intended to preserve sensitive information against adversarial attacks and information leakage. In a series of experiments using datasets from the Oxford University Hospitals (OUH), Bedfordshire Hospitals NHS Foundation Trust (BH), University Hospitals Birmingham NHS Foundation Trust (UHB), and Portsmouth Hospitals University NHS Trust (PUH), two neural networks are trained and evaluated. These networks predict PCR test results using information from basic laboratory blood tests, and vital signs collected from a patient upon arrival to the hospital. The level of privacy each one of the models can provide is assessed and the efficacy and robustness of the proposed architectures are compared with a relevant baseline. One of the main contributions in this work is the particular focus on the development of effective COVID-19 detection models with built-in mechanisms in order to selectively protect sensitive attributes against adversarial attacks. The results on hold-out test set and external validation confirmed that there was no impact on the generalisibility of the model using adversarial learning. INTRODUCTION COVID-19 has impacted millions across the world.Its early signs cannot be easily distinguished from other respiratory illnesses and hence an accurate and rapid testing approach is vital for its management.RT-PCR assay of nasopharyngeal swabs is a widely accepted gold-standard test, which has several limitations, including limited sensitivity and slow turnaround time (12-24h in hospitals in high and middle-income countries).Several other techniques, including qualitative rapid-antigen tests ('lateral flow'; LFTs), point-of-care PCR, and loop mediated isothermal amplification have been proposed and are in various stages of validation and implementation (Assennato et al., 2020;Wolf et al., 2021).Among these techniques, lateral flow tests are favoured as they are inexpensive and do not require specialised laboratory equipment which allow for decentralised testing and faster results.However, sensitivity results for lateral flow testing vary greatly amongst groups, with reported values ranging from 40% to 70%.Dinnes et al. (2021); Wolf et al. (2021).There are also numerous studies based on radiological imaging, including CT Khuzani et al. (2021).Such tests are less widely available, involve a longer turnaround time, and expose patients to ionising radiation. 
There are a number of research studies on the deployment of machine learning techniques to detect COVID-19 from various widely available features, including demographic and laboratory markers (Goodman-Meza et al., 2020;Zoabi et al., 2021).Inclusion of demographics in learning might lead to the development of biased tests, and even when they are not explicitly included in the feature representation, these attributes can potentially confound the model through their correlation with other features.We recently introduced a machine learning test based on vital signs, routine laboratory blood tests and blood gas (Soltan et al., 2021).A strength of our test is the use of clinical data which is typically available within 1h, much sooner than the typical turnaround time of RT-PCR testing (up to 24h in hospitals in high-and middle-income countries).Current tests that employ machine learning are promising as they alleviate the need for specialised equipment, can potentially be more sensitive, and are faster than existing tests.Nonetheless they suffer from several shortcomings: 1.Most approaches that have appeared in the literature so far are based on basic machine learning techniques that require a complete retraining anytime a new batch of data is available.However, in a dynamic situation like a pandemic where new streams of data need to be processed, it is vital to incrementally learn from data without the need to start over and retrain the system using all the seen instances. 2. ML-based models explored in the COVID-19 literature are not equipped with an inherent mechanism to guard against possible issues that might arise due to the presence of demographic features.For example, models could easily get biased to a certain demographic group causing incorrect associations and overfitting. 3. Another issue is preserving the privacy of the patients and robustness against adversarial attacks.Most basic models can easily 'leak' information, making it easy for an adversary to recover sensitive information contained in the hidden representation.As blood tests are known to include features which typically correlate with demographic features, such as sex and ethnicity, exclusion of demographics does not necessarily solve the problem.For example, health issues like Benign Ethnic Neutropenia (Haddy et al., 1999) or Sickle Cell Disease (Rees et al., 2010) are predominantly found in a certain number of ethnic groups and much less likely to occur in others.As an additional example, healthy men and women have different reference ranges for blood tests (Park et al., 2016). This work aims to address the above-mentioned shortcomings in existing research.The proposed adversarial architectures (Section 4) are designed to prevent the learning model from potentially encoding unwanted demographic biases and protect its sensitive information during the learning process.In the first architecture (Section 4.1), protection of attributes is explicit, with the option to select the attributes for guarding against adversarial attacks.We will investigate in Section 5.3.1 whether these direct protective measures would hurt generalisibility to unseen data.In the second architecture (Section 4.2), protecting attributes is based on a general adversarial regularisation and is not tied to any specific subset of selected attributes. 
Several recent studies in the field of natural language processing (NLP) have shown that textual data carries informative features regarding authors' race, age and other social factors.This makes embedding and predictive models susceptible to a wide range of biases that can negatively affect performance and severely limit generalisability.This kind of bias also raises concerns in areas where fairness and privacy are important.Numerous works have focused on the different ways representation learning can be biased to or against certain demographics and different countermeasures have been proposed to counteract bias (Gonen & Goldberg, 2019).Most of these studies, however, are done using text and image data.Currently, there is limited research on the application of representation learning and adversarial models for healthcare applications. The proposed models in this study are designed to preserve sensitive information against adversarial attacks, allow incremental learning, and reduce the potential impact of demographic bias.However, the main focus of the work is in privacy preservation.The contributions of this work are as follows: • We introduce two adversarial learning models for the task of COVID-19 identification based on Electronic health records (EHR) that perform satisfactorily on a real COVID-19 dataset and in comparison with strong baselines.Unlike conventional tree-based methods, these architectures are well-suited for transfer learning, multi-modal data, and other advantages of neural models without a significant performance trade-off. • The models use adversarial regularisation to make them robust against leakage of sensitive information and adversarial attacks, which makes them suitable for scenarios where preservation of privacy is important or classification bias is costly. • We run a series of tests to quantitatively demonstrate the efficacy of the proposed architectures in protecting sensitive information against adversarial attacks in comparison with a neural model that is not adversarially trained. • We perform several tests to observe the effect of this type of training on generalisability across different demographic groups. • We externally validate the models using data from other hospital groups. 
PRIVACY ATTACKS IN MACHINE LEARNING AND HEALTHCARE There are various ways a trained model can be attacked by an adversary.The goal in most of them is to infer some kind of knowledge that is not originally meant to be shared or is unintentionally encoded by the model.At least three different forms of attack are known, namely, membership inference, property inference, and model inversion (Shokri et al., 2017).In this work, we focus on property inference, in which an adversary who has access to model's parameters during training, tries to extract information about certain properties of the training data that are not necessarily related to the main task.Figure 1 shows the general overview of privacy attacks according to Rigaki & Garcia (2020).The adversary, in our case, can see the model and its parameters and wants information about the data to which they do not have direct access to.Attacks of this kind are possible in any scenario where the model is stored and trained on an external server.Protecting an ML model against property inference attacks is especially useful in the context of collaborative and federated learning, where models locally train on different portions of the dataset and share their parameters over a network that might or might not be fully secure against eavesdropping (Melis et al., 2019). Within the context of healthcare, such attacks can reveal sensitive personal data and prove disastrous for hospitals.GDPR defines personal data as 'any information relating to an identified or identifiable natural person'.Article 9(1) of the GDPR declares the following types of personal data as sensitive: data revealing racial or ethnic origin, political opinions, religious or philosophical beliefs, or trade union membership, genetic and biometric data, and data concerning health or sex life or sexual orientation of the subject (Voigt & Von dem Bussche, 2017). Sensitive information such as age, gender, location, or ethnicity are usually quantised or anonymised in large healthcare datasets.However, as we will see in Section 5.3, this information can be easily recovered by a simple attack model because of the implicit associations that exist between such information and other features in the dataset. Property inference attacks are not limited to recovering any specific type of data and can predict both categorical and numerical values.For instance, they can be used to train attacker models that learn to identify both demographic features (implicitly present in the data) and blood test features (explicitly present) that highly correlate with certain diseases.It is then possible to use this trained model to re-identify some patients based on their demographic features and possible combination of diseases (Jegorova et al., 2021). TASK DEFINITION In our binary classification setting, each neural network f is trained to predict labels y 1 , y 2 , ..., y n from instances x 1 , x 2 , ..., x n .Each instance x i contains a set of sensitive (in this case demographic) discrete features z i ∈ 1, 2, ..., k which we intend to "protect"1 .These sensitive features are called protected attributes. In the context of classification, any neural network f (x) can be characterised as an encoder, followed by a linear layer W : f (x) = W × h(x).W can be seen as the last layer of the network (i.e.dense + softmax) and h is all the preceding layers (Ravfogel et al., 2020). 
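The decomposition f(x) = W × h(x) and the attacker-on-encoder setup described above can be made concrete with a minimal PyTorch sketch. The layer widths, dropout rate and two-class heads below are illustrative assumptions, not the configuration used in this study.

```python
import torch
import torch.nn as nn

# Minimal sketch of the f(x) = W * h(x) decomposition described above:
# h is the encoder (all layers before the last), W is the final linear head.
# Layer widths are illustrative only, not the configuration used in the paper.

class Encoder(nn.Module):            # h(x)
    def __init__(self, n_features: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, hidden), nn.BatchNorm1d(hidden),
            nn.ReLU(), nn.Dropout(0.3),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )

    def forward(self, x):
        return self.net(x)

class MainClassifier(nn.Module):     # f(x) = W * h(x)
    def __init__(self, encoder: Encoder, hidden: int = 64):
        super().__init__()
        self.encoder = encoder
        self.head = nn.Linear(hidden, 2)   # W (dense layer; softmax via the loss)

    def forward(self, x):
        return self.head(self.encoder(x))

# A property-inference attacker f_att only sees the frozen encoder output h(x)
# and tries to recover a protected attribute z (e.g. binarised ethnicity).
class Attacker(nn.Module):
    def __init__(self, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(hidden, 32), nn.ReLU(), nn.Linear(32, 2))

    def forward(self, h):
        return self.net(h)
```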
Suppose we have an attacker model f att that is trained on the encoder h(x) of a neural classifier in order to predict z i .If this trained adversary is able to predict z i based on the encoded representation from the model, the model has leaked and privacy of the model has been compromised. It is unlikely that h(x) would be completely guarded against an attack.If it encodes sufficient information about x i it might reveal some information to a properly trained f att .We say that the trained model f is private with regards to z i if an attacker model f att that has access to f 's encoder (h(x)) cannot predict z i with a greater probability than a majority class baseline. If we perturb h(x) too much, it will not be informative to f att but would also fail in accurately predicting the main task label y i .Therefore, we would like to ensure privacy against potential attackers with regards to the protected attributes while achieving a reasonably good result in the main task. METHODOLOGY We follow a standard supervised learning scenario where each training instance x i represents information from blood tests and vital signs for each patient seen at the hospital and y i is the corresponding Boolean value denoting the result of the PCR test for that patient.The task is to train a model to predict the correct label for each patient. ADVERSARIAL TRAINING BASED ON GRADIENT REVERSAL The first adversarial architecture we explore is comprised of one main part and a number of secondary networks: I. A main classifier M that is the central component of the model.It consists of a stack of n fully connected layers with dropout and batch normalisation, followed by a softmax layer at the end.II.d networks with auxiliary objectives separate from the main task.Supposing we have d categorical features, each of these secondary networks (henceforth referred to as discriminators) predict the value for that feature given each training instance.Assume h i is the representation of an instance at the ith layer within M .This is the point of interception where the auxiliary networks get access to the contents of M .All these components then train in tandem with the following loss function: Each D i corresponds to a separate discriminator network that predicts one of the d different categorical features of interest.λ is a weighting factor and can control the contribution of each individual auxiliary loss.Formula 1 is set up so that after backpropagation, the contents of h be maximally informative for the main task, and minimally informative for prediction of the protected features.Loss of the main task is computed using binary cross entropy. If x and y are the features and labels, ŷ and ẑ the predictions for the main target and protected features, θ M and θ Di the parameters of the main classifier and its d discriminators, and L is the joint binary cross entropy loss function, we can formulate the training objective as finding the optimal parameters θ such that: As discussed in Section 4.1, during training, the objective is to jointly minimise both of the following terms2 : where each x i is an instance of the data which is associated with the protected attribute z.D is the discriminator (the adversarial network), and c is the classifier used to predict the labels for the main task from representation h.L denotes the loss function. 
Using an optimisation trick called the Gradient Reversal Layer (GRL), we can combine the above terms into a single objective.This idea was first introduced in the context of domain adaptation (Ganin & Lempitsky, 2015) and was later also applied to text processing (Elazar & Goldberg, 2018;Li et al., 2018).GRL is easy to implement and requires adding a new layer to the end of the Discriminator's encoder. During forward propagation, GRL acts as an identity layer, passing along the input from the previous layer without any changes.However, during backpropagation, it multiplies the computed gradients by −1.Mathematically this layer can be formulated as a pseudofunction with the following two incompatible equations: Using this layer, we could formulate the loss function into one single formula, and perform a single backpropagation in each training epoch.For the trivial case of having only one protected attribute, we can consolidate equations 3 and 4 with the following: The objective is to minimise the total loss, and for the case of the discriminator, the gradients are reversed and scaled by λ.We can generalise this to the case where we have multiple (in our case 3; namely, age, gender, and ethnicity) protected attributes and corresponding D i s: ADVERSARIAL TRAINING BASED ON FAST GRADIENT SIGN METHOD As the second adversarial architecture, we develop another model in which the adversarial component can perturb the representation during training with some added noise.The direction of this noise (i.e.whether the added noise is a positive or negative number) is dependent on the signs of the computed gradients. This adversarial method is based on linear perturbation of inputs fed to a classifier.In every dataset, the measurements enjoy a certain degree of precision, below which could be considered negligible error .If x is the representation of an instance, it is likely that the classifier would treat x the same as x = x + η, as long as η ∞ < . However, this small perturbation grows when it is multiplied by a weight matrix w: The perturbation is maximised when we set η = sign(w), predicated on the assumption that it remains within the max-norm constraint defined above.In the context of deep learning, the method can be formulated in the following way: If θ is the parameters of the model, and J is the cost function, during training, for each instance a perturbation of η is added to the representation of the instance such that: This procedure is known as the fast gradient sign method (FGSM), originally introduced in a seminal 2015 paper by Goodfellow et al. (2015).It can be viewed either as a regularisation technique or a data augmentation method that includes unlikely instances in the dataset.For training, the following adversarial objective function can be used: This method can be seen in terms of making the model robust against worst case errors when the data is perturbed by an adversary (Goodfellow et al., 2015).Because of this regularisation, our expectation is that hidden representations would become less informative to an attacker network that attempts to retrieve demographic attributes.Following the original paper, α is usually taken to be 0.5, which turns the equation into a linear combination with equal weights given to both terms in the objective function. 
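A minimal sketch of the gradient reversal layer from Section 4.1 is given below, together with the resulting joint loss for a single discriminator (the study uses three). The encoder/head attribute names and the λ value are illustrative assumptions rather than the paper's implementation.

```python
import torch

# Sketch of the Gradient Reversal Layer (GRL) described in Section 4.1:
# identity in the forward pass, gradients multiplied by -lambda on the way back.
class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam: float):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # reverse and scale the gradient flowing back from the discriminator
        return -ctx.lam * grad_output, None

def grad_reverse(x, lam: float = 1.0):
    return GradReverse.apply(x, lam)

# Joint loss for one batch (one discriminator shown; the paper uses three).
# main_clf, discriminator and the lambda value are illustrative placeholders.
def joint_loss(main_clf, discriminator, x, y, z, lam=1.0):
    ce = torch.nn.functional.cross_entropy
    h = main_clf.encoder(x)                      # h(x), the intercepted layer
    loss_main = ce(main_clf.head(h), y)          # main PCR-prediction loss
    loss_disc = ce(discriminator(grad_reverse(h, lam)), z)
    # Minimising the sum trains the discriminator on z while, through the
    # reversed gradients, pushing the encoder to make h uninformative about z.
    return loss_main + loss_disc
```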
In our implementation (Figure 3), alongside the main component, there is an attacker that intercepts the model at a certain step during each training epoch, makes a copy of the pre-attack parameters in the intercepted layer, and injects noise into the model.Based on this information, an adversarial loss is computed and backpropagation is applied. Training data Cost function Backprop 1 restore Backprop 2 Figure 3: Overall structure of FSGM.y is the predicted label.η is added noise at the point of interception h. After this step, a restore function is executed, returning the parameters of the intercepted layer back to its pre-attack values.A regular loss is then computed and backpropagation is applied for a second time.This added noise is computed based on equation 9.If h is the representation of a training instance at the time of interception by the attacker, the perturbation is calculated by h = h + η. DATASET For the experiments in this study we use a hospital dataset which we refer to as OUH.OUH is a de-identified EHR dataset, covering unscheduled emergency presentations to emergency and acute medical services at Oxford University Hospitals NHS Foundation Trust (Oxford, UK).These hospitals consist of four teaching hospitals, which serve a population of 600, 000 and provide tertiary referral services to the surrounding region.At the time of model development, linked deidentified demographic and clinical data were obtained for the period of November 30, 2017 to March 6, 2021. For each presentation, data extracted included presentation blood tests, blood gas results, vital sign measurements, results of RT-PCR assays for SARS-CoV-2, and PCR for influenza and other respiratory viruses.Patients who opted out of EHR research, did not receive laboratory blood tests, or were younger than 18 years of age have been excluded from this dataset. For OUH, hospital presentations before December 1, 2019, and thus before the global outbreak, were included in the COVID-19-negative cohort.Patients presenting to hospital between December 1, 2019, and March 6, 2021, with PCR confirmed SARS-CoV-2 infection, were included in the COVID-19-positive cohort.This period includes both the first and second waves of the pandemic in England3 .Because of incomplete penetrance of testing during early stages of the pandemic and limited sensitivity of PCR swab tests, there is uncertainty in the viral status of patients presenting during the pandemic who were untested or tested negative.Therefore, these patients were excluded from the datasets. 
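Returning to the adversarial regularisation of Section 4.2, a rough sketch of one FGSM-style training step at the interception point h is shown below. The copy-and-restore bookkeeping described above is collapsed here into perturbing the representation directly, and the epsilon value, the encoder/head attribute names, and the optimiser are assumptions for illustration only.

```python
import torch

# Rough sketch of one FGSM-style adversarial training step at the interception
# point h.  epsilon and the alpha = 0.5 weighting follow the FGSM formulation
# cited in the text; the parameter copy/restore of the actual implementation is
# reduced here to perturbing the intercepted representation directly.
def fgsm_training_step(model, x, y, optimizer, epsilon=0.05, alpha=0.5):
    ce = torch.nn.functional.cross_entropy

    # 1) gradient of the loss with respect to the intercepted representation h
    h = model.encoder(x).detach().requires_grad_(True)
    ce(model.head(h), y).backward()
    eta = epsilon * h.grad.sign()          # eta = epsilon * sign(grad_h J)

    # 2) combined objective: alpha * J(clean) + (1 - alpha) * J(perturbed)
    optimizer.zero_grad()
    h_clean = model.encoder(x)
    loss = (alpha * ce(model.head(h_clean), y)
            + (1.0 - alpha) * ce(model.head(h_clean + eta), y))
    loss.backward()
    optimizer.step()
    return float(loss.item())
```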
There are 3081 instances of COVID-19-positive in the original dataset and 112121 negative instances.For the experiments with OUH, we subsampled the majority class to reach a more balanced dataset with prevalence 0.5 (i.e.6162 positive labels).Age, gender, and ethnicity information were binarised during preprocessing.For gender, the average age is 64, which is taken as cut-off point for binarisation4 .The ethnicity information, which were encoded using NHS ethnic categories, were divided into white and non-white.While quantising features in this way involves oversimplification and loss of detail, it keeps the values binary across all the protected attributes making comparisons easier in our experimental setup.Table 4 shows the distribution of demographic labels in the OUH dataset.We will use the entire test sets in their original label distribution within the pandemic timeframe to make sure the evaluation is fair and that it mirrors the highly imbalanced data used in hospitals.Table 1 shows the statistics for the Covid-19 Positive cases in the datasets. EXPERIMENTS AND RESULTS We performed a series of experiments in order to test the proposed models and compare them against baselines.The baseline non-adversarial model that we use as the basic structure to start from, consists of 3 fully connected dense layers with batch normalisation and dropout.We refer to this model as Base.During 10-fold cross-validation, the best hyperparameters were chosen using random search.We empirically found that heavy hyperparameter optimisation had at best mixed results and adding more layers to the model did not consistently boost performance.We chose a set of parameters that seemed to work well across all the models during cross-validation (Table 2)5 .We also kept the Base model simple with only a few layers so we could have direct and straightforward comparisons with the adversarially trained models.The demographic-based adversarial model is referred to as ADV and its main component is the same as Base.Since after training, only the Base part will be tested (i.e.discriminators will detach), the ADV model ends up having the exact same number of parameters as Base.The perturbation-based adversarial model, which also has the same number of parameters as Base, is referred to as Adv per .All the reported results on the test set are the median of three consecutive runs.In what follows we explain the feature sets used, the train and test procedure and finally report the main task and attacker results under different scenarios. FEATURE SETS Two sets of clinical variables were investigated (Table 3): presentation blood tests from the first blood draw on arrival to hospital and vital signs.Only blood test markers that are commonly taken within existing care pathways and are usually available within 1 hour in middle and high-income countries were considered here.The models are trained and tested in a binary classification task in which the labels are confirmed PCR test results.As the first step, the model is evaluated on the TRAIN set in a stratified 10-fold cross-validation scenario during which a threshold is set on the ROC curve to meet the minimum recall constraint6 .Consequently, the model is trained on the TRAIN set and tested on the holdout TEST data and results are computed using the previously set threshold. 
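A small sketch of the preprocessing described above (subsampling the COVID-19-negative majority class to a prevalence of 0.5 and binarising the protected attributes) is given below. The column names are hypothetical; only the cut-offs (age 64, white versus non-white) follow the text.

```python
import pandas as pd

# Sketch of the described preprocessing: subsample the COVID-negative majority
# class to a prevalence of 0.5 and binarise the protected attributes.
# Column names ('covid_pcr', 'age', 'ethnicity_nhs_code', 'gender') are
# hypothetical; the cut-offs follow the text (age 64, white vs non-white).
def preprocess(df: pd.DataFrame, seed: int = 0) -> pd.DataFrame:
    pos = df[df["covid_pcr"] == 1]
    neg = df[df["covid_pcr"] == 0].sample(n=len(pos), random_state=seed)
    balanced = pd.concat([pos, neg]).sample(frac=1.0, random_state=seed)

    balanced["age_bin"] = (balanced["age"] >= 64).astype(int)      # older vs younger
    balanced["ethnicity_bin"] = (
        balanced["ethnicity_nhs_code"].str.startswith("White")     # white vs non-white
    ).astype(int)
    balanced["gender_bin"] = (balanced["gender"] == "F").astype(int)
    return balanced
```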
During training of the ADV model, the expectation is that the accuracy of the main classifier increase over subsequent epochs, and since the learning setup is such that discriminators are constantly misled, performance is intended to be kept below or around 50% accuracy.To test this assumption, we plotted the changes in the trajectory of accuracy for the main and three auxiliary tasks in the first 15 epochs.This is when the ADV model is being trained on TRAIN set and before it is tested on holdout TEST.As can be seen in Figure 5, accuracy for the main task keeps growing steadily while discriminator accuracy drops below 50% and plateaus afterwards. In Table 4 we report the results on the main task of predicting PCR results for all the models.The results demonstrate the models perform well at the main task, namely, predicting the outcome of the PCR test.In order to asses how much privacy each model can provide against an adversarial attack, we perform a series of experiments in which 3 different non-adversarial Base models are trained on the training data, with each corresponding to the prediction of a different demographic attribute.In other words, instead of predicting the PCR test result, a protected attribute is provided as the label to train and test on.We perform the experiments under the same conditions as the main task.The attacker is first trained in a 10-fold cross-validation scenario and a threshold is set based on the ROC curve with the minimum recall constraint of 0.8 ± 0.07. Subsequently, the attackers are trained on TRAIN set and tested on the TEST portion of the dataset and predict the same values given the obtained threshold set during 10-fold CV.These results are important to the final interpretations of the model privacy because they determine the upper bound for the most amount of leak the proposed models can have.In Table 5, we report the results for trained attackers on the TEST portion of the dataset given each protected attribute that was predicted. 
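The threshold-setting step used above (choosing an operating point on the ROC curve that satisfies a minimum recall constraint during cross-validation, then reusing that fixed threshold on held-out data) could be sketched as follows; the tie-breaking rule of taking the lowest false-positive rate among qualifying points is an assumption.

```python
import numpy as np
from sklearn.metrics import roc_curve

# Sketch of the thresholding step: pick a decision threshold on the ROC curve
# so that sensitivity (recall) stays above a minimum constraint (0.8 in the
# text), then reuse that fixed threshold on the held-out TEST set.
def pick_threshold(y_true, y_score, min_recall: float = 0.8) -> float:
    fpr, tpr, thresholds = roc_curve(y_true, y_score)
    ok = tpr >= min_recall                       # points meeting the constraint
    best = np.argmin(np.where(ok, fpr, np.inf))  # lowest FPR among them
    return float(thresholds[best])

# usage (scores from any of the trained models):
# thr = pick_threshold(y_val, model_scores_val)
# y_pred_test = (model_scores_test >= thr).astype(int)
```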
The lower bound is the the majority class baselines in which the attacker simply relies on some prior information about the distribution of the protected attributes to predict these features and does not make use of the obtained hidden representations.For instance, if a dataset is obtained in Scotland, relying on the known fact that the predominant ethnic category is British White, the attacker would simply assign the same label to all of the instances.Statistics about majority classes for each attribute is given in Table 6 in both TRAIN and TEST sets.As can be seen, ethnicity is the most unbalanced category in comparison with gender and age in which class labels are more equally distributed.As the next step, we trained our baseline and proposed adversarial models on the TRAIN set and saved the weights of the neural networks.We then loaded our trained attackers and tested the attackers, not on the feature directly this time, but on the output of the encoder of the baseline and adversarially trained models.The idea is that, if an adversarially trained model is indeed protecting demographic attributes, it should make it harder for an attacker to predict those values from its encoded representations in comparison with a baseline model that is not specifically designed for preservation of privacy.Results shown in Table 7 already show a degree of privacy provided by the non-adversarial encoder, as they indicate a noticeable decrease in performance compared to Table 5.The most marked decrease is visible in prediction of gender, in which performance drops from AUC of 0.9104 to 0.6926.In the case of age, however, the attacker seems more robust.The results in Tables 8 and 9 confirm the assumption that an adversarial learning procedure, either with separate discriminator networks for each protected attribute or using perturbation-based regu- The application of an adversarial learning procedure to protect selected attributes involves a training setup with competing losses which is intended to weaken undesirable implicit associations contained in the hidden representations of the network.This is expected to result in a certain amount of performance drop compared to the non-adversarial baseline.As long as this drop is not massive, the performance-privacy trade-off is justified.However, a more general concern is whether a model like ADV, with its 3 different discriminators and the direct and specific manipulation of its hidden representations would generalise poorly when tested on certain demographic sub-populations of the dataset.Since ADV per applies its regularisation without specifically targeting any protected attributes, it is less likely to suffer from this issue. 
In order to investigate whether protecting demographic attributes damages generalisability of the ADV, we performed a series of experiments with the aim to train and test our Base and ADV models only on one demographic group and tested it on the other.We compare the adversarial model with the baseline to make sure that generalisability of the ADV model is not hurt.Since we have 3 different binary attributes, there are 6 possible ways to cross-test the models.We denote these subgroups with f (female), m (male), w (white), n (non-white), o (old), and y (young)7 .To restructure the dataset for these experiments, in each case we combine all the data and filter TRAIN and TEST based on the targeted demographic.For example 'm2f' would mean that our TRAIN set only contains females and the TEST set only males.The results in Table 10 clearly indicate that adversarial learning has not damaged generalisability in any of scenarios in which the Base and ADV models were tested. EXTERNAL VALIDATION OF THE MODELS In order to validate the models on external data, we trained Base, ADV, and ADV per on the OUH dataset (as described in Section 4.3) and tested it on the entirety of the UHB, BH, and PUH datasets.We performed the same procedure as the previous experiments: First we ran a 10-fold CV on the OUH dataset and set a threshold and then tested the models on the external test data with the previously obtained threshold.The hyperparameters were kept the same for these experiments with the exception of ADV per which seemed to converge better after 30 epochs during 10-fold CV.Tables 11, 12, and 13 show the results of this experiment on the UHB, BH, and PUH test sets, respectively.In our experiments, we addressed the issue of leakage of potentially sensitive attributes that are implicitly contained in the dataset, and demonstrated how an attacker network can successfully retrieve this information under different circumstances.Information like age seem to be easily inferred with high accuracy from the features or from the hidden representation of the Base model.In this case, ADV and ADV per models significantly reduced this vulnerability, which highlights the protective power of these adversarial methods in hiding such implicit information against invasive models that are specifically trained to infer this knowledge. The same pattern was seen in the case of the other two demographic attributes, namely, gender and ethnicity.For ethnicity, the representation was less informative to the attacker network for the following two reasons: I.A certain percentage of the patients had preferred not to state their ethnicity.Since we wanted to keep all the tasks binary, we treated this category as non-white which is clearly sub-optimal.This further complicates ethnicity prediction for the attacker. II.There are limitations in the accuracy of documenting ethnicity by hospital staff during data collection, which may increase the amount of noise in the data. However, even though the overall results are lower for the case of ethnicity, the ADV model still shows better privacy compared to the baseline.In such cases, the adversary is likely to rely on prior knowledge of the dataset or general information about the prevalence of ethnicity groups in the data, rather than the output of the encoder. 
Our adversarial setup came with only a minimal performance cost (Table 4) and proved robust both in the generalisability tests (Table 10) and in external validation on highly imbalanced datasets (Section 5.3.2).More experiments (both at the level of data and model) are needed to ascertain whether the same general patterns can be seen under different conditions.Nonetheless, these methods are not tied to the specifics of the Base model and can be applied to any neural architecture.Furthermore, To conclude, in this paper we introduced two effective methods to protect sensitive attributes in a tabular dataset related to the task of predicting COVID-19 PCR test result based on routinely collected clinical data.We demonstrated the effectiveness of adversarial training by assessing the proposed models against a comparable baseline both in the context of the main task where it showed performance scores that were by and large at the same level with the baselines and also in the context of privacy preservation where a trained attacker was employed to retrieve sensitive information by intercepting the content of the models' encoder.In the second scenario, the adversarially trained models consistently showed superior performance compared to the non-adversarial baseline. Figure 1 : Figure 1: Schematic view of privacy attacks for a machine learning model.Dashed lines represent information flow, and full lines signify possible actions. Figure 2 : Figure 2: Overall structure of the proposed model.Each D i is a discriminator that aims to predict any of the d categorical features z i Figure 4 : Figure 4: Distribution of labels for each demographic attribute in TRAIN(-Tr) and TEST(-Ts) sets in OUHIn Section 5.3.2,we will externally validate our models on three NHS Foundation Trust datasets(Soltan et al., 2022), namely Bedfordshire Hospitals NHS Foundation Trust (BH), University Hospitals Birmingham NHS Foundation Trust (UHB), and Portsmouth University Hospitals NHS Trust (PUH).We will use the entire test sets in their original label distribution within the pandemic timeframe to make sure the evaluation is fair and that it mirrors the highly imbalanced data used in hospitals.Table1shows the statistics for the Covid-19 Positive cases in the datasets. Figure 5 : Figure 5: Accuracy scores for the main and each of the three discriminators for each epoch Table 1 : Label distributions for PCR (along with percentage of each label) for UHB, BH, and PUH datasets used for external validation of the models Evaluation at BH considered all patients presenting to Bedford Hospital between January 1, 2021 and March 31, 2021.BH provides healthcare services for a population of around 620, 000 in Bedfordshire.Confirmatory COVID-19 testing was performed by point-of-care PCR based nucleic acid testing [SAMBA-II & Panther Fusion System, Diagnostics in the Real World, UK, and Hologic, USA]. ingham, between December 01, 2019 and October 29, 2020.The Queen Elizabeth Hospital is a large tertiary referral unit within the UHB group which provides healthcare services for a population of 2.2 million across the West Midlands.Confirmatory COVID-19 testing was performed by laboratory SARS-CoV-2 RT-PCR assay. 
Table 2 : Hyperparameter values used for all the experiments learning rate λ batch size hidden dimension (Base) hidden dimension (disc) dropout epochs Table 3 : Clinical parameters included in each feature set Table 5 : Attacker results on the TEST set when trained and tested on features directly.This serves as the upper bound for information leakage Predicted Attribute Recall Precision F1-Score Accuracy Specificity PPV NPV AUC Table 6 : Percentage of majority class labels to the whole data for each demographic attribute Table 7 : Attacker results on the TEST set when trained and tested on the output generated by the encoder of the nonadversarial Base model Since we want to keep the attackers blind to the encoding strategy used by the adversarially trained model, in order to test the attackers on the ADV and ADV per models, we have to use the same threshold set during 10-fold CV on the encoded representation of the Base model.Therefore, we load the attacker which is trained on the non-adversarial encoder on the TRAIN set and test it on the ADV/ADV per model's encoder to predict the three attributes. Table 8 : Attacker results on the TEST set when trained on the encoder of the Base model and tested on the encoder of the ADV model Predicted Attribute Recall Precision F1-Score Accuracy Specificity PPV NPV AUC Table 9 : Attacker results on the TEST set when trained on the encoder of the Base model and tested on the encoder of the ADV per model Table 10 : Results of demographic cross-tests to assess the effects of adversarial training on generalisability across different subgroups of the dataset. Table 11 : Results for the models when trained on OUH and tested on the UHB dataset In this work, we introduced and tested two adversarially trained models for the task of predicting COVID-19 PCR test results based on routinely collected blood tests and vital signs.The data was processed in the form of tabular data. Table 12 : Results for the models when trained on OUH and tested on the BH dataset Table 13 : Results for the models when trained on OUH and tested on the PUH dataset of the ADV model, the protected attributes need not be demographic and theoretically any categorical feature of interest (or any feature that can be meaningfully quantised) can be used during training.Future work can also include experimenting with continuous features, in which the attacker would have to guess the features in a regression task.
8,464.4
2022-01-09T00:00:00.000
[ "Computer Science" ]
Estimation of Handheld Ground-Penetrating Radar Antenna Position with Pendulum-Model-Based Extended Kalman Filter : Landmines and explosive remnants of war are a significant threat in tens of countries and other territories, causing the deaths or injuries of thousands of people every year, even long after military conflicts. Effective technical means of remote detecting, localizing, imaging, and identifying mines and other buried explosives are still sought and have a great potential utility. This paper considers a positioning system used as a supporting tool for a handheld ground penetrating radar. Accurate knowledge of the radar antenna position during terrain scanning is necessary to properly localize and visualize the shape of buried objects, which helps in their remote classification and makes demining safer. The positioning system proposed in this paper uses ultrawideband radios to measure the distances between stationary beacons and mobile units. The measurements are processed with an extended Kalman filter based on an innovative dynamics model, derived from the model of a pendulum motion. The results of simulations included in the paper prove that using the proposed pendulum dynamics model ensures a better accuracy than the accuracy obtainable with other typically used dynamics models. It is also demonstrated that our positioning system can estimate the radar antenna position with the accuracy of single centimeters which is required for appropriate imaging of buried objects with the ground penetrating radars. Introduction The presence of landmines and explosive remnants of war (ERW), such as artillery shells, grenades, rockets, bombs, and cluster munition remnants, poses a significant worldwide threat in the areas of current and past military conflicts.It results in deaths and injuries of mostly civilian victims even many years after the wars. According to the yearly reports of the Landmine Monitor [1,2], providing a global overview of the landmine situation, tens of millions of landmines are still buried underground in at least 60 countries and other territories.Only a single year 2021 brought 7073 casualties of mines/ERW (2492 killed and 4561 injured) in 54 different countries, and 80% of the victims were civilians [1][2][3]. Considering the significance of the problem, efficient methods of mine clearance are still tough.Currently, various metal detectors (MD) are often used for this purpose, and contemporary MDs offer excellent parameters, enabling the detection of even very small and deeply buried metal objects [4][5][6][7][8][9].Paradoxically, this high sensitivity can be also their drawback leading to many false detections which lengthen the time necessary for demining.Moreover, MDs do not offer any way to initially identify or classify the detected objects and every detection must be carefully examined.What is even more problematic and dangerous, not all contemporary landmines and ERWs contain metal elements, which limits the usefulness of MDs in mine clearance operations. 
In military applications, GPRs can be installed on large armored manned vehicles with enhanced immunity to nearby explosions [31][32][33][34][35].For increased safety of the crew, the radar antennas are usually attached to the end of long arms in front of the vehicle.A good alternative is mounting GPRs on remotely controlled unmanned wheeled vehicles [36] or tracked vehicles [30,35,37,38], which eliminates the risk for the crew, and reduces the costs of the purchase and the exploitation of such systems.The GPRs on vehicle platforms, however, have limited utility in difficult terrain: mountainous areas, forests, dumps, urban surfaces covered with debris, or interiors of buildings, where landmines and other explosives can be typically found.A good solution applicable in such areas is a handheld version of the ground penetrating radar (HH-GPR) [39][40][41][42].The problem of estimating the antenna position of such type of radar is addressed in this paper. The GPR operation requires emitting electromagnetic energy in the direction of the ground.The transmitted radio waves penetrate near-surface layers of the soil and encounter on their way various objects and layers of different permittivity ε and conductivity σ, which results in reflecting and scattering back a portion of the transmitted energy.The echo signals are received, collected, and processed to detect and create images of buried objects. Most contemporary GPRs are pulse radars [15,16,22,24,43,44], transmitting repeatable, very short, high-amplitude pulses and receiving strongly attenuated echo signals reflected or scattered back from layers' boundaries and buried objects [3,45,46] as shown in Figure 1.Time delays of subsequent peaks in the received echo signals are proportional to the depth of the detected objects or layers of different permittivity.Collecting and joint processing multiple echo signals, so-called echograms or radargrams, for a GPR moving along a predefined scanning path enables locating and imaging those objects and layers [3,15,16]. Three types of visual presentations of GPR radargrams are used in practice [3,14,16,21].A single echogram was obtained for only one GPR antenna position with coordinates (i, j) is a one-dimensional signal representation, called an A-scan (Figure 2a).Time delays of the signal peaks in the A-scan are usually converted into respective depths and the Z-axis is scaled in the distance units [21,23,39,46,47].An analysis of GPR data is typically based on a two-dimensional signal representation, called a B-scan, which is a dataset created from many A-scans acquired for various antenna locations along a usually linear scanning path, as shown in Figure 2b.It represents a radar image of a vertical surface intersecting the scanned terrain volume below the scanning path.Due to a relatively large GPR antenna beamwidth, the same buried objects are illuminated many times from different antenna locations and consequently from different distances.Therefore, the echo signals form hyperbolic structures visible in the B-scans [14,23,39,46,47].An example of such a structure for a single-point object is shown as a red hyperbole in Figure 2b. Collecting A-scans for multiple antenna locations in the nodes of a grid span onto the OXY surface, one can create another type of GPR signal visual presentation, called a C-scan (Figure 2c).This is a three-dimensional signal representation, which is very useful in visualizing, identifying, and classifying buried objects. 
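As a concrete illustration of how the time delays in an A-scan map to depth, the sketch below applies the usual two-way travel-time conversion d = v·t/2 with v = c/√εr. The relative permittivity value is an assumed example (dry sand is often quoted around εr ≈ 4-6), not a value taken from this paper.

```python
# Sketch of the time-to-depth conversion mentioned above: the two-way travel
# time of an echo is mapped to depth using the wave velocity in the soil,
# v = c / sqrt(eps_r).  eps_r = 5.0 is an assumed example value.
C = 0.2998  # speed of light, m/ns

def echo_depth_m(two_way_time_ns: float, eps_r: float = 5.0) -> float:
    v = C / eps_r ** 0.5          # propagation velocity in the medium, m/ns
    return v * two_way_time_ns / 2.0

for t in (2.0, 5.0, 10.0):        # example two-way delays in nanoseconds
    print(f"t = {t:4.1f} ns  ->  depth ~ {echo_depth_m(t):.2f} m")
```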
The C-scans are often presented as a set of two-dimensional greyscale or color images, created as horizontal sections through the C-scan volume on various depths [3,21,48].An example of such a single image is shown in Figure 3.The presented scans were made using a pulse radar produced by IDS GeoRadar company, containing a DAD K2 control unit and an antenna with a central frequency equal to 900 MHz.This radar made 850 soundings per second, the duration of the probing pulse was about 1 ns, and the obtained spatial resolution was about 5 cm. Knowledge of accurate positions of a GPR antenna moving along a scanning path is necessary to properly assemble all the acquired radargrams and create high-quality GPR B-scans or C-scans.Several scientific papers [44,49] and patents [50] suggest that the GPR antenna positioning accuracy should be better than one eight of a radar signal wavelength [51].As typical GPRs work at a frequency range between 400 MHz and 4 GHz (wavelengths from 7.5 to 75 cm) [49,51], the antenna positioning accuracy should be of the order of single centimeters which requires using very high-accuracy navigation systems. As most of the listed above devices or systems are not adequate for HH-GPRs, due to their large size, weight, specific installation requirements, vulnerability to jamming or signal shadowing, and too low accuracy, the authors of this paper proposed a system based on several ultrawideband (UWB) radio modules.This concept was first described in an authors' conference paper [54], where physical models of a mobile unit and UWB beacons were presented.The mentioned paper also contained a description of an autocalibration procedure, used for self-locating the UWB beacons for quickly establishing a frame of reference before the scanning process, and presented an initial assessment of the system's accuracy which in the scanning zone reaches desired level of 2-3 cm. In another authors' conference paper [41], it was claimed and demonstrated that the accuracy of the UWB positioning system can be further improved with a properly chosen estimation algorithm.In that paper, using an extended Kalman filter (EKF) based on a GPR antenna motion model, derived from the mathematical pendulum motion model, was proposed.The mentioned paper, however, contained a proof of concept rather than a complete and applicable positioning solution, as the proposed pendulum-based dynamics model used in the EKF was oversimplified to present the main idea only.It assumed that the attachment point of the "pendulum", which is the position of a GPR operator's arm, is initially known and that the angle of orientation of the main axis of the scanning section always equals zero degrees.These assumptions can hardly be met in practice.Moreover, the mentioned conference paper contained only a sketch of the system's model and very limited results of its simulative testing. This paper can be considered a significantly extended version of the above-mentioned conference paper.It presents an elaborated, practically applicable version of the GPR antenna positioning system using UWB radio modules and includes a complete description of its extended mathematical model and detailed results of its simulative testing.The main novelty of this paper includes: 1. Elaboration and detailed presentation of an advanced and practically applicable dynamics and observation model of the UWB-based GPR antenna positioning system, with relinquished simplifying assumptions of the model presented in [41]; 2. 
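A quick arithmetic check of the λ/8 rule of thumb quoted above, across the stated GPR frequency range, confirms that the required positioning accuracy is indeed on the level of single centimetres:

```python
# Quick check of the positioning-accuracy rule of thumb quoted above
# (accuracy better than one eighth of the radar wavelength, free-space values).
C = 3.0e8  # m/s
for f_hz in (400e6, 900e6, 4e9):
    wavelength_cm = C / f_hz * 100
    print(f"f = {f_hz / 1e6:>6.0f} MHz: lambda = {wavelength_cm:5.1f} cm, "
          f"lambda/8 = {wavelength_cm / 8:4.1f} cm")
```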
Elaboration and detailed presentation of the estimation algorithm used in the proposed GPR antenna positioning system; 3. Presentation of new and detailed results of simulative tests of the positioning system for various realistic system configurations. This paper is organized as follows.A general concept of the ground penetrating radar, types of GPR data visualizations, accuracy requirements for GPR antenna positioning, technologies used for GPR positioning, previous authors' works in this field, and a discussion of the novelty of this paper are presented in Section 1.The system's description, its mathematical model, and the estimation algorithm elaborated by the authors are presented in Section 2. The methodology and the results of simulative testing of the GPR antenna positioning system are presented in Sections 3 and 4 contain a discussion. Scanning Profiles As has already been mentioned, creating B-scans requires moving a GPR antenna over the ground, ideally along a linear scanning path (profile) with constant velocity, to collect linearly arranged and uniformly separated radargrams.Creating C-scans requires repeating such scanning (profiling) for many equidistant lines in one direction, as shown in Figure 4a, or bi-directionally, as shown in Figure 4b, where the antenna position is marked as a letter A [7,15,16].In multichannel GPRs, with several equidistant antennas, the profiling can be realized quicker, unidirectionally (like in Figure 4a) for several scanning paths at a time.Although in favorable conditions the profiling shown in Figure 4 can be at least approximately realized with GPRs installed on vehicles (carefully driven or remotely controlled, in non-demanding terrain and with the use of an accurate supporting navigation system), this can hardly be achieved with HH-GPRs.The elements of the scanning path, in this case, are shown in Figure 5, where the letters A and S represent the positions of the antenna and the sapper. UWB Positioning System The structure of the HH-GPR antenna positioning system proposed in this paper is shown in Figure 6.It is composed of four stationary modules M 1 ÷ M 4 serving as radio beacons and two mobile modules M A and M S .The M A module is installed over the GPR antenna and the M S module over the sapper's shoulder.All the modules contain UWB transceivers.Distance measurements realized by these transceivers are collected and processed using estimation algorithms described in the further part of the paper.The following variables are used in Figure 6: d Aj -distance between a j-th beacon and the antenna module M A , d Sj -distance between a j-th beacon and the sapper module M S , x j , y j -coordinates of a j-th beacon position, x A , y A -coordinates of the M A module position, x S , y S -coordinates of the M S module position, l-length of the HH-GPR handle (horizontal distance between M S and M A ), θ-angle between the horizontal projection of the GPR antenna handle and the central axis of the scanning section. We assumed that the UWB radios used in our system are PulsON P440 modules from TDSR [55].They use the two-way time-of-flight (TW-TOF) method for ranging and offer an operating range between 300 and 1100 m and a ranging accuracy of about 2 cm in line of sight (LOS) conditions.Such parameters give the potential to build a positioning system with the desired centimeter-level accuracy, required in the considered application of the HH-GPR antenna positioning. 
The placement of beacons outside a potentially hazardous area, as shown in Figure 6, is only one of the possible options, suggested for quick and easy deployment of the system in the terrain. Other beacon locations are also possible, and their relative positions with respect to the mobile units MA and MS influence the accuracy of the UWB positioning system, which will be discussed in detail in the Results section of the paper.
Mathematical Model
As can be seen in Figure 5, the scanning profiles are composed of fragments that resemble arcs rather than straight sections. Moreover, the velocity of the HH-GPR antenna is more changeable than in GPRs installed on vehicle platforms, as typically an operator (sapper) performs a swinging motion, initially accelerating and finally decelerating the antenna. Therefore, the collected radargrams are neither linearly arranged nor uniformly separated. Nevertheless, the acquired A-scans can be used to create two- or three-dimensional GPR visualizations of buried objects provided that the antenna positions are known for all the collected radargrams [3,15,16].
A single arc belonging to the scanning profile is shown in Figure 7. If we consider the changeable angular velocity of the antenna motion (initially accelerating and finally decelerating), such a trajectory resembles the motion of a mathematical pendulum [56], and can be described by the pendulum-type formula
d²θ/dt² = -(a/l)·sin θ,
where:
θ - angle between the horizontal projection of the GPR antenna handle and the central axis of the scanning section,
a - acceleration forcing the HH-GPR antenna (MA module) motion,
l - length of the HH-GPR handle (horizontal distance between MS and MA).
The acceleration a is analogous to the gravity acceleration g in the mathematical pendulum motion model. Contrary to g, which can be considered a constant, the acceleration a is more changeable and to a large extent depends on the operator's strength, fatigue, style of HH-GPR operation, etc.; thus we treat it as an additional variable to be estimated and we model it as a Wiener stochastic process [57-59]. Considering the geometrical relationships shown in Figure 7, the antenna and the sapper's arm motion can be formulated as the system of Equation (2), in which:
xA, yA - coordinates of the HH-GPR antenna (MA module) position,
xS, yS - coordinates of the sapper's arm (MS module) position,
uxS, uyS - Gaussian white noises representing random components of the sapper's arm (MS module) motion,
l - length of the HH-GPR handle (horizontal distance between MS and MA),
θ - angle between the horizontal projection of the GPR antenna handle and the central axis of the scanning section,
γ - angle between the horizontal projection of the GPR antenna handle and the OY axis of the frame of reference,
ω - angular velocity of the HH-GPR antenna (MA module) motion,
a - acceleration forcing the HH-GPR antenna (MA module) motion,
ua - Gaussian white noise representing random changes of a.
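To make the pendulum-style motion model concrete, the following minimal sketch (not the authors' code) shows how the angular dynamics and the antenna position could be evaluated numerically. The state layout, the handle length value, and the way the angle γ enters the geometry are illustrative assumptions; only the pendulum-like relation between θ, a and l follows the description above.

```python
import numpy as np

# Hypothetical state layout for illustration: x = [x_S, y_S, theta, omega, a],
# where (x_S, y_S) is the sapper-arm (M_S) position, theta/omega the handle
# angle and angular rate, and a the forcing acceleration (Wiener process).
def pendulum_state_derivative(x, l=1.2):
    x_s, y_s, theta, omega, a = x
    dx = np.zeros_like(x)
    dx[0] = 0.0                          # sapper-arm drift enters via process noise only
    dx[1] = 0.0
    dx[2] = omega                        # d(theta)/dt
    dx[3] = -(a / l) * np.sin(theta)     # pendulum-like angular acceleration
    dx[4] = 0.0                          # a is nearly constant; driven by white noise u_a
    return dx

def antenna_position(x, l=1.2, gamma=0.0):
    # One plausible parameterization: M_A at the end of the handle of length l,
    # rotated by (theta + gamma) with respect to the frame of reference.
    x_s, y_s, theta, _, _ = x
    return np.array([x_s + l * np.sin(theta + gamma),
                     y_s + l * np.cos(theta + gamma)])
```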
Rewriting Equation (2) to fit it into the standard form of a nonlinear continuous dynamics model [60-63], one obtains the detailed version of this model, Equation (4), which has been further used in our estimation algorithm.
The nonlinear observation model, written in the standard form of [60-62], has been formulated assuming that at every step k the UWB positioning system realizes four distance measurements between the j-th beacon and the antenna module MA,
dAj = sqrt((xj − xA)² + (yj − yA)²) + vAj,  j = 1, …, 4,  (6)
and four distance measurements between the j-th beacon and the sapper module MS,
dSj = sqrt((xj − xS)² + (yj − yS)² + h²) + vSj,  j = 1, …, 4,  (7)
where:
dAj - distance between the j-th beacon and the antenna module MA,
dSj - distance between the j-th beacon and the sapper module MS,
xj, yj - coordinates of the j-th beacon position,
xA, yA - coordinates of the MA module position,
xS, yS - coordinates of the MS module position,
h - sapper's arm height,
vAj, vSj - distance measuring errors for the MA and MS modules.
A detailed version of the observation model, which has been further used in our estimation algorithm, is given as Equation (8). As the antenna module MA is kept close to the soil during scanning, and the differences between the slant distances dAj and their horizontal projections are very small, we assumed that its altitude over the ground can be omitted in the observation model. On the other hand, the MS module is placed over the ground on the sapper's arm, and its altitude h is non-negligible. In our model, we assumed that it is constant, as its changes in the order of centimeters during the system's operation can be neglected for the typical distances from the UWB beacons, which are in the order of tens of meters. In a real system the altitude h can be a settable constant, adjusted before using the system, based on the sapper's height.
Estimation Algorithm
An extended Kalman filter for HH-GPR antenna position estimation was designed based on the previously described dynamics and observation models, and its flowchart is shown in Figure 8. After initialization of the EKF at step k = 0, or after closing each subsequent filter loop at steps k > 0, the filter alternately performs prediction and correction steps. The prediction step requires previous calculation of the fundamental matrix F, the transition matrix Φ, and the covariance matrix of disturbances Q at every step k. The method of calculating the F matrix (more precisely it is F(k−1), but to shorten the notation the index k − 1 will be omitted in further equations) is explained in Appendix A. Using the calculated F matrix and the G matrix from Equation (4), the Φ(k,k−1) and Q(k−1) matrices are obtained following [60-62], where Δt is the period between two successive time steps k − 1 and k.
The Qc matrix from Equation (11) represents the covariance matrix of continuous disturbances, which is a 3-by-3 diagonal matrix containing the power spectral densities SxS, SyS and Sa of the noises uxS, uyS and ua composing the disturbances vector u(t) in Equation (4). The predicted state vector x̂(k|k−1) is calculated in accordance with the general continuous prediction equation [64,65], but in practical calculations we use Heun's numerical integration method [65-68], where x̂(k−1|k−1) is the final state vector estimate from the previous step k − 1.
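The following minimal sketch predicts the eight UWB ranges implied by the observation model above (the explicit square-root forms are our reading of Equations (6)-(7)). The beacon layout, the arm height value of 1.5 m, and all function names are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def predict_ranges(x_a, y_a, x_s, y_s, beacons, h=1.5):
    """Predict the 8 UWB ranges: 4 antenna-to-beacon (planar, antenna assumed at
    ground level) followed by 4 sapper-to-beacon (slant, constant arm height h)."""
    z = []
    for (xj, yj) in beacons:                                   # d_Aj
        z.append(np.hypot(xj - x_a, yj - y_a))
    for (xj, yj) in beacons:                                   # d_Sj
        z.append(np.sqrt((xj - x_s) ** 2 + (yj - y_s) ** 2 + h ** 2))
    return np.array(z)

# Example with an arbitrary beacon layout along the OX axis
beacons = [(-15.0, 0.0), (-5.0, 0.0), (5.0, 0.0), (15.0, 0.0)]
print(predict_ranges(0.0, 60.0, 0.2, 58.8, beacons))
```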
Apart from the predicted state vector x̂(k|k−1), the covariance matrix of prediction errors P(k|k−1) is calculated based on the covariance matrix of filtration errors P(k−1|k−1) from the previous step k − 1 as follows [60-62]:
P(k|k−1) = Φ P(k−1|k−1) Φᵀ + Q,
where we use the mentioned matrices Φ and Q. The correction step requires previous calculation of the observation matrix H at every step k, and the method of its calculation is explained in Appendix B. This step involves a calculation of the Kalman gain matrix K(k), a correction of the predicted state vector x̂(k|k−1) based on the current measurement vector z(k), which produces the final estimate x̂(k|k) at step k, as well as a calculation of the covariance matrix of filtration errors P(k|k), and these operations are realized as follows [60-62]:
K(k) = P(k|k−1) Hᵀ(k) [H(k) P(k|k−1) Hᵀ(k) + R(k)]⁻¹,
x̂(k|k) = x̂(k|k−1) + K(k) [z(k) − h(x̂(k|k−1))],
P(k|k) = [I − K(k) H(k)] P(k|k−1).
The R(k) matrix in Equation (16) is the covariance matrix of measurement errors [60,61], which is formed as an 8-by-8 diagonal matrix containing the variances of all eight distance measurements performed between pairs of UWB modules in the positioning system, where σ²Aj and σ²Sj represent the variances of the distance measurements between the j-th beacon and the antenna module MA or the sapper module MS.
Alternative Positioning Algorithms
Apart from the proposed pendulum-model-based EKF, simpler algorithms can also be used to estimate the HH-GPR antenna position. One possible solution is a non-linear least squares (NLS) algorithm [57,69,70], which processes a vector z(k) of distance measurements collected at each step k without using the previously estimated state vector and without filtration. Such an algorithm does not use any dynamics model either. The NLS requires an initialization by assigning at least coarse values to the antenna coordinates xA and yA; subsequently, it improves their estimates iteratively. This algorithm is simple, but due to the lack of filtration its accuracy is not high.
Better estimation results can be obtained by using EKF filters based on nearly-constant-velocity (CV) or nearly-constant-acceleration (CA) dynamics models [57,71-74], which are typically applied in navigation and radiolocation. The CV model (20) assumes a rectilinear uniform motion, whereas the CA model (21) assumes a uniformly accelerated motion, and, in both cases, small disturbances of these ideal movements are modeled by the vector u(t). For the CA model, with the state vector [xA, vx, ax, yA, vy, ay]ᵀ, the dynamics matrix reads
| 0 1 0 0 0 0 |
| 0 0 1 0 0 0 |
| 0 0 0 0 0 0 |
| 0 0 0 0 1 0 |
| 0 0 0 0 0 1 |
| 0 0 0 0 0 0 |
with the analogous 4-by-4 matrix (without the acceleration rows and columns) used for the CV model, where:
xA, yA - coordinates of the antenna position,
vx, vy - components of the antenna velocity,
ax, ay - components of the antenna acceleration,
uvx, uvy - Gaussian white noises representing random disturbances of the CV motion,
uax, uay - Gaussian white noises representing random disturbances of the CA motion.
Clearly, the CV and CA models do not fit the actual HH-GPR antenna motion ideally, but nevertheless they can be used for prediction in EKFs. Such filters are not optimal, but they are simpler than the EKF presented in Figure 8 because both dynamics models are linear, and the prediction of the state vector is realized as in a linear Kalman filter [60-62], i.e., x̂(k|k−1) = Φ x̂(k−1|k−1). Moreover, the transition matrix Φ and the covariance matrix of disturbances Q can be calculated in advance, before the filter implementation, using the simple formula from [62], and they remain constant during the filters' operation. Thus, such EKFs do not require in-run calculation of the Jacobian matrix F and the matrices Φ and Q.
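Before comparing the algorithms, the sketch below assembles one predict-correct cycle of the pendulum-model EKF described above. It uses Heun's rule for the state prediction and a simple first-order discretization for Φ and Q; the paper's own formulas for these matrices and the Jacobians from the appendices are not reproduced here, so treat this as a generic textbook variant rather than the authors' implementation.

```python
import numpy as np

def ekf_step(x_est, P, z, dt, f, h, jac_F, jac_H, G, Qc, R):
    """One EKF cycle. f/h are the dynamics derivative and range model,
    jac_F/jac_H their Jacobians, G the noise input matrix, Qc the continuous
    disturbance PSD matrix, R the 8x8 measurement covariance."""
    n = len(x_est)

    # --- prediction (Heun's explicit trapezoidal rule for the state) ---
    k1 = f(x_est)
    k2 = f(x_est + dt * k1)
    x_pred = x_est + 0.5 * dt * (k1 + k2)

    F = jac_F(x_est)
    Phi = np.eye(n) + F * dt                     # first-order transition matrix
    Q = G @ Qc @ G.T * dt                        # first-order discretized process noise
    P_pred = Phi @ P @ Phi.T + Q

    # --- correction with the current measurement vector z ---
    H = jac_H(x_pred)
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)          # Kalman gain
    x_upd = x_pred + K @ (z - h(x_pred))
    P_upd = (np.eye(n) - K @ H) @ P_pred
    return x_upd, P_upd
```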
All the mentioned algorithms, the NLS and the EKFs based on the CV and CA models, have been implemented by the authors and tested to compare them with the previously described pendulum-model-based EKF. Further in the paper, the following acronyms will be used for these algorithms: NLS, EKF-CV, EKF-CA, and EKF-PND.
Although the EKF-CV and the EKF-CA are not optimal, their accuracy can be maximized by choosing appropriate power spectral densities of the disturbances: Svx, Svy of the uvx, uvy noises in the CV model, or Sax, Say of the uax, uay noises in the CA model. Their choice affects the values of the Q matrices and consequently influences the information quality [75], notably the estimation errors of the filters. The process of choosing the filters' parameters and minimizing their estimation errors is called "tuning the Kalman filter" [76], and it was realized in the case of the EKF-CV and EKF-CA designed in our research.
Results
The described HH-GPR antenna positioning system and the proposed pendulum-model-based EKF were implemented and simulatively tested in MATLAB® version R2022b. The simulations included an assessment of the dependence of the system's accuracy on the positions of the UWB stationary modules M1÷M4 deployed around the area of interest, where the demining process is going to be performed. The results of these analyses are presented in Section 3.1. In further experiments, the accuracy of the EKF-PND filter was analyzed for chosen scanning sections. This accuracy was also compared with the accuracy of the NLS, EKF-CV, and EKF-CA algorithms. The results of these tests and accuracy comparisons are presented in Section 3.2.
Influence of UWB Beacons' Locations on System's Accuracy
Possible locations of the UWB stationary modules M1÷M4 are to a large extent dependent on the terrain characteristics and the obstacles present around the scanning area. The sapper usually cannot place them freely when he deploys the system's elements in a previously unsearched and potentially hazardous terrain. When approaching a minefield, he usually knows which part of the terrain is free of explosives and where the search should start. Thus, the most typical and safe locations for placing the UWB beacons lie in front of and on the sides of the minefield, as shown in Figure 6. Such a system geometry is certainly not optimal from the accuracy point of view, but even under the mentioned limitations, the actual placement of M1÷M4 may significantly influence the positioning accuracy in various areas of the minefield.
To verify the mentioned dependence of the positioning accuracy on the locations of the UWB stationary modules, three system configurations with different locations of the M1÷M4 modules were considered. The assumed positions of the modules are given in Table 1 and are graphically presented in Figure 9.
Table 1. Locations of the UWB stationary modules for different system configurations.
Configuration Number / Coordinates of the UWB Modules
We assumed that an area of 500 m × 500 m lying in front of the UWB beacons is divided by a grid with cells of 10 m × 10 m each. For every node of this grid, a set of ten thousand UWB measurements was generated in MATLAB®, and its position was estimated using a simple iterative NLS algorithm, without any Kalman filtration. Based on the parameters of the P440 modules declared by their producer [55], we assumed that the UWB measurement errors have a zero-mean Gaussian distribution with a standard deviation of 2 cm. Next, the RMS errors (RMSE) for each node were calculated, and the obtained results for the three system configurations are shown as colormaps in Figure 10. As the RMSE is very large in the vicinity of the UWB modules, the colormap is presented for the area where the y coordinate is larger than or equal to 50 m. In practice, it means that the actual placement of the UWB beacons should be in the foreground of the minefield, far enough ahead of its border, to ensure that the positioning accuracy in the planned search zone is high. As can be seen, the smallest positioning errors are achievable in front of the place where the UWB stationary modules M1÷M4 are located. The high-accuracy zone is wider and deeper for a more extended baseline of the positioning system.
Mean and maximal RMSE values for the whole area of 450 m × 500 m and for smaller areas, limited to the nearest 100 m × 100 m and 50 m × 50 m, respectively, in front of the UWB stationary modules M1÷M4 are given in Table 2. As can be seen, the proposed positioning system can provide a centimeter level of accuracy in areas large enough for practical demining tasks and for reasonable, practically realizable system configurations. The high-accuracy zone could certainly be extended if the UWB stationary modules were more distributed around the area to be scanned; however, for safety reasons this cannot always be achieved.
Positioning Accuracy
The accuracy of the EKF-PND filter was assessed and compared with the accuracy of the EKF-CV and EKF-CA filters, as well as with the accuracy of an NLS algorithm, in MATLAB® for the C1 configuration of the system. The dynamics and observation models given by Equations (4) and (8) were used to generate the antenna trajectories, and the parameters chosen during the simulations are given in Table 3. The choice of these parameters was made in such a way that the shape and duration of the resulting antenna trajectory resemble typical HH-GPR antenna trajectories. The same standard deviations of all the distance measuring errors, σAj = σSj = σ, and the power spectral densities SxS, SyS, Sa given in Table 3 were used in the EKF-PND for setting the values of the R and Q matrices. The EKF-CV and EKF-CA also use σAj = σSj = σ as given in Table 3, but as their dynamics models are different, their Q matrices required finding the power spectral densities of different noises, Svx, Svy or Sax, Say. This was done during the mentioned tuning process; the obtained values for the CA model are Sax = Say = 6.1·10⁻³ m²/s⁵.
Firstly, the results of the antenna position estimation with the EKF-PND and the NLS algorithm were compared for various orientations of the scanning sections, and chosen results of these tests are presented in Figure 11. These experiments confirmed that the EKF-PND filter works properly and achieves a similar accuracy for various orientations of the central axis of the scanning section.
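Returning to the beacon-placement analysis above, a compact way to reproduce the per-node RMSE evaluation is sketched below: for a chosen true position, noisy ranges to the beacons are drawn with σ = 2 cm, a few Gauss-Newton iterations of the NLS range fit recover the position, and the RMSE over many trials is reported. The beacon layout, initial guess, and trial count are illustrative assumptions only.

```python
import numpy as np

def nls_fix(ranges, beacons, p0, iters=10):
    """Iterative (Gauss-Newton) non-linear least-squares position fix."""
    p = np.array(p0, dtype=float)
    for _ in range(iters):
        diffs = p - beacons                       # (n_beacons, 2)
        pred = np.linalg.norm(diffs, axis=1)
        J = diffs / pred[:, None]                 # Jacobian of predicted ranges
        dp, *_ = np.linalg.lstsq(J, ranges - pred, rcond=None)
        p += dp
    return p

def node_rmse(p_true, beacons, sigma=0.02, trials=10_000, seed=0):
    rng = np.random.default_rng(seed)
    true_ranges = np.linalg.norm(p_true - beacons, axis=1)
    sq_err = 0.0
    for _ in range(trials):
        z = true_ranges + rng.normal(0.0, sigma, size=len(beacons))
        p_hat = nls_fix(z, beacons, p0=p_true + 5.0)   # coarse initial guess
        sq_err += np.sum((p_hat - p_true) ** 2)
    return np.sqrt(sq_err / trials)

beacons = np.array([[-15.0, 0.0], [-5.0, 0.0], [5.0, 0.0], [15.0, 0.0]])
print(node_rmse(np.array([0.0, 100.0]), beacons))   # RMSE in meters at one grid node
```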
Next, a closer inspection of the estimation results was made for all the implemented algorithms for a chosen orientation of the central axis of the scanning section equal to 45°. A comparison of the HH-GPR antenna positions estimated with NLS, EKF-CV, EKF-CA, and EKF-PND is presented in Figure 12. As can be seen, all these algorithms are capable of properly estimating the antenna position; however, their accuracies are noticeably different and required further analysis, which is presented below.
At this step of the simulations, the results of the estimation of the other elements of the state vector x from the dynamics model given by Equation (4) were analyzed, and they are presented in Figures 13-15. The angle θ between the horizontal projection of the antenna handle and the central axis of the scanning section, estimated with the EKF-PND filter, is shown in Figure 13. The results of the angular velocity estimation are presented in Figure 14. Figure 15 contains an estimate of the acceleration a forcing the HH-GPR antenna movement. All these figures contain only results for the EKF-PND, as the other algorithms do not estimate variables such as θ, ω, and a.
To better compare the accuracy of estimation with the various algorithms, we conducted a series of ten thousand simulations and calculated the average RMS antenna position errors over the whole scanning sections for each realization of the simulations. The obtained RMSE values are shown in Figure 16. Single points in various colors are the RMS antenna position errors obtained with NLS, EKF-CV, EKF-CA, and EKF-PND. Although they vary between simulation runs, they form bands on noticeably different levels. Based on the above results we created a histogram of the RMS antenna position errors for NLS, EKF-CV, EKF-CA, and EKF-PND, which is presented in Figure 17. From this and the previous figure, one can conclude that the EKF-PND is more accurate than all other tested algorithms, and that the EKF-CV and EKF-CA perform similarly, but still better than the NLS algorithm. The EKF-CA is slightly more accurate than the EKF-CV. A comparison of the numerical values of the average RMS antenna position errors over all the realizations for the NLS, EKF-CV, EKF-CA, and EKF-PND algorithms is given in Table 4. This table also presents the percentage improvements of accuracy for EKF-CV and EKF-CA vs. NLS, and for EKF-PND versus all other algorithms. As can be seen, in the chosen simulation scenario, the EKF-PND provides positioning results about 40% more accurate than the other tested EKFs and about 60% better than NLS.
Discussion
In this paper, an accurate positioning system dedicated as a supporting tool for a handheld ground penetrating radar was presented. The system uses ultrawideband radio technology for accurate distance measurements and processes them to estimate the GPR antenna position. Various estimation algorithms were used for this purpose: from NLS, through simple EKFs (EKF-CV and EKF-CA) based on the CV and CA dynamics models typically used in radiolocation and navigation, to the EKF-PND, based on the dynamics model proposed by the authors and derived from the model of pendulum motion.
The results of the simulations included in the paper have demonstrated that the proposed positioning system can provide the desired centimeter level of accuracy in areas large enough for practical demining tasks. They have also shown how the actual placement of the UWB beacons influences the system's accuracy. It turns out that the smallest positioning errors are achievable at some distance in front of the area where the beacons are located, and that the high-accuracy positioning zone is wider and deeper for a more extended baseline of the system.
Further experiments have confirmed that the EKF-PND filter works properly for various orientations of the central axis of the scanning section and have proved that using the proposed pendulum dynamics model ensures a better accuracy than the accuracy obtainable with the other typically used dynamics models, CV and CA. The simulations have shown that the EKF-PND provides positioning results about 40% more accurate than the other tested EKFs (EKF-CV and EKF-CA) and about 60% better than NLS.
The final shape of the fundamental matrix F(k−1) can be obtained by placing all its individual elements, given by Equations (A2)-(A10), at the appropriate positions in (A1), and it is given below as Equation (A11). The final shape of the observation matrix H(k), obtained by placing all its individual elements, given by Equations (A13)-(A16), at the appropriate positions in (A12), is given below as Equation (A17).
Figure 1. General idea of GPR operation.
Figure 3. Examples of a horizontal section through a C-scan: (a) color image; (b) grayscale image.
Figure 7. Part of HH-GPR antenna trajectory (a single arc of the scanning profile).
Figure 8. Flowchart of the Extended Kalman Filter used for HH-GPR antenna position estimation.
Figure 9. Locations of the UWB stationary modules for different system configurations.
Figure 13. Angle between the horizontal projection of the antenna handle and the central axis of the scanning section estimated with EKF-PND.
Figure 14. Angular velocity of the antenna motion estimated with EKF-PND.
Figure 15. Acceleration estimated with EKF-PND.
Table 2. Mean and maximal RMSE values for areas of various sizes.
Table 3. Parameters used in dynamics and observation models during simulations.
7,911.4
2023-01-27T00:00:00.000
[ "Engineering" ]
Digital Peer-Tutoring: Early Results from a Field Evaluation of a UX at Work Learning Format in SMEs
Digital Peer-Tutoring is a new learning format that enables production workers in Small to Medium Sized Enterprises (SMEs) to co-design their interaction with assistive technologies such as collaborative robots. The video-based learning format is based on design thinking and helps shop floor workers create and document solutions to robot interaction problems, and share their how-to knowledge with their colleagues. Early field evaluation results indicated that workers benefit from the Digital Peer-Tutoring learning format and produced how-to videos for their colleagues. Furthermore, the Digital Peer-Tutoring learning format was also found useful by the company management and ownership as a means of documentation and customer communication. Thus, the learning format can also support SMEs on their path to digitalization.
Introduction
In this paper we propose a new learning format, 'Digital Peer-Tutoring', as a means to design and share solutions to worker-technology interaction problems in small to medium sized enterprises (SMEs). Peer tutoring has long been suggested as a way to help students deal with design problems [13]. Design, understood here as design thinking [4], is an iterative process consisting of generative and evaluative stages, which eventually converge on a solution to the design problem. Design thinking is typically applied to solve non-routine, wicked problems in an organization, when there is a need for novel how-to knowledge. To engage in the creation and sharing of new how-to knowledge requires hands-on experience, which is where peer tutoring becomes very helpful. The new learning format Digital Peer-Tutoring aims to help workers interacting with collaborative robots on the shop floor to use digital media to engage in teaching and learning with colleagues about their user experiences. We ask the questions: Can a Digital Peer-Tutoring learning format enable shop floor workers to design positive UXs for themselves and their colleagues? What kind of ethical stance is implied by the use of Digital Peer-Tutoring?
The paper reports from the initial part of a research project aiming to develop a Digital Peer-Tutoring learning format for shop floor workers in SMEs. The project aims to develop capabilities among shop floor workers to design and document, with short videos, solutions to operational and collaboration issues related to collaborative robots.
The research is situated within the KomDigital regional development project that brings together 18 of the Copenhagen Capital Region's companies, unions, employer associations, and educational institutions. The project aims to improve digital competencies in a broad sense among the employers and employees in SMEs, thereby enabling the companies to adopt and implement digital technologies. The target companies come from all sectors, including construction and building, small scale production, product development, and finance, and the technologies include data mining and analysis, collaborative robots and other forms of production automation, AI based financial advice, and more. KomDigital achieves its goals through new digital learning formats, which can be made available to target companies and organizations. The formats are tailored to the working conditions and needs of companies and employees, so that employees, managers, companies and organizations can use new digital technologies to expand and grow.
2 Related work
Digitalization in SMEs
SMEs depend on their workers' knowledge and innovative capabilities to create new ways of working with technology, and they generally lack the capability and capacity for comprehensive digital transformation [6,9]. Collaborative robots that work alongside a human worker can be integrated into the production without radical reconfiguration or automation of established workflows. A human worker can program a collaborative robot to perform tasks such as lift, pick and place, move, or otherwise process physical objects [5,12,14]. Thus, worker-designed interaction with collaborative robots and other assistive technologies is a useful first step towards digitalization in an SME.
Peer tutoring
Peer tutoring [7,13] overlaps somewhat with other notions of providing informal technical help between colleagues, such as over-the-shoulder learning [17], over-the-shoulder guidance in tertiary education [2], peer-assisted learning [8] and peer teaching [15] in the medical domain, and over-the-shoulder appropriation [1] and peer interaction [10] in software development. In this paper we build primarily on the approach put forward by Twidale [17], in that we aim to support the provision of informal technical help between colleagues. Similar to Schleyer et al. [13], we acknowledge the role of peer tutors at various levels towards developing problem-solving skills among colleagues. Specifically, we introduce a new role of digital competence facilitator, a 'Digital Coach', as explained below.
Digital Peer-Tutoring
What distinguishes 'digital peer-tutoring' from traditional peer-tutoring is that the concept builds entirely on the use of video. The idea is that workers learn from creating and redesigning videos while sketching [11] as part of applying design thinking to design their own and their colleagues' workflow and interactions with collaborative robots. Ørngreen et al. [11] suggested linking sketching techniques and creative reflection processes to video productions, and we extend this proposal to cover linking all parts of design thinking (problem definition and user needs finding, sketching, prototyping hypotheses, evaluation) to workers' video production. Secondly, we propose that video-based reasoning, instead of paper or verbal exchange, empowers workers to explore and take ownership of their work. Vistisen et al. [18] proposed supporting ethical user stances during the design process of products and services, and proposed using animation-based sketching as a design method. We follow that line of thought, though we are less interested in professional designers, and more interested in workers' own production (and consumption) of videos-as-digital-peer-tutoring.
Case setting: A collaborative robot in specialized glass manufacturing
After learning from initial talks with three different SMEs in Denmark, we agreed with the ABC company to adapt and evaluate the digital peer-tutoring learning format in one of their production facilities. The ABC company is a European SME specializing in glass processing. The company produces individual pieces and small batches with special specifications as well as entire series of several thousand units. About a year prior to our visit, the ABC company purchased and installed a 100,000€ collaborative robot in order to explore if and how it could be used in their production. At the time of our visit, the robot was used only during the final polishing steps of one large scale order, and it was idle much of the time. Workers and management agreed, however, that the robot could be used for other purposes as well, and thus enable the company to accept more large batch orders, but no initiatives had been implemented for several months due to lack of time to experiment with the robot. Furthermore, the initial design decision had been a stationary installation, that is, the robot could not be moved to other positions on the floor where it could interact with other machines or workers.
The initial design decisions seemed to be related to a limited initial understanding of the robot's capability and a lack of strategic intent. In any case, it was clear that there was an unexplored potential (and risks) for enhancing the factory's capacity while empowering workers and helping them design their own user experiences with the robot.
4 Method: Action design research with SMEs
Our approach to building new digital competences in SMEs is inspired by action design research (ADR). ADR argues that IT artifacts are 'ensembles' formed by the organizational context during development and use. Research in this tradition interweaves constructing the IT artifact, intervening in the organization, and evaluating outcomes [16].
We visited the company 6 times over a six-week period during the spring of 2019. The purpose of our first visit was to develop insights into the company, the motivation for purchasing the robot, and challenges with its current as well as potential future use of the robot. We observed the robot's current (very limited) use, interviewed and discussed with robot vendors, managers and shop-floor workers, and observed work and demonstrations of the robot.
The digital peer-tutoring learning format (see section 5) was implemented in four sessions over the next four visits, followed by a final evaluation on the sixth visit. We documented all observations, interviews, and learning sessions with video and audio recordings and photos, and we collected the videos produced by the workers. The learning format was evaluated after each session and at a final one-day meeting with participation from all key stakeholders.
5 The digital peer-tutoring learning format
The digital peer-tutoring learning format consisted of an ensemble of instruction videos, quizzes, example-solution videos, and worker-created how-to videos. Together with the case company production site, we designed and implemented four training sessions with selected shop-floor workers (Table 1).
We developed short (3-5 minutes) instruction videos for each session that explained the theme, introduced techniques that the participants could use to investigate problems and describe solutions, and concluded with an exercise in which the participants should develop a short video (3-5 minutes). We also produced short example videos with our 'answers' to the video assignment for each session. All video material, including the instruction material, was recorded with standard smartphone hardware and software, and published without editing, in order to promote a 'simple-yet-sufficient' attitude towards video production. For each session, a 'digital competence facilitator' (student assistant) travelled to the factory, discussed the material with the participants, and helped them produce their own 'employee videos', which were subsequently uploaded to a shared (secure) site for later download and knowledge sharing within the company.
Field evaluation results
The evaluation of the 'Digital Peer-Tutoring' learning format consisted of weekly evaluations after each of the four sessions, and a final evaluation with participation from all key stakeholders. Here, we report the initial results from the final evaluation; a one-day meeting at the factory of the case company. The participants in the evaluation were all those present at the start-up meeting 6 weeks before. They were: company managers (Company manager J and Company manager K), learning format users (Worker Br, Worker H, Worker Bi), a corporate learning consultant (Corporate learning consultant F), educational institution teachers (Teacher J, Teacher T), pilot project managers (Teacher T, Teacher J), a pilot project documentarist (Documentarist F), and a digital competence facilitator (Digital competence facilitator S). The initial results from the final evaluation reveal both short- and long-term benefits and challenges of Digital Peer-Tutoring.
Short term benefits
The workers liked the learning format and found it useful: "...worker-video on iPad [could be useful]..." [Worker Br]. This confirms previous findings on the usefulness of video [11], and extends it to shop floor workers. However, the workers found that the instruction videos were too long and complicated: "[They] should be cut down to a list of four points" [Worker Br]. Too long videos can be an expression of an 'apathetic ethical stance', a stance that reduces the worker-user to being a means of input for the intended final design [18]. On the other hand, the workers expressed that they could use video to both think about a problem, sketch different solutions, and evaluate their use: "Sketches ... I had read up on it, go and think about it ..." [Worker Br], and "the worker should be able to pause the video ..." [Worker Bi]. Thus, there were indications that the format helped workers explore new technologies from an empathetic ethical user stance, that is, from their own perspective [18]. Company manager K supported this: "We, as a business, must spend more time on [workers' use of video to innovate]." The management perspective adds a new layer to understanding the short term benefits of video-sketching and ethical design, and thus centers our focus on the multi-layered essence of user experiences at work.
Long-term benefits
The stakeholders also commented on the long-term benefits of the learning format: 1. The format could be used to tackle issues in the manufacturing, as "help videos" [Worker Bi], and a "company database of videos that could be accessed even years after production" [Company manager J]; 2. New employees could be introduced to the job: "A new one that is totally novice [could use worker-created how-to videos]" [Company manager J]; 3. Help dyslexic employees, who could watch how to do things rather than read; 4. Supplier courses could be made memorable by "record[ing] what the supplier shows on the shop floor" [Corporate learning consultant F], and "Cut out what is not useful [from the supplier teaching]" [Company manager K]; 5. Starting new ways to produce, for example "recording the results from the company's informal and formal experiments on the shop floor" [Company manager J, Company manager K], "recording order-specific ideas for how-to, so next time this order comes in, the video shows what to do" [Worker Bi], and "The video can be used to "squeeze" a good idea out of an experienced employee who will have to think a little about the idea" [Teacher T]; 6. Finally, the stakeholder group discussed that the learning format could also be used to produce videos for customers for marketing purposes and quality documentation.
These benefits allude to a diversity of user experiences in work situations, and perhaps tell us that the ethical stances taken by workers-as-designers-of-their-own-work may be confounded by management's strategic interest in how-to knowledge.
Discussion and conclusion
We conclude that our proposed Digital Peer-Tutoring learning format enabled shop floor workers to design positive UXs for themselves and their colleagues, and in ways beyond what we expected. The participating shop floor workers stated in various ways that they liked the Digital Peer-Tutoring how-to videos and found them useful. This corresponds to the claim made by Twidale (2005) [17] that it is possible to use peer tutoring to give informal technical help between colleagues, and with Ørngreen et al. (2017) [11], who suggested linking various sketching techniques and creative reflection processes to video productions. The videos helped workers create ideas about robot use, identify problems not formulated before, sketch alternatives, test solutions, and demonstrate them to colleagues.
Company owners, management, and workers had unexpected ideas about how to use the peer-tutoring videos within and outside the company, for example in internal quality control and customer communication. Thus, similar to the point made about peer tutoring [13], we should acknowledge the role of Digital Peer-Tutoring in developing problem solving skills at various organizational levels.
Based on the categories proposed in [18], we furthermore observe that the ethical stance built into the 'Digital Peer-Tutoring' learning format can be characterized as 'apathetic' when too long and complex instructional videos, intended to teach workers design thinking and enable their own video production, tend to make workers give up. However, the learning format also proved to be 'empathetic', as workers produced their own videos and evaluated solutions together, effectively co-designing work procedures.
We developed the Digital Peer-Tutoring learning format to improve workers' capability to create and share solutions to human-robot collaboration challenges in SMEs. Thereby we also answer the call for research into how SMEs can adopt and implement new technologies that build upon and enhance worker capabilities, skills, and knowledge [3,6].
Table 1. Overview of training sessions.
3,353
2019-09-02T00:00:00.000
[ "Engineering", "Computer Science", "Education" ]
International Journal of Physical Sciences
Applications of Schrödinger Equation in General Besov Spaces
In this paper, we obtain the solution of the Schrödinger equation in general Besov spaces. Precise results on L^p and general Besov estimates of the maximal function of the solutions to the Schrödinger equation are given. The obtained results improve some recent results. Further, we shall consider estimates of the general L^2-norm and the general Besov type norm of integrals of this kind by means of the general Besov norm of the function f, and give L^p-estimates of their maximal functions.
INTRODUCTION
The Schrödinger equation was formulated in 1926 by Austrian physicist Erwin Schrödinger. Used in physics, specifically quantum mechanics, it is an equation that describes how the quantum state of a physical system changes in time. The Schrödinger equation takes several different forms, depending on the physical situation. Now, we present the equation for the general case and for the simple case encountered in many textbooks for a general quantum system.
In the standard interpretation of quantum mechanics, the quantum state, also called a wave function or state vector, is the most complete description that can be given to a physical system. Solutions to Schrödinger's equation describe not only molecular, atomic and subatomic systems, but also macroscopic systems, possibly even the whole universe. The most general form is the time-dependent Schrödinger equation, which gives a description of a system evolving with time. For systems in a stationary state (that is, where the Hamiltonian is not explicitly dependent on time), the time-independent Schrödinger equation is sufficient. Approximate solutions to the time-independent Schrödinger equation are commonly used to calculate the energy levels and other properties of atoms and molecules.
Schrödinger's equation can be mathematically transformed into Werner Heisenberg's matrix mechanics, and into Richard Feynman's path integral formulation. The Schrödinger equation describes time in a way that is inconvenient for relativistic theories, a problem which is not as severe in matrix mechanics and completely absent in the path integral formulation. It is well known that the solution of the Schrödinger equation (Almeida et al., 2013; Cowling, 1983; Furioli and Terraneo, 2003) is given by Equation (1).
In this paper we shall consider estimates of the general Besov type norm of integrals of this kind by means of the general Besov norm of the function f (Taoka, 2005), and then we will give L^p estimates of their maximal functions; for more information on the Schrödinger equation, see (Almeida et al., 2013; Carbery, 1985; Jing-Wei et al., 2013; Cowling, 1983; Muramatu and Taoka, 2004; Müller-Kirsten, 2006; Furioli and Terraneo, 2003; Fukuma and Muramatu, 1999; Polelier, 2011; Michael et al., 2012; Shankar, 1994; Taoka, 2005). Our motivations in the present work are slightly different from what has previously been done. Firstly, we aim at a better understanding of the recent construction of self-similar solutions for Equation (1). A self-similar solution is by definition invariant under the scaling of Equation (2), and therefore cannot be obtained by these aforementioned results in Sobolev spaces.
RESULTS
Our first result is the following theorem:
Theorem 1. Let σ be a positive number, I = (0, 1), γ > 1, and let 1 < p, q < ∞. Assume that h(t, ξ), h*(t, ξ) are real-valued, measurable, and C^∞ in t, and that the inequality holds for any positive integer k, where λ_k is a constant independent of t and ξ, and α and β are positive constants. Then the operator T_1 is bounded.
Proof. First, consider the case where q = 2 and σ is a non-negative integer. Then, using Parseval's formula, we obtain the corresponding estimate. Since the Besov spaces are identical with the real interpolation of the Sobolev spaces, where X is a Banach space and (·,·)_{p,q} denotes the real interpolation spaces, the conclusion of the theorem follows from interpolation of linear operators and the fact that T_1 is bounded for any non-negative integer m. Our proof is therefore completed.
For the next result we will need the following lemma.
Proof. The proof of this lemma is similar to Lemma 1, so we will omit it.
Theorem 2. Assume that h(t, ξ), h*(t, ξ) are real-valued and measurable, satisfying the condition of Equation (2). Then the operator T_1, as defined by Equation (3), will satisfy the following inequality, where C is a constant independent of t and ξ, and α and β are positive constants.
Proof. To get L^2 maximal estimates for the operator of Equation (3), we use Lemma 1 and the embedding theorem with continuous inclusions, which, combined with Theorem 1, completes the proof of Theorem 2.
Theorem 3. Let X, Y, and Z be Hilbert spaces, and let S and T be operators defined as above; this completes the proof of our theorem.
1,072.4
2013-05-09T00:00:00.000
[ "Mathematics", "Physics" ]
Linear Statistics of Random Matrix Ensembles at the Spectrum Edge Associated with the Airy Kernel In this paper, we study the large $N$ behavior of the moment-generating function (MGF) of the linear statistics of $N\times N$ Hermitian matrices in the Gaussian unitary, symplectic, orthogonal ensembles (GUE, GSE, GOE) and Laguerre unitary, symplectic, orthogonal ensembles (LUE, LSE, LOE) at the edge of the spectrum. From the finite $N$ Fredholm determinant expression of the MGF of the linear statistics, we find the large $N$ asymptotics of the MGF associated with the Airy kernel in these Gaussian and Laguerre ensembles. Then we obtain the mean and variance of the suitably scaled linear statistics. We show that there is an equivalence between the large $N$ behavior of the MGF of the scaled linear statistics in Gaussian and Laguerre ensembles, which leads to the statistical equivalence between the mean and variance of suitably scaled linear statistics in Gaussian and Laguerre ensembles. In the end, we use the Coulomb fluid method to obtain the mean and variance of another type of linear statistics in GUE, which reproduces the result of Basor and Widom. Introduction In random matrix theory, the joint probability density function for the eigenvalues {x j } N j=1 of N ×N Hermitian matrices from an unitary ensemble (β = 2), symplectic ensemble (β = 4) or orthogonal ensemble (β = 1) is given by [15] w(x j ). The moment-generating function (MGF) of the linear statistics N j=1 F (x j ) is, We write it in the following form where f (x) = e −λF (x) − 1. We assume f (x) lies in the Schwartz space [19] and 1 + f (x) = 0 over [a, b]. Min and Chen [16] expressed G (4) N (f ) and G (1) N (f ) as Fredholm determinants based on the work [12] and [22]. For the β = 1 case, we take N to be even for simplicity. We state the results as the following two lemmas [16]. (4) where K (1) N is an integral operator with kernel ν jk εψ j (x)ψ k (y). We introduce here some notations, which will be used in the following sections of this paper. Let K(x, y) be the Airy kernel The equality (1.4) is obtained by taking the limit y → x from (1.3) and using the property Ai ′′ (x) = x Ai(x). We then define Finally, we mention that χ J (x) is the indicator function defined on the interval J, namely, The paper [16] studied the large N behavior of the MGF of the linear statistics in Gaussian ensembles associated with the sine kernel and Laguerre ensembles associated with the Bessel kernel. This paper continues to study the large N behavior of the MGF in these Gaussian and Laguerre ensembles associated with the Airy kernel, from which we obtain the mean and variance of the scaled linear statistics. The unitary case is the simplest one among them. We established the relation between the mean and variance of the scaled linear statistics in symplectic, orthogonal and unitary ensembles. We also show that as N → ∞, the MGF of a suitably scaled linear statistics in the Gaussian ensembles and Laguerre ensembles are the same, which leads to the same mean and variance of the linear statistics between the Gaussian ensembles and Laguerre ensembles. For the problems on the mean and variance of linear statistics in unitary ensembles, see [4,5,11,17] for reference. Finally, we point out that the variance of linear statistics play an important role in the random matrix theory of quantum transport [8,9]. The rest of this paper is organized as follows. In Sec. 
2, we study the large N behavior of the MGF of the scaled linear statistics in Gaussian unitary, symplectic and orthogonal ensembles, respectively. From this we obtain the mean and variance of the scaled linear statistics in the three Gaussian ensembles. In Sec. 3, we repeat the development of Sec. 2, but for the Laguerre ensembles. In Sec. 4, we use two different methods to consider the large N behavior of another type of linear statistics in GUE. The mean and variance of the linear statistics are obtained. The conclusion is given in Sec. 5. Gaussian Unitary Ensemble In the Gaussian case, w(x) = e −x 2 , x ∈ R. From Lemma 1.1 we have where H j (x) are the Hermite polynomials of degree j. We consider the large N asymptotics of G N (f ) in this subsection. It is well known that log det I + K (2) We state a theorem before our discussion. where K(x, y) is the Airy kernel defined by (1.3). Remark. The above result was obtained by [10,13,18], but they did not show the order term. See also [21] on the study of this Airy kernel. We now use Theorem 2.1 to compute (2.2) term by term as N → ∞. We replace f (x) by in the following computations. The first term reads, TrK (2) The second term, It follows from (2.2) that log det I + K (2) We proceed to study the mean and variance of the scaled linear statistics N j=1 F 2 so we need to obtain the coefficients of λ and λ 2 from (2.4). From the relation of f (x) and F (x) we (2.5) Substituting (2.5) into (2.4), we have log det I + K (2) x j − √ 2N , and note that log G N f . Then we have the following theorem. where K(x, y) is the Airy kernel defined by (1.3). Gaussian Symplectic Ensemble In this case, where ϕ j (x) is given by (2.1). It follows that M (4) is the direct sum of the N copies of [12,22]). From Lemma 1.2, we obtain the following result [16]. where and K 2N +1 is an operator on L 2 (R) with kernel We also have the following expansion formula, Similarly as Theorem 2.1, we have the following theorem. Proof. From the definition (2.1) and the asymptotic formula (2.3), we readily obtain (2.9) and (2.10). It follows from the definition of ε that where use has been made of (2.9). Similarly, we find 11 12 ). Now we use Theorem 2.4 and Theorem 2.5 to compute (2.8) as N → ∞. We will change f (x) in the following calculations. In this case, f ′ (x) becomes x − √ 4N . We consider Tr T GSE firstly, The first term reads, The second term, Then where L(x, y) is given by (1.5). The third term, The fourth term, So we obtain We proceed to compute Tr T 2 GSE , Similarly, we compute the ten traces one by one. We write down the result here without the detailed calculations: Proceeding as in the previous subsection, we replace . Substituting these into (2.11) and (2.12), we finally find Denoting by µ x j − √ 4N , and noting that log G are given by (2 .6) and (2.7), respectively. Gaussian Orthogonal Ensemble It is convenient in this case to choose w(x) to be the square root of the Gaussian weight, and keep in mind that N is even. Define where ϕ j (x) is given by (2.1). It follows that M (1) is the direct sum of the N 2 copies of [12,22]). From Lemma 1.3, we obtain the following result [16]. Theorem 2.7. For the Gaussian orthogonal ensemble, we have and K N is an operator on L 2 (R) with kernel We also have log det(I + T GOE ) = Tr log(I + T GOE ) = Tr Similarly as the previous subsection, we have the following results. given by (1.2). 
In the computations below, we replace f (x) by f 2 as N → ∞, we obtain the following results: where R contains the terms of integrals with integrands consisting of f , f , f ′ or f , f ′ , f ′ . These lead to at least power 3 of λ in the following discussions, and they will not affect the final results, so we need not write down the detailed results of R. Laguerre Unitary Ensemble In the Laguerre case, w(x) = x α e −x , α > −1, x ∈ R + . From Lemma 1.1 we have j (x) are the Laguerre polynomials of degree j. We state a theorem before our discussion. Proof. From the asymptotic formula of Laguerre polynomials [20] (page 201), we have Using the Christoffel-Darboux formula, Replacing the variables x by 4N + 2α + 2 + 2 together with Stirling's formula, we obtain the desired result after some elaborate computations. Remark. The above result was obtained by Forrester [13], but they also did not show the order term. We now use Theorem 3.1 to compute (2.2) term by term as N → ∞. We replace f (x) by in the following computations. The first term reads, TrK (2) The second term, It follows from (2.2) that log det I + K (2) We proceed to study the mean and variance of the scaled linear statistics N j=1 . From the relation (2.5), we have log det I + K (2) be the mean and variance of the linear statistics N j=1 . Then we have the following theorem. where K(x, y) is the Airy kernel defined by (1.3). Remark. Comparing Sec. 2.1 and Sec. 3.1, we see that the large N behavior of the MGF of a suitably scaled linear statistics in GUE are the same with a suitably scaled linear statistics in LUE. It follows that as N → ∞, the mean and variance of the corresponding linear statistics are also the same in GUE and LUE. 5) and S (4) We also have the following expansion formula, Using the similar method in Theorem 3.1, we obtain the following theorem. (1.3). where B(x) is given by (1.2). Proof. From the definition (3.4) and the asymptotics (3.1), we have where we have made use of the formula [1] (page 257) It follows that Similarly, we obtain and εϕ (α−1) The proof is complete. Now we use Theorem 3.4 and 3.5 to compute Tr T LSE and Tr T 2 LSE as N → ∞. We change f (x) 8N − 2α)) in the following computations. Firstly we have The first term reads, The second term, Let x = 8N + 2α + 2 The third term, Hence, Similarly we obtain the result for Tr T 2 LSE after some tedious computations, (3.7) From the above, we find that as N → ∞, It follows that as N → ∞, We have the following theorem. Theorem 3.6. As N → ∞, are given by (3.2) and (3.3), respectively. Remark. We find that the large N behavior of the MGF of a suitably scaled linear statistics in LSE are the same with a suitably scaled linear statistics in GSE. It follows that as N → ∞, the mean and variance of the corresponding linear statistics are also the same in LSE and GSE. Laguerre Orthogonal Ensemble In this subsection, w(x) is taken to be the square root of the Laguerre weight, namely, and N is even. Following [16], we let Note that the definition of ϕ N is an integral operator with kernel We also have the following expansion formula, Similarly as previous subsection, we have the following theorems. Theorem 3.9. As N → ∞, we have . 4N − 2α − 4)) and use Theorem 3.8 and 3.9 to compute Tr T LOE and Tr T 2 LOE . We find that as N → ∞, It follows that as N → ∞, log det(I + T LOE ) = log det(I + T GOE ). Denoting by µ x j − √ 2N , we have the following theorem. Remark. 
We find that the large N behavior of the MGF of a suitably scaled linear statistics in LOE are the same with a suitably scaled linear statistics in GOE. It follows that as N → ∞, the mean and variance of the corresponding linear statistics are also the same in LOE and GOE. Gaussian Unitary Ensemble Continued For the Gaussian unitary ensemble, if we change f (x) to f x − √ 2N , we can gain a better insight into the mean and variance of the corresponding linear statistics by using the result of Basor and Widom [7]. We see that as N → ∞, . This is because, as N → ∞, We now introduce the result of Basor and Widom as the following lemma [7]. where Hence we have the following theorem. respectively. At the end of this section, we use another method, the coulomb fluid approach, to prove the above theorem. We state an important lemma [6]. For the Gaussian unitary ensemble, it is known that [6] σ(x) = √ b 2 − x 2 π , b = −a = √ 2N. Conclusion This paper studies the large N behavior of the MGF of the scaled linear statistics in Gaussian ensembles and Laguerre ensembles, from which we obtain the mean and variance of the corresponding linear statistics. We find that there is an equivalence between the mean and variance of suitably scaled linear statistics in Gaussian and Laguerre ensembles. In addition, we use the results of [7] and [6] to consider another type of linear statistics in GUE and also obtain the mean and variance of the corresponding linear statistics. For the GSE and GOE, we will deal with the corresponding type of linear statistics in the future.
3,187.8
2018-06-29T00:00:00.000
[ "Mathematics" ]
Analog Radio Over Fiber Aided C-RAN: Optical Aided Beamforming for Multi-User Adaptive MIMO Design Given the increasing demand for high data-rate, high-performance wireless communications services, the demand on the radio access networks (RAN) has been increasing significantly, where optical fiber has been widely used both for the backhaul and fronthaul. Additionally, advances in signal processing such as multiple-input multiple-output (MIMO) techniques, have improved the performance as well as transmission rate of communications networks. Beamforming has been used as an efficient MIMO technique for providing a signal to noise ratio (SNR) gain as well as reducing the multi-user interference. However, beamforming requires the employment of phase-shifters, which suffers from reduced phase resolutions, degraded noise figures as well as beam-squinting in addition to the implementation challenges. Hence, in this paper we employ an analogue radio over fiber (A-RoF) aided architecture for supporting the requirements of the current and future mobile networks, where we design a photonics aided beamforming technique in order to eliminate the bulky electronic phase-shifters and the beam-squinting effect, while also providing a low-cost RAN solution. Additionally, this photonics aided beamforming is combined with a reconfigurable multi-user MIMO technique, where users can communicate with one or multiple remote radio heads (RRHs), while employing stand-alone beamforming, beamforming combined with diversity or with multiplexing depending on the available resources and the user channel information as well as the quality of service requirements. INTRODUCTION The United Nation's Sustainable Development Goals (UN SDGs) include 17 goals in the Agenda 2030, which are framed to address global challenges including climate change, poverty and inequality (UN2, 2015;6G Flagship White paper, 2020). On the other hand, wireless communications has played a key role in creating the world as we know it, with its enormous social, environmental and economical impact, and its links with the UN SDGs are numerous (6G Flagship White paper, 2020). The radio access network (RAN), which bridges the terminals to the core network, requires significant cost in order to support the growing demand for high data rate applications (Goldsmith, 2005;Hanzo et al., 2012;Checko et al., 2015). The RAN evolved significantly over the past decade. For example, in the fourth Generation (4G) mobile network, the concept of centralised RANs (C-RAN) was employed, where the central unit (CU) employs several baseband units (BBUs) connected to several remote radio heads (RRHs) by fiber. In this case, each RRH supports an individual cell (Li et al., 2020 (accepted). Then, in the fifth Generation (5G) mobile network, the RAN relocated some functions of the CU to the distribution unit (DU), flexibly supporting all usecases, while some physical layer functions were moved to the RRH connected by radio over fiber (RoF) links [Li et al., 2020 (accepted)]. Furthermore, in order to have a decent quality of service, a large number of base-stations must be deployed, which increases the total cost of the RANs and hence ultra-light RANs are required (Checko et al., 2015). Generally, the frequency, space, time, code and polarisation domains can be exploited as the available degrees of freedom for supporting a multiplicity of users (Goldsmith, 2005). 
Multiple-input multiple-output (MIMO) techniques have been proposed for improving the performance as well as the data rate of communications systems (Hemadeh et al., 2018). Explicitly, beamforming is a MIMO technique designed to attain an improved signal-to-noise ratio (SNR) gain and/or to reduce the inter-user interference of multi-user scenarios (Blogh and Hanzo, 2002; Satyanarayana et al., 2019). Beamforming is achieved by focusing the transmitted and/or received beam in the direction of the transmitter or receiver (Huang and Guo, 2011). On the other hand, MIMO techniques can be used to improve the system performance using diversity schemes, to increase the throughput using multiplexing schemes, or to attain a combination of diversity, multiplexing and beamforming gains using the concept of multi-functional MIMO (Hemadeh et al., 2018). Additionally, the family of spatial modulation (SM) received significant research attention, as detailed in (Ishikawa et al., 2018; Dogan-Tusha et al., 2020), for its reduced-complexity processing at both the transmitter and the receiver. Additionally, it is worth noting that beamforming requires the employment of phase-shifters (Cao et al., 2016), which suffer from reduced phase resolution, degraded noise figures and beam-squinting, in addition to implementation challenges such as the synchronization of the phase shifters (Poon and Taghivand, 2012; Zhang et al., 2018). Phase-shifter based beamforming results in the beam-squinting phenomenon, which is expected to be more severe when employing wide-band signal beam steering (Cao et al., 2016). Note that beam-squinting refers to the beam shifts caused by frequency shifts when a constant phase-shift is applied among neighbouring antenna elements (AEs). Beam-squinting can affect the codebook design in phased-array systems, which limits the bandwidth and the number of antennas (Cai et al., 2016). Additionally, beam-squinting affects the channel estimation and the precoding design, which results in a degraded performance (Wang et al., 2019). The analogue radio over fiber (A-RoF) based true-time delay is a low-cost, high-performance RAN solution with ultra-light RRHs [Li et al., 2017; Li et al., 2018a; Li et al., 2018b; Li et al., 2020 (accepted)]. Explicitly, the A-RoF aided beamformer has been proposed for providing a beam-squinting-free solution with the aid of the uniform fiber Bragg grating (FBG) (Molony et al., 1996; Cao et al., 2016) or a single chirped FBG (CFBG) (Hunter et al., 2006). In (Cao et al., 2016) a review of the integration techniques for mmWave beamforming is provided, focusing on the integration techniques rather than on their applications. Then, in (Li et al., 2018a) spatial modulation and multi-set space-time shift-keying were optically processed and implemented in the A-RoF aided C-RAN system, while in (Li et al., 2018b) twin-antenna spatial modulation was experimentally demonstrated. Furthermore, in (Li et al., 2017) analogue beamforming using optical aided true-time-delay was implemented in an indoor environment over plastic optical fiber, while in (Molony et al., 1996) a phased-array antenna using the uniform fiber Bragg grating was presented, albeit with very limited tunability. In (Cao et al., 2016; Li et al., 2018a; Li et al., 2018b; Molony et al., 1996) the feasibility of A-RoF aided beamforming or MIMO is validated, inspiring more wireless applications.
Hence, in this treatise, we propose a tunable optical aided true-time delay (TTD) beamforming system for supporting multi-user MIMO communications in a C-RAN environment. In this article, we propose a low-cost optical aided beamforming design using the A-RoF aided C-RAN architecture to support multiple users. Additionally, we propose a reconfigurable multi-user MIMO scheme utilising the proposed beamforming technique. Explicitly, users can communicate with one or multiple RRHs, while employing stand-alone beamforming, beamforming combined with diversity or with multiplexing, depending on the available resources, the user channel information and the quality of service requirements. More specifically, after performing user association, the CU collects all information about the user association with RRHs, the user channel state information and the quality of service requirements. Then, the CU decides on the transmission scheme for each user, which can be beamforming alone or beamforming combined with diversity or multiplexing techniques. Then, using optical processing the beamforming is implemented, in order to eliminate the need for phase shifters at the RRH. Hence, in the proposed architecture, no signal processing is performed at the RRHs. Against this background, the novel contributions of our system can be summarized as follows: 1. We conceive an analogue radio over fiber aided multi-user beamforming system, where the CU is capable of controlling the beam direction, which is used to facilitate the different wireless transmission modes. 2. The proposed optical aided beamforming design can support a multi-beam system with the aid of wavelength division multiplexing techniques, where all-optical signal processing based beam steering is achieved. 3. Finally, given that the user equipment can communicate with one or multiple RRHs, we present a reconfigurable multi-user MIMO technique, supported by the proposed beamforming combined with diversity or multiplexing. The remainder of this paper is organized as follows. In Section 2 we present an overview of the C-RAN aided multi-user MIMO system model, followed by our proposed multi-user reconfigurable MIMO system employing a novel optical aided beamforming technique in Section 3. Afterwards, we present our results and analysis in Section 4 and finally we present our conclusions in Section 5. CENTRALISED RAN AIDED MULTI-USER MULTIPLE-INPUT MULTIPLE-OUTPUT SYSTEM In this section, we present a general architecture for the C-RAN system supporting multi-user MIMO communications, which can be exploited in our design. As shown in Figure 1, the signal is generated in the CU and transmitted via fiber to several RRHs, where only optical-to-electronic conversion, amplification and filtering are performed. This substantially reduces the RRH size and cost. Explicitly, the RRH receives the signal from the CU over fiber and then transmits this signal to the user equipment using a set of antenna arrays, as shown in Figure 1, where a user equipment can be associated with one or more RRHs. In the following, we present an overview of the conventional RoF-aided system, followed by a background description of optical aided beamforming, while we describe our proposed novel design in Section 3.
Conventional Analogue Radio Over Fiber Aided Centralised RAN System Model Conventionally, the A-RoF aided system can be supported using the architecture of Figure 1, where a number of directly modulated lasers (DMLs) are used for electronic-to-optical (E/O) conversion, and their outputs are then combined by the wavelength division multiplexing (WDM) multiplexer. Afterwards, as shown in Figure 1, the combined optical signals are transmitted through optical fiber to the RRH, where the WDM de-multiplexer is responsible for separating each wavelength of the WDM signal. After the optical-to-electronic (O/E) conversion by the photo-detector, the recovered RF signal is power-split and fed into individual phase shifters to form a directional beam, as shown in Figure 1. The RoF technology is suitable for the transport of wireless signals due to its transparency to the type of signal being transported and its support for dynamic spectrum allocation in wireless communications (Huang et al., 2018). The RoF architecture significantly reduces the complexity and cost of the RRHs, since most of the complex signal processing tasks, such as frequency up-conversion, modulation and multiplexing, are generally performed at the CU. Figure 1 shows an example A-RoF aided C-RAN design using four lasers. As shown in Figure 1, four DMLs are used for E/O conversion, where the four optical outputs are then combined by the WDM multiplexer. In this system, each DML-generated signal supports a single-user connection and analogue beamforming is typically realised using the analogue phase shifters available at each antenna array (Balanis, 2005). However, as seen in Figure 1, a large number of phase shifters is required in the RRH, which poses many implementation challenges, as briefly mentioned above and detailed in (Poon and Taghivand, 2012; Zhang et al., 2018). In addition, phase-shifting based beamforming suffers from beam-squinting, which might degrade the signal quality and system bandwidth (Cao et al., 2016). Hence, in the following we present an overview of optical aided beamforming and then we propose an A-RoF based design in Section 3, which is capable of addressing the above issues. Electronic Versus Photonic Phase Shifters The integration technologies based upon CMOS have matured significantly, where most of the electronics based phase control techniques are developed on integrated platforms (Cao et al., 2016). There are mainly four electronics based techniques for shifting the phase among adjacent elements of an antenna array (Cao et al., 2016), which include RF phase shifting, local oscillator phase shifting, IF phase shifting and digital phase shifting. The RF phase shifting technique uses a low noise amplifier and a phase shifter for each channel, and then, after combining all the channels, a local oscillator and a mixer are used to up-convert the signal for transmission. However, the RF phase shifters induce non-linearity and noise. On the other hand, local oscillator phase shifting introduces the required phase shift over the local oscillator instead of the RF signal, which avoids degradation of the RF signal but increases the number of required components, since a mixer is now required in the path of each channel, along with a network for the distribution of the local oscillator. Furthermore, a long distribution network introduces undesired coupling among different blocks, especially when the signal frequency is high.
The IF phase shifting involves down-conversion of the channels before they are passed through a phase shifter, where processing the signal at a lower frequency results in less noise but requires a larger number of components compared to RF phase shifting due to the requirement of a mixer in each path. Finally, digital phase shifting has a similar architecture to IF phase shifting, except that the phase shifters are now replaced with digital signal processing circuits in each path, which provides better design flexibility and enables the application of different algorithms in the digital domain. However, the number of required components and their complexity is still higher compared to the RF phase shifting technique. Hence, given the above discussion, RF phase shifting is the most suitable solution among electronic phase shifting techniques. Apart from the introduction of non-linear distortion over the RF signal, another major drawback of RF phase shifting is the high insertion loss. Furthermore, electronic techniques are not suitable for the wider bandwidth signals that are required for 5G systems (Rotman et al., 2016). For signals having a large bandwidth, the electronic phase shifters have a frequency dependent response, which causes the beam direction to vary across the signal band and effectively widens the beam. This effect is known as beam-squinting and is not desirable for high bandwidth systems such as 5G and beyond. Beam-squinting can be eliminated by using phase shifting techniques that are based upon true time delay, and it has been shown that photonic techniques for phase control offer true time delay along with very low power loss. Photonic beam steering is achieved by modulating an optical carrier with the RF signal, resulting in electrical-to-optical conversion. The modulated optical signal is manipulated using various optical signal processing techniques through optical devices to achieve the desired phase shift. The phase-shifted optical signal is photo-detected to obtain the RF signal. When combined with photonic true time delay, RoF presents a promising technology for the implementation of wideband phased array antenna systems. Photonic signal processing provides the advantages of immunity to electromagnetic interference, low attenuation, and very large bandwidth (Thomas et al., 2015). Overview of Radio Over Fiber Aided Beamforming Several photonic techniques have been reported in the literature for achieving true time delay beamforming. In (Cao et al., 2014), an experimental study has been presented to achieve broadband beam steering by performing tunable spectral filtering based on cyclic additional optical true time delay. A tunable laser source in combination with a high dispersion compensation fiber is used to obtain true time delay for a 1 × 2 element phased array antenna in (Yang and Lin, 2015). Additionally, an optical frequency comb is modulated with multiple RF signals and passed through a dispersive element in (Ye et al., 2015) to obtain independently controllable true time delays. Furthermore, a beamformer for a two-dimensional phased array antenna is proposed in (Ortega et al., 2016) by employing tunable dispersive FBGs in combination with fiber based delay lines. The proposed technique also demonstrated the control of multiple beam radiations through sub-array control.
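To make the beam-squinting effect discussed above concrete, the short sketch below compares a uniform linear array driven by a fixed inter-element phase shift against one driven by a true time delay. The array size, element spacing, target angle and frequencies are illustrative values chosen here, not taken from the paper: with a constant phase shift the peak of the array factor drifts as the carrier moves away from the design frequency, whereas a constant time delay keeps the beam at the same angle.

```python
import numpy as np

def array_factor_peak(n_elem, d, f, phase_step=None, tau_step=None, c=3e8):
    """Angle (degrees) at which the array factor of a uniform linear array peaks,
    for either a fixed inter-element phase shift (rad) or time delay (s)."""
    angles = np.radians(np.linspace(-90, 90, 3601))
    k = 2 * np.pi * f / c                       # wavenumber at carrier f
    n = np.arange(n_elem)
    if phase_step is not None:                  # phase-shifter beamforming
        excitation = np.exp(1j * n * phase_step)
    else:                                       # true-time-delay beamforming
        excitation = np.exp(1j * 2 * np.pi * f * n * tau_step)
    steer = np.exp(-1j * k * d * np.outer(np.sin(angles), n))
    pattern = np.abs(steer @ excitation)        # array factor magnitude
    return np.degrees(angles[np.argmax(pattern)])

n_elem, f0 = 6, 3e9                 # 6 elements, 3 GHz design frequency (illustrative)
d = 0.5 * 3e8 / f0                  # assumed half-wavelength spacing at f0
theta_target = 20.0                 # desired steering angle (degrees)
tau = d * np.sin(np.radians(theta_target)) / 3e8   # TTD step per element
phi = 2 * np.pi * f0 * tau                          # equivalent phase step at f0

for f in (2.8e9, 3.0e9, 3.2e9):
    squint = array_factor_peak(n_elem, d, f, phase_step=phi)
    ttd = array_factor_peak(n_elem, d, f, tau_step=tau)
    print(f"{f/1e9:.1f} GHz: phase-shifter beam {squint:+.1f} deg, TTD beam {ttd:+.1f} deg")
```

Running the sketch shows the phase-shifter beam wandering by a degree or two over a 400 MHz band while the true-time-delay beam stays put, which is exactly the qualitative difference the photonic approach is meant to exploit.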
In a more recent study combining RoF with reconfigurable intelligent surfaces (RIS), in (Huang et al., 2021), an optical true-time delay pool-based hybrid beamforming is introduced in the RIS-aided C-RAN, where the analog beamforming is centrally deployed, presenting an effective algorithm for improved system performance for the RIS based C-RAN. In another work, the nonlinear phase shift introduced by a highly non-linear fiber is used to introduce delay over radio frequency (RF) modulated optical signals, which can be varied by controlling the optical power of each carrier passing through a parallel arrangement of highly non-linear fibers. Meanwhile, in (Tsakyridis et al., 2021), a bandwidth-reconfigurable intermediate frequency over fiber fronthaul is integrated using a silicon photonic reconfigurable optical add/drop multiplexer (ROADM) with a phase-shifter based 60 GHz phased array antenna, supporting a 32-element phased array antenna. However, the above techniques either focus on the RIS application (Huang et al., 2021) or employ phase-shifting based analogue beamforming, resulting in the detrimental beam-squinting phenomenon in the context of wide-band signals (Tsakyridis et al., 2021), which is not suitable for a wide-band C-RAN with compact-size RRHs serving multi-user communications. By contrast, photonics based wideband true time delay has been experimentally demonstrated in (Srivastava et al., 2020) by employing multiple raised cosine apodized linearly chirped fiber Bragg gratings (CFBG). The gratings have different lengths and chirp parameters and are used to induce variable delays over an RF modulated optical tunable laser source. Hence, in this article, we exploit the CFBG to enable the optical beamforming, which has been experimentally verified and is capable of adaptively supporting the MU-MIMO system. Figure 2 shows an analogue radio over fiber fronthaul network containing one CU and N RRHs as a design example. In the CU of Figure 2, we implement four DMLs and four Mach-Zehnder modulators (MZMs) for generating four beams that can be used to communicate with up to four users. PROPOSED ANALOGUE RADIO OVER FIBER AIDED SYSTEM MODEL Before any data communication between the user equipment and RRHs, user association is performed. One challenge in C-RAN networks is the user association, which significantly influences the load balancing, the spectrum efficiency as well as the power efficiency of the network (Ejaz et al., 2020). Different resource allocation mechanisms have been proposed for efficient resource management in C-RAN and have been discussed in several surveys, including (Olwal et al., 2016; Ejaz et al., 2020; Rodoshi et al., 2020). After user association, the CU has all the information of the users' association with the RRHs. In this case, each user equipment can be associated with one or more RRHs. Then, the CU decides on the transmission scheme for each user, which will depend on the user association with the RRHs, the channel quality information for each link as well as the quality of service requirements for each user. More explicitly, one of the characteristics of 5G and beyond mobile networks is their flexibility and ability to support broadband as well as ultra-reliable low latency communications (URLLC). Explicitly, broadband traffic, known as enhanced Mobile Broadband (eMBB) in 5G, can support gigabit per second data rates, while URLLC data requires extremely low delays with very high reliability (99.999%) (3GPP, 2016).
Hence, the type of transmitted data, eMBB or URLLC, will also influence the decision of the CU regarding the transmission scheme for each user. One option for the transmission from the RRH to the user equipment is to employ beamforming from one RRH to each user equipment. In this case, if the user equipment is associated with more than one RRH, then the CU decides to transmit the downlink signal from the RRH with the best channel quality. Hence, in the following we propose a novel optical aided beamforming for the C-RAN. Let us first consider the rationale of generating the beams in RRH1 of Figure 2 in order to clarify our centralized design. As depicted in the CU of Figure 2, the DML is used for the E/O conversion, where DML1 operating at λ1 is modulated by RF signal 1. Then, the modulated optical signal is fed into MZM1 for generating a WDM signal with a frequency spacing of 50 GHz, where each wavelength carries RF signal 1 as a result of the MZM's non-linearity. Consequently, the chirped fiber Bragg grating (CFBG) imposes a linear time delay across the different wavelengths of the WDM signal, which can introduce a constant time-delay difference among the transmit antenna elements in the RRH. More explicitly, the CFBG can introduce the linear time delay by changing its refractive index as a result of imposing varied strains or temperatures. The linear time delay is then mapped to the time-delay of each RF signal transmitted by each antenna element. Similarly, RF signal 2 is modulated by DML2 operating at λ2 and time-delayed relying on the CFBG after MZM2. The two outputs from the above two CFBGs are combined by a WDM multiplexer, coupled into an optical fiber and transmitted to a WDM demultiplexer (WDM Demux). The WDM Demux separates the wavelengths carrying RF signals 1 and 2 into RRH 1, where each output, which has been time-delayed in the CU, is recovered to the RF signal by the photo-detector (PD) and transmitted to the different antenna elements. Specifically, if the WDM signal carrying RF signal 1 has six wavelengths, as shown in block 1 of Figure 2, each wavelength is filtered out using the WDM Demux of Figure 2 and passed to the PDs, which recover RF signal 1 but with different time-delays. Then, as shown in Figure 2, these time-delayed versions of RF signal 1 are input into antenna array 1 (N array 1) having six elements to form the directional beam. The beam direction can be tuned by the CFBG in the CU by changing its refractive index through applying varying strains or temperatures. Thus, in RRH 1, two beams with independently tunable beam directions can be supported. On a similar note, in RRH2, we can also generate two beams by expanding the design of the CU in Figure 2 to two more DML chains. Therefore, this architecture can be potentially exploited for the multi-user MIMO wireless system due to its centralized beam-steering control and the flexible RF spectrum allocation. In our proposed design of Figure 2, the chirped FBG is capable of imposing a linear relation between the imported wavelengths and their time delay. Here, we aim to characterise the proposed design mathematically. Considering a single chain of the CU in Figure 2, the RF signal directly modulates the DML and the input optical field of the optical bandpass filter (OBPF) is formulated as (Thomas et al., 2015; Li et al., 2018a): where P_Laser is the LD's output power and ω_λ1 denotes the optical carrier's angular frequency corresponding to λ1 of Figure 2.
ω_f1 represents the angular frequency of the modulated RF signal. As seen in Eq. 1, E_1(t) is an optical double side-band (ODSB) signal consisting of the spectral components ω_λ1, ω_λ1 − ω_f1 and ω_λ1 + ω_f1, which correspond to the optical central frequency having the wavelength λ1, and its left and right side-bands with a spacing of f1, respectively. Then, after the optical bandpass filter, which is capable of filtering a single side-band of the generated optical signal E_1, we arrive at: As shown in (Eq. 2), the OBPF is capable of removing one of the side bands of the ODSB signal E_1(t) generated by the DML. After the OBPF, the signal E_2(t) becomes an optical single side-band (OSSB) signal. E_1(t) contains the spectral components ω_λ1, ω_λ1 − ω_f1 and ω_λ1 + ω_f1, while E_2(t) contains the spectral components ω_λ1 and ω_λ1 − ω_f1. We have previously demonstrated via implementation in (Li et al., 2018b) that an OBPF having a 3-dB bandwidth of 0.114 nm can generate the SSB signal from an ODSB signal having a side-band spacing of 3 GHz by removing ω_λ1 + ω_f1. Feeding E_2(t) and the RF drive signal into the MZM, the MZM output field can be expressed by combining with Eq. 2 to arrive at (Thomas et al., 2016) Eq. 3. Hence, we have: where ω_Δf, V_dr and V_π are the angular frequency of Δf, the amplitude of the MZM drive voltage, and its switching voltage, respectively. J_n(πV_dr/(2V_π)) is the Bessel function of the first kind and order n, which determines both the number and the amplitude of the side-bands. Specifically, as illustrated in Figure 2, the spectral components can also be represented by the terms A + B + C of Eq. 3, where A contains the spectral components ω_λ1 and ω_λ1 − ω_f1, while B and C represent the remaining spectral components. The MZM output field derived in (Eq. 3) results from feeding E_2(t) and the RF signal having the voltage V_dr to the MZM of Figure 2. According to (Kalman et al., 1994; Ma et al., 2007; Thomas et al., 2015; Thomas et al., 2016; Zhang et al., 2017; Zhai et al., 2021), when the MZM operates in the push-pull mode with quadrature point biasing, the MZM output can be derived and expressed as detailed in (Thomas et al., 2016). Due to the nonlinear transfer function of the MZM, we obtain multiple harmonics by varying the amplitude of the drive voltage applied to the MZM of Figure 2, which is captured by the Bessel function of (Eq. 3). Then, the MZM output forms a WDM signal carrying the same signal on each wavelength, as depicted in Figure 2. Each wavelength is photo-detected and recovered to the RF signal, where, for simplicity, we derive the RF signal fed into three neighboring elements of N array 1 of Figure 2 as follows: where t_01, t_02, t_03, t_04, t_05, and t_06 are the time-delays imposed on the wavelengths of the WDM signal. Hence, due to the linear relationship between the time-delay and the optical spectrum, we have Δt = t_06 − t_05 = t_04 − t_03 = t_02 − t_01. Then, by comparing the time-delays of the photo-detected signals S_0, S_1 and S_2, we are capable of obtaining the time-delay difference between S_0 and S_1 as Δt, while that between S_2 and S_1 is also Δt. On a similar note, the time delay between neighboring elements would be constant, namely −(ω_Δf/ω_f1)Δt, enabling the optical aided analogue beamforming using the CFBG. In the above description and derivations, we include the 3-antenna element array as a design example, where this design can be extended to any number of elements in the antenna array.
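As a quick numerical illustration of the constant inter-element delay produced by a linear delay-versus-wavelength characteristic, the sketch below evaluates a linear CFBG group-delay profile at six equally spaced WDM carriers and checks that all neighbouring delay differences are equal. The dispersion slope, the anchor delay and the wavelength grid are illustrative assumptions introduced here, not values stated in the paper.

```python
import numpy as np

# Illustrative CFBG characteristic (assumed): linear group delay versus wavelength,
# tau(lambda) = tau0 + slope * (lambda - lambda0).
slope_ps_per_nm = 50.0          # assumed dispersion slope of the CFBG
tau0_ps = 180.0                 # assumed delay at the first wavelength
lambda0_nm = 1548.78            # first wavelength of the WDM grid (from the quoted range)

# Six WDM carriers spaced by 50 GHz near 1550 nm correspond to roughly 0.4 nm spacing.
spacing_nm = 0.4
wavelengths = lambda0_nm + spacing_nm * np.arange(6)

delays_ps = tau0_ps + slope_ps_per_nm * (wavelengths - lambda0_nm)
diffs = np.diff(delays_ps)

print("per-wavelength delays (ps):", np.round(delays_ps, 2))
print("neighbouring differences (ps):", np.round(diffs, 2))
assert np.allclose(diffs, diffs[0]), "a linear delay profile yields a constant delta-t"
```

The constant step printed here plays the role of Δt in the derivation above; retuning the assumed slope (i.e., the total chirp of the CFBG) rescales all the steps together, which is what steers the beam.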
We have shown in the above that by imposing the linear time delay on the WDM signal E_MZM(t) we can obtain a constant time delay difference between the neighboring elements. It can be readily verified, using similar derivations as presented in (Eqs. 3-6), that any arbitrary number of antenna elements would obey the same time-delay difference rule of −(ω_Δf/ω_f1)Δt by extending S_3 to S_N, where S_N corresponds to the recovered RF signal fed into the Nth antenna element of N array 1. On the other hand, given that many users can be associated with multiple RRHs with a reasonable channel quality, we can consider improving the communication performance using diversity techniques combined with beamforming, for URLLC data for example. Another alternative is to employ a multiplexing transmission scheme combined with beamforming, in order to attain a higher throughput, which is beneficial for the eMBB scenario. More specifically, in (Hemadeh et al., 2018) we presented a flexible multi-functional MIMO technique and compared in detail the performance of the different configurations, where we have shown that the diversity aided MIMO has the best performance at the expense of reduced throughput, as opposed to the multiplexing techniques, which have a higher throughput and reduced bit error rate (BER) performance compared to the diversity techniques. Hence, in this paper we select the following options as the potential transmission schemes from the RRHs to the different users: 1) beamforming using one RRH, 2) beamforming combined with a space-time block code (STBC) diversity technique using multiple RRHs to transmit to the specific user, and 3) beamforming combined with spatial modulation (SM) using multiple RRHs to transmit to the specific user. The CU decides on the transmission scheme for each user, since the CU has all the information needed for making the decision. In the context of the proposed reconfigurable design, a single RRH can connect with a user as shown in Figure 3A, while multiple RRHs can connect to the user as shown in Figure 3B. Furthermore, given the challenges imposed on the A-RoF fronthaul in the proposed C-RAN system, such as the bandwidth, latency and jitter, as well as the need for a low-cost transport network (Checko et al., 2015), in the following we consider the maximum user load that can be supported by the fronthaul link. When the CU makes the decision to transmit to a specific user from one RRH using beamforming, then the optical aided beamforming described above will be employed. Specifically, the proposed architecture in Figure 2 exploits the optical aided beamforming employing the CFBG to facilitate a flexible analogue beamforming scheme. More specifically, the optical module of Figure 2 can be employed in the corresponding block of Figure 3, while using Algorithm 1 for wireless transmission. Note that, in this paper, we are proposing the beamforming solution, while the user association has been widely investigated (Olwal et al., 2016; Ejaz et al., 2020; Rodoshi et al., 2020). On the other hand, when a user is associated with multiple RRHs and the fronthaul has the capacity to allow transmission for this user from multiple RRHs, then the CU will decide to transmit using STBC or SM combined with the proposed optical-aided beamforming. First, when the diversity scheme is considered, the CU will choose the RRHs having the highest channel quality metric to transmit to the specific user.
Afterwards, the CU will encode the data to be transmitted from the multiple RRHs and then send it to the RRHs using the above proposed optical aided beamforming. This results in a diversity gain in addition to the beamforming gain, which yields a significant performance improvement compared to the case of using only beamforming from one RRH, while attaining the same throughput. When the CU decides on SM as the transmission scheme from the RRHs to the user equipment, then a multiplexing gain can be achieved. In this case, the CU will split the data bit stream into two parts: one is mapped to conventional amplitude and phase modulation (APM) such as PSK/QAM, while the other bit stream is used to decide which RRH transmits the signal. Then, the CU transmits the signal to the selected RRH, where the above-proposed optical aided beamforming is performed. Furthermore, in addition to the increased throughput per user attained using this mode, the RRH which is not transmitting to a specific user can transmit signals to other users, which results in an increased area spectral efficiency or sum rate, in addition to the efficient utilisation of the A-RoF fronthaul resources. Finally, the CU can adaptively decide on the transmission scheme for each user according to Algorithm 1, where the CU categorises users as URLLC and eMBB and then decides on the transmission scheme for each user considering the available resources as well as the channel quality information for each RRH-user link. SIMULATION RESULTS AND ANALYSIS As mentioned above, we aim to design an optical aided beamforming system to support the adaptive MU-MIMO system. In our design architecture, we are capable of reducing the beam-squinting resulting from the conventional electronic phase-shifting aided beamforming. In this section, we will evaluate the beamforming performance of the proposed A-RoF aided C-RAN design, followed by a comparison of the beam-squinting phenomenon between our design and the conventional design. As mentioned in Section 3, the optical beamforming system is capable of providing the required beam-steering. In this section, from the perspective of optical communication, we simulate the flexibility and range of the optical aided beamforming technique. Table 1 lists the simulation parameters, where we invoke an antenna array having 6 elements.
Table 1. Simulation parameters:
Number of antenna elements (N_A): 6
Wavelength spacing: 50 GHz
Length of the chirped FBG: 40 mm
RF signal frequency: 3 GHz
WDM central frequencies (THz): 193.450, 193.500, 193.550, 193.600, 193.650, 193.700
Simulation platform: Matlab, OptiSystem, OptiGrating
We consider a radio frequency (RF) signal at 3 GHz, which is transmitted by the A-RoF system and carried by a WDM signal of the wavelengths shown in Table 1. In our simulation, the RF signal is generated off-line by Matlab and then we measure the time delay using OptiGrating, a software tool for designing fiber Bragg gratings. As discussed in Section 3, the RF signal is first used to directly modulate a laser source. The directly modulated signal is bandpass filtered to obtain a single side-band modulated carrier and then the output of the bandpass filter is passed through an MZM that is driven by a sinusoidal signal having a frequency of 50 GHz. This results in the generation of multiple side-bands at the output of the MZM due to its non-linearity. These side-bands are then passed through the CFBG, which induces a different delay over each side-band depending on the wavelength.
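Before turning to the numerical results, it is worth making the adaptive mode selection of Algorithm 1 concrete. The algorithm itself is not reproduced in the text, but its inputs are spelled out above: the user's association with one or more RRHs, the per-link channel quality, the traffic class (URLLC or eMBB) and the available fronthaul resources. The sketch below is one plausible reading of that selection logic, not the authors' exact algorithm; all function and field names are placeholders introduced here for illustration.

```python
from dataclasses import dataclass

@dataclass
class UserContext:
    rrh_channel_quality: dict   # RRH id -> channel quality metric for this user
    traffic_class: str          # "URLLC" or "eMBB"

def select_transmission_scheme(user: UserContext, fronthaul_slots_free: int) -> tuple:
    """Hypothetical CU-side mode selection following the description in the text:
    single-RRH beamforming, beamforming + STBC diversity (URLLC), or
    beamforming + spatial modulation multiplexing (eMBB)."""
    ranked = sorted(user.rrh_channel_quality,
                    key=user.rrh_channel_quality.get, reverse=True)
    multi_rrh_possible = len(ranked) >= 2 and fronthaul_slots_free >= 2
    if not multi_rrh_possible:
        return "beamforming", ranked[:1]        # single-RRH beamforming
    if user.traffic_class == "URLLC":
        return "beamforming+STBC", ranked[:2]   # diversity for reliability
    return "beamforming+SM", ranked[:2]         # multiplexing for throughput

# Example: an eMBB user associated with three RRHs and two free fronthaul slots.
user = UserContext({"RRH1": 0.9, "RRH2": 0.7, "RRH3": 0.4}, "eMBB")
print(select_transmission_scheme(user, fronthaul_slots_free=2))
```

The same structure degrades gracefully: if the fronthaul cannot carry a second stream, the user falls back to plain beamforming from its best RRH, which matches the single-RRH option listed in Section 3.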
As mentioned earlier, the CFBG was implemented using the commercial tool OptiGrating, which gives the values of the delays induced over each side-band. We use these delays to calculate the angle of the beam obtained at the output of the antenna array. The delays can be tuned by varying the period of the CFBG. By tuning the CFBG of Figure 2, we can obtain a beamforming coverage of almost 180°, as depicted in Figure 4. More specifically, we apply six antenna elements to validate our flexible beam-steering scheme of Figure 2. Each beam emitted from around 95° to 265° relative to the orientation of the antenna array in Figure 4 is generated by tuning the total chirp of the CFBG implemented in the CU of Figure 2 from 0.7 to 4. Thus, any beam direction shown in Figure 4 can be used for wireless beam-steering and multi-user MIMO as discussed in Section 3, which results in multi-user beamforming as shown in Figure 5. Furthermore, as shown in Figure 2, the beamforming angle emitted from each antenna array can be independently tuned by imposing temperature or strain on the corresponding CFBG (Yunqi . Thus, multiple beams can be flexibly generated, since each beam is related to a different CFBG in the CU of Figure 2, where the time-delay is independently tuned. In Figure 6 we show the relation between the wavelength and the time delay imposed by the CFBG, where the time delay difference among the neighbouring wavelengths of the WDM signal of Figure 2 can be tuned by changing the gradients of the curves in Figure 6, as a result of tuning the total chirp of the CFBG. Note that in Figure 6 we show two different curves for two different values of the total chirp, as example values to verify the tunability of the linear relations with the tuning of the total chirp of the CFBG. It may be observed from Table 1 that the wavelengths we have chosen for transmission of the RF signals have a range of 1,548.78 nm to 1,550.78 nm. For these wavelengths, the time delay has values between 180 and 280 ps, as may be observed from Figure 6, which shows the linear relationship between the wavelengths and the time-delay of the WDM signal of Figure 2 provided by the CFBG. Here, the linear optical time-delay can be translated into the constant antenna time-delay differences of −(ω_Δf/ω_f1)Δt, as derived in Section 3. Additionally, multiple beams supporting multiple users can be realised using the proposed system. Figure 5 shows an example of two beams generated with different angles, where we tune the time delay difference among the neighbouring wavelengths to 64.03 and 14.73 ps, which correspond to total chirps of 0.85 and 3.65, respectively. Therefore, we show in Figure 5 that our system is capable of supporting multi-user beamforming by simply tuning the total chirp to map the beam to the desired direction. The beam direction range can be further enhanced by reconfiguring the CFBG's length or grating period. Finally, the beam-squinting impairment resulting from phase-shifting based beamforming can be mitigated by our true-time delay scheme, as shown in Figure 7, where the true-time delay scheme effectively removes the beam-squinting phenomenon of the phase-shifting scheme. Hence, the A-RoF based true-time delay system is capable of supporting multi-user transmission dispensing with the phase shifters, while supporting flexible beam steering control and removing the beam-squinting problem.
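The two example delay steps quoted above (64.03 ps and 14.73 ps) can be turned into approximate beam directions using the textbook uniform-linear-array relation sin(theta) = c*Δt/d, once an inter-element spacing d is assumed. The paper does not state the spacing, and whether the optical delay step maps one-to-one onto the inter-element RF delay depends on the scaling factor derived in Section 3, so the sketch below is purely illustrative: it assumes half-wavelength spacing at 3 GHz and treats the quoted steps directly as the per-element RF delays.

```python
import numpy as np

C = 3e8                      # speed of light (m/s)
f_rf = 3e9                   # RF carrier from Table 1
d = 0.5 * C / f_rf           # assumed half-wavelength element spacing (50 mm)

for dt_ps in (64.03, 14.73): # example delay steps quoted in the results
    dt = dt_ps * 1e-12
    s = C * dt / d           # sin(theta) for a uniform linear array
    if abs(s) <= 1.0:
        print(f"delta-t = {dt_ps} ps -> beam about {np.degrees(np.arcsin(s)):.1f} deg off broadside")
    else:
        print(f"delta-t = {dt_ps} ps -> outside the visible region for this spacing")
```

Under these assumptions the two steps correspond to noticeably different pointing directions, consistent with the two distinct beams shown for the two total-chirp settings.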
Compared to the conventional system of Figure 1, our proposed system utilizes passive CFBGs for an all-optical beamforming solution and a true-time delay phased array system. This system can facilitate the channel estimation and precoding scheme (Wang et al., 2019), while improving the bandwidth and increasing the number of antennas (Cai et al., 2016). CONCLUSION In this paper we proposed an A-RoF aided architecture for supporting the requirements of the current and future mobile networks, where we designed a photonics aided beamforming technique in order to eliminate the bulky electronic phase-shifters and the beam-squinting effect, while providing a low-cost RAN solution. We showed that the proposed system is capable of providing a beamforming range of 180°, while also supporting multiple users. Then, we presented a reconfigurable multi-user MIMO system utilising the proposed beamforming technique, in order to allow improved performance or increased throughput for the different users, depending on the channel quality as well as the user requirements. DATA AVAILABILITY STATEMENT The raw data supporting the conclusion of this article will be made available by the authors, without undue reservation. AUTHOR CONTRIBUTIONS All authors listed have made a substantial, direct, and intellectual contribution to the work and approved it for publication.
8,820
2021-08-31T00:00:00.000
[ "Engineering", "Physics", "Computer Science" ]
Effective data routing using mobile sinks in disjoint mobile wireless sensor networks ABSTRACT Introduction Recent advances in WSNs have attracted much interest and have had a remarkable influence on the efficiency and performance of a variety of applications, such as disaster management and surveillance. Such systems are able to process data collected from multiple sensor nodes to monitor activities and events in the observation area [1][2][3][34][35][36][37][38][39][40][41][42][43][44]. Due to the presence of wind and obstacles, the actual landing positions of sensors cannot be controlled. As a result, the coverage may be marginal compared to the requirements of the application, and this could remain true no matter the count of deployed WSN nodes. Therefore, it is required to make use of mobile nodes that are capable of moving to required positions so as to ensure the desired area coverage [4][5][6]. Consequently, the design of related algorithms or protocols is highly challenging in such a distributed environment. Moreover, the sensor nodes suffer from many limitations, such as a limited power source, short communication radius and small storage. These restrictions also make the design of WSN protocols challenging. A MWSN is comprised of a collection of mobile sensors with self-organizing and cooperative capability to detect and predict objects of interest [7,8]. All properties of MWSNs are inherited from static WSNs, and they additionally possess the unique property of mobility. The nodes in these networks collaboratively sense and collect data from different regions of the monitored area and provide a global sketch of monitored events and activities. However, a WSN node failure can imply connectivity loss, which in turn leads to partitioning of the WSN. Adding mobility to WSN nodes can appreciably enhance the potential of the WSN by making it reactive to events, resilient to failures and able to support a diverse set of applications with a common set of sensors. Collecting and routing sensed data from the sensor nodes is an essential concern in WSNs due to the limited power of WSN nodes, which are usually deployed randomly inside the region of interest. Using mobile nodes in the data gathering process assists in monitoring a larger region and transmitting data to an otherwise unreachable BS. A remote BS is deployed to gather data captured by the WSN nodes. Sensor nodes communicate with the BS through multi-hop routing, which may not be feasible in the case of disjoint clusters. Developing an efficient protocol for data gathering in the presence of coverage holes is an on-going research topic and is the core of this paper. Most of the existing routing protocols are proposed to handle dense and sparse networks. However, not much work has been proposed to handle the situations where the MWSN is divided into a set of partitions lasting for a significant period of time. The current routing protocols cannot handle such a situation. The routing challenges for disjoint clusters of WSNs, without having global information about dynamically created clusters, are discussed in this paper. Each disjoint cluster needs to find a path to the BS so as to deliver collected data and cluster information to the BS. We propose two new distributed and centralized routing discovery protocols. In the centralized protocol, the static sink controls the motion of mobile sinks, while in the distributed scheme, each mobile sink is responsible for collecting data in a given specific region. The mobile sinks need to coordinate
globally among themselves so as to maintain the original network topology as much as possible and build global knowledge. The remainder of this work is organized as follows: the relevant work is reviewed in Section 2. Section 3 presents a description of relevant terms. Section 4 explains the proposed protocols. Section 5 includes the evaluation of the proposed protocols and compares the results with the existing protocols. The conclusion of our work is given in Section 6. Related Work WSN routing is challenging and difficult due to various features that differentiate WSNs from contemporary wireless ad-hoc networks [15,16,19], such as: (1) it is not feasible to apply classical IP-based approaches to WSNs, as it is not possible to create a global addressing scheme because of the sheer count of WSN nodes; (2) in contrast to existing communication networks, most WSN applications demand the flow of data sensed from various areas to a specific sink; (3) significant redundancy exists in the generated data traffic, because multiple WSN nodes may produce the same data within the neighborhood of an activity under monitoring, and in order to improve bandwidth utilization and energy, routing protocols are required to exploit such redundancy; (4) the WSN nodes are highly constrained in terms of resources such as energy, processing capacity and transmission power, and therefore need effective management of resources. Thus, several protocols have been proposed for solving the data routing problem. Together with the application and architecture necessities, such routing protocols have included the features and demands of WSN nodes. Existing WSN data routing protocols are categorized as location-based, hierarchical, data-centric and a few distinct protocols according to QoS or network flow demands. The protocols in the data-centric class are often query-based and, in contrast to traditional address based routing, rely on naming of intended data based on application needs, which helps to eliminate various transmission redundancies [9,11,17,18,23]. In hierarchical protocols, the sensor nodes are grouped into a set of clusters, and one node is elected from each cluster as a cluster head (CH), while the remaining sensors are cluster members (CMs). The CH can perform aggregation to reduce data and save energy [12,14,[38][39][40][41][42][43][44]]. Location-based protocols utilize location details to send the data to required areas instead of the whole WSN [22]. The last type of routing techniques includes protocols that depend on network flow and that aim to satisfy the QoS needs in conjunction with the routing operation [10]. All these protocols work well in a connected network, where the path from the sensor nodes to other nodes or the BS can be determined. However, in our work, the network consists of disjoint clusters, where the path between each two clusters and between a CH and the BS cannot be established. In [31], a grouping scheme is proposed such that the nodes are partitioned into various groups, where the nodes that belong to a particular group are relatively close to each other based on geographical information. This method of grouping performs cluster-
based routing based on a scheme of grouping CHs, and therefore enhances scalability, decreases energy utilization, and extends the lifetime of the WSN. Such protocols cannot exchange messages if a CH cannot find another CH within its communication range or if the clusters are disjoint. It is highly challenging to find an optimal WSN deployment strategy that reduces cost and computation, minimizes communication overhead, offers a high degree of coverage and connectivity, and is resilient to node failures. The network could become disjoint at any time, in which case the WSN cannot serve its application well; to operate in this case, some techniques must be utilized. In [24], a message ferrying (MF) technique is deployed that works well for sparse networks. MF is a proactive method that exploits mobility. It employs a special set of mobile sensors as message ferries (MF or ferries) that facilitate the required services for communication between sensor nodes, and are in charge of data gathering in sparse WSNs. The major purpose of the MF approach is to instigate non-random sensor movements and to utilize such movements to facilitate efficient data collection and delivery. In [25], a three-phase heuristic scheduling method of the mobile sink in a hybrid WSN is proposed. The WSN is split into grids and these grids are categorized into clusters. This approach produces an optimal grid cell division for the network, which prolongs the WSN lifetime. For energy balancing inside clusters, the clusters arrange themselves by allocating or deallocating grid cells. An inter-cluster routing method based on the ACO algorithm was proposed in [26] for data packet routing in WSNs. The algorithm uses NLEACH when the inter-cluster ACO aggregates data. This method reduces the energy consumed in transferring unnecessary data sent by sensors. In [27], the Mode Switched Grid-based Routing (MSGR) protocol is proposed, which uses the benefits of grid-based protocols. Each grid head node switches between active and sleep modes. The grid head nodes that are put in sleep mode are those which are not taking part in the routing. In [28], a polynomial time heuristic approach is introduced, called the Relay Placement algorithm based on Space Network Coding (RPSNC), which finds the optimal locations of relay sensor nodes to solve the problem of network connectivity restoration, and aims to reduce the count of relay sensor nodes needed for connectivity restoration in huge networks using non-uniform partitioning, Delaunay triangulation and linear programming (LP) techniques. The authors in [29] proposed a method which adds long distance routing nodes to the WSN to improve the network connectivity during the growth of plants. The connection rank matrix is used to represent the connectivity of the deployed WSN based on graph theory. A fully connected network is marked by a rank value of 1; a smaller rank value means better connectedness. In a given MWSN, the authors of [30] investigate the issue of data gathering, in which multiple mobile sinks aim to gather data from a provided collection of static sensor nodes. This work proposes a Co-operative Data Collection Algorithm (CDCA) that divides the sensors into groups, each with a local mobile sink to gather the data generated by the nodes in that group of sensors. The CDCA constructs a global path for gathering data from the
local mobile-sinks to the global mobile-sink, after selecting a set of data gathering points in each group and constructing a separate path covering every data gathering point in each group. In [13], two routing protocols are introduced; the first one utilizes straight line moving of messengers (SLMM) and the second utilizes a flexible sharing policy of messengers (FSPM). In SLMM, once a WSN node in a cluster gets the request from the CH, it communicates with the neighbor nodes and moves out of the cluster in the direction of the BS. When it reaches the BS, it exchanges its data with the BS. The BS gives information about other existing clusters to the messenger. The messenger then takes the data and returns to its initial cluster. After iterating this scenario with different clusters, all the cluster information is shared among them. The packet structure for FSPM is the same as that of the SLMM protocol. In [32], the authors added a higher-tier genetic fuzzy system to SLMM and FSPM for route discovery and maintenance. They proposed two routing protocols: (1) Genetic Fuzzy Straight Line Moving of Messengers (GFSLMM) and (2) Genetic Fuzzy Flexible Sharing Policy of Messengers (GFFSPM). The main distinctions between our proposed protocols and the work in [13,32] are the following: (1) In [13,32], there is only one static sink, which causes long movements of the messengers. In our work, however, we introduce mobile sinks that move to the requesting cluster to pick up the sensed data directly from the clusters. (2) In [13,32], the messenger takes the sensed data packet and moves to the BS to unload the data. This process is repeated by the messenger until the target moves away from the cluster. In this way, the messenger exhausts its battery quickly and causes additional partitioning of the network. In our work, so as to save the energy of the messengers, the messenger moves to the BS only one time, to inform the BS of its cluster's location, which makes a mobile sink move to that location. The mobile sink leaves this location as the target moves to another cluster. Here, the messengers move to the new location of the mobile sink, decreasing the movement distance. The messenger shares cluster information with the messengers from other clusters or with intermediate clusters it passes by. (3) In [13,32], the messengers have the responsibility to move to the BS to route the data. In our work, however, the mobile sinks have the responsibility to route the data. Mobile Sinks Based Routing Protocols In a DMWSN, there exist relatively stable network partitions and every partition of the network has a connected dominating set (CDS) [33]. In this paper, we consider how to solve the problem of routing data from disconnected clusters effectively. In the following subsections, we introduce a description of some relevant terms that will be used in our proposed protocols. Cluster Based Organization: In the cluster-based partitioned network, sensor functions are classified as general sensor, CH, sink node and messenger. In each cluster, we employ the Dynamic Source Routing protocol (DSR). In this paper, any appropriate clustering algorithm can be used, and we are not concerned with the details of the clustering algorithms. Our proposed protocols do not impose any specific restrictions or requirements on the choice of the clustering algorithm. The following are the assumptions we make in developing the routing protocols for the DMWSN. Localization information exists in the network [20].
Each sink has the information of all mobile sinks in the network, and it can communicate with any one of them directly or via other sinks. The mobile sinks remain static if there is no demand from either the static sink or a messenger. The transmission range is a unit disc with radius r for the sensor nodes and R for the sinks, where R > r. Centralized Mobile Sinks Routing Protocol In the centralized mobile sinks routing protocol, every CH selects as messenger the sensor with the highest energy. To reduce the energy consumed by messenger movement, condition 1 below and condition 2 or 3 must be satisfied before the CH assigns a messenger. 1. The cached pipeline is full. 2. The waiting time is greater than the minimum defined waiting time before sending the messenger. 3. The waiting time is greater than the maximum defined waiting time before resending the messenger. Before appointing a messenger, condition 1 ensures that enough information has been collected. Condition 2 ensures the proper time for sending a messenger. Condition 3 indicates that, for sharing cluster information, the cluster resumes sending out the next messenger if the previous messenger expires after a long time. In this protocol, the network consists of sensor nodes, the BS, and mobile sinks. A node moves out of the cluster in the direction of the BS when it receives a request message from the CH via multiple hops. It exchanges its carried data with the BS. The BS provides the messenger with the clusters' information and the new location of the mobile sink to which the information should be sent, so that the second time the messenger carries data it returns to its actual cluster. The process is iterated in different clusters, and the total cost of sharing all clusters' information is linear in time. Figure 1 shows the process. The BS demands the nearest mobile sink to move to the location of the target cluster if there is no mobile sink in the range of the cluster. The movement of the mobile sink must occur without breaking any connections between the sinks. The new location (xn, yn) of the moved mobile sink is computed as follows: Here, (xc, yc) denotes the target cluster's location and (xs, ys) denotes the location of the second mobile sink that is nearest to the target's cluster. The mobile sink has three states: power-off, power-on, and processing. If the mobile sink that is demanded by the BS to move is in the processing state or plays a role in keeping connections between sinks, the location of the sink is replaced by cascading the mobile sinks of the route path [21]. The Protocol Outline Step 1 Sensory data is collected from members and is stored in the pipeline memory by the CH. The CH elects the sensor node with maximum energy as a messenger if the cached pipeline is full. Step 2 starts if condition 2 or 3 above is satisfied. Step 2 Using DSR, the Cluster Set Information packet is generated and routed by the CH to the chosen messenger. The command type and trigger time are read by the messenger. The messenger starts step 3 if the command type is 0. Step 3 The messenger carrying the information packet proceeds out of the cluster and goes in the direction of the BS. At every r/v time interval, a probing message is broadcast by the messenger, and it waits for an answer. On its way, if any mobile sink replies, the messenger exchanges its carried packet with the BS through that mobile sink, directly or via other mobile sinks. In step 4, the messenger waits to get the New Location reply from the BS via the sink. If the messenger receives a reply from a mobile sink, it enters step 6.
Meanwhile, the mobile sink transmits the cluster details in its communication cycle. The track information is saved by the moving messenger for the next delivery. The cluster set information packet includes the cluster's own details and details of other clusters obtained from previous messengers (global knowledge). Step 4 The messenger unloads its carried packet at the BS. The BS reads the information and adds/updates the cluster information. Then, it provides the messenger's cluster with the nearest mobile sink. The BS routes the message to the mobile sink using the routing solution as in step 2. Step 5 The selected mobile sink receives the message from the BS and moves to the nearest point to the cluster without breaking the connections to other sinks. Step 6 The new packet is carried by the messenger and it goes back to its cluster. Step 7 When the messenger contacts any one of its cluster members, it unloads the message packet from the BS. The packet is routed to the CH as in step 2, as shown in Figure 1. Distributed Mobile Sinks Routing Protocol This protocol works on a network where all sinks are mobile. This structure is based on straight line mobility of the messenger. The key difference between this protocol and the previous one is that each sink is responsible for gathering data from a specified region (its authority region). The mobile sink can communicate with all sensors in its region. The regions of the mobile sinks overlap, so a sink can communicate and exchange data with its neighbor sinks by moving to an overlapped region within a delay time (the time that every two adjacent sinks have agreed upon to exchange data between them). If there is more than one cluster request for a particular mobile sink, the mobile sink moves to the center of gravity (COG), or centroid, which is the average of the X and Y coordinates of the cluster locations. The Protocol Outline Step 1 Collected data is stored in the pipeline memory of the CH. If the pipeline cache is full, the CH selects as messenger the node with maximum energy. If condition 1 or 2 is satisfied, it starts step 2. Step 2 Using DSR, the selected messenger obtains the Cluster Set Information packet from the CH. The command type and trigger time are retrieved by the messenger. If the command type is zero, then the messenger starts step 3 after the trigger time. Step 3 At every r/v time interval, the messenger carrying the information packet broadcasts a probing message, and if it receives a reply from another cluster, it shares its packet with that cluster. It shares information with the mobile sink as well. After this process, it starts step 5. The mobile sink broadcasts cluster information in its communication cycle. The moving messenger saves the track information. Step 4 At the BS, the messenger unloads its packet. The mobile sink updates or adds the current cluster information and sends its new location. Step 5 The messenger is given a new packet and returns to its cluster. Step 6 The messenger unloads its packet to a regular member of its cluster when they meet, and the packet is routed to its CH. The process is illustrated in Figure 3. Simulation Results We provide the simulation results in this section to show the performance and illustrate the benefits of the two proposed protocols, and to compare the energy consumption, count of live nodes, packet delivery ratio and end-to-end delay of our proposed protocols (CMSR and DMSR) against existing protocols.
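Two of the geometric rules described above lend themselves to a compact sketch: the BS picking the nearest mobile sink to a requesting cluster (centralized protocol), and a mobile sink moving to the centroid of the clusters requesting it (distributed protocol). The snippet below illustrates both; the coordinates are made-up examples and the helper names are ours, not identifiers from the paper.

```python
import math

def nearest_sink(cluster_xy, sink_positions):
    """Return the id of the mobile sink closest (Euclidean distance) to the
    requesting cluster, mirroring the BS's choice in the centralized protocol."""
    return min(sink_positions,
               key=lambda s: math.dist(cluster_xy, sink_positions[s]))

def centroid(points):
    """Centre of gravity of the requesting clusters, as used when several
    clusters request the same mobile sink in the distributed protocol."""
    xs, ys = zip(*points)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

# Example values (illustrative only).
sinks = {"MS1": (100.0, 500.0), "MS2": (400.0, 320.0), "MS3": (700.0, 100.0)}
print(nearest_sink((420.0, 300.0), sinks))                       # -> "MS2"
print(centroid([(120.0, 80.0), (200.0, 160.0), (280.0, 60.0)]))  # -> (200.0, 100.0)
```

The actual protocols add constraints on top of this geometry, notably that a moved sink must stay within range R of its neighbouring sinks, which the sketch does not attempt to model.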
Simulation Environment: The simulation uses the same environment as that used in [13], where ten disjoint targets are deployed randomly in an 800 m × 640 m area. In our preliminary experiments, 100 WSN nodes initially roam randomly to detect targets. The radio propagation range of each sensor is 50 m, the radio propagation range of each sink is 200 m, the moving speed of the sensors and the sinks is 2 m/s, and the number of mobile sinks is 10. In the reported measures, Ro denotes the steps of roaming activity, Mg represents the steps of messaging activity, Dex is the amount of exchanged packets, and Dam is the set of appointing-messenger times. The experimental results shown in Figure 3 show that the DMSR protocol for route discovery is more energy efficient and consistent than CMSR. This is because assigning a sink to each region to be responsible for exchanging and gathering data reduces the movements required and hence the energy consumption. Figure 3 also shows that DMSR and CMSR are more energy efficient than FSPM, SLMM, and GFFSPM. This is because FSPM, SLMM, and GFFSPM have a static BS, so the movements of the messengers are performed to a fixed location, which has a negative influence on energy consumption. This is not the case in DMSR or CMSR, where the mobile sink moves to the cluster that requests data exchange and delivers data among the sinks. The mobile sink stays in the cluster to collect the data as long as the target exists. The transmission of data packets is executed by the mobile sinks, not via the messengers (sensor nodes), which can increase the lifetime of the network. In addition, mobile sinks collect the sensed data from other clusters and bring it to the requesting cluster in this way, which saves more energy of the messengers. On the other side, FSPM is more energy efficient than SLMM because in FSPM a messenger exchanges cluster information with messengers from other clusters or with intermediate clusters it passes by, while in SLMM the messengers have to move to the fixed location of the static sink to exchange data and return to their clusters. GFFSPM shows better energy consumption than FSPM, but, due to the limited number of clusters in our experiment, the population space is not large enough to make use of the genetic algorithm efficiently. Our proposed protocols also perform better in increasing the number of active nodes as compared with the FSPM, SLMM, and GFFSPM protocols (Figure 4). This is because all the data delivery is performed by the sensor nodes in FSPM, SLMM, and GFFSPM; however, in our proposed protocols every messenger moves to the static sink at most one time to submit its cluster information. Then, the messengers move to the nearest sink the next time, which decreases the consumed energy of the messengers. Conclusion From the simulation outputs, we can conclude that our proposed routing protocols provide more energy savings than the existing protocols. Therefore, our protocols not only enhance the network lifetime but also minimize the end-to-end delay and increase the packet delivery ratio. The cooperative efforts of cluster knowledge sharing and collaboration among autonomous agents support autonomous terrain monitoring and communication with the BS for prolonging the network lifetime.
Figure 1: Planar diagram of the process.
Figure 2: Planar diagram of the process.
Figure 3 : Figure 3: Energy Consumption Comparison between the proposed and existing Routing Discovery Protocols Figure 4 : Figure 4: Active nodes in our proposed and existing Routing Discovery Protocols Figures 5 and 6 shows end to end delay and packet delivery ratio of our proposed protocol in comparison with the existing protocols.As shown in the figures, end to end delay and the packet delivery ratio increases as the average degree increases.End to end delay of our proposed protocols is better than of existing protocols (FSPM, SLMM, and GFFSPM ) as the use of mobile sinks in our proposed protocols reduces the movement distances inside the network while existing protocols use static sinks.Also, reducing the movement distances improves the packet-delivery ratio as shown in and Figure 5 and the packet-delivery ratio increases as the time spent increases. Figure 5 :Figure 6 : Figure 5: End to End Delay in our proposed and existing Routing Discovery Protocols
5,556.6
2019-04-29T00:00:00.000
[ "Computer Science" ]
Ab-initio calculations of shallow dopant qubits in silicon from pseudopotential and all-electron mixed approach Obtaining an accurate first-principle description of the electronic properties of dopant qubits is critical for engineering and optimizing high-performance quantum computing. However, density functional theory (DFT) has had limited success in providing a full quantitative description of these dopants due to their large wavefunction extent. Here, we build on recent advances in DFT to evaluate phosphorus dopants in silicon on a lattice comprised of 4096 atoms with hybrid functionals on a pseudopotential and all-electron mixed approach. Remarkable agreement is achieved with experimental measurements including: the electron-nuclear hyperfine coupling (115.5 MHz) and its electric field response (−2.65 × 10−3 μm2/V2), the binding energy (46.07 meV), excited valley-orbital energies of 1sT2 (37.22 meV) and 1sE (35.87 meV) states, and super-hyperfine couplings of the proximal shells of the silicon lattice. This quantitative description of spin and orbital properties of phosphorus dopant simultaneously from a single theoretical framework will help as a predictive tool for the design of qubits. Modelling the quantum transport properties of qubit arrays and the electronic properties of dopant qubits is computationally challenging yet crucial for device optimization in quantum computing. Here, the authors compare different DFT-based methods to describe the properties of shallow donor-based qubits in silicon. T he exponential progress of microelectronics over the last decade has been dramatically dependent on the excellent host material, silicon. The weak spin-orbit coupling and the existence of zero nuclear spin isotopes have made silicon a prime semiconductor platform for the emerging field of quantum information technology 1 . Shallow hydrogenic impurities (e.g., phosphorus, bismuth, and arsenic) 2 offer natural Coulomb confinement to single electrons and provide access to single electron and nuclear spins for encoding quantum information 3 . The relatively large electronic wavefunctions of the donors allow gate control of orbital and spin properties. This has led to the demonstration of high fidelity single 4 and two-qubit logic gates 5,6 with exceptional spin coherence times 7,8 and all-electrical spin readout and initialization 9 . Advanced lithographic techniques have been developed to deterministically place one to few phosphorous donors in a plane of silicon with a lattice constant precision 10,11 , leading to the fabrication of single-atom transistors 12 , few-atom thick nanowires 13 , and low-noise atomic spin qubits 4,5 . Accurate modeling of the spin and orbital properties of shallow dopants is crucial for designing and optimizing high-performance spin qubits 14 . Two dominant approaches in the community are the multi-valley effective mass technique and the Slater-Koster tight-binding (TB) method, as these enable calculations at the length scale of 10-100 nm. While effective mass theory is a continuum minimal-basis approach lacking atomic resolution, Slater-Koster TB theory is an atomistic full Brillouin Zone method 15 . However, both these methods are semi-empirical and single-electron in nature, relying on different levels of simplifications and parameters from ab-initio methods. The ability to extend the simulations to a larger number of electrons and compute properties without fitting parameters can be achieved by ab-initio density functional theory (DFT). 
DFT has been extensively applied to elucidate the electronic properties of semiconductors and is currently the only approach to obtain absolute values of Fermi contact hyperfine (HF) parameters between electron and nuclear spins in multi-electron systems 16 . Although DFT has succeeded in providing accurate hyperfine parameters for some highly localized deep impurity systems 17 , it is extremely challenging to simulate relatively delocalized shallow dopant states due to the system size limitations and excessive computational burden 18 . While DFT calculations on shallow dopants have improved over the years, past works were able to calculate either the hyperfine coupling or binding energy with accuracy, but not both. Swift et al. 19 employed pseudopotential and extrapolation approaches with HSE and PBE functionals working in tandem, achieving outstanding accuracy in binding energy calculations of bismuth and arsenic donors in Si. However, the predefined frozen core region of Kohn-Sham orbitals results in an incomplete description of the hyperfine coupling. Smith et al. 20 achieved success in the binding energy calculations directly from the wavefunctions extracted from the supercells containing 10648 atoms and the impurity potential of the phosphorus donor using an empirical model. And Gerstmann 21 accurately reproduced the isotropic hyperfine and super-hyperfine parameters of shallow donors in Si using Green's function approach. However, the empirical correction included in these methods lacks the capacity to provide an accurate evaluation of the hyperfine coupling and orbital splitting of the donor simultaneously from a single theoretical framework. In addition, electric field dependency of the hyperfine coupling, critical for quantum computing, has never been obtained from ab-initio calculations. In the present work, these obstacles are overcome by using pseudopotential (PSP) and all-electron (AE) mixed Gaussian type localized basis sets combined with hybrid functional to conduct large-scale DFT calculations containing up to thousands of atoms (4096 atoms; 4 × 4 × 4 nm 3 ). A host of important spin and orbital properties of a single phosphorus donor in silicon (Si:P), such as hyperfine coupling and its electric field dependency, superhyperfine couplings at silicon lattice points within a few shells of the impurity, and excited orbital (valley-orbit) splitting are accurately evaluated simultaneously. To obtain consistently accurate values of all these quantities together, the wavefunction, and hence the electron density needs to be accurately described not only close to the donor nucleus but also over a large part of the silicon lattice extending over a Bohr radius of 1-2 nm. Figure 1 shows a schematic of a single P dopant as a spin qubit in a silicon host, with the corresponding charge density difference map obtained from DFT showing the electronic interactions close to the donor atom. We focus on P donors due to their technological significance in quantum computing 2 , but the presented methods also apply to other shallow Group V dopants in silicon. For comparison, three separate approaches are used, namely, (1) the Pseudopotential approach (PSP) in VASP 22 , (2) the All-Electron approach (AE) in CP2K 23 , and (3) the mixed Pseudopotential and All-Electron approach (PSP-AE) in CP2K. The three-pronged approaches enable us to contrast the shortcomings of each technique and help us to assess computational cost versus accuracy. 
We find that the use of proper basis functions, state-of-the-art functionals, and an extrapolation method can yield quantitatively accurate results without significantly increasing the computational burden. As an AE approach based on Gaussian-type localized basis sets, CP2K solves the inherent problems induced by the PSP approximation, such as the disregard of the core spin-polarization effect and of the contribution of the exchange-correlation potential in the vicinity of the nucleus 24. Hence, it provides high computational efficiency and superior accuracy for core-electron related properties such as hyperfine coupling, nuclear magnetic resonance, and the g-factor [25][26][27][28]. Additionally, a hybrid functional is employed, which can further improve the accuracy of the wavefunction amplitude and the electron localization at the donor nucleus 19,29. With the combination of the AE approach and hybrid functional, an HF constant of 115.5 MHz is predicted in the present work, which is in excellent agreement with the experimental value of 117 MHz 30. The corresponding binding energy 1s(A1) of the ground state and the energies of the excited 1s(T2) and 1s(E) states are found to be 46.07 meV, 37.22 meV, and 35.87 meV, respectively, in good agreement with the experimental values 31,32. Furthermore, the Stark shift of the hyperfine coupling is of utmost importance for electrically tuning the electron-nuclear resonance frequency of donor spin qubits, and it is computed from DFT in quantitative agreement with measurements. Finally, we show that the computational burden of an AE method can be largely alleviated by using a PSP-AE mixed method which can achieve the same accuracy as the AE approach. The method introduced in the present work is currently, to the best of our knowledge, the only approach that can simultaneously reproduce all the important properties of shallow donors with predictive accuracy (e.g., anisotropic/isotropic hyperfine and super-hyperfine interactions and their electric field dependency, as well as binding energy calculations). Therefore, the present work represents significant progress in ab-initio calculations of shallow spin defects and paves the way for predictive exploration of similar quantum defects in a host of emerging materials.

Fig. 1 Illustration of a single P donor in a Si host in a quantum information processor. A-gates above the donors control the hyperfine coupling and hence the resonance frequency of the nuclear spin qubits; J-gates between the donors control the electron-mediated coupling between adjacent nuclear spins. The black background Si region below the barrier is used to reflect a more bulk-like scenario for qubits embedded into Si at depths >10 nm. The embedded charge density difference map illustrates the electronic interaction around the donors (isosurface value of 1 × 10−3 e/Å3), where the cyan area represents electron depletion and the yellow area represents electron accumulation.

Results
Hyperfine coupling. The Hamiltonian describing the hyperfine interaction between an electronic spin S and a nuclear spin I takes the form

H_hf = S · A^I · I,

where the isotropic part of the hyperfine tensor A^I, denoted A^I_iso, is the isotropic or Fermi contact hyperfine coupling at the site of the nuclear spin and is calculated by 33

A^I_iso = (2 μ0 / 3) γe γI ħ² ∫ δ_T(r − R_I) ρ_s(r) dr,

where μ0 is the magnetic permeability of free space, γe and γI are the electron gyromagnetic ratio and the nuclear gyromagnetic ratio of the nucleus at R_I, respectively, δ_T(r) is a smeared-out δ function that takes scalar-relativistic effects into account 33, and ρ_s is the spin density.
For all-electron basis sets approach, the relativistic effects onto the A I iso are ignored and the last term becomes ρ s R I À Á . If the nuclear spin site is the impurity site itself, we obtain the hyperfine coupling of the donor; while if the site is a silicon site in the vicinity of the donor, occupied by a 29 Si isotope of 28 Si, we obtain the super-hyperfine couplings. Pseudopotential approximation approach (VASP-PSP). In order to predict the HF constants from DFT results, a very recently proposed extrapolating method is employed in the present work with minor modification 19 . Briefly, since the error induced by the exponential tail of wavefunctions inversely scales with the supercell size, the calculated HF constants (A I iso ) of a variety of supercell sizes obtained from the PBE functional are plotted as a function of 1/N ( Fig. 2a and Supplementary Table S1), where N represents the number of atoms in the supercell. A fitting function is then obtained based on the polynomial least-square method starting from the supercell size of n = 4 (N = 512) to n = 6 (N = 1728), which gives an expression of A I iso = 7.47x 2 + 72.8x + 43.6. Subsequently, an intercept value of 43.6 MHz is achieved by the extrapolation of N⟶∞ (i.e., x → 0), corresponding to the predicted HF constant. In contrast to the experimental HF constant of 117 MHz 30 , the predicted value is significantly underestimated owing to the excess delocalization of wavefunction inherent in PBE. The use of hybrid functional can improve the description of localization. As reported, the fitting functions obtained from PBE and HSE functionals provide exactly the same slope value 19 . Hence the fitting function A I iso from PBE is subsequently employed to fit HF results obtained from HSE at the supercell size of n = 5 (i.e., A I iso = 136.9 MHz and x = 1) (Fig. 2a). This gives an intercept value of 56.7 MHz (Supplementary Table S2), showing a slight improvement over PBE. However, the HF coupling is still considerably underestimated due to the inherent problems of the PSP approximation method discussed in the introduction section. It is worth noting that due to the excessive computational burden and lower computational efficiency of large-scale systems, a limited supercell size in the VASP-PSP and the previously reported linear-extrapolation 19 approaches results in a weak convergence of the fitting function (i.e., the linear coefficient is dominant in A I iso ), leading to a significant underestimation of the HF constants. This poor convergence issue and underestimation of the HF constants can be considerably eliminated by the employment of all-electron basis sets and larger supercell sizes as described in the following CP2K-AE and CP2K-PSP-AE approaches, providing a significant improvement in evaluating the core-electron dominant properties of shallow donors. All-electron approach (CP2K-AE). To eliminate the inherent problems induced by the PSP approach, GAPW method implemented in CP2K using an AE basis set is employed to calculate the Fermi contact HF parameter. Due to the higher computational efficiency, a larger supercell size is evaluated for the PBE approach, containing 4096 (n = 8) atoms. The calculated HF parameters for the donor are presented in Supplementary Table S1. In contrast to VASP-PSP approach, the use of AE scheme provides higher HF constants for different supercells due to the more localized nature of Gaussian type basis sets and improved description of the core spin-polarization effect. 
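The supercell extrapolation used above, and again for the CP2K-based approaches below, can be illustrated with a short numerical sketch. The fitted PBE coefficients and the HSE anchor value below are the VASP-PSP numbers quoted in the text; the raw per-supercell values live in the supplementary tables, so the polynomial-fitting step is shown only as a commented placeholder.

```python
# Sketch of the supercell extrapolation of the hyperfine constant (VASP-PSP numbers from the text).
import numpy as np

# Step 1: fit A_iso(x) = a*x**2 + b*x + c to the PBE results, with x proportional to 1/N
# and normalised so that x = 1 at the n = 5 (N = 1000) supercell, as in the text.
# x_pbe = 1000.0 / np.array([512, 1000, 1728])   # placeholder abscissae
# A_pbe = np.array([...])                        # PBE A_iso values (MHz), listed only in the supplement
# a, b, c = np.polyfit(x_pbe, A_pbe, deg=2)
a, b, c = 7.47, 72.8, 43.6                       # fitted coefficients quoted in the text

# Step 2: keep the PBE curve shape, anchor it to the HSE value at x = 1,
# and read off the x -> 0 (N -> infinity) intercept.
A_hse_n5 = 136.9                                 # HSE A_iso at n = 5 (MHz)
pbe_rise = (a * 1**2 + b * 1 + c) - c            # f_PBE(1) - f_PBE(0) = a + b
A_extrapolated = A_hse_n5 - pbe_rise
print(round(A_extrapolated, 1))                  # ~56.6 MHz, close to the quoted 56.7 MHz
                                                 # (small difference from the rounded coefficients)
```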
The same extrapolating method as the VASP-PSP approach is used to predict the HF constants. With the increase in supercell size, the quadratic coefficient of the polynomial fitting function becomes more dominant. The fitting function starting from the supercell size of n = 5 (N = 1000) to n = 8 (N = 4096) for PBE results gives an expression of A I iso = 20.22x 2 + 15.3x + 110.0 (Fig. 2b). This is subsequently employed to extrapolate HF results obtained from HSE at n = 5 (i.e., A I iso = 151.0 MHz and x = 1), giving an intercept value of 115.5 MHz (Supplementary Table S2 Pseudopotential approximation and all-electron mixed approach (CP2K-PSP-AE). Although the AE approach significantly improves the inherent delocalization problem induced by the PSP method, the consideration of all electrons for each atom considerably increases the computational cost, making large-scale system calculations expensive. Therefore, a PSP and AE mixed approach is employed in the present work to efficiently and accurately evaluate the hyperfine coupling interaction for largescale systems. To achieve consistent results in contrast to pob-DZVP AE approach, pseudopotentials from the same family (DZVP-GTH-PBE) were used. The lattice parameter, the bond length of Si-Si, and the angle of Si-Si-Si of the geometry optimized from DZVP-GTH-PBE pseudopotential using PBE are 5.479 Å, 2.373 Å, and 109.5°, consistent with the results from pob-DZVP AE approach using PBE (5.486 Å, 2.375 Å, and 109.5°, respectively). The CP2K-PSP-AE scheme provides similar HF constants compared with CP2K-AE approach (Supplementary Table S1). The same extrapolating method starting from the supercell size of n = 5 (N = 1000) to n = 8 (N = 4096) is used, providing a fitting function of A I iso = 19.49x 2 + 18.9x + 102.3 (Fig. 2c). A subsequent extrapolation on HSE results n = 5 gives an intercept value of 112.6 MHz (Supplementary Table S2), in good agreement with the experimental value and the HF constant predicted from CP2K-AE approach. Furthermore, a comparison of the computational cost of CP2K-PSP-AE mixed approach with CP2K-AE approach summarized in Supplementary Table S3 indicates a significant improvement in the computational speed (4.5-5 times faster), permitting accurate spin-density description with higher computational efficiency. Super-hyperfine coupling. In addition to the calculation of HF parameters, super-hyperfine (SHF) interactions are evaluated. In contrast to the HF parameters which describe the wavefunction distribution on the central donor nucleus, SHF interactions depend on the wavefunction distribution in the silicon lattice sites surrounding the donor which can be occupied by the spin ½ isotope 29 Si. Five shells of silicon atoms closest to the donor center are investigated, namely (1,1,1), (2,2,0), (1,1, 3), (0,0,4), and (3,3,1) in units of a 0 /4 with a 0 being the lattice constant. The SHF parameters are calculated in a similar way to the HF parameter through the extrapolation approach. The fitting functions are obtained starting from the supercell size of n = 4 (N = 512) to n = 6 (N = 1728) for VASP approach (Supplementary Figure S1) or n = 8 (N = 4096) for CP2K approach (Supplementary Figs. S2 and S3). These fitting functions are subsequently used to fit the SHF results obtained from HSE at the supercell size of n = 5, giving the final prediction of SHF constants. As shown in Fig. 
3, the 4 th -nearest-neighbor shell (0,0,4) gives the highest SHF constant owing to the Kohn-Luttinger oscillations 34 , consistent with the experimental results 30 and data from Green's function calculation (LMTO-GF) 21 . A better agreement with Green's functional method is obtained from AE approach. It's worth noting although the differences between VASP and CP2K results might be partially induced by the different optimized geometries, the consideration of the core electrons (i.e., PSP vs. AE basis sets) is the dominant factor that influences the HF and SHF calculations. These results prove that a combination of hybrid functional and AE approach can significantly improve the delocalization problem induced by PSP and PBE functional and accurately describe the spin-density of the donor state. The anisotropic SHF constants of the five shells of silicon atoms closest to the donor center are subsequently analyzed. Since most of the anisotropic parts (kHz) are at least 3-4 orders of magnitude smaller compared with the isotropic (MHz) SHF constants, the extrapolation method is not applicable in this case due to the sensitivity and significant fluctuation of the anisotropic parts as a function of supercell size. Hence, the anisotropic constants calculated from the largest supercells (n = 6 for VASP and n = 8 for CP2K) were used as the final prediction. As shown in Supplementary Table S4, both VASP and CP2K results exhibit good agreement with the experimental value 35 . The only exception is the (0,0,4) results obtained from VASP which provide different values of |B zz |(23 kHz) and |B xy |(49 kHz) components, contradictory to the experimental measurements (41 kHz for both |B zz | and |B xy |). In this sense, CP2K results show better consistency with the experimental values, where an anisotropy of these values could not be resolved. Electric-field dependent hyperfine coupling. Different from previous studies that investigate the influence of strain-induced internal field on the hyperfine coupling 19,34,36 , an external static electric field is employed in the present work. The presence of an external electric field (ε) can distort the shape of the donor wavefunction and pull the wavefunction away from the donor site. This will result in a decrease in the electric field dependent hyperfine parameter A(ε) which is proportional to jΨðε;R I Þj 2 , whereR I represents the donor site. Experimental measurements 37 have shown a quadratic dependence of the hyperfine coupling with electric field, which is parameterized as 38 : where 4Aε ð Þ represents the change of hyperfine parameter in the presence of electric fieldε (i.e., Aε ð Þ À A 0 ). η 2 and η 1 are the coefficients of the quadratic and linear terms respectively. For bulk donors, η 2 was measured to be −3.7 × 10 −3 μm 2 /V 2 for Si:Sb and η 1 was found to be negligibly small. Since 4Aε ð Þ is a relative value, there is no need to employ all-electron basis sets for all atoms; hence only CP2K-PSP-AE method is used. Detailed calculation parameters are described in the Methods section. Figure 4a shows our DFT calculations of 4Aε ð Þ=A 0 as a function of the electric field magnitude for different supercell sizes. The quadratic coefficient η 2 is the dominant part and decreases with the increase of supercell size, achieving a value of −2.65 × 10 −3 μm 2 /V 2 at the supercell size of n = 8 (N = 4096) ( Fig. 
4b and Supplementary Table S5), in good agreement with the TB results (−2.76 × 10 −3 μm 2 /V 2 ) 38 and the experimental data for Si:Sb (−3.7 × 10 −3 μm 2 /V 2 ) 37 . Although the calculated quadratic coefficient is larger than the experimental measurement owing to the limitation of supercell size, the trend shows that it is approaching the experimental value with the increase of supercell size. In contrast, the linear coefficient η 1 is relatively stable and close to zero for all supercells. The effect of electric field on the isotropic and anisotropic parts of HF and SHF interactions is further analyzed. In this part, AE basis sets were used for P donor and Si atoms located in (1,1,1) and (0,0,4) shells, while the rest Si atoms were treated with PSP in order to reduce the computational burden. (1,1,1) and (0,0,4) shells were chosen to reflect the effect of electric field on the inner and outer 29 Si atoms. Compared with the isotropic parts, the value of the anisotropic parts are at least two orders of magnitudes smaller except the (1,1,1) shell (Supplementary Table S6), consistent with the experimental results 35 . And the change of the anisotropic parts under the effect of electric field is at least one order of magnitude smaller compared with the isotropic parts. The anisotropic parts of the HF constant of P donor are increased under electric field, due to the symmetrybreaking effect, except the |B xy | component. In terms of the anisotropic parts of the four 29 Si atoms located in inner (1,1,1) shell, only the diagonal components (i.e., |B xx |, |B yy |, and |B zz |) are increased under electric field, while the non-diagonal components show an opposite trend. For the anisotropic parts of the six 29 Si atoms located in outer (0,0,4) shell, the effect of electric field can be summarized as two cases: (1) For Si1 and Si2 atoms which are along the electric field z-direction and located on the opposite sides of the P donor (i.e., (0,0,4) and (0,0,−4)), the diagonal components (i.e., |B xx |, |B yy |, and |B zz |) and one non-diagonal component (|B xy |) are decreased under electric field, while the rest two non-diagonal components (i.e., |B xz |, |B yz |) are increased; (2) In contrast, the anisotropic parts of the rest four Si atoms showing a different trend, with one diagonal component decreased while all the rest diagonal and non-diagonal components increased. It's worth noting that the |B zz | (19 kHz) and |B xy | (27 kHz) components of (0,0,4) shell in this case are different, contradictory to the experimental measurements, showing the same issue as previously discussed in the anisotropic VASP SHF results (Supplementary Table S4). This further proves the importance of using all-electron basis sets for all atoms to correctly describe the core-electron contributions to the spinpolarization effect. Binding energy calculation. Binding energies (E) are calculated from the Kohn-Sham eigenvalues as described in a previous work 19 : where ϵ cb represents the eigenvalue of the conduction band minimum (CBM) in bulk Si and ϵ donor represents the resultant eigenvalue in P doped Si. ∇V, which is calculated by the Freysoldt method 39,40 , is the correction term and is used to align the Kohn-Sham levels between the bulk Si and Si:P system (Supplementary Table S7). Pseudopotential approximation approach (VASP-PSP). 
The final binding energy is predicted in a similar way to the calculation of HF constant by fitting the binding energy results from PBE as a function of the supercell size starting from n = 4 (N = 512) to n = 6 (N = 1728) using the linear least-square method. This provides a fitting function of E b = 0.0312x + 0.0325 (Fig. 5a). A subsequent extrapolation to N⟶∞ (i.e., x⟶0) gives an intercept value of 32.5 meV, representing the prediction of binding energy from the PBE functional. However, the delocalization of the wavefunction and the underestimation of the exchange-splitting effect (δ ex ) from PBE functional result in an underestimation of the binding energy compared with the experimental value of 45.59 meV 31 . Therefore, HSE functional is employed and the underestimation of δ ex in PBE is considered (Supplementary Table S7). δ ex represents the difference between the spin-up and spin-down eigenvalues of the donor state 19 and it also shows a clear trend inversely scaling with the supercell size (N) (Fig. 5b), giving slope values of 0.0086 (s PBE;δ ex ) and 0.0304 (s HSE;δ ex ) for PBE and HSE functionals, respectively. The final HSE binding energy slope (s HSE ) is calculated by adding half the HSE exchange-splitting slope ( 1 2 s HSE;δ ex ) on the spin-averaged PBE slope (s PBE À 1 2 s PBE;δ ex ) 19 : This gives a slope value of 0.0421 (s HSE ), which is subsequently used to extrapolate the binding energy calculated from HSE functional at the supercell size of n = 5 to the N⟶∞ limit (i.e., E b = 0.088 and x = 1) ( Table 1). The resultant intercept value corresponds to the final prediction of the binding energy, with the value of 46.07 meV, in excellent agreement with the experimental value (45.59 meV). The same method was employed to calculate the higher 1 s(T 2 ) and 1 s(E) orbital states, giving the final prediction of 37.22 meV and 35.87 meV, respectively (Table 1 and Supplementary Figure S4), showing good agreement with the experimental value of 33.88 meV and 32.54 meV 32 . The splitting of these states is due to the so-called valley-orbit splitting originating from strong donor core potential. These low-manifold states in the donor spectra are responsible for several properties of donor spin qubits including the spin-lattice relaxation times 41 . These results indicate that HSE has the capacity of reducing the error induced by the PBE underestimation of binding energy, while the size limitation of HSE functional could be corrected by the PBE scaling, achieving significant agreement with experiments. All-electron approach (CP2K-AE). Unfortunately, binding energy cannot be calculated currently in this method because the output of electrostatic potential which is required for calculating the correction term (e∇V) is incompatible with GAPW method in CP2K. Pseudopotential approximation approach (CP2K-PSP). Unlike GAPW approach, electrostatic potential can be obtained with GPW method in CP2K. Therefore, the binding energy is calculated in the same method described in the VASP-PSP approach. While performing the hybrid functional calculation using the pseudopotential approximation approach in CP2K (CP2K-PSP), the auxiliary density matrix method (ADMM) was employed to reduce the computational burden and accelerate the calculation speed 42 . The calculated results are summarized in Table 1. 
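The slope-combination arithmetic for the VASP-PSP binding energy described above can be checked directly. The sketch below uses the rounded slope values quoted in the text, so the final figure differs slightly from the reported 46.07 meV, which is based on unrounded intermediate values.

```python
# Arithmetic check of the VASP-PSP binding-energy extrapolation (values quoted in the text, in eV).
s_pbe     = 0.0312   # PBE binding-energy slope vs x
s_pbe_dex = 0.0086   # PBE exchange-splitting slope
s_hse_dex = 0.0304   # HSE exchange-splitting slope

# s_HSE = (s_PBE - s_PBE,dex/2) + s_HSE,dex/2
s_hse = s_pbe - 0.5 * s_pbe_dex + 0.5 * s_hse_dex
print(round(s_hse, 4))            # 0.0421, as quoted

# Extrapolate the HSE binding energy at n = 5 (x = 1) to x -> 0 (N -> infinity).
E_b_hse_n5 = 0.088                # eV
E_b_inf = E_b_hse_n5 - s_hse * 1.0
print(round(E_b_inf * 1000, 1))   # ~45.9 meV vs. the reported 46.07 meV
```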
The same extrapolating method is used by linearly fitting the PBE results as a function of the supercell size starting from n = 4 (N = 512) to n = 8 (N = 4096), This provides a fitting function of E b = 0.0345x + 0.0319 (Fig. 5c) in the prediction of Fermi contact HF, its electric field dependency, and SHF has been achieved with all-electron level analysis and hybrid functional working in tandem. This highlights the importance of using all-electron level analysis to provide satisfactory elucidation of shallow donor systems. Results also reveal that a combination of pseudopotential approximation and allelectron level analysis can permit an accurate description of spindensity at the donor nucleus and generate all calculated parameters in a considerably more efficient way. Additionally, calculations of energies of the ground and excited states have exhibited outstanding agreement with the experimental value, demonstrating the predictive nature of the extrapolating approach for shallow donor systems. In the qubit architectures, the dopants are typically embedded into silicon at depths >10 nm (Fig. 1), which is larger than several Bohr radii of the dopants. Hence, the bulk-like scenario is more appropriate for these qubits. While the method introduced in the present work is focused on evaluating the properties of bulk systems, further consideration of surface effects for dopants buried a few nm below the barrier may be also achievable, owing to the superior computational efficiency of the PSP-AE mixed approach and its capacity of investigating large-scale systems containing thousands of atoms. Methods Pseudopotential approximation approach (VASP-PSP). The first set of calculations were conducted with the projector augmented-wave (PAW) method implemented in the Vienna Ab-initio Simulation Package (VASP) 22 most accurately reproduce the band gap (1.14 eV) and lattice constant (5.473 Å). A matrix diagonalization method was applied for wavefunction optimization. The convergence threshold on energy for self-consistent field was set to 10 −5 Hartree (Ha). Double-ζ valence with polarization quality (pob-DZVP) for Si 50 and 6-311G** Gaussian type basis set for P 51 , respectively, were used, together with a 450 Ry cutoff energy for the auxiliary plane-waves grid. The largest supercell corresponded to n = 8 containing 4096 atoms. A single k point was employed for all supercells. The geometry was optimized using Broyden-Fletcher-Goldfarb-Shanno (BFGS) [52][53][54][55] and Limited-memory BFGS 56 optimizers, with the convergence criteria of 3 × 10 −4 Ha/Bohr and 1.5 × 10 −3 Bohr for the root mean square (RMS) force and RMS atomic displacement, respectively. Pseudopotential approximation and all-electron mixed approach (CP2K-PSP-AE). The third set of calculations included two steps for PBE functional calculations: (1) The geometry optimization was performed with gaussian and planewaves (GPW) scheme in CP2K/Quickstep 57 in combination with Goedecker −Teter−Hutter pseudopotentials 58,59 and (2) Double-ζ valence with polarization quality (pob-DZVP) for Si 50 and 6-311G** Gaussian type basis set for P 51 , respectively, were used to perform a single-point calculation based on the optimized geometry. For hybrid functional calculations, the procedure was the same as that used in the second set of calculations (i.e., using AE scheme for both geometry optimization and single-point calculation), owing to the vastly lower computational cost of HSE in AE scheme in contrast to that in PSP approach. 
All the rest of the parameters and convergence criteria were the same as that used in the second set of calculations. Electric field-dependent hyperfine parameter calculation. A static electric field was applied to evaluate its influence on hyperfine coupling. A variety of electric field values was investigated, including: 0, 2E-07, 4E-07, 6E-07, 8E-07, 10E-07, and 12E-07 (units are in a.u.). In order to consider the symmetry-breaking effect induced by the electric field, geometry relaxation of ligand atoms was performed after applying an electric field with the GAPW scheme 49 . Goedecker−Teter −Hutter pseudopotential for Si 58,59 and modified pcH-2 Gaussian type basis set for P were used 60 , respectively. The rest of the parameters and convergence criteria are the same as those used in the second set of calculations.
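As an illustration of how the quadratic Stark coefficient η2 could be extracted from such a field sweep, the sketch below converts the listed field values from atomic units to V/μm and fits the quadratic-plus-linear form. The ΔA(ε)/A0 values are placeholders chosen only to be of a plausible order of magnitude; they are not the computed data.

```python
# Sketch of extracting the quadratic Stark coefficient eta2 from a field sweep.
import numpy as np

AU_FIELD_IN_V_PER_UM = 5.1422e5            # 1 a.u. of electric field expressed in V/um
fields_au = np.array([0, 2e-7, 4e-7, 6e-7, 8e-7, 10e-7, 12e-7])   # values listed in the text
fields = fields_au * AU_FIELD_IN_V_PER_UM  # V/um

# Placeholder relative hyperfine shifts dA(eps)/A0 (dimensionless, illustrative only)
dA_over_A0 = np.array([0.0, -2.8e-5, -1.1e-4, -2.5e-4, -4.5e-4, -7.0e-4, -1.0e-3])

# Fit dA/A0 = eta2*eps**2 + eta1*eps (no constant term, since dA(0) = 0)
X = np.column_stack([fields**2, fields])
eta2, eta1 = np.linalg.lstsq(X, dA_over_A0, rcond=None)[0]
print(eta2, eta1)                          # eta2 in um^2/V^2, eta1 in um/V
```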
7,112
2022-06-27T00:00:00.000
[ "Physics", "Engineering" ]
Monetary Policy in a Markov-Switching VECM: Implications for the Cost of Disinflation in Ghana Corresponding Author: Richard Kwabi Ayisi Department of Economics, Management and Quantitative Methods (DEMM), University of Milan, Italy Email<EMAIL_ADDRESS>Abstract: Monetary policy assessment in Ghana has been conducted using vector auto-regression. This however, presumes stability of long run outcomes and particularly ignores monetary policy regime changes that has characterized the economy overtime. This study thus introduced the possibility of switches in the long run equilibrium in co-integrated vector auto-regression by allowing both the covariance and weighting matrix in the error-correction term to switch. The study did not find any significant difference in monetary response in the different states. However, significant difference was obtained for the cost of disinflation across states. Though, disinflation cost has declined as the Bank of Ghana shifts from monetary targeting to inflation-targeting regime, overall cost is still high. This has implication on disinflation policy given the development agenda pursue by the country. Introduction Monetary policy has been the main tool used for macroeconomic stabilization in Ghana. Over the years, the monetary framework has undergone important changes regarding implementation (shocks) and policy (regime) framework. The policy regimes involve switches in the policy rule (i.e., from credits to interest rate instruments) to reflect monetary authorities' reaction to target inflation and output. Emerging from a direct control approach, monetary policy has evolved via monetary targeting approach (an indirect approach under the requirement of structural adjustment program) to its current state of inflation targeting. These evolution processes aim to enhance the impact of monetary actions on the aggregate economy. Though, monetary policy objectives compose of wide range of aggregates (including growth, exchange rate stability, interest rate and among others), its paramount effort is to curtail the high prices that have bedeviled the economy through disinflationary strategies. This is predominantly motivated by the high cost associated with high and volatile prices. However, following from Okum (1978), there is potential loss in output or employment associated with disinflationary policy. Given that Ghana is a developing country and desires to accelerate growth in its development path, knowledge about the cost of disinflationary policies is worthwhile. This will guide monetary policy implementation because policy makers will be guided by the economic cost of their actions in terms of output loss. Also, the regime changes can potentially have a large effect on the volatility of money, interest rates, outputs and prices. This study thus investigates monetary shocks by exploring the cost implication of regime changes on the disinflation strategy adopted by Ghana. The investigation is conducted within the periods 1960 to 2013. We conduct this study for Ghana because no literature has been identified on this theme. Secondly, since the focus of the Bank of Ghana is price stability, it is important to understand the economic effect of this policy directions in terms of output loss. This is because a fore knowledge of the economic cost associated with the disinflation policy will aid monetary authorities in implementing monetary policy. 
The study adopted the modelling approach based on multivariate Markov-Switching vector error correction model (hereafter MS-VECM). This strategy explicitly allows for regime changes in the variables since Ghana overtime has been characterized by different monetary and policy regime. The regime changes might have potential stochastic effects on both the short and long run dynamic impacts of monetary policy. MS-VECM modelling approach can account for the long run properties in this regard. Existing evidence on the impact of monetary policy in Ghana were based on vector error correction model but the results are mixed (Abradu-Otoo et al., 2003). Hitherto VAR models assume linearity and thus are unable to represent many non-linear dynamic patterns such as asymmetry, amplitude dependence and volatility clustering. For example, GDP growth rates typically fluctuate around a higher level and are more persistent during expansions, but they stay at a relatively lower level and are less persistent during contractions. Given this peculiarity, it would not be reasonable to expect a single, linear model to capture these distinct behaviors. Also, the underlying linearity assumption implies that the dynamic multipliers obtained from the VAR are invariant about the history of the system, size and sign of the shocks. However, the time-invariance of the parameters and Gaussianity are problematic for the better understanding of monetary policy shocks in Ghana especially regarding the structural shocks that has characterized the economy over the period. For example, as Fig. 2 show, the distribution of GDP and CPI are bimodel. This implies that the single distributional assumption used in hitherto VAR might have probable inference consequences on the estimates and monetary behavior in Ghana. Hence, this paper in its first attempt for Ghana provides an important contribution to the literature in this context. The study proceeds with section 2 providing a brief literature on monetary policy in Ghana. Section 3 describes the econometric strategy employed. Section 4 presents the results and discussion whiles section 5 concludes the study with some policy recommendations. Literature Review The empirical literature directed to verify monetary policy implementation and its effectiveness has grown extensively overtime. Given that monetary policy changes can occur in the implementation of policy (shocks) as well as objectives of policy (regimes), the implementation of policy (shocks) has been typically modelled as vector innovations to a Vector Auto-Regression (VAR) where monetary policy is identified by structural restrictions on the contemporaneous impacts of the variables (Neville and Owyang, 2004;Sims, 1992). The structural VAR literature on monetary policy exists in several studies (Cambazoglu and Karaalp, 2012;Epstein and Heintz, 2006;Luke, 2000;Moscarini and Postel-Vinay, 2010;Bernanke and Mihov, 1998). VAR Models however, assume linearity and thus it is unable to represent many non-linear dynamic patterns such as asymmetry, amplitude dependence and volatility clustering. Due to these inherent weaknesses in the VAR model, switching monetary policy regimes have gained a lot of attention in recent literature (Boivin and Giannonni, 2002;Hanson, 2002;Ghiani et al., 2014;Thams, 2007). Policy regimes engage switches in the policy rule that mirror changes in the policy maker's reaction to deviations from the target inflation rate and or output growth. 
Switching monetary policy studies are also able to account for persistent adjustments in policy which result from changes in central bank leadership or transparency, which also affect the volatility of money, output and interest rates (Clarida et al., 2000; Dennis, 2001; Hanson, 2002). For instance, Dennis (2001) argues that a change in policy maker preferences shifted the post-1979 inflation target from around 7% to a value below 2%. Other studies have examined both the regime changes (objectives of policy) and policy shocks (policy implementations). To these studies, monetary policy matters not only through the policy maker's response to exogenous economic shocks but also through the contemporaneous effects of the monetary policy innovations (Owyang, 2002; Sims and Zha, 2002). These papers, however, failed to address the long-run objectives and impacts of monetary policy. This paper, like Neville and Owyang (2004), incorporates these long-run impacts. Regime switches in the long run relationship through the weighting matrix of the error correction term are also taken care of. Although many studies around the world have used Markov switching in an error correction framework (Clarida et al., 2003; Paap and Van Dijk, 2003; Hanson, 2002; among others), monetary policy studies in Ghana have been based on vector innovations to a Vector Auto-Regression (VAR) (Abradu-Otoo et al., 2003; Epstein and Heintz, 2006; Atta-Mensah and Bawumia, 2003). Such studies are unable to represent many non-linear dynamic patterns. Also, these studies ignored the monetary policy regime changes that have characterized the Ghanaian economy over time. This study thus comes in handy to address such issues.

Econometric Modelling
The aim of the study is to explore monetary policy implementation in regime switching. Hence the study adopted a vector error-correction model that allows for different states of the economy. The regime switching can either be modelled to allow all or part of the coefficient matrix to switch independently or with the error-correction term. However, this study allows the switch with the error-correction term. This approach thus assumes a stable long-run relationship, i.e., a regime-invariant co-integrating vector, whereas the short run dynamics are analyzed in a Markov-Switching framework which allows the error correction to respond to regimes. By this, the study can examine the state-dependent responses to monetary policy shocks:

ΔY_t = α + Σ_{i=1}^{p-1} α_i ΔY_{t-i} + ω_{s_t} Y_{t-1} + ε_t    (3.1)

where: ΔY_t is an n-dimensional vector of differenced variables of interest; α is a vector of intercepts; the α_i are n×n parameter matrices; ω_{s_t} are the state-dependent long run impact matrices; and ε_t is the vector of disturbances. The long run state-dependent matrix comprises the r×n matrix of co-integrating vectors β and the n×r state-dependent weighting matrix t_{s_t}. Therefore, ω_{s_t} = t_{s_t} β. Given a two-state first-order Markov process S_t ∈ {0, 1} with its associated transition kernel P, where P_ij = Pr[S_t = i | S_{t-1} = j], Equation 3.1 can be re-written as:

ΔY_t = α + Σ_{i=1}^{p-1} α_i ΔY_{t-i} + [S_t t_1 + (1 − S_t) t_0] β Y_{t-1} + ε_t    (3.2)

Though the long run state-dependent matrix can switch in the co-integrating vector, the weighting matrix, or both, this study allows only switches in the error-correction term, which implies a single set of long run relationships. This means that the correction mechanism depends on the state. By implication, switches in this framework are interpreted as differences in the rate at which the common long run relation is obtained. Allowing switches only in the error term is predominantly motivated by some potential interpretations.
Given a regime-invariant long run relationship between the variables, the state-dependent coefficient assign weights to each relationship which implies that any perturbation to the system could have different long run effects across states (though the long run relationship is unchanged). For example, monetary perturbation has different long run effects depending on the monetary objective (targets). The different effect is because the long run response coefficients (ω st = t st β) is a function of the switching elements (Hamilton 1994, pp. 579-581). Estimation of Equation 3.2 is through the Gibbs sampling techniques. The procedure determines the co integrating relationships at the initial stage which are used to draw parameter values from the posterior prior. The study used the Bayesian methodology that uses Sims and Zha (2002) prior. This approach uses prior which accounts for non-estimated co integrating vectors. This therefore, does not require any explicit modelling of the co integrating vectors. To analyze the effect of monetary policy shock, the study adopts the Cholesky ordering which places the policy instrument last in the system ordering. In this three-variable system comprising price, output and policy instrument, the study assumes that monetary authorities observe prices and output before determining the level of the instrument. By this identification, it is assumed that policy does not contemporaneously impact on prices and output. Data Annual data ranging between 1964 and 2013 obtained from the World Development Indicator (WDI) were used for the analysis. The variables include consumer price index, gross domestic product at constant local currency unit and broad definition of money (M2). Though the central bank of Ghana in recent times is using interest rate instrument, the study adopted M2 as proxy for policy instruments because the time frame of the study includes periods of monetary targeting regime. To eliminate outliers, all the variables are logged. Figure 1 shows the graph of the series at both level and first differenced. We observe spikes in the plot of the differenced series suggesting structural changes and regime shifts. Thus, we conducted a preliminary exploration analysis to inspect the distribution of the series with some of its lags. This gives first-hand information on whether any of the series contain regimes. Figure 3 and 4 depict non-parametric plots of the series versus their first to fourth lags. The figure reveals a linear approximation for the series. This suggests that a linear approximation for the analysis may not be questionable since the entire series exhibits linear trend with no possibilities of regime shifts. However, the distributional plot for CPI and GDP in Fig. 5 indicates that the series depict bi-modal distribution suggesting the possibilities of regimes (i.e., the evolution process of the series might differ across periods). Following this both regime and non-regime unit root test were conducted on the series. Table 1 shows the test results for both regime and non-regime unit root tests. The non-regime unit root tests were conducted using the ADF test, whiles the regime test is conducted on a unit root null hypothesis against stationary SETAR. The test statistic is compared with the bootstrapped critical value 16.181, 18.4 and 23.01 for 10, 5 and 1% respectively. As Table 1 shows, the results from both tests indicate the presence of unit root in the series. 
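As an illustration, the non-regime (ADF) part of this test could be reproduced along the following lines; the data file, column names, and the use of statsmodels' adfuller are assumptions for the sketch rather than a description of the study's actual implementation.

```python
# Minimal sketch of the non-regime ADF unit-root test on the logged series.
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import adfuller

# df is assumed to hold annual observations of CPI, real GDP and M2 (placeholder path and columns)
df = pd.read_csv("ghana_annual_1964_2013.csv", index_col="year")

for col in ["cpi", "gdp", "m2"]:
    series = np.log(df[col].dropna())
    stat, pvalue, *_ = adfuller(series, regression="c", autolag="AIC")
    print(f"{col}: ADF stat = {stat:.3f}, p-value = {pvalue:.3f}")
    # Failure to reject the null (large p-value) is consistent with a unit root,
    # matching the finding reported in Table 1.
```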
The study further conducted a formal test to investigate the presence of co integration among the series. The formal test result is provided in Table 2. The test was conducted on two hypotheses. First, a test of no co integration against threshold co integration was conducted. A P-value of 0.93 fails to reject no co integration in the series. The second, a test of linear co integration against threshold co integration, supports the presence of linear co integration given a P-value of 0.44. Though, both tests reject threshold co integration, a test of model fit supports a model with one threshold. A P-value of 0.07 associated with the test statistic in the model fit test of linear VAR versus threshold VAR indicate that at 10% critical level, modelling the data in one threshold regime is superior. Based on this, the study proceeds in a Markov switching approach with one regime. Result and Discussion Given the study's objective to investigate monetary shocks in regimes, the study estimated a VECM model with extensions to accommodate states. This follows the exploratory analyses which indicate the presence of cointegration among the variables. The VECM is estimated in the presence of state restrictions following a tractable Markov process. The innovation of monetary shocks is estimated within a simple Cholesky specification ordering the policy variable (i.e., M2) last. Table 3 reports that there is only one co-integrating relationship and provides the weighting matrix for the relation that vary across regimes. The co-integration vector is fixed across regimes. States The transition probabilities for each state is reported in Table 4. The probability estimates indicate high level of persistence in each state. The probability of transition from one state to another is approximately the same in the arena of about 12%. Response to Policy Shocks The study considered the short run response to a one standard deviation shock to the policy instrument (i.e., money supply). The impulse response function is generated for a horizon up to twelve years. The generated IRF are either conditioned or not conditioned on the state (i.e., when the shock is generated in one state, it is transmitted through that particular state). Figure 6 The graph shows that there are no significant differences in how prices and output respond to the policy instrument. The effect of policy changes on prices and output is very minimal with coefficient ranging the same in both state 1 and state 2. The effect of policy instrument hits prices and output respectively from the 11 months and 8 months onwards in state 1. Similar evidence is found in state 2. Cost of Disinflation High inflation has bedeviled the economy of Ghana for long. However, in recent times inflation has showed a downward trend over the past few decades. In comparing the developments in the current monetary regime (inflation-targeting) to the control regimes and the monetary targeting regimes, the inflation rate has been quite stable. It averaged 50.0% per annum during the 1970s, 44.5% during the 1980s and was 27.9% during the 1990s and further down to 16.2% in the early six years of 2000s. Within the period 2009 and 2010, the rate has been stable at single-digit, though the trend has reverted upward in recent years. 
The favorable downward trend in the inflation rate together with the gains in the general macroeconomic trends raise issues in the short run tradeoff between stability and growth particularly given that Ghana is a developing country and desires to accelerate growth for development purposes. Thus, this study estimated the cost of disinflationary policy for Ghana. The tradeoff between output and inflation has been a popular area of research for years. Though, there are consensus among economists that high inflation is inimical to the economy, disinflationary policies on the other hand result in some short-run costs in terms of loss in output. As identified by (Okun 1978), disinflationary monetary policy result in output or employment loss (See among others Cecchetti and Rich, 2001;Fuhrer, 1994;1995) a one percentage point fall in inflation. Various methods for estimating the sacrifices ratio has been suggested in the literature (Ball, 1994;Zhang, 2001;Cecchetti and Rich, 2001). To calculate the sacrifice ratio, this study adopted Cecchetti and Rich (2001) VAR approach to access the output cost of disinflationary monetary shock within a single regime. As argued by Neville and Owyang (2004), this modelling approach can measure the cost of disinflation occurring because of switches between regimes. Following Neville and Owyang (2004), this study posits two distinct disinflationary episodes to include disinflationary periods driven by a policy shock and one driven by change in regime. Aside using the Markov process for the states, the study experimented to investigate the credibility of monetary authorities as policy switched from monetary targeting to inflation targeting framework. The aim is to identify if the credibility is enhanced given that credibility underscore inflation targeting. The estimated sacrifice ratios for both within and across states are reported for both the Markov process and pre-specified regimes in Table 5. As showed in the table, the within-regime sacrifices ratio is estimated to be 1.46 and 1.90 for state 1 and 2 respectively. For the pre-specified, the study estimated the ratios for the periods prior to 2002 and the aftermath representing monetary and inflation targeting regimes respectively. The results indicate that the sacrifice ratio has fallen from 0.59 to 0.43. This has implication for expectation formation hence, credibility from monetary authorities. The results suggest that agents can forecast inflation very well since they are utilizing the same information available to monetary authorities. By this the cost of disinflation becomes minimal. Generally, the study found a low sacrifice ratio which is in conformity with Kinful (2007) study. Though, the foregoing discussions indicate disinflationary cost has fallen within the inflation targeting period, the overall (pooled sample) sacrifice ratio estimated at 1.42 suggests a cumulative output loss of approximately 15%. This produces a worrying situation given that Ghana is a developing country which desires to accelerate growth for development. Conclusion and Recommendation This study examined monetary policy shock in a Markov-switching vector error correction framework. The study assumed a stable long run co-integration relationship, whiles allowing long run variations through switches in the weighing matrix of the error correction term. 
While as this approach overcomes the linearity assumption in dealing with monetary policy shock, it's theoretical appealing goes to the rational expectation critique of model of this kind. In investigating monetary impulse, the study found that though, monetary shocks generate different impulse in each state, the monetary response do not differ significantly across regime. The study also analyzed the cost implication of disinflationary policy in Ghana. The estimated sacrifice cost of disinflation differs within and across states. In conformity with studies in the literature, the result indicates that the cost of disinflation is very low though. The finding of this study has some policy implication for the conduct of monetary policy in Ghana. The sacrifice ratio obtained indicate that monetary policy should be conducted with care in order not to erode output growth given the state of economic development in the country. Further, the study suggests that cost of disinflation is low within inflation targeting period because agents can forecast better due to enhanced credibility. By implication, policy makers should be more transparent and credible in their actions to help minimize associated cost.
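For completeness, the sacrifice ratio discussed above is, in essence, the cumulative output loss generated by a disinflationary shock divided by the permanent fall in inflation that the shock achieves. A minimal sketch of this Cecchetti and Rich style computation from impulse responses is given below; the impulse-response arrays are placeholders, not the study's estimates.

```python
# Sketch of a Cecchetti-Rich style sacrifice-ratio computation from impulse responses.
import numpy as np

horizon = 12  # years, as in the impulse-response analysis above

# Responses of (log) output and inflation to a one-standard-deviation
# contractionary policy shock, by year after the shock (placeholder values).
output_irf    = np.array([0.0, -0.4, -0.6, -0.5, -0.4, -0.3, -0.2, -0.15, -0.1, -0.05, -0.02, 0.0])
inflation_irf = np.array([0.0, -0.2, -0.4, -0.5, -0.5, -0.5, -0.5, -0.5, -0.5, -0.5, -0.5, -0.5])

# Sacrifice ratio = cumulative output loss / long-run (permanent) fall in inflation.
cumulative_output_loss = -output_irf.sum()
permanent_disinflation = -inflation_irf[-1]
sacrifice_ratio = cumulative_output_loss / permanent_disinflation
print(round(sacrifice_ratio, 2))   # ~5.4 with these placeholder numbers
```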
4,635.4
2016-02-01T00:00:00.000
[ "Economics" ]
Lexical Errors Made by Instagram Machine Translation in Translating the "CNN Indonesia" News Account. This study aims to describe the types of errors made by Instagram Machine Translation and to find out the most dominant types of lexical errors it makes on the 'CNN Indonesia' Instagram account. The research design was qualitative. The data were words, phrases and sentences containing lexical errors made by Instagram Machine Translation on the "CNNIndonesia" Instagram account. The data were taken by running Instagram on one account over various captions related to the lexical errors under study. The data were collected in stages: finding and determining, then classifying and separating, the words, phrases and sentences that contained lexical errors made by Instagram Machine Translation on the "CNN Indonesia" Instagram account. As the data analysis technique, the researcher translated the captions using Instagram machine translation and then compared the translation result to the source language. The next step was to examine the lexical errors produced by Instagram machine translation. The results show that the types of lexical errors made by Instagram Machine Translation on the "CNN Indonesia" Instagram account, based on the error categories of Vilar et al., are: 4 missing-word errors, 10 incorrect words and 8 unknown words. All errors indicate that Instagram machine translation could not fully represent the target language on the "CNN Indonesia" Instagram account. Instagram users need to filter every translation produced by Instagram machine translation before accepting it as information.

Introduction
The concept of translation is to communicate messages from one language to another. Translation is an activity that aims to convey the meaning or meanings of a given linguistic discourse from one language to another. Translation may be described in terms of sameness of meaning across languages. Translation is a challenging task in transferring meaning from a source language (SL) to a target language (TL). This is because an unreliable translation will result in a misunderstanding, in the target language, of the message located in the source language. The equivalence of a translation must be expressed in an appropriate way from SL to TL so that readers can enjoy the translation and forget for a moment that what they are reading is only a translation [1]. According to [2], when translating it is important to understand the meaning of the source text in order to create an accurate translation in the target text, so that meaning is translated in terms of grammar, style and sound. It is very important to understand the meaning of the source language when translating; by understanding the meaning of the source language, the translation will reach the target language according to its structure, writing style, and sound. We also need to understand, or be proficient in, the two languages we are translating between so that the translation does not look awkward [2]. Nowadays the world of translation is different. People are getting used to using machine translation. Still, machine translation needs to be checked and rectified by humans. This is to avoid translation errors in the translation result. Machine translation means automatic translation. Machine translation is designed to translate text from one language (the source language) to another (the target language) without human help. Machine translation offers a machine that renders textual content from the source language into the target language. The rendering expresses the same meaning as in the source language [2].
Nowadays, translation work is done differently. People are getting used to using machine translation. Still, machine translation output needs to be checked and rectified by humans; this is to avoid translation errors in the result. Machine translation means automatic translation. Machine translation is designed to translate text from one language (the source language) to another (the target language) without human help. Machine translation offers a system that renders textual content from the source language into the target language, and the rendering should express the same meaning as the source language [2].

The "incorrect words" errors are the most general type of error. They occur when the system is unable to find the correct translation of a given word. We distinguish five subcategories here. In the first, the incorrect word distorts the meaning of the sentence. In this case, we could further distinguish two additional subclasses: when the system selects an incorrect translation and when the system is unable to disambiguate the correct meaning of a source word in a given context, though the distinction is certainly hazy. The next subcategory of "incorrect words" errors occurs when the system is unable to produce the correct form of a word, even though the translation of the base form is correct. This is especially important for inflected languages, where the high variability of open word classes makes machine translation difficult.

Extra words in the generated sentence cause another type of error. This type of error was introduced when researching the translation of spoken-language input, as artifacts of spoken language may produce additional words in the generated sentence.

The last two classes are of lesser importance. The first, style errors, refers to a poor choice of words when translating a sentence; the meaning is preserved even if it is not entirely correct. A common example is the repeated use of a word in a close context; a translator would choose a synonym and avoid word repetition in this case. The second concerns idiomatic expressions that the system does not recognize and attempts to translate as normal text. Normally, these expressions cannot be translated in this manner, resulting in additional translation errors.

Unknown words can also cause errors. In this case, we can differentiate between truly unknown words (or stems) and unseen forms of known stems. A variation of this category is particularly important for the Chinese-English language pair. In the majority of European languages, or even languages with the same alphabet, unknown words can often be "translated" simply by copying the input word to the generated sentence, with no further processing.

Lastly, there may be punctuation errors, but at the current machine translation output quality these are minor annoyances for languages with no fixed punctuation rules and are not taken into account further in this work. Of course, the error types defined in this manner are not mutually exclusive. In fact, it is not uncommon for one type of error to result in another. A bad word translation, for example, can result in a bad ordering of the words in the generated sentence. 
Then, according to the error classification of [4], only three categories relate to lexical errors: missing words, incorrect words, and unknown words. As in [6], many of the errors made were the result of the students' lack of understanding in differentiating word classes; for instance, to form the adjective, the students did not add the suffix to the noun.

Instagram is one application that is booming and is used by many people around the world. Instagram also provides an automatic translation feature for the captions of photos posted on Instagram; this feature makes it easier for many people to understand captions written in other languages. In "An Analysis of Grammatical Errors of Using Google Translate from Indonesia to English in Writing Undergraduate Thesis Abstract among the Students' English Department of IAIN Metro in the Academic Year 2016/2017" (Kurniasih, 2017), the author focused on grammatical errors in translation. That study applied the Miles and Huberman framework. The results show that students used Google Translate in translating their abstracts and that the translations produced by Google Translate were not accurate in English [7]. The similarity with the present research is the topic of machine translation. The differences are that Kurniasih's research focuses on grammatical errors while this research focuses on lexical errors, and Kurniasih used [8] while this research uses [4]. That previous study enhances the writer's knowledge of the Miles and Huberman framework and shows that Google Translate is still not accurate.

In "An Error Types Analysis on YouTube Indonesian-English Auto-Translation in Kok Bisa? Channel" [9], the author investigates the error types that commonly occur in the translation produced by YouTube auto-translate. That research uses the error classification from [4]. The results show that the most frequent error types are wrong lexical choice, bad word form, missing auxiliary word, short-range word-level word order, and extra words; the other error types rarely occur in the translation. The similarities with the present research are the topic of machine translation and the use of the error classification from [4]. The difference is that Laksana's research investigates the error types that commonly occur in YouTube auto-translation, while this research focuses on lexical errors produced by Instagram Machine Translation. That previous study enhances the writer's knowledge of Vilar's theory and shows that YouTube auto-translation still makes errors.

In "Error Analysis of English Translation of Islamic Texts by Iranian Translators" [10], the author analyzes the type and frequency of the errors occurring in the English translation of Islamic texts by Iranian translators and the possible causes of the errors. That study used Morgan's sample selection table, and the errors were categorized based on the classification of error types developed by [11]. The results revealed that the register category was the most frequent error area. The similarity with the present research is the topic of machine translation. The differences are that Jahanshahi's research analyzes the type, frequency, and possible causes of errors in the English translation of Islamic texts by Iranian translators and uses Morgan's sample selection table, while this research focuses on lexical errors made by Instagram Machine Translation and uses [4]. That previous study enhances the writer's knowledge of Morgan's sample selection table and shows that, for the errors occurring in the English translation of Islamic texts by Iranian translators, the register category was the most frequent error area.

In "Lexical Errors Produced by Instagram Machine Translation" [12], the author focuses on lexical errors in translation. That study applied the theory of [4]. The results show that Instagram machine translation produced many errors and reveal the weakness of machine translation in representing the genuine language. The similarities with the present research are the topic of machine translation, the focus on lexical errors in translation, and the use of Vilar's theory [4]. The only difference between Susanti's study and this one is the Instagram account studied; the application used is the same. That previous study enhances the writer's knowledge of Vilar's theory and shows that Instagram translation does not represent the genuine language.

Discussion

Table 11 Frequency of types of Instagram Machine Translation errors. From the table, it can be seen that the types of IMT errors based on Vilar et al. (2006) related to lexical categories are missing words (18.18%), incorrect words (45.45%), and unknown words (36.36%). Incorrect words show the highest percentage because an incorrect word occurs when the system is unable to find the correct translation in the translation result.

Conclusion

According to the analysis and findings in the previous chapter, it can be concluded that the ten data items from the captions on the "CNNIndonesia" Instagram account contain three types of errors in the lexical category: missing words, incorrect words, and unknown words. However, the three types of errors are not always found in every data item. Incorrect words and unknown words are the most frequent errors found in the ten data items on the captions of the "CNNIndonesia" Instagram account. In general, the error most often encountered in IMT is translating a trade name or the name of an institution literally, so that an unnecessary translation process takes place. In addition, all errors indicate that Instagram machine translation cannot adequately represent the target language in the "CNNIndonesia" Instagram account. Therefore, users of Instagram need to filter every translation produced by Instagram machine translation before accepting it as information.
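As a small, purely illustrative check of the arithmetic behind the Table 11 percentages (the counts 4, 10, and 8 are taken from the abstract; the variable names are hypothetical):

```python
# Counts of lexical error types (Vilar et al. categories) reported in the abstract.
counts = {"missing words": 4, "incorrect words": 10, "unknown words": 8}
total = sum(counts.values())  # 22 errors in total

# Shares of each error type, matching Table 11: 18.18%, 45.45%, 36.36%.
for error_type, n in counts.items():
    print(f"{error_type}: {n}/{total} = {100 * n / total:.2f}%")
```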
2,716
2023-01-31T00:00:00.000
[ "Linguistics", "Computer Science" ]
Weak Identification in Fuzzy Regression Discontinuity Designs In fuzzy regression discontinuity (FRD) designs, the treatment effect is identified through a discontinuity in the conditional probability of treatment assignment. We show that when identification is weak (i.e., when the discontinuity is of a small magnitude), the usual t-test based on the FRD estimator and its standard error suffers from asymptotic size distortions as in a standard instrumental variables setting. This problem can be especially severe in the FRD setting since only observations close to the discontinuity are useful for estimating the treatment effect. To eliminate those size distortions, we propose a modified t-statistic that uses a null-restricted version of the standard error of the FRD estimator. Simple and asymptotically valid confidence sets for the treatment effect can be also constructed using this null-restricted standard error. An extension to testing for constancy of the regression discontinuity effect across covariates is also discussed. Supplementary materials for this article are available online. INTRODUCTION Since the late 1990s regression discontinuity (RD) and fuzzy regression discontinuity (FRD) designs have been of growing importance in applied economics. There is extensive theoretical work on RD and FRD designs. A few examples include Hahn, Todd, andVan der Klaauw (1999, 2001); Porter (2003); Buddelmeyer and Skoufias (2004); McCrary (2008); Frölich (2007); Frölich and Melly (2008) ;Otsu, Xu, and Matsushita (2015); Imbens and Kalyanaraman (2012) ;Calonico, Cattaneo, and Titiunik (2014); Arai and Ichimura (2013) ;Papay, Willett, and Murnane (2011); Imbens and Zajonc (2011); Dong and Lewbel (2010); and Fe (2012). See Van der Klaauw (2008) and Lee and Lemieux (2010) for a review of much of this literature. Hundreds of recent applied articles have used RD, and in many cases FRD designs. (For example, as of July 18, 2013, Imbens and Lemieux (2008) review of RD and FRD best practices was cited in 990 articles according to Google Scholar, with 372 of these articles explicitly considering FRD.) Around the same time, the seminal works of Bound, Jaeger, and Baker (1995) and Staiger and Stock (1997) made weak identification in an instrumental variables (IV) context an important consideration in applied work (see Stock, Wright, and Yogo 2002;Andrews and Stock 2007 for surveys of the literature). However, despite the close parallel between an IV setting and the FRD design (see Hahn, Todd, and Van der Klaauw 2001) there has been no theoretical or practical attempt to deal with weak identification in the FRD design more broadly. To get a sense of the practical importance of weak identification in the FRD design, we have examined a sample of influential applied articles that use the design. We then apply the F-statistic standards discussed below to see how many of these articles may suffer from a weak identification problem. We find that in about half of the articles where enough information is reported to compute the F-statistic, weak identification appears to be a problem in at least one of the empirical specifications. (For the procedure followed to obtain the sample of articles, see the online supplement, Section 1.) We take this as evidence that weak identification is a serious concern in the applied FRD design literature. 
Since it is a matter of practical importance, we examine weak identification in the context of the FRD design, demonstrate the problems that arise, and propose uniformly valid testing procedures for treatment (RD) effects. In this article, we show that the local-to-zero analytical framework common in the weak instruments literature can be adapted to FRD, and when identification is weak, we show that the usual t-test based on the FRD estimator and its standard error suffers from asymptotic size distortions. The usual confidence intervals constructed as estimate ± constant × standard error are also invalid because their asymptotic coverage probability can be below the assumed nominal coverage when identification is weak. We rely on novel techniques recently developed in the literature on uniform size properties of tests and confidence sets (Andrews, Cheng, and Guggenberger 2011) to formally justify our local-to-zero framework. Unlike the framework used in the weak IV literature, ours depends not only on the sample size but also on a smoothing parameter (the bandwidth). We suggest a simple modification to the t-test that eliminates the asymptotic size distortions caused by weak identification. Unlike the usual t-statistic, the modified t-statistic uses a nullrestricted version of the standard error of the FRD estimator. The modified statistic can be used with standard normal critical values for two-sided testing. For two-sided testing, the proposed test is equivalent to the Anderson-Rubin test (Anderson and Rubin 1949) adopted in the weak IV literature (Staiger and Stock 1997). For one-sided testing, the modified t-statistic has to be used with nonstandard critical values that must be simulated on a case-by-case basis following the approach of Moreira (2001Moreira ( , 2003. We discuss how to evaluate the magnitude of potential size distortions in practice following the approach of Stock and Yogo (2005). The strength of identification is measured by the concentration parameter, which in the case of FRD depends on the magnitude of the discontinuity in the treatment variable and on the density of the assignment variable (the variable that determines treatment assignment). The magnitude of potential size distortions can be tested by testing hypotheses about the concentration parameter with noncentral χ 2 1 critical values using the F-statistic, which is an analog of the first-stage F-statistic in IV regression. Surprisingly, we find critical values that are much higher than would be required in a simple IV setting. When the F-statistic is only around 10, which is often used as a threshold value for weak/strong identification in the IV literature, a twosided test with nominal size of 5% is in fact a 13.6% test, and a 5% one-sided test is in fact a 16.9% test. Nearly zero (under 0.5%) size distortions of a 5% two-sided test correspond to the values of the F-statistic above 93. Asymptotically valid confidence sets for the treatment effect can be obtained by inverting tests based on the modified t-statistic. Since the FRD is an exactly identified model, these confidence sets are easy to compute, as their construction only involves solving a quadratic equation. Most of the literature on weak instruments deals with the case of over identified models (see, e.g., Andrews and Stock 2007). In exactly identified models, the approach suggested by Anderson and Rubin (1949) results in efficient inference if instruments turn out to be strong and remains valid if instruments are weak. 
However, in over-identified models, Anderson and Rubin's tests are no longer efficient even when instruments are strong. Several articles (Kleibergen 2002;Moreira 2003;Andrews, Moreira, and Stock 2006) proposed modifications to Anderson and Rubin's basic procedure to gain back efficiency in over identified models. Since the FRD design is an exactly identified model, we can adapt Anderson and Rubin's approach without any loss of power. These confidence sets are expected to be as informative as the standard ones, when identification is strong. However, unlike the usual confidence intervals, the confidence sets we propose can be unbounded with positive probability. This property is expected from valid confidence sets in the situations with local identification failure and an unbounded parameter space (see Dufour 1997). In a recent article, Otsu, Xu, and Matsushita (2015) proposed empirical likelihood-based inference for the RD effect. Using the profile empirical likelihood function, they proposed confidence sets for the RD effect, which are expected to be robust against weak identification. However, they did not provide a formal analysis of the weak identification. While their method does not involve variances estimation and for that reason can enjoy better higher-order properties than our approach, it requires computation of the empirical likelihood function numerically and is computationally more demanding. We also discuss testing whether the RD effect is homogenous over differing values of some covariates. The proposed testing approach is designed to remain asymptotically valid when identification is weak. This is achieved by building a robust confidence set for a common RD effect across covariates. The null hypothesis of the common RD effect is rejected when that confidence set is empty. To illustrate how our proposed confidence sets may differ from the standard ones in practice, we compare the results of applying the standard confidence sets and the proposed confidence sets in two separate applications that use the FRD design to estimate the effect of class size on student achievement. Our main finding is that, as weak identification becomes more likely, the standard confidence sets and the weak identification robust confidence sets become increasingly divergent. Interestingly, in a number of cases the robust confidence sets provide more informative answers than the standard ones. More generally, the empirical applications, along with a Monte Carlo study reported in an online supplement, suggest that our simple and robust procedure for computing confidence sets performs well when identification is either strong or weak. The rest of the article proceeds as follows. In Section 2, we describe the FRD model, derive the uniform asymptotic size of usual t-tests for FRD, discuss size distortions and testing for potential size distortions, and describe weak-identificationrobust inference for FRD. Section 3 discusses robust testing for constancy of the RD effect across covariates. We present our empirical applications in Section 4. The online supplement contains additional materials including the proofs and the Monte Carlo results. The Model, Estimation, and Standard Inference Approach In RD designs, the observed outcome variable y i is modeled as y i = y 0i + x i β i , where x i is the treatment indicator variable, y 0i is the outcome without treatment, and β i is the random treatment effect for observation i. 
If x_i is binary, it takes on the value one if the treatment is received and zero otherwise. When there are treatments of different intensity, x_i may be nonbinary. The treatment assignment depends on another observable assignment variable, z_i, through E(x_i | z_i = z). The main feature of this framework is that E(x_i | z_i = z) is discontinuous at some known cutoff point z_0, while E(y_{0i} | z_i) is assumed to be continuous at z_0. For binary x_i, when |lim_{z↓z_0} E(x_i | z_i = z) − lim_{z↑z_0} E(x_i | z_i = z)| = 1, we have a sharp RD design, and a fuzzy design otherwise. When x_i is a continuous treatment variable, the design is sharp if x_i is a deterministic function of z_i, and fuzzy otherwise. The focus of this article is fuzzy designs, and the main object of interest is the RD effect

β = (y^+ − y^−)/(x^+ − x^−),   (1)

where y^+ = lim_{z↓z_0} E(y_i | z_i = z), y^− = lim_{z↑z_0} E(y_i | z_i = z), and x^+ and x^− are defined similarly with y_i replaced by x_i. The exact interpretation of β depends on the assumptions that the econometrician is willing to make in addition to Assumption 1. As discussed by Hahn, Todd, and Van der Klaauw (2001), if β_i and x_i are assumed to be independent conditional on z_i, then β captures the average treatment effect (ATE) at z_i = z_0: β = E(β_i | z_i = z_0). When x_i is binary and under an alternative set of conditions, which allow for dependence between x_i and β_i, Hahn, Todd, and Van der Klaauw (2001) showed that the RD effect captures the local ATE (LATE), or ATE for compliers, at z_0, where compliers are observations for which x_i switches its value from zero to one when z_i changes from z_0 − e to z_0 + e for all small e > 0. (See the discussion on page 204 of their article.) Regardless of its interpretation, the RD effect is estimated by replacing the unknown population objects in (1) with their estimates. Following Hahn, Todd, and Van der Klaauw (2001), it is now a standard approach to estimate y^+, y^−, x^+, and x^− using local linear kernel regression. Let K(·) and h_n denote the kernel function and bandwidth, respectively. For estimation of y^+, the local linear regression is

(â_n, b̂_n) = argmin_{a,b} Σ_{i=1}^n 1{z_i ≥ z_0} K((z_i − z_0)/h_n) (y_i − a − b(z_i − z_0))^2,   (2)

and the local linear estimator of y^+ is given by ŷ^+_n = â_n. The local linear estimator for y^− can be constructed analogously by replacing 1{z_i ≥ z_0} with 1{z_i < z_0} in (2). Similarly, one can estimate x^+ and x^− by replacing y_i with x_i. Let ŷ^−_n, x̂^+_n, and x̂^−_n denote the local linear estimators of y^−, x^+, and x^−, respectively. The corresponding estimator of β is given by β̂_n = (ŷ^+_n − ŷ^−_n)/(x̂^+_n − x̂^−_n). The asymptotic properties of the local linear estimators and β̂_n are discussed in Hahn, Todd, and Van der Klaauw (1999) and Imbens and Lemieux (2008). We assume that the following conditions are satisfied. a. K(·) is a continuous, symmetric around zero, nonnegative, and compactly supported second-order kernel. b. {(y_i, x_i, z_i)}_{i=1}^n are iid; y_i, x_i, z_i have a joint distribution F such that i. f_z(·) (the marginal PDF of z_i) exists and is bounded from above, bounded away from zero, and twice continuously differentiable with bounded derivatives on N_{z_0} (a small neighborhood of z_0). ii. 
E(y i |z i ) and E(x i |z i ) are bounded on N z 0 and twice continuously differentiable with bounded derivatives on N z 0 \{z 0 }; lim e↓0 d p dz p E(y i |z i = z 0 ± e) and lim e↓0 bounded from above and bounded away from zero on N z 0 ; lim e↓0 σ 2 y (z 0 ± e), lim e↓0 σ 2 x (z 0 ± e), and lim e↓0 σ xy (z 0 ± e) exist, where σ xy (z i ) = cov(x i , y i |z i ); |ρ xy | ≤ρ for someρ < 1, where ρ xy = σ xy /(σ x σ y ), σ xy = lim e↓0 (σ xy (z 0 + e) + σ xy (z 0 − e)), and σ 2 x and σ 2 y defined similarly with the conditional covariance replaced by the conditional variances of x i and y i , respectively. iv. For some Remark. (1) The smoothness conditions imposed in Assumption 2(b) are standard for kernel estimation except for the left/right limit conditions in parts (ii) and (iii), which are due to the discontinuity design and have been used in Hahn, Todd, and Van der Klaauw (1999). (2) Asymptotic normality of the local linear estimators is established using Lyapounov's central limit theorem (CLT), and part (iv) of Assumption 2(b) can be used to verify Lyapounov's condition (see Davidson 1994, Theorem 23.12, p. 373). (3) With twice differentiable functions, the bias of the local linear estimators is of order h 2 n even near the boundaries. The condition √ nh n h 2 n → 0 in Assumption 2(c) is an under-smoothing condition, which makes the contribution of the bias term to the asymptotic distribution negligible. The condition nh 3 n → ∞ ensures that the variance of the local linear estimator tends to zero. Assumption 2(c) is satisfied if the bandwidth is chosen according to the rule h n = constant × n −r with 1/5 < r < 1/3. It is convenient for our purposes to present the asymptotic properties of the local linear estimators and the FRD estimator as follows. Define The constant k is known as it depends only on the kernel function. In the case of asymmetric kernels, we will have two different constants for the left and right estimators, with the bounds of integration replaced by (−∞, 0] for the left estimators. For y = y + − y − , y n =ŷ + n −ŷ − n , and similarly defined x and x n , by Assumption 2 and Lyapounov's CLT we have where Y and X are two bivariate normal variables with zero means, unit variances, and correlation coefficient ρ xy (the latter is defined in Assumption 2(b)iii together with σ x and σ y ). This in turn implies that under standard asymptotics, x − 2bσ xy . The last result holds due to Assumption 1(a), that is, only when x = 0 and is fixed. The asymptotic variance σ 2 y can be consistently estimated bŷ and σ xy can be constructed similarly by replacing ( Hence, a consistent estimator of σ 2 (b) can be constructed aŝ A common inference approach for the FRD effect is based on the usual t-statistic. Thus, when testing H 0 : β = β 0 one typically computes ) and compares it with standard normal critical values, as T n (β) → d N (0, 1), when x = 0 and is fixed. Confidence intervals for β are constructed by collecting all values β 0 for which H 0 : β = β 0 cannot be rejected using a test based on T n (β 0 ). Weak Identification in FRD Weak identification is a finite-sample problem, which occurs when the noise due to sampling errors is of the same magnitude or even dominates the signal in estimation of a model's parameters. In such cases, the asymptotic normality result T n (β) → d N (0, 1) provides a poor approximation to the actual distribution of the t-statistic, and as a result inference may be distorted. 
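Before examining the distortions more formally, here is a minimal sketch of the estimation step described above: boundary local linear regressions on each side of the cutoff and the ratio estimator β̂_n. This is an illustration only, not the authors' implementation; the uniform kernel, the fixed bandwidth, and the function names are assumptions of the sketch.

```python
import numpy as np

def local_linear_limit(v, z, z0, h, side):
    """Boundary local linear estimate of the one-sided limit of E(v | z) at z0.

    side="right" uses observations with z >= z0; side="left" uses z < z0.
    A uniform kernel on [z0 - h, z0 + h] is assumed for simplicity.
    """
    in_side = (z >= z0) if side == "right" else (z < z0)
    w = in_side & (np.abs(z - z0) <= h)             # uniform-kernel weights are 0/1
    X = np.column_stack([np.ones(w.sum()), z[w] - z0])
    coef, *_ = np.linalg.lstsq(X, v[w], rcond=None)
    return coef[0]                                   # intercept = estimated limit at z0

def frd_estimate(y, x, z, z0, h):
    """Fuzzy RD estimate: discontinuity in the outcome over discontinuity in treatment."""
    dy = (local_linear_limit(y, z, z0, h, "right")
          - local_linear_limit(y, z, z0, h, "left"))
    dx = (local_linear_limit(x, z, z0, h, "right")
          - local_linear_limit(x, z, z0, h, "left"))
    return dy / dx, dy, dx
```

For instance, frd_estimate(y, x, z, z0=0.0, h=0.5) returns β̂_n together with the estimated discontinuities Δŷ_n and Δx̂_n; the discontinuity in treatment is the key ingredient of the F-statistic used below to gauge the strength of identification.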
Assuming that H 0 : β = β 0 , we can rewrite the t-statistic as When testing H 0 against two-sided alternatives, one uses the absolute value of T n (β), which eliminates the sign term. Since under standard (fixed distribution) asymptotics √ nh n y n − β x n → d N (0, kσ 2 (β)/f z (z 0 )), the usual ttest has no size distortions as long asβ n is consistent andσ 2 n (β n ) approximates σ 2 (β) very well. Define Y n = (f z (z 0 )/k) 1/2 (nh n ) 1/2 ( y n − y) and X n = (f z (z 0 )/k) 1/2 (nh n ) 1/2 ( x n − x). We can now writê Note that in the above expression, the estimation errors Y n and X n represent the noise components, while the signal component is given by (nh n ) 1/2 x. Since the noise terms have bounded variances, the signal dominates the noise as long as (nh n ) 1/2 x → ∞. In this case,β n → p β. If, however, lim n→∞ |(nh n ) 1/2 x| < ∞, the signal and noise are of the same magnitude, which results in inconsistency of the FRD estimator and weak identification. Thus, similarly to the weak IV literature (Staiger and Stock 1997), it is appropriate to model weak identification by assuming that x is inversely related to the square root of the sample size. However, the kernel estimation framework and presence of the bandwidth, which is chosen by the econometrician, require some adjustments. Suppose one models weak identification as x ∼ 1/(ng n ) 1/2 , for some sequence g n → 0 as n → ∞. In this case, the econometrician can obtain consistency ofβ n and resolve weak identification simply by choosing h n so that h n /g n → ∞. This situation resembles so-called nearly weak or semistrong identification, see Hahn and Kuersteiner (2002), Caner (2009), Renault (2009, 2012), and Antoine and Lavergne (2014). Hence, the worst-case scenario, in which the econometrician cannot resolve weak identification by tweaking the bandwidth, occurs when g n = h n , that is, x ∼ 1/(nh n ) 1/2 . This idea can be formalized using the results obtained in the recent literature on uniform size properties of tests and confidence sets: Andrews and Guggenberger (2010), Andrews and Cheng (2012), and Andrews, Cheng, and Guggenberger (2011). The latter article provides a general framework of establishing uniform size properties of tests and confidence sets. To describe this framework, let S n be a test statistic with exact finite-sample distribution (in a sample of size n) determined by λ ∈ . Note that λ may include infinite-dimensional components such as distribution functions. Let cr n (α) denote a possibly data-dependent critical region for nominal significance level α. The test rejects a null hypothesis when S n ∈ cr n (α), and the rejection probability is given by RP n (λ) = P λ (S n ∈ cr n (α)), where subscript λ in P λ indicates that the probability is computed for a given value of λ ∈ . The exact size is defined as ExSz n = sup λ∈ RP n (λ). Note that ExSz n captures the maximum rejection probability for any combination of parameters λ (the worst case scenario). In large samples, the exact size is approximated by asymptotic size AsySz = lim sup n→∞ sup λ∈ RP n (λ). Contrary to the usual point-wise asymptotic approach, AsySz is determined by taking supremum over the parameter space before taking limit with respect to n. It has been argued in many articles that controlling AsySz is crucial for ensuring reliable inference when test statistics have discontinuous asymptotic distribution, that is, when point-wise asymptotic distribution is discontinuous in a parameter. 
On the importance of uniform size, see, for example, Manski (2004, p. 1848), Mikusheva (2007), and references in Andrews, Cheng, and Guggenberger (2011). In what follows, we rely on the following result of Andrews, Cheng, and Guggenberger (2011): (Lemma 1 combines Assumption B and Theorems 2.1 and 2.2 in Andrews, Cheng, and Guggenberger (2011). Lemma 1 (Andrews, Cheng, and Guggenberger 2011). Let {d n (λ) : n ≥ 1} be a sequence of functions, where d n : pose that for any subsequence {p n } of {n} and any sequence {λ p n ∈ } for which d p n (λ p n ) → d ∈ D, we have that RP p n (λ p n ) → RP(d) for some function RP(d) ∈ [0, 1]. Then, AsySz = sup d∈D RP(d). To apply Lemma 1, we define We define λ 4 = F , where F is the joint distribution of x i , y i , z i and is such that, given λ 1 ∈ R + , λ 2 ∈ [−ρ,ρ], and λ 3 ∈ R, the three equations in (6) hold. Note that λ 4 is an infinitedimensional parameter that depends on λ 1 , λ 2 , and λ 3 . As explained by Andrews, Cheng, and Guggenberger (2011, pp. 8-9), d n (λ) is chosen so that when d n (λ n ) converges to d ∈ D for some sequence of parameters {λ n ∈ λ : n ≥ 1}, the test statistic converges to some limiting distribution, which might depend on d. In view of (4) and (5), we therefore define While λ 4 = F affects the finite-sample distribution of the test statistic, it does not enter its asymptotic distribution, and therefore can be dropped from d n (λ) as discussed by Andrews, Cheng, and Guggenberger (2011, p. 8). Next, we describe the asymptotic size of tests for FRD based on the usual t-statistic and standard normal critical value. Let z ν denote the νth quantile of the standard normal distribution. Theorem 1. Suppose that Assumption 2 holds. Let X , Y be two bivariate normal variables with zero means, unit variances, and correlation d 2 . Define a. For tests that reject H 0 : β = β 0 in favor of Remark. A commonly. used measure of identification strength is the so-called concentration parameter. On the importance of the concentration parameter in IV estimation, see, for example, Stock and Yogo (2005). In our framework, the concentration parameter is given by d 2 n,1 , where d 2 n,1 → ∞ corresponds to strong (or semistrong) identification, and identification is weak when the limit of d 2 n,1 is finite. As it is apparent from the expressions for λ 1 and d n,1 in (6) and (7), the concentration parameter and, therefore, the strength of identification depend not only on the size of discontinuity in treatment assignment x, but also on f z (z 0 ), the PDF of the assignment variable at z 0 . Hence, smaller values of f z (z 0 ) would correspond to a more severe weak identification problem. For any permitted values of d 2 and d 3 , when d 1 = ∞, we have T ∞,d 2 ,d 3 ∼ N (0, 1). Thus, the asymptotic size of tests based on T n (β 0 ) is equal to nominal size α under strong or semistrong identification. When d 1 < ∞, it is straightforward to compute AsySz numerically. To compute asymptotic rejection probabilities given d 1 , d 2 , d 3 , first using bivariate normal PDFs, one integrates numerically 1(|T d 1 ,d 2 ,d 3 | > z 1−α/2 ) or 1(T d 1 ,d 2 ,d 3 > z 1−α ) over the support of the joint distribution of Y, X . Rejection probabilities then can be numerically maximized over d's. Table 1 reports maximal rejection probabilities of one-and two-sided tests based on the usual t-statistic. The rejection probabilities reported in Table 1 were computed by numerical integration using quad2d function in Matlab. 
Integration bounds for normal variables were set to [−7, 7], and the rejection probabilities were maximized over the following grids of values: from −0.99 to 0.99 at 0.01 intervals for d_2, and from −1000 to 1000 at 0.5 intervals for d_3. The table shows that AsySz approaches one as the concentration parameter approaches zero. Size distortions decrease monotonically as the concentration parameter increases. In the case of two-sided testing, nearly zero size distortions (under 0.5%) correspond to a concentration parameter of order d_1^2 ≥ 64 for asymptotic 5% tests, and d_1^2 ≥ 50^2 for asymptotic 1% tests. The table also shows that one-sided tests suffer from more substantial size distortions than two-sided tests, which is due to asymmetries in the distribution of T_{d_1,d_2,d_3}. Testing for Potential Size Distortions Following the approach of Stock and Yogo (2005), Table 1 can be used for testing a null hypothesis about the largest potential size distortion against an alternative hypothesis under which the largest potential size distortion does not exceed a certain prespecified level. Suppose that the econometrician decides that identification is strong enough if, in the case of 1% two-sided testing, the maximal rejection probability does not exceed 5%. Thus, the econometrician effectively adopts tests with a 5% significance level, but uses the 1% standard normal critical value. According to the results in Table 1, the corresponding null hypothesis and its alternative in this case can be stated in terms of the concentration parameter d_1^2 as H_0^W : d_1^2 ≤ 9 and H_1^S : d_1^2 > 9, respectively. A test of H_0^W can be based on the estimator of the discontinuity Δx. Define

F_n = nh_n (Δx̂_n)^2 f̂_{z,n}(z_0) / (k σ̂²_{x,n}).   (8)

As long as the concentration parameter is finite, F_n →_d χ²_1(d_1^2), a noncentral χ²_1 distribution with noncentrality parameter d_1^2. Let χ²_{1,1−τ}(d_1^2) denote the (1 − τ)th quantile of the χ²_1(d_1^2) distribution. Since size distortions are monotonically decreasing when the concentration parameter increases, an asymptotic size τ test of H_0^W should reject it when F_n > χ²_{1,1−τ}(d_1^2). Noncentral χ²_1 critical values are reported in the last two columns of Table 1 for selected values of the concentration parameter and τ = 0.05, 0.01. For example, H_0^W : d_1^2 ≤ 9 should be rejected in favor of H_1^S : d_1^2 > 9 by a 5% test when F_n > 21.57. In the case of 5% two-sided testing of β, one needs a concentration parameter of at least 64 to ensure nearly zero size distortions. The critical values in Table 1 substantially exceed the rule-of-thumb of 10, which is often used in the literature as a threshold value for weak IVs. According to our calculations, with an F-statistic of only 10, one cannot reject H_0^W : d_1^2 ≤ 1.51^2 at the 5% significance level. However, a concentration parameter of 1.51^2 corresponds to maximal rejection probabilities of 16.9% and 13.6% for 5% one-sided and two-sided tests, respectively. The results from Table 1 can also be used for designing valid tests (for the FRD effect β) based on usual t-statistics in combination with somewhat larger than usual critical values. For example, suppose one is interested in a 5% two-sided test about β, and rejects the null hypothesis when F_n > 21.57 and |T_n(β_0)| exceeds the 1% standard normal critical value. According to Table 1, if the concentration parameter d_1^2 ≥ 9, the asymptotic size does not exceed 5%. On the other hand, if d_1^2 ≤ 9, lim_{n→∞} P(F_n > 21.57) ≤ 0.05. Hence, overall this test has an asymptotic 5% significance level. 
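As a worked illustration of this pretest, the sketch below computes the F-statistic as the squared t-ratio of the estimated discontinuity in treatment (equivalent to (8) above when se_dx is the standard error of Δx̂_n) and compares it with a noncentral χ²_1 critical value. The function name, inputs, and the default null value of 9 for the concentration parameter are assumptions of this illustration, not part of the original article.

```python
from scipy.stats import ncx2

def weak_id_pretest(dx_hat, se_dx, d2_null=9.0, tau=0.05):
    """Stock-and-Yogo-style pretest for weak identification in an FRD design.

    dx_hat  : estimated discontinuity in the treatment variable at the cutoff
    se_dx   : its standard error, so that F = (dx_hat / se_dx) ** 2
    d2_null : hypothesized upper bound on the concentration parameter (e.g., 9)
    tau     : significance level of the pretest
    """
    F = (dx_hat / se_dx) ** 2
    crit = ncx2.ppf(1.0 - tau, df=1, nc=d2_null)  # noncentral chi-square(1) quantile
    reject = F > crit                             # reject H0: d1^2 <= d2_null
    return F, crit, reject

# With d2_null = 9 and tau = 0.05, crit is roughly 21.6, in line with the
# threshold of about 21.57 quoted in the text.
```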
Intuitively, such a test is valid because the null hypothesis for the F-pretest assumes size distortions, and one proceeds using the t-statistic only if it is rejected, that is, if the concentration parameter is found to be large enough. Note, however, that the procedure is conservative. Furthermore, passing the F-test does not completely safeguard against size distortions, and the usual t-statistic must be used with somewhat larger critical values. Although the F-test provides useful guidance on the potential magnitude of size distortions, practitioners should not rely solely on this test to decide whether it is worth proceeding with the estimation. With this in mind, we present a robust inference approach in the next section that always yields valid confidence intervals regardless of the strength of identification and does not rely on any pretests. Weak-Identification-Robust Inference for FRD A common approach adopted in the weak IV literature is to use weak-identification-robust statistics to test hypotheses about structural parameters directly, instead of using their estimates and standard errors. The Anderson-Rubin (AR) statistic (Anderson and Rubin 1949; Staiger and Stock 1997) is often used for that purpose. In the context of IV regression, the AR statistic can be used to test H_0 : β = β_0 against H_1 : β ≠ β_0 by testing whether the null-restricted residuals computed for β = β_0 are uncorrelated with the instruments. In our case, the structural parameter is defined by (1). Hence, to test H_0 : β = β_0 against H_1 : β ≠ β_0, following the AR approach, we can test instead H_0 : Δy − β_0 Δx = 0 against H_1 : Δy − β_0 Δx ≠ 0. A test, therefore, can be based on

nh_n f̂_{z,n}(z_0) (Δŷ_n − β_0 Δx̂_n)^2 / (k σ̂²_n(β_0)) = (T^R_n(β_0))^2,

where T^R_n(β_0) denotes a modified or null-restricted version of the usual t-statistic that replaces σ̂²_n(β̂_n) with σ̂²_n(β_0), and the equality holds by (4). Unlike the usual t-statistic, T^R_n(β_0) uses the null-restricted value β_0 instead of β̂_n when computing the standard error. In view of the discussion at the beginning of Section 2.2, and since the asymptotic distribution of |T^R_n(β_0)| does not depend on the concentration parameter, replacing σ̂²_n(β̂_n) by σ̂²_n(β_0) eliminates size distortions. Theorem 2. Suppose that Assumption 2 holds. Tests that reject H_0 : β = β_0 in favor of H_1 : β ≠ β_0 when |T^R_n(β_0)| > z_{1−α/2} have AsySz equal to α. Consider now a one-sided testing problem H_0 : β ≤ β_0 versus H_1 : β > β_0. Again, one can base a test on the null-restricted statistic. In this case, under H_0 when β = β_0, we have T^R_n(β) = ((Y_n − β X_n)/σ(β)) × sign(X_n ± d_{n,1}) + o_p(1). When identification is strong or semistrong, d_{n,1} → ∞, and the sign term is constant with probability one. Since the first term is asymptotically N(0, 1), T^R_n(β) is also asymptotically N(0, 1), and one could use standard normal critical values. On the other hand, when identification is weak and the concentration parameter is small, the sign term is random, and therefore the null asymptotic distribution of the product differs from standard normal. To obtain an asymptotically uniformly valid test, one can use data-dependent critical values that automatically adjust to the strength of identification. 
Such critical values can be generated using the approach of Moreira (2001, 2003) by conditioning on a statistic that is (i) asymptotically independent of Y_n − β X_n, and (ii) summarizes the information on the strength of identification (see also Andrews, Moreira, and Stock 2006; Mills, Moreira, and Vilela 2014). Weak-identification-robust confidence sets for β can be constructed by inversion of the robust tests. For example, a confidence set for β with asymptotic coverage probability 1 − α can be constructed by collecting all values β_0 that cannot be rejected by the two-sided robust test: CS_{1−α,n} = {β_0 ∈ R : |T^R_n(β_0)| ≤ z_{1−α/2}}. This confidence set can be easily computed analytically by solving for values of β_0 that satisfy the inequality

(β̂_n − β_0)^2 σ̂²_{x,n} F_n − z²_{1−α/2} (σ̂²_{y,n} + β_0² σ̂²_{x,n} − 2 σ̂_{xy,n} β_0) ≤ 0,   (10)

where F_n is defined in (8). Depending on the coefficients of the second-order polynomial (in β_0) in Equation (10), CS_{1−α,n} can take one of the following forms: (i) an interval, (ii) a union of two disconnected half-lines (−∞, a_1] ∪ [a_2, ∞), where a_1 < a_2, or (iii) the entire real line. One will see cases (ii) or (iii) if the coefficient on β_0² in (10) is negative, which occurs when F_n < z²_{1−α/2} = χ²_{1,1−α}. Thus, in practice one will see nonstandard confidence sets if the null hypothesis Δx = 0 cannot be rejected using the F-statistic and central χ²_{1,1−α} critical values. Case (iii) arises when the discriminant of the quadratic polynomial in (10) is negative, which occurs if F_n σ̂²_n(β̂_n) − z²_{1−α/2}(σ̂²_{y,n} − σ̂²_{xy,n}/σ̂²_{x,n}) < 0. When identification is strong or semistrong, the concentration parameter and, therefore, F_n diverge to infinity. In such cases, both the discriminant and the coefficient on β_0² tend to be positive, and consequently, CS_{1−α,n} will be an interval with probability approaching one. Furthermore, one can show that when identification is strong and under local alternatives of the form β = β_0 + μ/(nh_n)^{1/2}, tests based on T_n(β_0) and T^R_n(β_0) have the same asymptotic power. Thus, in practice there is no loss of asymptotic power from adopting the robust inference approach if identification is strong. TESTING FOR CONSTANCY OF THE RD EFFECT ACROSS COVARIATES In this section, we develop a test of constancy of the RD effect across covariates, which is robust to weak identification issues. Such a test can be useful in practice when the econometrician wants to argue that the treatment effect is different for different population subgroups. For example, in Section 4, we use this test to argue that the effect of class sizes on educational achievements is different for secular and religious schools, and therefore it might be optimal to implement different rules concerning class sizes in those two categories of schools. The problem is related to the classical analysis of variance (ANOVA) hypothesis of homogenous populations (see, e.g., Casella and Berger 2002, chap. 11). Similarly to Otsu, Xu, and Matsushita (2015), we consider the RD effect conditional on some covariate w_i. (See also Frölich 2007.) Let W denote the support of the distribution of w_i. Next, for w ∈ W we define y^+(w) using the conditional expectation given z_i and w_i = w: y^+(w) = lim_{z↓z_0} E(y_i | z_i = z, w_i = w). Let y^−(w), x^+(w), and x^−(w) be defined similarly. The conditional RD effect given w_i = w is defined as β(w) = (y^+(w) − y^−(w))/(x^+(w) − x^−(w)). 
Similarly to the case without covariates, under an appropriate set of assumptions, β(w) captures the (local) ATE at z 0 conditional on w i = w. We are interested in testing the null hypothesis of constancy of the RD effect H 0 : β(w) = β for some β ∈ R and all w ∈ W, against a general alternative H 1 : When identification is strong, the econometrician can esti-mate the conditional RD effect function consistently and then use it for testing of H 0 . (Such a test can be constructed similarly to the ANOVA F-test as in Casella and Berger (2002, chap. 11) and is discussed in the supplement.) However, this approach can be unreliable if identification is weak. We therefore take an alternative approach. Suppose that W = {w 1 , . . . ,w J }, that is, the covariate is categorical and divides the population into J groups. The assumption of a categorical covariate is plausible in many practical applications where the econometrician may be interested in the effect of gender, school type, etc. However, even when the covariate is continuous, in a nonparametric framework it might be sensible to categorize it to have sufficient power (as is often done in practice). For j = 1, . . . , J , letŷ + n (w j ),ŷ − n (w j ),x + n (w j ), and x − j,n (w j ) denote the local linear estimators of the corresponding population terms computed using only the observations with w i =w j . Let n j be the number of such observations. Define σ 2 y (w j ), σ 2 x (w j ), and σ xy (w j ) as the conditional versions of the corresponding population terms, and letσ 2 y,n (w j ),σ 2 x,n (w j ), and σ xy,n (w j ) denote the corresponding estimators. Suppose that Assumption 2 holds for each of the J categories, and none of the categories is redundant asymptotically: n j h n j /(nh n ) → p j > 0 for j = 1, . . . , J , where n = J j =1 n j . If H 0 is true and the FRD effect is independent of w, one can construct a robust confidence set for the common effect: , β n (w j ) = y n (w j )/ x n (w j ), x n (w j ) =x + n (w j ) −x − n (w j ); σ 2 n (β 0 ,w j ) is defined similarly toσ 2 n (β 0 ) in (3) using the estimators conditional on w i =w j ; andf z,n (z 0 |w j ) = (n j h n j ) −1 n i=1 K((z i − z 0 )/h n j )1{w i =w j } is the estimator for f z (z 0 |w j ), which denotes the conditional density of z i at z 0 conditional on w i =w j . Under H 0 : β(w) = β for some β ∈ R, CS J 1−α,n is an asymptotically valid confidence set since G n (β) → d χ 2 J under weak or strong identification. We consider the following size α asymptotic test: Reject H 0 if CS J 1−α,n is empty. The test is asymptotically valid because under H 0 , P (CS J 1−α,n = ∅) ≤ P (β / ∈ CS J 1−α,n ) = P (G n (β) > χ 2 J,1−α ) → α, which again holds under weak or strong identification. Under the alternative, there is no common value β that will provide a proper recentering for all J categories, and therefore, one can expect deviations from the asymptotic χ 2 J distribution. We show below that the test is consistent if there is strong (or semistrong) identification for at least two valuesw j 1 andw j 2 that satisfy β(w j 1 ) = β(w j 2 ). Let d 2 n,1 (w j ) = n j h n j |x + (w j ) − x − (w j )| 2 f z (z 0 |w j )/(kσ 2 x (w j )) be the conditional version of the concentration parameter. EMPIRICAL APPLICATIONS In this section, we compare the results of standard and weak identification robust inference in two separate, but related, applications. 
We show that the standard method and our proposed method yield significantly different conclusions when weak identification is a problem, but similar results when it is not. We also show that the robust confidence sets can provide more informative answers than the standard confidence intervals in cases when the usual assumptions are violated. We also apply our weak identification robust constancy test. We begin with a case where weak identification is not a serious issue. In an influential article, Angrist and Lavy (1999) studied the effect of class size on academic success in Israel using the fact that class size in Israeli public schools was capped at 40 students during their sample period. As demonstrated in Figure 1, this cap results in discontinuities in the relationship between class size and total school enrollment for a given grade. In practice, school enrollment does not perfectly predict class size and thus the appropriate design is fuzzy rather than sharp. We use the same sample selection rules as Angrist and Lavy (1999) and focus on language scores among 4th graders. The data can be found at http://econ-www.mit.edu/faculty/angrist/data1/data/anglavy99. There is a total of 2049 classes in 1013 schools with valid test results. Here, we only look at the first discontinuity at the 40-student cutoff. The number of observations used in the estimation depends on the bandwidth. It ranges from 471 classes in 118 schools for the smallest bandwidth (6), to 722 observations in 484 schools for the widest bandwidth (20). We use the uniform kernel in all cases. [Table 2. Angrist and Lavy (1999): estimated discontinuity in the treatment variable for the first cutoff and their standard errors, estimated effect of class size on class average verbal score, and standard and robust 95% confidence sets (CSs).] Table 2 shows that the estimated discontinuity in the treatment variable ranges from 8 to 14 students depending on the bandwidth chosen. The table also shows that, as expected, the F-statistic becomes smaller as the bandwidth gets smaller. Silverman's normal rule-of-thumb and the optimal bandwidth procedure of Imbens and Kalyanaraman (2012) both suggest a bandwidth value of approximately 8, which corresponds to a relatively large value of the F-statistic (approximately 62). Applying the standards of Table 1, we then conclude that weak identification is not a serious concern in this application. Using the 5% noncentral χ² critical value, we reject the null hypothesis that the concentration parameter is below 36, and therefore, the maximal size distortions of the 5% two-sided tests are expected to be under 1%. Note that even at the smallest bandwidth, the F-statistic is relatively large. This is consistent with Figure 2, which shows that the 95% standard and robust confidence sets for the class size effect are very similar. The figure shows that the two sets of confidence intervals are essentially indistinguishable for larger bandwidths, and only differ slightly for smaller bandwidths. In this application, we also compare the results of the standard constancy test of the treatment effect across subgroups to the results of our robust constancy test. The first set of results, reported in Section 5 of the online supplement, compares the treatment effect for secular and religious schools. The null hypothesis (the treatment effect is the same across subgroups) can never be rejected using a standard test. 
By contrast, the robust constancy test rejects the null hypothesis for the largest values of the bandwidth (18 and 20). We reach similar conclusions when comparing the treatment effect for schools with above and below median proportions of disadvantaged students. The null hypothesis is rejected by the robust test under the largest bandwidth (20). This suggests that our proposed test may have greater power against alternatives than the standard test in some contexts. The second application considers a similar policy in Chile originally studied by Urquiola and Verhoogen (2009). It should be noted that Urquiola and Verhoogen (2009) are not attempting to provide causal estimates of the effect of class size on test scores. They instead showed how the RD design can be invalid when there is manipulation around the cutoff, which results in a violation of Assumption 1(b) (exogeneity of z_i). So while this particular application is useful for illustrating some pitfalls linked to weak identification in an FRD design, the results should be interpreted with caution. In this application, the class sizes are capped at 45 students. Figure 3 shows the fuzzy discontinuity in the empirical relationship between class size and enrollment at the various multiples of 45. The figure also shows that the discontinuity becomes smaller as enrollment increases. In this example, the outcome variable is average class scores on state standardized math exams and we restrict attention to 4th graders. We also strictly adhere to the sample selection rules used by Urquiola and Verhoogen (2009). The total number of observations is 1636. The effective number of observations varies with the bandwidth and the enrollment cutoff. The range of the number of observations is 201 to 402 at the 90-student enrollment cutoff; 45 to 95 at the 135-student enrollment cutoff, and 17 to 34 at the 180-student enrollment cutoff. The uniform kernel is used to compute all the results below. [Notes to the accompanying table: Silverman's rule-of-thumb bandwidth is 8.59. The optimal bandwidth suggested by Imbens and Kalyanaraman (2012) is 9.67 for the cutoff of 45, 11.60 for the cutoff of 90, 14.12 for the cutoff of 135, and 17.81 for the cutoff of 180. The scores are given in terms of standard deviations from the mean.] Table 3 reports the FRD estimates and the confidence sets for the different values of the bandwidth and cutoff points. As before, we set the size of the test at 5%. Starting with the first cutoff point, Table 3 shows that the robust and conventional confidence sets diverge dramatically as the bandwidth gets smaller. Interestingly, while the robust confidence interval is much wider than the conventional one, it nevertheless rejects the null hypothesis that the effect of class size is equal to zero, while the conventional one fails to reject the null. To help interpret the results, we also graphically illustrate the difference between standard and robust confidence sets in Figure 4. The first panel plots the standard confidence sets as a function of the bandwidth. The second panel does the same for the weak identification robust method. The shaded area is the region covered by the confidence sets. As the bandwidth increases, the robust confidence sets evolve from two disjoint sections of the real line to a well-defined interval. 
Note that class size is a discrete rather than a strictly continuous variable, hence the break between bandwidths 11 and 12 when the robust confidence set switches from two disjoint half lines to a single interval. This is consistent with the size of the discontinuity in class size as a function of enrollment estimated at different bandwidths and the corresponding F-statistic. At bandwidths below 10, the estimated discontinuity is small and the F-statistic is below 7. However at bandwidths higher than 12, the estimated discontinuity is progressively closer to 10 students and the F-statistic ranges from just over 40 to just over 188. This is important since the bandwidth suggested by Silverman's normal rule-of-thumb is only 8.59 and the optimal bandwidth suggested by Imbens and Kalyanaraman (2012) is 9.67. See Section 5 in the online supplement for a complete listing of the F-statistic and discontinuity estimates at different bandwidths. Identification is considerably weaker for the second cutoff point. At all bandwidths, the standard confidence intervals fail to reject the null that the effect of class size is zero. However, for most bandwidths, the robust confidence sets do not include a zero effect. For example, for a bandwidth of 8, we cannot reject the null that class size is not related to grades when using the standard method, while the robust method suggests rejecting the null. Identification is even weaker at the third cutoff and, for most bandwidths, the robust confidence sets consist of two disjoint intervals. Finally, results get very imprecise at the fourth cutoff and the robust confidence sets now map the entire real line. This suggests that identification is very weak at these levels and the standard confidence intervals are overly narrow. In summary, our results suggest that when weak identification is not a problem, the robust and standard confidence sets are similar. But when the discontinuity in the treatment variable is not large enough, the robust confidence sets are very different from those obtained using the standard method. We also demonstrate that our robust inference method provides more informative results than the standard method. SUPPLEMENTARY MATERIALS The supplementary materials contain: (i) the description of the procedure for selection and evaluation of the influential empirical RD papers; (ii) the proofs of Theorem 1, 3, and 4; (iii) the Monte Carlo results for standard and weak-identificationrobust confidence sets; and (iv) the additional tables from the empirical application.
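Returning to the weak-identification-robust confidence sets of Section 2.4 used throughout these applications, the following is a minimal sketch of their computation by solving the quadratic inequality (10). The function name and input layout are hypothetical: s2_y, s2_x, and s_xy stand for the estimates σ̂²_y,n, σ̂²_x,n, and σ̂_xy,n, and F_n for the F-statistic, all assumed to come from the local linear estimation step; the case classification mirrors cases (i)-(iii) in the text.

```python
import numpy as np
from scipy.stats import norm

def robust_confidence_set(beta_hat, F_n, s2_y, s2_x, s_xy, alpha=0.05):
    """Weak-identification-robust confidence set for the FRD effect.

    Inverts the null-restricted (Anderson-Rubin-type) test by solving the
    quadratic inequality in beta_0 from Equation (10):
        (beta_hat - b)^2 * s2_x * F_n
            - z^2 * (s2_y + b^2 * s2_x - 2 * s_xy * b) <= 0.
    Returns an interval, a union of two half-lines, or the entire real line.
    """
    z2 = norm.ppf(1.0 - alpha / 2.0) ** 2
    # Coefficients of A*b^2 + B*b + C <= 0 after expanding the inequality.
    A = s2_x * (F_n - z2)
    B = -2.0 * beta_hat * s2_x * F_n + 2.0 * z2 * s_xy
    C = beta_hat ** 2 * s2_x * F_n - z2 * s2_y
    disc = B ** 2 - 4.0 * A * C
    if disc >= 0 and A != 0:
        r1, r2 = sorted([(-B - np.sqrt(disc)) / (2 * A),
                         (-B + np.sqrt(disc)) / (2 * A)])
        if A > 0:
            return ("interval", (r1, r2))
        return ("two half-lines", ((-np.inf, r1), (r2, np.inf)))
    # A > 0 with disc < 0 cannot occur, since beta_hat always satisfies the
    # inequality; the degenerate case A == 0 is ignored in this sketch.
    return ("entire real line", None)
```

For strong identification (large F_n) the coefficient A is positive and the set is a bounded interval; as F_n falls below the central chi-square critical value, the set becomes a union of half-lines or the whole real line, exactly the behavior reported for the Chilean application above.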
11,839.8
2016-04-02T00:00:00.000
[ "Mathematics", "Economics" ]
Comprehensive Evaluation Method of Supply Chain Logistics System Quality Based on 3D Image Processing Technology With the rise of manufacturing informatization, many transactions are conducted on the Internet, but the final form of transaction completion is the transaction of actual products, which makes the logistics industry emerge as the times require. How to achieve the optimal allocation scheme and the fastest efficiency in the supply chain has become an urgent problem to be solved. This paper considers the characteristics and advantages of 3D image processing technology, describes the characteristics of the 3D image processing supply chain (SC), and analyzes the channels through which 3D image technology affects the SC. Combining the respective characteristics of fuzzy theory and grey theory, the two theories are combined to develop strengths and circumvent weaknesses to form grey fuzzy theory. Comprehensive evaluation of supply chain logistics capability can achieve better evaluation results. The application of grey theory in this chapter includes constructing the factor set of the evaluation index system and determining the weight matrix of the factor set with the game method. The grey fuzzy evaluation weight matrix (i.e., the single-index evaluation result) is determined with the grey theory, and the fuzzy comprehensive evaluation result is finally calculated. This paper studies the supply chain logistics capability evaluation and optimization system from the aspects of system analysis, system function module design, and system architecture design and analyzes the overall goal, demand, feasibility, system business process, and data flow of the system construction. At the same time, this paper designs the supply chain logistics capability evaluation and optimization system and shows some functional interfaces. It is of great significance to improve the responsiveness, total inventory level, total cost level, supply chain performance, agility, and flexibility of the supply chain in the new environment. Introduction The competition in the 21st century will not be between enterprises, but between SCs. Those supplier enterprises with unique advantages will become the object pursued by large enterprises [1]. The traditional reliability analysis method for system SC generally refers to the reliability calculation of its SC model, among which the calculation theory based on model tree is becoming more and more mature, but it is difficult to simplify it. Because the components in the system SC are uncertain, other reliability calculation methods are also difficult to apply in the SC reliability calculation [2]. 3D image processing technology can optimize the design and produce customized parts on demand. As this technology can digitize the complex processing process, it has the advantages of high precision, high speed, and low cost [3]. In the mass manufacturing of the manufacturing industry, traditional subtractive manufacturing technology has always been based on production, standardization, and extensiveness. However, with the continuous improvement of people's material and cultural level, customers prefer to have personalized products. The extensive application of 3D image processing technology will change the production mode of traditional subtractive manufacturing technology, and its unique advantages of personalized customization, environmental protection, energy saving, convenience, and high efficiency have changed the production mode of traditional manufacturing. 
Personalized customization will soon become the mainstream in future manufacturing [4]. The development of the new environment requires that the SC not only pay attention to products but also to the needs of users. The process of creating benefits is based on product flow. Improving product mobility in the SC to improve agility and flexibility is an effective means to improve the performance of the SC under the current environment [5]. Logistics energy, which exists in a specific logistics system, exists in the whole process of receiving, processing, refining, transporting, and delivering orders and goods. It is the response speed, customer's demand, cost, and guarantee of order realization punctuality and reliability [6]. Logistics operation ability refers to the ability to optimize resource utilization by means of management plan, organization, and control, in order to improve efficiency and reduce costs [7]. Compared with static logistics element capability, logistics operation capability is a dynamic capability formed on the basis of static capability. Compared with other capability viewpoints, the SC logistics capability has its own characteristics: the formation factors are more complex, the capability exists in every link of logistics activities, and the organization and management capability of logistics management can affect the functions of the entire logistics system [8]. With the rapid growth of logistics service outsourcing and the continuous improvement of its integrity and complexity, logistics service providers need to continuously penetrate into the upstream and downstream fields such as production and sales to meet the changing needs of logistics end customers. On this basis, the logistics service supply chain (LSSC) model that integrates the functions of each stage of logistics service has evolved [9]. Today, with the great change of production mode and the rapid intensification of commercial competition, the service quality provided by logistics service SC enterprises to customers, the relationship with customers, and the benefits obtained by serving customers are increasingly becoming the key factors for logistics service SC enterprises to improve profits [10]. Collaborative logistics takes cooperation and collaboration as the premise, combines advanced technology, focuses on personalized service, efficiency, and collaboration among enterprises, and creates a collaborative logistics information system that fully shares logistics resources and obtains on demand, so as to promote the collaborative operation of all links in the SC and the collaborative operation among enterprises. In order to solve the problems of high complexity of product structure, long manufacturing cycle, and high cost of early mold development, 3D image processing technology has been widely applied and studied. If the response process of the SC is regarded as a flow, in the whole flow of the SC, the production time of customers' demand products accounts for 5% of the total flow time, while it takes 95% of the total flow time to deliver the produced products to customers. This change from "internal audit" to "external view" has prompted the logistics service SC managers to make subversive changes in the logistics service SC from the aspects of management concepts, management methods, and management means. 
Only by penetrating the realization of customer value into all aspects of daily management of enterprises and actually implementing it in market behavior can logistics service SC enterprises achieve sustainable development and maintain long-term advantages in the industry. The research innovation lies in constructing the performance evaluation index system of logistics service supply chain based on customer value. Combining the respective characteristics of fuzzy theory and grey theory, combining the two theories to develop strengths and circumvent weaknesses to form a grey fuzzy theory to comprehensively evaluate the logistics capability of the supply chain can achieve better evaluation results. Extract the data of logistics capability, logistics cost, logistics processing capability, and logistics innovation capability of supply chain enterprises. The data is preprocessed, and the logistics capability is optimized and evaluated by combining the model data form of optimization analysis and evaluation analysis. This paper studies the results of the empirical analysis and puts forward countermeasures to improve the overall performance of the logistics service supply chain, so as to achieve the optimal allocation scheme and the fastest efficiency in the supply chain. Related Work Supply chain management is no longer a closed and lonely way to deal with business activities such as procurement, production, and sales of enterprises. Instead, it regards suppliers, producers, distributors, and consumers as an organic whole and harmonizes the information flow, logistics, and capital flow of all members through collective goals. Production planning and control under supply chain management take more uncertainty and dynamic factors into account, so that enterprises can react quickly to market changes. The traditional production planning decision-making mode is a centralized decision-making, while the decision-making mode under the supply chain management environment is distributed, group decision-making. In the traditional production planning decision-making mode, the information of planning decision-making comes from two aspects, one is demand information, and the other is resource information. Information diversification is the main feature of supply chain management. In essence, supply chain management is based on the concept of cooperation and win-win, transforming the demand of the final consumer into the collective activities of all participants, improving the quality of cooperation among many enterprises, and maximizing the overall benefits. At present, there are many researches related to SC management in China. In order to carry out targeted research, the author collected and combed the relevant research literature and found that the research results mainly include the research on green SC management, the research on supplier evaluation index system, and the evaluation method of supplier selection. Yang and Liu believe that big data technology is the basis of SC collaborative decision-making. As far as big data is concerned, they combine SC collaborative mechanism with collaborative theory and game theory to explore the significance and realization process of SC collaborative mechanism 2 Advances in Mathematical Physics [11]. Agrawal and Pal created the selection method and implementation process of collaborative management and control system for the first time based on the company's resolution operation mode, resource allocation, crisis assessment, and benefit contract [12]. 
Entezaminia et al. believe that "ability" refers to the ability and talent, which is the means for the main body to accomplish the set goals. Therefore, they believe that logistics is an enterprise or a SC, and in order to accomplish its logistics goals, it uses its own skills and talents, which is also an indicator of comprehensive evaluation and analysis [13]. Ju et al. first defined the concept of logistics capability and at the same time analyzed the characteristics of SC logistics capability in China's social industry environment and thought that SC logistics capability embodied several different main aspects [14]. Tu et al. quantitatively estimated the potential impact of 3D image processing technology on the global SC [15]. Yu et al. put forward a system and custom production is completely customer-centered, providing customers with 3D image processing services [16]. Bai et al. obtained the supplier evaluation criteria and corresponding weights by using analytic hierarchy process and considered that the supplier evaluation factors were delivery, quality, facilities, technical capability, financial status, management, discipline, and response in order of importance [17]. Liu et al.'s research shows that on the one hand, 3D image processing can improve the efficiency of SC by timely manufacturing and eliminating waste. On the other hand, customized production of 3D image processing is helpful to implement the production-to-order strategy [18]. Wang analyzed the definition of logistics capability. She believed that logistics capability is the ability of an enterprise to acquire and utilize various internal and external resource elements and to deliver the required items of users to the destinations required by users [19]. Woo et al.'s research and development starts from different kinds of SC and determines the capability elements that have a great impact on their benefits through the characteristics of various SC. They also summarize the calculation methods of each element [20]. Methodology 3.1. Basic Theory of SC Capability. Supply chain is a management concept and content that has been concerned by entrepreneurs in recent years. It is precisely because of the keen attention, research, and discussion of the theoretical and business circles that people generally believe that it is a very abstract and academic topic. In fact, the content of supply chain is something we may encounter every day. To be more precise, it should be attributed to a kind of management experience. It is just that there are different priorities in different industries. Logistics capability refers to the operational capability of an enterprise in the process of creating economic value and social value to design logistics plans, carry out logistics activities, and control the logistics process with the help of certain measures and schemes. The measurement object of logistics capability is the entire process of enterprise logistics activities. In addition to product distribution and transportation capabilities, it also covers external resource acquisition capabilities, internal materials, and semifinished product management capabilities. From the perspective of constituent elements, the logistics capability elements of the supply chain are divided into tangible and intangible parts. The logistics capability encountered in the actual work is tangible, while the intangible elements refer to the enterprise's equipment processing capacity, warehousing capacity, etc. 
In the current academic circles, the research on customer value is very rich, and the research directions are roughly divided into two categories. The second type is to take the enterprise as the evaluation subject and the customer as the evaluation object. The enterprise conducts in-depth research on the relative importance and contribution value of the customer, so that the enterprise can provide products, services, and solutions for customers with different values in order to maximize long-term benefits. Here, collaborative logistics is the focus of research. The realization of the collaborative logistics mode of a supply chain based on cloud manufacturing needs to be based on a certain business scale. Only when the purchase, inventory, and delivery of logistics have an appropriate scope can we share data and resources as the basis, promote the integrated control and collaborative delivery of products, and reduce the cost of the SC system. The SC system adopts the collaborative logistics form based on cloud manufacturing, which requires full sharing of the manufacturing information of suppliers, the demand information of manufacturers, the delivery information of the cloud platform, the in-transit information of trucks, etc. Therefore, the Internet and information systems are very important to realize the collaborative logistics mode of the SC. Research fields related to SC management pay attention to enterprise SC management. To sum up, SC management mainly refers to fully coordinating the internal and external resources of enterprises and, according to customers' diversified consumption needs, treating each process in the SC as a virtual enterprise interface management problem, in which each enterprise is a main body in the virtual enterprise alliance, and the internal management problem of the enterprise alliance is SC management. Generally speaking, the logistics system, while accepting all kinds of resources outside the system, uses some basic functions to assemble these resources in various ways and then uses certain ways to turn the assembled resources into output systems. A subset of each assembly mode of the logistics system is shown in Figure 1. At present, many studies take the typical three-stage SC as the research object. The premise of SC capability analysis is the SC system, and the analysis in this paper is based on the typical H-stage SC, analyzing its logistics system structure, which includes suppliers, manufacturers, and distributors, with the SC logistics system of the manufacturing industry in the economic society as the typical representative, as shown in Figure 2. The research background of SC logistics capability is the research of SC management and logistics management. This paper will analyze and define the connotation of SC logistics capability through comparative analysis with SC management, logistics management, logistics, and capability. Since the logistics service integrator is at the core of the logistics service SC, this role is generally assumed by enterprises with strong financial support, strong information processing capability, a good industry reputation, the capability for personalized customization, integration and networking, and a certain scale of logistics services. The logistics service provider layer is a collection of many logistics service providers. A single enterprise generally only undertakes one or several types of logistics business, such as logistics transportation, logistics warehousing, and logistics consulting, and its service scope is limited.
Logistics service consumers include not only individual consumers but also various enterprises that need logistics services, such as manufacturing enterprises and catering enterprises. Under the cloud manufacturing mode, this transfer process can be divided into e-commerce cloud, logistics cloud, and customer cloud, forming cloud services from e-commerce to customers. The business ability of the e-commerce cloud can be realized through its favorable rating and credibility. The ability of e-commerce to comprehensively utilize customer demand, product varieties, and logistics channels has formed its advantages in business ability. The degree of standardization of the payment platform under supervision and the security of information provide necessary guarantees for customers to purchase. Logistics enterprises reduce their operating costs by increasing the number of distribution centers, expanding their scale, sharing commodity information and infrastructure, optimizing transportation paths, and improving transportation efficiency, thereby forming a logistics cloud that cooperates with e-commerce. When obtaining goods, customers will comprehensively consider the accumulated cost of product purchase and logistics and the convenience of purchase compared with physical stores, and form their final online shopping satisfaction from the service experience. Comparing the collaborative logistics service of the SC with the cloud manufacturing system platform, it is found that there are many similarities between collaborative logistics service and cloud manufacturing, as shown in Table 1. As shown in Table 1, it is imperative to build a SC collaborative logistics cloud platform based on cloud manufacturing with reference to the cloud manufacturing system platform, SC integration, and the logistics network. The SC collaborative logistics cloud platform is a network-based and highly shared logistics cloud service platform. The platform virtually integrates logistics resources and supplier product information into the cloud to form a virtual logistics resource cloud pool and encapsulates it according to customer requirements, bringing more efficient, low-cost, and high-quality personalized logistics services to users. In addition, functional modules can also be created under the platform to cover the whole process of enterprise production and operation, including site selection, transportation and distribution, loading and unloading, and storage. Design of Logistics Information Collection Software Based on Mobile Phone Platform. The distributed cluster database system is composed of multiple computers, and any of these computers can be placed in a single place. Because any computer in the system has a complete database, each computer has its own database. Even in different places, as long as the computers are connected through the network, a complete large database can be formed. For the distributed cluster system, the system is a database as a whole in terms of logic. The database has the following three properties: consistency, integrity, and security. These three properties are used to control and manage the logic as a whole. The shared data is managed uniformly by the distributed cluster servers. However, if it is a non-database processing operation, it can be completed through the client. The logistics information collection system based on the mobile phone platform uses the image processing technology of digital and English characters and puts forward a convenient and safe solution.
The staff of the logistics company use a mobile phone equipped with logistics information collection software to take pictures of the local goods list, process the photographed images with the identification software in the mobile phone to extract information such as the location and current time of the goods, and then send it to the database of the logistics head office through SMS. Finally, the head office sends the circulation information of the goods to the customers' mobile phones in real time, so that the customers can conveniently follow the circulation of the goods. With the progress of science and technology, the pixel counts of mobile phone cameras are getting higher and higher, and the resolution of images taken by mobile phone cameras is higher, which is beneficial to feature extraction on the mobile phone. Although the image pixel standards adopted by mobile phones are different, the mobile phone images of the various pixel standards are handled in the same way. First, the color image is grayed and then processed by binarization, smoothing, denoising, thinning, normalization, etc. This lays a good foundation for the next step of image information extraction from the goods list. In the handwritten character image preprocessing module, the video image input by the camera is first collected, and the software can automatically detect the image area range. Then, the collected color image is subjected to black-and-white binary processing, and a single word is marked with a rectangular box in the image display window. Then, image preprocessing is performed on the single word, and the image features are extracted. Image preprocessing includes the smoothing, denoising, thinning, and normalization introduced in Chapter 2. The image preprocessing flow is shown in Figure 3. Table 1. Similarities between cloud manufacturing and SC collaborative logistics: personalized service (logistics services can recombine logistics resources to form personalized logistics services according to the needs of customers); resource virtualization (the cloud manufacturing system virtually encapsulates manufacturing resources and capabilities in the cloud platform, and users can obtain them on demand through the terminal, while the logistics platform virtually encapsulates logistics information and resources in the cloud platform, and customers can obtain corresponding logistics services according to their own needs); payment on demand (users pay as they go according to their needs, and customers pay according to the logistics services they receive). After preprocessing the 3D image, the mobile phone uses the feature extraction methods introduced earlier (moment center feature, pixel distribution feature, discrete Fourier feature, and line feature value) to extract the features of the captured image and finally obtains an 82-dimensional feature vector, which is convenient for the 3D identification module.
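To make the preprocessing chain described above concrete, the following minimal Python sketch runs a colour character/goods-list image through graying, black-and-white binarization, cropping, and size normalization. The array sizes, threshold, and luminance weights are illustrative assumptions rather than the settings used in the paper, and smoothing, denoising, and thinning are omitted for brevity.

```python
import numpy as np

def preprocess_character(rgb_image, out_size=16, threshold=0.5):
    """Minimal sketch of the preprocessing chain described above:
    grayscale -> binarization -> crop to the character -> size normalization.
    Smoothing, denoising, and thinning are omitted for brevity."""
    # Grayscale conversion (the luminance weights are a common convention).
    gray = rgb_image[..., :3] @ np.array([0.299, 0.587, 0.114])
    gray = gray / gray.max() if gray.max() > 0 else gray

    # Black-and-white binarization: 1 = black (ink), 0 = white background.
    binary = (gray < threshold).astype(np.uint8)

    # Crop to the bounding box of the black pixels (the single character).
    ys, xs = np.nonzero(binary)
    if len(ys) == 0:
        return np.zeros((out_size, out_size), dtype=np.uint8)
    cropped = binary[ys.min():ys.max() + 1, xs.min():xs.max() + 1]

    # Normalize to a fixed out_size x out_size grid by nearest-neighbour sampling.
    rows = (np.arange(out_size) * cropped.shape[0] / out_size).astype(int)
    cols = (np.arange(out_size) * cropped.shape[1] / out_size).astype(int)
    return cropped[np.ix_(rows, cols)]

# Example with a synthetic 64x64 "photo" of a dark stroke on white paper.
img = np.ones((64, 64, 3))
img[20:44, 30:34, :] = 0.1              # a vertical dark stroke
print(preprocess_character(img).shape)  # (16, 16)
```

The normalized binary grid produced here is the kind of input that the feature-extraction step in the next subsection assumes.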
Different classifiers corresponding to the same feature from different angles map the feature to the decision space, so it is possible to comprehensively reflect an object by combining different features and different classifiers, thus obtaining a better classification result. Geometric moments of a general two-dimensional function are defined as

$$M_{mn} = \iint_{(x,y)\in\Omega} x^{m} y^{n} f(x, y)\,dx\,dy \quad (1)$$

In formula (1), $M_{mn}$ is the $(m, n)$-th order moment of the image $(m, n = 0, 1, \cdots)$, which can be regarded as the projection of the image $f(x, y)$ onto a set of basis functions, and this moment has translation invariance. The character image $f(x, y)$ is divided into $4 \times 4$ areas $\Omega_i$ $(i = 0, 1, \cdots, 15)$. Let the number of black dots in each area be $A(i)$, and let $A_{\max} = \max_i A(i)$ and $A_{\min} = \min_i A(i)$ denote the numbers of black dots in the areas with the most and the fewest black dots, respectively. Take $F_s = (A(i) - A_{\min})/(A_{\max} - A_{\min})$ $(s = 2, \cdots, 17)$ as a set of features with values in $[0, 1]$, which reflects the distribution characteristics of black dots in the sample $f(x, y)$. The Fourier transform is widely used in pattern recognition to extract features; it not only has translation invariance but can also describe the image boundary. The image $f(x, y)$ is a binary matrix point set with $P$ rows and $Q$ columns, and its two-dimensional discrete Fourier transform can be defined as

$$G(u, v) = \sum_{x=0}^{P-1} \sum_{y=0}^{Q-1} f(x, y)\, e^{-j 2\pi (ux/P + vy/Q)}, \quad u = 0, 1, \cdots, P-1;\; v = 0, 1, \cdots, Q-1.$$

The large-value coefficients of $G(u, v)$ are concentrated in the low-frequency region, that is, around the upper left, upper right, lower left, and lower right corners of the matrix. In this experiment, $P = Q = 16$, and 32 modulus values of the discrete Fourier transform are selected from the above four low-frequency regions as feature vectors. Evaluation Model of Two-Level SC of 3D Image Technology. Consider establishing a two-level SC of 3D image processing composed of 3D image processing technologists and manufacturers, and analyze the decision-making and profit issues of the two-level SC of 3D image processing. Since the SC conditions of various industries will vary according to the actual situation of the industry, the SC logistics capacity of each industry will show its own characteristics due to the difference in SC conditions; for example, the SC logistics capacity of the hataocao industry pays attention to the safety assurance ability, the power coal SC pays attention to the relative stability of the power coal SC logistics capacity, and the SC logistics capacity of the traditional manufacturing industry pays attention to integrity. However, there are commonalities in the logistics capacity under the structure of the SC logistics system, which is determined by the commonalities of the SC of various industries. In the distributed control SC, each member enterprise in the SC first cares about its own profit and then pays attention to the overall profit of the SC. In the SC supervision, the secondary chain composed of suppliers and producers must meet the application conditions of the Stackelberg game, and both suppliers and producers pay attention to their own profits. The support vector machine is derived from the concept of the optimal classification hyperplane, which is the extension of the optimal classification line. Consider the two-dimensional two-class separable case shown in Figure 4.
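A small sketch of the three feature families just described (geometric moments, the 4 x 4 black-dot distribution features, and the low-frequency magnitudes of the 2-D DFT) is given below. The toy image, the number of moments retained, and the corner size are assumptions made only for illustration, so the resulting vector length differs from the 82 dimensions used in the paper.

```python
import numpy as np

def geometric_moment(f, m, n):
    """Discrete form of Eq. (1): M_mn = sum_x sum_y x^m * y^n * f(x, y)
    for a binary image f."""
    xs, ys = np.meshgrid(np.arange(f.shape[0]), np.arange(f.shape[1]), indexing="ij")
    return float(np.sum((xs ** m) * (ys ** n) * f))

def block_distribution_features(f, grid=4):
    """Black-dot counts A(i) over a grid x grid partition, scaled to [0, 1]
    by (A(i) - A_min) / (A_max - A_min)."""
    h, w = f.shape
    counts = np.array([
        f[r * h // grid:(r + 1) * h // grid, c * w // grid:(c + 1) * w // grid].sum()
        for r in range(grid) for c in range(grid)
    ], dtype=float)
    span = counts.max() - counts.min()
    return (counts - counts.min()) / span if span > 0 else np.zeros_like(counts)

def dft_lowfreq_features(f, k=4):
    """Magnitudes of the 2-D DFT taken from the four corners of the spectrum,
    where the large coefficients concentrate for a P = Q = 16 image."""
    G = np.abs(np.fft.fft2(f))
    corners = [G[:k, :k], G[:k, -k:], G[-k:, :k], G[-k:, -k:]]
    return np.concatenate([c.ravel() for c in corners])

# Toy 16x16 binary character image (1 = black pixel), invented for this sketch.
img = np.zeros((16, 16))
img[3:13, 7:9] = 1
features = np.concatenate([
    [geometric_moment(img, 0, 0), geometric_moment(img, 1, 0), geometric_moment(img, 0, 1)],
    block_distribution_features(img),
    dft_lowfreq_features(img),
])
print(features.shape)  # length depends on the illustrative choices above
```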
The circular ("O") sample and the square ("□") sample are linearly separable, and we can see from the figure that there are many linear functions that can completely separate the two types of samples, not only H but also many others, so we will not cite them one by one here. The so-called optimal classification line generalizes, in higher-dimensional space, to the optimal classification plane. The general form of the linear classification function is $g(x) = w \cdot x + b$, and the classification surface equation is $w \cdot x + b = 0$. Normalize the classification function so that both types of samples satisfy $|g(x)| \geq 1$, with $|g(x)| = 1$ for the samples closest to the classification plane, so that the classification margin is equal to $2/\|w\|$. Therefore, maximizing the classification margin is equivalent to minimizing $\|w\|$ or $\|w\|^{2}$. If all samples are required to be classified correctly, the constraint $y_i (w \cdot x_i + b) \geq 1$, $i = 1, 2, \cdots, n$, must be met. Thus, the classification plane that satisfies the above condition and minimizes $\|w\|^{2}$ is the optimal classification plane. According to the general theory of reliability design, $\Phi(X) = 0$ can be set as the limit state of the system and used as the system stability criterion. Here $X = \{X_1, X_2, X_3, \cdots\}$ can be classified as qualified or unqualified according to the discrimination requirements of the system response, and the reliability probability $P$ can be calculated once the system response data are available. If $P = 1$, then $\Phi(X) > 0$ for all samples, all system responses lie in the safety zone and there is no failure probability, which proves that the system structure is reliable; if $P = 0$, then $\Phi(X) < 0$ for all samples and the system lies entirely in the failure region. When $0 < P < 1$, both $\Phi(X) > 0$ and $\Phi(X) < 0$ are possible, the system has both reliability and unreliability, and the value of $P$ indicates the reliability probability of the system, that is, its reliability: the larger the value, the more reliable the system; the smaller the value, the less reliable the system. The same relation can be rearranged to obtain the failure probability $P_f = 1 - P$. When the subordinate relationship between the upper and lower levels of each index is determined, the expert group must judge the relative importance of each level according to the evaluation information of the evaluation indexes established in this paper. The quantitative judgment method is the 0.1-0.9 nine-scale method, as shown in Table 2, where 0.7 means that index I is obviously more important than index J, 0.9 means that index I is strongly more important than index J, and 0.2, 0.4, 0.6, and 0.8 correspond to intermediate judgments between the adjacent grades of the 0.1-0.9 scale. The fuzzy vector $a^{k}_{ij} = (l^{k}_{ij}, m^{k}_{ij}, u^{k}_{ij})$ means that expert $k$ compares index $i$ with index $j$ to obtain a judgment of relative importance, where $m^{k}_{ij}$ is the expert's actual score for the relative importance of the evaluation indexes, and $l^{k}_{ij}$ and $u^{k}_{ij}$ correspond, respectively, to the minimum and maximum values of the relative importance score of the evaluation index. According to the operational properties of triangular fuzzy numbers, and after normalizing $d_n(x^{t}_{i})$, the weight of each index is obtained, where $x^{t}_{i}$ is the $t$-th index of layer $i$ and $w^{t}_{h}$ is the weight obtained by ranking the $t$-th index of layer $t-1$ within layer $h$; this is the required index weight.
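The optimal classification plane and the 2/||w|| margin described above can be illustrated with a hard-margin linear SVM. The sketch below uses scikit-learn on invented, linearly separable toy points; it is not the classifier or the data from the paper.

```python
import numpy as np
from sklearn.svm import SVC

# Two linearly separable classes (conceptually like the "O" and square samples
# of Figure 4); the coordinates below are invented for illustration.
X = np.array([[1.0, 1.0], [1.5, 2.0], [2.0, 1.2],     # class -1
              [4.0, 4.0], [4.5, 3.5], [5.0, 4.5]])    # class +1
y = np.array([-1, -1, -1, 1, 1, 1])

# A hard-margin linear SVM (large C) looks for the separating plane
# w.x + b = 0 that maximizes the margin 2/||w||.
clf = SVC(kernel="linear", C=1e6).fit(X, y)

w = clf.coef_[0]
b = clf.intercept_[0]
margin = 2.0 / np.linalg.norm(w)
print("w =", w, "b =", b)
print("margin 2/||w|| =", margin)
# Support vectors are the samples closest to the plane, with |w.x + b| = 1.
print("support vectors:", clf.support_vectors_)
```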
Result Analysis and Discussion From the current research, the quantitative analysis of SC logistics capacity is mostly based on a specific field or industry. The comprehensive evaluation methods used in the research mainly include fuzzy AHP, AHP, and fuzzy comprehensive evaluation methods. Each evaluation method has its own advantages and disadvantages and scope of application. On the other hand, the focus of the grey system theory is to process some fuzzy and indistinct information and determine the nature of things through the feature changes between different levels. This method can be seen from the beginning, but the accompanying problem is low resolution. In this paper, considering the respective characteristics of fuzzy theory and grey theory, combining the two theories, developing their strengths and avoiding their weaknesses, and forming a grey fuzzy theory to comprehensively evaluate the logistics capability of SC can achieve a better evaluation effect. The application of grey theory in this chapter includes constructing the factor set of evaluation index system, using gambling method to determine the weight matrix of the factor set, using grey theory to determine the grey fuzzy evaluation weight matrix (i.e., single index evaluation result), and finally calculating the fuzzy comprehensive evaluation result. Regardless of the distance and cost from the source point to the delivery point, suppose a company has five user demand points, and the delivered goods are a product, and design a site selection scheme to determine from the five user demand points that the goods must be delivered to each demand point. According to the proposed five dimension balanced scorecard of supply chain, this paper has selected different key performance indicators as the performance evaluation indicator set of dynamic supply chain. On the premise of meeting the demand, ensure the fixed cost of establishing the distribution center at the selected location, and the total cost of transportation expenses flowing through the distribution center is the lowest, as shown in Table 3. The calculation example is solved by genetic algorithm. The variation of the optimal solution/mean value with the number of iterations is shown in Figure 5. It can be seen that when the number of iterations is 60, the total cost has reached the optimal value, with the minimum value of 1260 yuan, and the distribution is also the demand point 3 among the demand points. The business of SC capability evaluation and optimization system includes system users logging in to the system. After the identity information is verified, the logistics capability optimization module and the logistics capability evaluation module can be used to analyze the SC logistics capability. The analysis results can be obtained by using operational research methods, heuristic algorithms, and grey fuzzy theory, which can provide decision-making basis for system users, as shown in Figure 6. The logistics capability evaluation and optimization system takes the typical H-level supply chain as the research object. The structure of the logistics system includes suppliers, manufacturers, and distributors (including end customers). The inflow data includes the logistics capacity data sheet of suppliers, distributors, and retailers. This paper extracts the data of logistics capability, logistics cost, logistics processing capability, and logistics innovation capability of commonly used supply chain enterprises. 
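Once the index weights and the single-index (grey fuzzy) evaluation matrix are in hand, the comprehensive evaluation step summarized above reduces to a weighted aggregation B = W . R followed by a maximum-membership reading. The sketch below shows only that mechanics; the factor names, weights, and membership values are invented for illustration.

```python
import numpy as np

# Factor set U: four illustrative first-level indexes (invented for this sketch).
factors = ["logistics cost", "response capability", "processing capability", "innovation capability"]

# Index weights W (e.g., from the weighting step); they must sum to 1.
W = np.array([0.30, 0.30, 0.25, 0.15])

# Grey fuzzy single-index evaluation matrix R: each row gives the membership of
# one factor in the comment grades (excellent, good, medium, poor).
R = np.array([
    [0.2, 0.5, 0.2, 0.1],
    [0.4, 0.4, 0.1, 0.1],
    [0.3, 0.4, 0.2, 0.1],
    [0.1, 0.3, 0.4, 0.2],
])

# Fuzzy comprehensive evaluation result B = W . R (weighted-average operator),
# normalized so that the memberships sum to 1.
B = W @ R
B = B / B.sum()

grades = ["excellent", "good", "medium", "poor"]
print("evaluation vector B =", np.round(B, 3))
print("overall grade (maximum membership):", grades[int(np.argmax(B))])
```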
The data is preprocessed, and the logistics capability is optimized and evaluated by combining the model data form of optimization analysis and evaluation analysis. Table 3 gives the transport costs between the five demand points (1: 0, 65, 85, 100, 101; 2: 62, 81, 32, 125, 123; 3: 88, 95, 0, 100, 105; 4: 98, 125, 102, 0, 28; 5: 102, 130, 105, ...). Figure 7 shows the effect of the customer customization sensitivity coefficient on the output of the two models. It can be observed from the figure that the output of the two models increases with the increase of the customer customization sensitivity coefficient. When the customized sensitivity coefficient of the customer is less than Δ2, the output in the technician-dominated model is greater than that in the manufacturer-dominated model. When the customized sensitivity coefficient of the customer is greater than Δ2, the output in the manufacturer-dominated model is greater than that in the technician-dominated model. As can be seen in Figure 8, in the two models, the price of 3D image processing products decreases with the reduction of the cost-saving coefficient. The price of 3D image processing products in the manufacturer-led model is always not less than that in the technology provider-led model. When the cost-saving coefficient is larger, other conditions remaining the same, the production cost decreases, and if the manufacturer does not change the price of 3D image processing products, the income per unit of 3D image processing products increases. However, the enterprise pursues profit maximization, and in order to make more profits, it will choose to lower the product price and attract more customers. In Figure 8, it can be observed that no matter how the cost-saving coefficient changes, the profit obtained by the technologist in the technologist-led model is always not less than that in the manufacturer-led model. Therefore, manufacturers will choose to fight for or give up the dominant power according to the situation, while technology providers will always actively fight for the dominant power in order to make greater profits. After calculating the subjective weight, objective weight, and comprehensive weight of all indicators, this section calculates the comprehensive value of the performance evaluation of the logistics service SC, solves the evaluation values of all levels of indicators in turn, and analyzes the results in detail. Similar to the evaluation module, the logistics capability optimization module can add, modify, and delete logistics nodes, routes, and networks for the optimization algorithm and also pays attention to the forward-looking nature and expansibility of the evaluation and optimization system. By calling the genetic algorithm to optimize the distribution center location, the optimization results are obtained. The interface of the example calculation results is shown in Figure 9 (Figure 9: Distribution center location optimization results; the plot shows the all-in cost against the number of iterations for the change of the optimal solution and the change of the population mean). Through the above analysis, we can find that in the logistics service SC dominated by company C, we attach great importance to the improvement of our own business capability and internal link control. For example, we strive for excellence in service quality at any stage, spare no effort to expand market share, and increase investment and customer income, but there is still much room for improvement in customer relations.
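The distribution center siting example above is solved in the paper with a genetic algorithm; for a five-candidate instance, the same search space can simply be enumerated exhaustively, which the sketch below does. The fixed costs and the transport cost matrix are invented stand-ins for the values in Table 3, so the result is illustrative only.

```python
from itertools import combinations

# Invented stand-ins: 5 candidate distribution centers serving 5 demand points.
fixed_cost = [300, 280, 320, 260, 310]            # cost of opening each candidate DC
transport = [                                     # transport[i][j]: DC i -> demand point j
    [0, 65, 85, 100, 101],
    [62, 0, 32, 125, 123],
    [88, 95, 0, 100, 105],
    [98, 125, 102, 0, 28],
    [102, 130, 105, 28, 0],
]

def total_cost(open_sites):
    """Fixed cost of the opened DCs plus, for every demand point, the cheapest
    transport cost from any opened DC (every demand point must be served)."""
    fixed = sum(fixed_cost[i] for i in open_sites)
    serve = sum(min(transport[i][j] for i in open_sites) for j in range(5))
    return fixed + serve

# Exhaustive search over all non-empty subsets of candidate sites; a genetic
# algorithm would search the same space for larger instances.
best = min(
    (subset for r in range(1, 6) for subset in combinations(range(5), r)),
    key=total_cost,
)
print("best DC set:", best, "total cost:", total_cost(best))
```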
The logistics process cooperation of parts suppliers is based on the general SC cooperation of the industry. Among them, purchase, manufacturing, and delivery are important processes of the SC centered on the value chain. First, the collaborative process of SC logistics has been changed. The supplier handles the purchase, preparation, manufacturing, and real-time delivery according to the assembly of the manufacturer provided by the SC collaborative logistics cloud platform. Second, the supplier connects with the supplier above through the collaborative logistics system, eliminating the persistent planning originally sent to the superior supplier. The supplier manages the inventory in the corresponding parts manufacturing. Finally, professional intermediate logistics companies will realize more and more business services and create appropriate supplier control and evaluation models. On the whole, the SC management is attributed to the strategic management of enterprises, so in the SC management, the problem itself should be analyzed based on the strategic development of enterprises. The development of SC management covers the ideas of enterprise management in content and specifically includes the contents of enterprise culture shaping, organizational strategic management, technology development and utilization, performance management, and other fields under the guidance of business ideas. Therefore, the integration of supplier management in company A under the green SC and the introduction of its information support system, technology development, and performance management must conform to the company's future management strategy. It can be seen that information management is one of the very important contents of SC management, and the foundation of information management mainly lies in the construction of information platform. Therefore, under the green SC, company A should focus on the comprehensive sharing of SC information. The SC logistics capability evaluation and optimization system is constructed from the aspects of system analysis, system function module design, and system architecture design. The overall objective, demand analysis, feasibility analysis, business process, and data process analysis of the system construction are described, respectively, and the overall design and subsystem design of the system are carried out. Relevant enterprises in the logistics SC need to devote special energy to collecting and sorting out the feedback information in these channels and study the mathematical relationship between the number of customers and customer feedback, dig out the real needs and ideas of customers, formulate targeted solutions and respond to them in time, and at the same time examine and improve their own service mechanisms and processes. Compare the correlation between the growth of economic indicators such as corporate profits and market share and this investment, discuss the weak links reflected and formulate corresponding solutions, and output a comprehensive summary with guiding significance to point out the direction for future investment. Conclusions With the rapid development of e-commerce and logistics industry, the competition between enterprises has evolved into the competition between SC. The SC logistics capability is one of the main bottlenecks to improve the performance of the SC, which fundamentally determines the logistics performance of the whole logistics activity process in the SC and its impact on the overall competition of the SC. 
Creating value for customers is an important condition for enterprises to survive and develop. Therefore, this paper proposes to study the performance evaluation of the logistics service SC from the perspective of customer value, which has certain theoretical and practical significance for promoting the service level of the logistics service SC and improving its overall performance level. A performance evaluation method of the logistics service SC based on the customer perspective is proposed. Taking the collaborative logistics system as the core, it organically combines the actual needs of users, the product information of suppliers, and the platform operators. Users' needs can be quickly responded to by suppliers, thus providing a collaborative working environment for subsequent product distribution and realizing SC integration. The combination of 3D images and software needs further research, development, and implementation. However, there is a lack of research on the results of empirical analysis, and it is necessary to propose targeted countermeasures and suggestions to improve the overall performance of the logistics service supply chain. Further analysis and supplementation are needed in the future. Data Availability The data used to support the findings of this study are available from the corresponding author upon request.
9,231.6
2022-10-12T00:00:00.000
[ "Engineering", "Business", "Computer Science" ]
Antibacterial Activity of Glutathione-Stabilized Silver Nanoparticles Against Campylobacter Multidrug-Resistant Strains Campylobacter is the leading cause of bacterial diarrheal disease worldwide. Although most episodes of campylobacteriosis are self-limiting, antibiotic treatment is usually needed in patients with serious enteritis, especially in children or the elderly. In recent years, antibiotic resistance in Campylobacter has become a major public health concern and a great interest exists in developing new antimicrobial strategies for reducing the impact of this food-borne pathogen on human health. Among them, the use of silver nanoparticles as antibacterial agents has taken on increased importance in the field of medicine. The aim of the present study was to evaluate the antimicrobial effectiveness of glutathione-stabilized silver nanoparticles (GSH-Ag NPs) against multidrug resistant (MDR) Campylobacter strains isolated from the chicken food chain (FC) and clinical patients (C). The results obtained showed that GSH-Ag NPs were highly effective against all MDR Campylobacter strains tested. The minimal inhibitory concentration (MIC) and minimal bactericidal concentration (MBC) were in a range from 4.92 to 39.4 μg/mL and 9.85 to 39.4 μg/mL, respectively. Cytotoxicity assays were also performed using human intestinal HT-29, Caco-2, and CCD-18 epithelial cells. Exposure of GSH-Ag NPs to intestinal cells showed a dose-dependent cytotoxic effect in all cell lines between 9.85 and 39.4 μg/mL. More than 60% of the tested Campylobacter strains were susceptible to GSH-Ag NPs concentrations ≤ 9.85 μg/mL, suggesting that practical inhibitory levels could be reached at low GSH-Ag NPs concentrations. Further work is needed to evaluate the practical implications of the toxicity studies and to learn more about other attributes linked to biological compatibility. This behavior makes GSH-Ag NPs a promising tool for the design of novel antibacterial agents for controlling Campylobacter. INTRODUCTION Campylobacter is the leading cause of bacterial food-borne gastroenteritis worldwide and more than 95% of the infections attributed to this genus are associated with the species Campylobacter jejuni (C. jejuni) and Campylobacter coli (C. coli) (Ganan et al., 2012). Campylobacteriosis has been the most frequently reported cause of human food-borne zoonoses in the EU since 2004 (European Food Safety Authority European Centre for Disease Prevention Control, 2016). Patients may experience mild to severe illness, and symptoms can include gastrointestinal manifestations such as diarrhea, abdominal cramps, nausea, and fever. The severity of symptoms during the disease mainly depends on the infective strain and on the medical condition of the patient (Blaser and Engberg, 2008). Bacteraemia and other extra intestinal complications may develop less frequently. In a reduced percentage of cases, potentially severe long-term complications may occur, such as Guillain-Barré syndrome, Reiter's syndrome or reactive arthritis (Kaakoush, 2015;O'Brien, 2017). Although most cases of campylobacteriosis are self-limiting, antibiotic therapy is generally used in cases with severe or long-lasting enteritis, especially in children and the elderly, immunocompromised patients, and in cases of extra intestinal manifestations.
However, antimicrobial resistance in bacteria from food of animal origin, including Campylobacter, has in recent years become a serious public health concern in both developed and developing nations. A rising number of Campylobacter isolates have become resistant to different antibiotic families such as fluoroquinolones, aminoglycosides, macrolides, and beta-lactams among others (Wieczorek and Osek, 2013;Bolinger and Kathariou, 2017; European Food Safety Authority European Centre for Disease Prevention Control, 2017). The increase in the incidence of infections caused by multidrug resistant (MDR) strains [lower susceptibility to at least three antibiotic families according to epidemiological cutoff values (ECOFFs)] of Campylobacter makes the treatment of this disease increasingly complicated (European Food Safety Authority European Centre for Disease Prevention Control, 2017). For these reasons, it is necessary to find new alternatives to the use of antibiotics in the control of Campylobacter. Improvement of conventional antimicrobials by new technologies to transcend antimicrobial resistance is under development. Nanotechnology-driven innovations offer new perspectives for both patients and professionals to tackle drug resistance. Previous works have shown that antimicrobial formulations in nanoparticle format could be employed as effective bactericidal materials due to their enhanced reactivity, resulting from their high surface/volume ratio (Choi et al., 2008;Rudramurthy et al., 2016). Particularly, silver nanoparticles are reported to exhibit strong biocidal properties on different bacterial species (Quelemes et al., 2013;Losasso et al., 2014), including MDR bacteria (Lara et al., 2010). In recent years, the use of silver nanoparticles as antibacterial agents has become more important in the medical field (Marambio-Jones and Hoek, 2010), due to the need to provide alternatives to the resistance that many pathogenic microorganisms exhibit to the most widely used antibiotics. Silver nanoparticles can be coated to facilitate their interaction with the environment. In this sense, a coating of glutathione (GSH) increases the solubility and the ability of silver nanoparticles to interact with the environment. However, there are no previous studies about the impact of silver nanoparticles on Campylobacter, in spite of its importance as a food-borne pathogen. On the other hand, cytotoxicity of nanoparticles is a major concern in the use and development of nanotechnology. Toxicity of nanoparticles to eukaryotic cells is associated with the higher reactivity of these particles due to their large surface area (Fröhlich and Fröhlich, 2016). In general, in vitro studies suggest that silver nanoparticles may induce cellular death, increased reactive oxygen species (ROS) production, oxidative stress, and DNA damage (Kim and Ryu, 2013). Nevertheless, the scientific evidence on potential adverse effects of nanoparticles severely lags behind the advances of nanotechnologies. For these reasons, the main objective of the present work was to evaluate the in vitro antimicrobial activity of GSH-Ag NPs against several strains of Campylobacter, also studying the effects of these nanoparticles on different human intestinal cell lines. Synthesis of Glutathione-Stabilized Silver Nanoparticles (GSH-Ag NPs) GSH-Ag NPs used in this work were synthesized following a classical approach as described in García-Ruiz et al. (2015). Briefly, AgBF4 (0.096 mmol, 18.7 mg) was dissolved in 50 mL of water.
This silver salt was reduced using NaBH4 (0.026 mmol, 0.001 g) with vigorous stirring at room temperature. After 30 min of stirring, 2 mL of a 10 −2 M glutathione solution in water was added dropwise to the silver colloidal solution. The formed glutathione-stabilized silver nanoparticles solution was kept in the dark. The obtained nanoparticles showed a heterogeneous range of diameters between 10 and 50 nm, and a final concentration of silver of 0.197 mg/mL. The UV-Vis spectrum of water solutions of GSH-Ag NPs displayed an intense and sharp localized surface plasmon resonance (LSPR) band at 429 nm (Figure 1). Both characteristics of the absorption band associated with surface plasmon resonance are due to the GSH complex in accordance with the size and dispersion of sizes obtained with the Bacterial Strains, Growth Media, and Culture Conditions The Burgos University kindly donated the different Campylobacter strains used in this work. These strains were isolated from different sections of the chicken meat food chain and from cases diagnosed with campylobacteriosis at Burgos University Hospital. C. jejuni 11168 obtained from National Collection of Type Cultures (NCTC) (London, UK) was used as reference strain. The isolation source, species, strain designation and isolation place of strains used in this study are shown in Table 1. All isolates were performed in the province of Burgos (Spain) between the years 2011 and 2014, and stored at −80 • C in Brucella Broth (BB) (Becton-Dickinson, NJ, USA) plus 20% of glycerol until were used. The agar medium consisted of Müeller-Hinton agar supplemented with 5% defibrinated sheep blood (MHB) (Becton-Dickinson). Liquid growth medium for Campylobacter strains consisted of BB. The frozen strains were propagated by inoculation in MHB and incubation under microaerophilic conditions (85% N 2 , 10% CO 2 , and 5% O 2 ) using a Variable Atmosphere Incubator (VAIN) (MACS-VA500, Don Whitley Scientific, Shipley, UK) at 40 • C for 48 h. Isolated colonies were inoculated into 50 mL of BB and incubated under stirring at 150 rpm on an orbital shaker at 40 • C for 24 h in microaerophilic conditions in the VAIN. These bacterial inoculum cultures (∼1 × 10 8 colony forming units (CFU)/mL) were used for the different experimental assays. BSL2 facilities of CIAL were used for the development of the proposed work. Antibiotic Susceptibility Test The antibiotic susceptibility was assessed following the Kirby-Bauer disc diffusion method based on the performance standards for antimicrobial disk susceptibility test described by Clinical and Laboratory Standards Institute (Clinical Laboratory Standards Institute, 2012). Antimicrobial discs (Oxoid, Basingstoke, UK) were placed on the inoculated MHB plates and they were incubated in the VAIN for 48 h. Nine antibiotics from the most frequently used against Campylobacter, representing five different families, were used: macrolides (erythromycin, 15 µg), quinolones (nalidixic acid 30 µg) and fluoroquinolones (norfloxacin 10 µg; ciprofloxacin 5 µg), tetracyclines (tetracycline; 30 µg), aminoglycosides (streptomycin 25 µg; gentamicin 10 µg), and β-lactam antibiotics (ampicillin 10 µg; amoxicillin-clavulanic acid 30 µg). The control strain used was C. jejuni NCTC 11351. Media, incubation times and temperature used for campylobacters were the same described above. Breakpoints used were chosen based on the antibiotic tested. 
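The susceptibility categorization just described amounts to comparing each inhibition-zone diameter with a per-antibiotic breakpoint and then counting the resistant antimicrobial classes per isolate. The Python sketch below illustrates that logic only; the breakpoint values, zone diameters, and isolate names are invented and are not the EUCAST, CA-SFM, or CLSI values actually applied in the study.

```python
# Illustrative resistance breakpoints (mm of inhibition zone): a strain is called
# resistant here when its zone is smaller than the breakpoint. Values are invented,
# not the EUCAST/CA-SFM/CLSI breakpoints used in the study.
breakpoints = {"erythromycin": 20, "ciprofloxacin": 26, "tetracycline": 30,
               "ampicillin": 14, "gentamicin": 17}
antibiotic_class = {"erythromycin": "macrolides", "ciprofloxacin": "fluoroquinolones",
                    "tetracycline": "tetracyclines", "ampicillin": "beta-lactams",
                    "gentamicin": "aminoglycosides"}

# Hypothetical zone diameters (mm) for two hypothetical isolates.
zones = {
    "FC1": {"erythromycin": 28, "ciprofloxacin": 6, "tetracycline": 6,
            "ampicillin": 10, "gentamicin": 22},
    "C5":  {"erythromycin": 6, "ciprofloxacin": 6, "tetracycline": 8,
            "ampicillin": 12, "gentamicin": 10},
}

for isolate, diam in zones.items():
    resistant = [ab for ab, d in diam.items() if d < breakpoints[ab]]
    classes = {antibiotic_class[ab] for ab in resistant}
    mdr = len(classes) >= 3   # MDR: non-susceptible to >= 3 antimicrobial classes
    print(isolate, "resistant to:", resistant, "| classes:", len(classes), "| MDR:", mdr)
```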
Interpretation of the results for ciprofloxacin, erythromycin, and tetracycline was performed using the resistance breakpoint for campylobacters according to The European Committee on Antimicrobial Susceptibility Testing (2018). Breakpoint for amoxicillin-clavulanic acid was evaluated in accordance with interpretive criteria provided by the Comite' de l'antibiogramme de la Societe' Francaise de Microbiologie (2017). Breakpoints used for nalidixic acid, norfloxacin, gentamicin and ampicillin were those reported by Luangtongkum et al. (2007). In the case of streptomycin, no breakpoints were available for campylobacters, and susceptibility categorization was carried out using the breakpoints established by CLSI for the family Enterobacteriaceae, as reported by others (Giacomelli et al., 2014). Antibacterial Activity of GSH-Ag NPs Against Campylobacter Strains The antibacterial activity of GSH-Ag NPs against C. jejuni 11168 was evaluated following the procedure described by Silvan et al. (2013). Briefly, 1 mL GSH-Ag NPs (0.61-39.4 µg/mL final concentrations) was transferred into different flasks containing 4 mL of BB. Bacterial inoculum (50 µL of ∼1 × 10 8 CFU/mL) was inoculated into the flasks under aseptic conditions. Culture was prepared in triplicate and incubated microaerobically under stirring at 150 rpm at 40 • C for 24 h in the VAIN. Positive growth controls (bacteria without nanoparticles) were prepared by transferring 1 mL of sterile water to 4 mL of BB and 50 µL of bacterial inoculum. After incubation, serial decimal dilutions of mixtures were prepared in saline solution (0.9% NaCl) and they were plated (20 µL) onto fresh MHB agar and incubated microaerobically at 40 • C in the VAIN. The CFU was assessed after 48 h of incubation. Results were expressed as log CFU/mL. A micromethod was used for the assay with the different Campylobacter strains. With this purpose, the minimal inhibitory concentration (MIC) and the minimal bactericidal concentration (MBC) were determined as follow: BB (240 µL), GSH-Ag NPs (60 µL) at different concentrations and bacterial inoculum (3 µL of ∼1 × 10 8 CFU/mL) were dispensed into sterile 96well flat-bottom microplate. A control growth (bacteria without nanoparticles) for each strain was also prepared. Microplate was incubated microaerobically under stirring at 150 rpm at 40 • C for 24 h in the VAIN. In order to determine the MIC and MBC, 5 µL of culture from each well were plated onto MHB and incubated microaerobically at 40 • C for 48 h in the VAIN. MIC was defined as the lowest amount of GSH-Ag NPs that provokes a decrease in viability respect to the control growth (visual reduction of growth) after 24 h of treatment (calculated in approximately 4 log of inhibition). MBC was defined as the lowest bactericidal concentration of GSH-Ag NPs after 24 h of treatment. Results were expressed as µg/mL. Cytotoxicity of GSH-Ag NPs The cell viability was determined by the MTT reduction assay in colon tumoral cell lines (HT-29 and Caco-2) and colon regular cell line (CCD-18) obtained from American Type Culture Collection (ATCC) (Manassas, VA, USA). Cells were cultured in DMEM supplemented with 10% FBS and 1% penicillin/streptomycin. Cells were plated at densities 1 × 10 5 cells in 75 cm 2 tissue culture flasks and maintained at 37 • C under 5% CO 2 in a humidifier atmosphere. The culture medium was changed every two days. 
Confluent stock cultures were trypsinized (Trypsin/EDTA) and cells were seeded in 96-well plates (∼5 × 10 4 cells per well) and incubated in culture medium at 37 • C under 5% CO 2 in a humidifier incubator for 24 h. Briefly, cell medium was replaced with serum-free medium containing different concentrations of GSH-Ag NPs (0.61-39.4 µg/mL final concentrations) and the cells were incubated at 37 • C under 5% CO 2 for 24 h. Control cells were incubated in serum-free medium without GSH-Ag NPs addition. The cells were then washed twice with PBS and added 200 µL of serumfree medium. Thereafter, 20 µL of a MTT solution in PBS (5 mg/mL) was added to each well for the quantification of the living metabolically active cells after 1 h of incubation. MTT is reduced to purple formazan in the mitochondria of living cells. Culture medium was removed and formazan crystals formed in the wells were solubilized in 200 µL of DMSO. Absorbance was measured at 570 nm wavelength employing a microplate reader Synergy HT (BioTek Instruments, Winooski, VT, USA). The viability was calculated considering control cells incubated with serum-free medium as 100% viable. Data represent the mean and standard deviation of three independent experiments (n = 3). All experiments were carried out between passages 10-30 to ensure cell uniformity and reproducibility. Statistical Analysis The results were reported as means ± standard deviations (SD) performed in triplicate. The data were subjected to statistical analysis by one-way analysis of variance (ANOVA) followed by Dunnett's method for multiple comparisons. Differences were considered significant at p < 0.05. All statistical tests were performed with IBM SPSS Statistics for Windows, Version 21.0 (IBM Corp., Armonk, NY, USA). RESULTS AND DISCUSSION Antibacterial Activity of GSH-Ag NPs Against C. jejuni 11168 The results of the antibacterial properties of GSH-Ag NPs against C. jejuni 11168 are showed in the Figure 2. The nanoparticles, in a final concentration range of 9.85-39.4 µg/mL, were bactericidal after 24 h of incubation. Small concentrations of GSH-Ag NPs (1.23 and 4.92 µg/mL) significantly (p < 0.05) inhibited the growth of C. jejuni 11168 strain. These results demonstrate the strong capacity of the GSH-Ag NPs with a range of 10-50 nm of particle size to inhibit Campylobacter growth. Previous studies have described a greater susceptibility of some foodborne pathogens such as Escherichia coli, Listeria monocytogenes, Pseudomonas aeruginosa, or Salmonella to silver nanoparticles (Crespo et al., 2012;Taglietti et al., 2012;Tamboli and Lee, 2013). Ag + released from nanoparticles reacts with sulfur-containing proteins, mainly on the cell surface, and phosphorous-containing nucleic acids. They are known to produce ROS inside the cell, eventually leading to cell death (Rudramurthy et al., 2016). It is well known that the differences in the material employed in the synthesis of the nanoparticles can play an important role in their antimicrobial activity. The biological application for silver nanoparticles requires an appropriate coating of nanoparticle surface, because it could favor interactions with biosystems and enhance the solubility in water-based environments. 
GSH has proved to be a good candidate for this purpose since this biomolecule displays a thiolic function, capable of being anchored to silver surfaces, and the presence of functional groups (carboxylates and amine) that promote water solubility and interactions toward more complex biostructures (Taglietti et al., 2012). These GSH-Ag NPs have proved to be more effective for Gram negative bacteria, possibly because their cell wall contains a thinner peptidoglycan layer than Gram positive bacteria (Taglietti et al., 2012;García-Ruiz et al., 2015). Also, it is well known that the particle size and distribution can play an important role in their antimicrobial activity. It has been described that nanoparticles with a smaller size tend to be more effective as antimicrobials (Gogoi et al., 2006). However, in this work the antibacterial effect of GSH-Ag NPs against C. jejuni 11168 was higher than that reported in previous studies against other bacteria using smaller sized silver nanoparticles (Guzman et al., 2012;Losasso et al., 2014). FIGURE 2 | Antibacterial activity of glutathione-stabilized silver nanoparticles (GSH-Ag NPs) against C. jejuni 11168. Results represent the mean ± SD of Log10 CFU/mL (n = 3). Bars marked with asterisk indicate significant differences (p < 0.05) compared to the control growth (sample without nanoparticles) by one-way analysis of variance (ANOVA), followed by Dunnett's method for multiple comparisons. Antibiotic Susceptibility Test of Campylobacter Strains The results of the antimicrobial resistance of Campylobacter strains are presented in Table 2. All isolates were susceptible to erythromycin, except the clinical strains C5 and C19 (C. coli). Erythromycin is the first therapeutic option for the treatment of severe Campylobacter infections, thus the prevalence of resistance to this antimicrobial drug should be a cause for particular concern. Studies on the susceptibility of Campylobacter strains to macrolides, such as erythromycin, have shown that the percentage of resistant strains is currently at a low level (Wieczorek and Osek, 2015;Bolinger and Kathariou, 2017). However, recent reports have documented the emergence of some Campylobacter strains showing erythromycin resistance (Florez-Cuadrado et al., 2016;Bolinger and Kathariou, 2017). In some European countries, up to a third to half of C. coli isolated from humans were resistant to erythromycin (European Food Safety Authority European Centre for Disease Prevention Control, 2017). In this work, two clinical C. coli isolates were resistant to erythromycin, suggesting that C. coli may represent an underestimated potential health risk for consumers. Most of the strains (97.5%) were resistant to nalidixic acid, norfloxacin, ciprofloxacin and tetracycline (Table 2). It has been previously defined that resistance to these antimicrobials is predominant among strains of Campylobacter from poultry meat, finding also in clinical cases very high proportions of strains resistant to ciprofloxacin and tetracyclines (European Food Safety Authority European Centre for Disease Prevention Control, 2017). Considered in the past as one of the most effective antibiotic families against Campylobacter, nowadays the level of acquired resistance to fluoroquinolones precludes the use of these antimicrobial agents for routine empirical treatment of human campylobacteriosis (European Food Safety Authority European Centre for Disease Prevention Control, 2017).
As expected, a high level of resistance was found for ampicillin (67.5%), and a somewhat more moderate resistance level was found for amoxicillin/clavulanic acid (32.5%). However, the resistance levels found for streptomycin (35%) were relatively high compared to those commonly reported (European Food Safety Authority European Centre for Disease Prevention Control, 2017), and all resistant strains, both from the chicken meat food chain and hospital isolates, were C. coli. This may be due to the clonal expansion of resistant populations, and is in agreement with growing concern about the emergence of C. coli strains with high rates of antibiotic resistance (European Food Safety Authority European Centre for Disease Prevention Control, 2017). Only one strain (C. coli C5) was resistant to gentamicin, which is consistent with the low resistance levels described for this antibiotic. All Campylobacter isolates tested showed resistance to three or more of the antimicrobial families used in the study. These strains can be considered multidrug resistant (MDR), defined as strains with resistance or non-susceptibility to at least three different antimicrobial classes (Magiorakos et al., 2012). The antimicrobial effect of GSH-Ag NPs was evaluated in all of these strains, most of which possessed multidrug resistance. Antibacterial Activity of GSH-Ag NPs Against Food Chain and Clinical Campylobacter Strains Table 3 shows the antibacterial activity of GSH-Ag NPs against Campylobacter MDR strains of different origins and species. The antibacterial effect of GSH-Ag NPs had a strain-dependent character. GSH-Ag NPs were bactericidal for most of the strains (87.5%) in an MBC range from 19.7 to 39.4 µg/mL. Food chain isolates showed a higher susceptibility to GSH-Ag NPs (60% of isolates with MBC between 9.85 and 19.7 µg/mL) than clinical isolates (100% of isolates with MBC between 19.7 and 39.4 µg/mL), suggesting that clinical isolates may be better adapted to counteract the GSH-Ag NP effect. We have observed in these clinical strains a higher resistance to hydrogen peroxide and oxidative stress than in food chain isolates (unpublished data). Although the antibacterial mechanisms of nanoparticles are still unclear, at least four fundamental pathways in the mechanism of action of silver nanoparticles have been considered: they can adhere to the microbial cell surface, resulting in membrane damage and changes in transport activity; they can penetrate inside the cell, affecting the cellular machinery; they can modulate the cellular signalling system, causing cell death; and, finally, they can increase reactive oxygen species (ROS) inside the microbial cells, leading to cell damage (Dakal et al., 2016). This last point is consistent with the results obtained in the present work and with many studies that attribute the antibacterial activity of silver nanoparticles to oxidative stress or ROS, including hydrogen peroxide (Wang et al., 2017). No differences were found among species, noting that C. jejuni and C. coli strains showed a similar MBC range (from 9.85 to 39.4 µg/mL), and the slight variation observed was mainly due to the strain tested. The MICs showed a similar behavior to the MBCs and were also strain-dependent.
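To make the MDR criterion above concrete, the following is a minimal sketch of how isolates can be flagged as multidrug resistant when their resistance profile spans at least three antimicrobial classes; the antibiotic-to-class mapping is abridged and the example profile is hypothetical, not one of the strains tested in this work.

```python
# Minimal sketch of the MDR rule described above (Magiorakos et al., 2012):
# an isolate is multidrug resistant if it is resistant or non-susceptible to
# agents belonging to at least three antimicrobial classes.
# The class mapping and resistance profile below are illustrative only.
ANTIBIOTIC_CLASS = {
    "erythromycin": "macrolides",
    "ciprofloxacin": "fluoroquinolones",
    "nalidixic acid": "quinolones",
    "tetracycline": "tetracyclines",
    "ampicillin": "penicillins",
    "gentamicin": "aminoglycosides",
    "streptomycin": "aminoglycosides",
}

def is_mdr(resistant_antibiotics):
    """Return True if the resistant agents span three or more classes."""
    classes = {ANTIBIOTIC_CLASS[a] for a in resistant_antibiotics}
    return len(classes) >= 3

# Hypothetical profile resembling a poultry meat isolate
profile = ["nalidixic acid", "ciprofloxacin", "tetracycline", "ampicillin"]
print(is_mdr(profile))  # True: four distinct classes in the illustrative mapping
```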
Most of the strains (82.5%) had MICs in the range from 9.85 to 19.7 µg/mL, with a MIC50 (MIC at which 50% of the isolates are inhibited with respect to the control growth) of 19.7 µg/mL and a MIC90 (MIC at which 90% of the isolates are inhibited with respect to the control growth) of 39.4 µg/mL. As with the MBC values, the clinical isolates showed higher MICs (80% of isolates with a MIC range of 19.7-39.4 µg/mL) than most of the food chain isolates (80% of isolates with a MIC range of 9.85-19.7 µg/mL). The current search for new and effective bactericidal compounds is a significant goal with the purpose of fighting MDR strains, and nanoparticles have been established to date as a promising approach to deal with this problem. Silver nanoparticles have been shown to be effective against other MDR bacteria, such as P. aeruginosa, methicillin-resistant S. aureus (MRSA), A. baumannii, and K. pneumoniae (Leid et al., 2012;Kasithevar et al., 2017). Although both chromosomal and plasmid-mediated silver resistance are known in bacteria, the fact that silver nanoparticles likely possess several bactericidal mechanisms acting in parallel may explain why bacterial resistance to silver nanoparticles is rare (Natan and Banin, 2017), making their use a promising alternative for coping with MDR strains. This is especially interesting for campylobacteriosis treatment, taking into account that the antibiotics currently used have become less effective in recent years. Effect of GSH-Ag NPs on the Viability of Human Intestinal Cells In vitro experiments using human intestinal epithelial cells facilitate initial investigations into the toxicity of exposures, and the results inform in vivo experiments. This is especially important in the case of nanoparticle ingestion, which is a poorly understood route of exposure. In this work, the human intestinal cell lines HT-29, Caco-2, and CCD-18 were used, and seven different concentrations of the GSH-Ag NPs (from 0.61 to 39.4 µg/mL) were assayed. Similar viability was observed in all cell lines for the different GSH-Ag NP concentrations tested. Exposure of the epithelial cells to GSH-Ag NPs showed a dose-dependent cytotoxic effect (Figure 3). FIGURE 3 | Cytotoxic effects of glutathione-stabilized silver nanoparticles (GSH-Ag NPs) on HT-29, Caco-2, and CCD-18 human intestinal cells. Cells were treated for 24 h, and cell viability was assessed by MTT assay. The results are expressed as percentage of control (cells without nanoparticles) and are represented by mean ± SD (n = 3). Bars marked with an asterisk indicate significant differences (p < 0.05) compared to the control group by one-way analysis of variance (ANOVA), followed by Dunnett's method for multiple comparisons. GSH-Ag NP concentrations up to 4.93 µg/mL showed no significant toxicity (p > 0.05). However, cell viability impairment was observed at GSH-Ag NP concentrations greater than 9.85 µg/mL, with the intestinal epithelial cells reaching a death rate ≥ 30% (Figure 3). This behavior is in accordance with most reports of silver nanoparticle toxicity, which typically falls in the range of 10-100 µg/mL (Chernousova and Epple, 2013). Cytotoxicity is one of the major concerns in the development of silver nanoparticles, sometimes with controversial results, because many studies use a wide range of nanoparticle concentrations and exposure times, making it extremely difficult to determine whether the extent of cytotoxicity observed is physiologically significant (Doudi et al., 2013;Rudramurthy et al., 2016).
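The following is a minimal sketch of how the MIC50 and MIC90 summary statistics defined above can be computed from a panel of isolates and compared against the highest concentration without significant cytotoxicity; the per-isolate MIC values listed are hypothetical placeholders drawn from the tested dilution series, not the measured data of this study.

```python
# Sketch of the MIC50/MIC90 summary and of the comparison with the highest
# non-cytotoxic concentration. MIC values below are hypothetical; the tested
# concentration series (0.61-39.4 µg/mL) is taken from the Methods.
import math

mics = [9.85, 9.85, 9.85, 19.7, 19.7, 19.7, 19.7, 19.7, 39.4, 39.4]  # µg/mL, one per isolate

def mic_percentile(values, fraction):
    """Lowest tested concentration inhibiting at least `fraction` of isolates."""
    ordered = sorted(values)
    index = math.ceil(fraction * len(ordered)) - 1
    return ordered[index]

mic50 = mic_percentile(mics, 0.50)
mic90 = mic_percentile(mics, 0.90)
no_effect_cytotox = 4.93  # µg/mL, highest concentration without significant toxicity

print(f"MIC50 = {mic50} µg/mL, MIC90 = {mic90} µg/mL")
if mic50 > no_effect_cytotox:
    print("Effective antibacterial concentrations exceed the non-cytotoxic range.")
```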
However, biocompatible and non-toxic silver nanoparticles, suitable for biological applications, have also been reported (Gautam and van Veggel, 2013;Rudramurthy et al., 2016). In the case of human intestinal cells, it has been described that silver nanoparticles can be cytotoxic at concentrations of 10 to 50 µg/mL (Chernousova and Epple, 2013;Vazquez-Muñoz et al., 2017). However, at concentrations lower than 10 µg/mL, silver nanoparticles have been reported to be non-toxic to human cells (Chernousova and Epple, 2013). In the present work, the MIC50 value obtained was 19.7 µg/mL; this indicates that the mean effective concentration of GSH-Ag NPs against Campylobacter lies in a range that is toxic for the three epithelial cell lines studied. This demonstrates the need for further toxicity studies to assess the practical implications of the results obtained and to evaluate other attributes linked to biological compatibility. CONCLUSIONS In conclusion, this study suggests that GSH-Ag NPs could have potential applications as an antimicrobial against Campylobacter. The nanoparticles showed antimicrobial activity against MDR strains, but at concentrations very close to or above the toxicity levels determined in this work for human intestinal epithelial cell lines. Although it is clear that further toxicity studies are needed, the emerging practice of combining silver nanoparticles with other compounds is especially promising, because it would make it possible to use lower concentrations of nanoparticles. In the particular case of Campylobacter, silver nanoparticles could help to enhance the antimicrobial strength of antibiotics or other natural bioactive compounds, contributing to a reduction in therapeutic doses and therefore in the putative toxicity. Furthermore, in addition to therapeutic alternatives, these GSH-Ag NPs would be potentially applicable at different points of the food chain where Campylobacter is present, for example in the production and processing of poultry meat, or as an alternative to disinfectants for the control of Campylobacter biofilms. AUTHOR CONTRIBUTIONS JS: conception, design, acquisition, analysis, and interpretation of data for the work; editing of the manuscript and preparation of the tables and figures. IZ-P: analysis and interpretation of data for the work. DG: analysis and interpretation of data for the work. MM-A: conception, design, and interpretation of data for the work. AM-R: conception, design, analysis, and interpretation of data for the work; editing of the manuscript and preparation of the tables and figures. FUNDING This work was funded through Projects AGL2013-47694-R and AGL2015-64522-C2-R from the Consejo Superior de Investigaciones Científicas (Spain).
6,214.6
2018-03-16T00:00:00.000
[ "Medicine", "Biology" ]
Emerging Requirements for Technology Management : A Sector-based Scenario Planning Approach Identifying the emerging requirements for technology management will help organisations to prepare for the future and remain competitive. Indeed technology management as a discipline needs to develop and respond to societal and industrial needs as well as the corresponding technology challenges. Therefore, following a review of technology forecasting methodologies, a sector-based scenario planning approach has been used to derive the emerging requirements for technology management. This structured framework provided an analytical lens to focus on the requirements for managing technology in the healthcare, energy and higher education sectors over the next 5-10 years. These requirements include the need for new business models to support the adoption of technologies; integration of new technologies with existing delivery channels; management of technology options including R&D project management; technology standards, validation and interoperability; and decision-making tools to support technology investment. stringent national and international legislation to reduce carbon emissions have the potential to result in higher liabilities for companies involved in the fossil fuels exploration and production business and consequently this can drive the need for technology solutions that reduce such emissions. In addition to contributing to new product development, technology forecasting can provide companies with the evidence to make strategic decisions in regard to the development of technical capabilities and this includes the prioritisation of internal R&D activities as well as external investments with research suppliers.Other organisations, such as governmental agencies, research foundations and charitable bodies may use technology forecasting to support research investments in areas that have societal benefits, for example, research on developing improved algorithms for cyber-security applications. 
Literature review on technology forecasting In order to identify the emerging requirements for technology management that lie ahead and principally over the next 5-10 years, it is useful to first consider the approaches that are currently used to forecast technologies themselves.This is because future technology management issues are likely to be related to the corresponding technologies under development in the future.In terms of the current practice for forecasting technologies, there are a number of different processes that are in use.Requirements capture (Cooper et al., 1998) and subsequent analysis can be undertaken for any given field of interest, for example, the capture of the industrial requirements for new or improved structural aerospace materials, such as metal-matrix composites.Requirements engineering through a scenario-based software tool has been previously reported and this method is based on the initial acquisition and modelling of a use case (Sutcliffe et al., 1998).Comparison of the use case with a collection of abstract models corresponding to different application classes is carried out, where the models are associated with different sets of generic requirements.Consequently, through focusing on the class(es) associated with a particular use case, the generic requirements can therefore be reused.There are a range of other structured methodologies also available to support technology forecasting and this includes using the Soviet-originated theory of inventive problem solving (TRIZ) methodology (Mann, 2003) based on assessing technology and business evolution trends. Introduction Technology is a valuable resource for many organisations both in the industrial and non-for-profit sectors.Companies need to have access to leading technologies in order to remain competitive and especially in knowledge-intensive industries such as the pharmaceutical, oil & gas, aerospace and industrial engineering sectors.The adoption and implementation of technologies, whether through in-house development by the company or via acquisition from an external source, is not just related to the characteristics of the technology itself but can be equally associated with the effectiveness of the corresponding technology management practices used within the company.Similarly, in other sectors such as healthcare and higher education, the performance of both companies and not-for-profit organisations will be impacted by the availability of technology.Technology management provides the structures, processes and tools to allow this technological resource to be deployed in order for an organisation's strategic objectives to be delivered. In the oil and gas sector, new technologies are allowing previously inaccessible reserves of crude oil to be extracted and areas such as microbial enhanced oil recovery are attracting significant levels of investment.Conversely, in the pharmaceutical industry there is a recognised pattern of major investment in the drug discovery and development process in areas such as oncology research and development, which is required in order to ensure 'big pharma' have sustainable portfolios of drug compounds to bring to market.In the aerospace industry, there is investment in new materials technologies, such as high performance composite materials that offer improved stiffness and reduced weight when compared to traditional metallic materials used for structural components. 
Organisations need to be able to forecast new and emerging technologies for a number of different reasons. Companies are interested in identifying emerging technologies to contribute to new product development as well as the development of improved manufacturing processes. This is required to ensure companies remain competitive so they are able to benefit from emerging technologies, which can be integrated into the company's supply chain. These industrial drivers will benefit from an improved understanding of the relationships between different technologies as part of the innovation process and the factors that contribute to technology adoption, which extend to social, environmental, economic and legislative influences. For instance, within the oil and gas services sector, companies are developing improved technologies to ensure that carbon dioxide emissions can be reduced as part of the upstream operations of international oil companies. In this context changes in the environmental landscape and the introduction of more ing the company to mitigate the effects of the global oil price shock in the 1970's. The original planning approach involved the development of simulation models that incorporated data across areas such as industrial development, materials usage and demographics along with different scenarios in regard to economic, social, environmental or technical trends. Although the methodology has been criticized as involving an overly subjective element, it does nevertheless provide a powerful lens to consider different future states (or scenarios), which can be effectively moderated or tweaked to accommodate the impact of changes within a particular area. Methodology The methodology selected was based on the scenario planning approach, which allowed findings to be generated based on established perspectives and practice as well as emerging viewpoints on technology development and adoption. In order to develop an improved understanding of the emerging requirements for technology management, three sectors were identified as being leading indicators for a broad range of technological advances and these were healthcare, energy and higher education. This methodology provides the context for technology management issues thereby enabling both practitioners and researchers to apply the findings from the research. A schematic view of the methodology used is provided in Figure 1. Stage 1 involved the collection of data and information on the three sectors. A comprehensive literature review was undertaken on emerging technologies in the three sectors, which allowed secondary data to be gathered according to a systematic approach. This was accompanied by the collection of data from a series of other activities, such as attendance by the author at lectures, workshops and conferences proach can also be highly contingent on the people dimensions associated with the culture of the organisation as well as the roles and responsibilities of those involved with the process implementation (Gerdsri et al., 2009). Although the resulting roadmaps can be useful visual outputs of the roadmapping process, the real value associated with this technique is often the planning that is carried out in order to generate the roadmaps and such planning will be dependent on both the quality of the available information as well as the participation of those involved.
The Delphi method has been used for many years as a structured forecasting tool (Okoli and Pawlowski, 2004), which involves capturing the views of a group of experts through an iterative process that seeks to converge on a common view. Although the method has been used widely, it does have weaknesses owing to its dependence on the quality of the experts' knowledge, and creative or disruptive thinking can also be inhibited by the process. There are a range of other related techniques that can be viewed in the broader context of forecasting and planning, and these include a number of statistical methods, such as bibliometrics, demographic modelling, techno-economic analysis and trend analysis. Scenario planning has been used widely to forecast not just technologies but also, on a more strategic level, to help identify business strategies and organisational direction (Amer et al., 2013). The process benefits from being systemic in nature, thereby encompassing a broad view of the area under consideration. The methodology was deployed extensively within the international oil company Royal Dutch Shell (Wack, 1985), where it was used to help the company set long-term plans across the various business areas (such as exploration & production; petrochemicals; oil trading; marketing; etc.) and the method has even been reported as help- Figure 1. Overall methodology for establishing emerging requirements for technology management (stages: information collection, sector-based assessment, analysis). peutics, and this is an area where technology developments can play a significant role. The development of new drug compounds continues to be a scientifically and technologically challenging area. The biotechnology business model has yielded some major innovations in the way drug compounds are developed, and drug discovery processes, such as high-throughput screening (Casalino, 2012), have also helped in this regard. Nevertheless, pharmaceutical companies continue to make major levels of investment in scientific research, and increasingly this is with external research suppliers. One example is the open innovation approach to research and development being pursued by Pfizer as part of its CTI (Centers for Therapeutic Innovation) partnership model with university clusters, which are focused on a particular therapeutic area (Ratner, 2011). Furthermore, the commercialisation of full genomic sequencing (Bashir, 2013) will provide massive levels of genetic data on individuals, and there are likely to be major implications in regard to the provision of new and improved preventative therapies, i.e. in terms of personalised approaches to medicine. This 'Big Data' challenge will require both scientific and technology solutions, and this part of the healthcare arena will need to benefit from the participation of established companies, start-up enterprises as well as universities and other research institutions.
There are also major challenges in regard to harnessing ICT (information and communications technology) in the healthcare sector and so-called e-health technologies (Morrison, 2012) offer significant promise to improve the provision of healthcare through, for example, improving the quality of information provision to the patient. Other benefits can be associated with potentially improving the efficiency of held in the three sector areas (carried out over a 12-month period). This approach allowed the literature findings to be supplemented by additional insights and viewpoints acquired from the expert presentations at these events. Stage 2 was based on consideration of the information gathered allowing identification of the sector-based drivers for technology. Finally, stage 3 involved deriving the technical and social factors underpinning the requirements for new and improved technologies within each of the sectors followed by a process of deductive reasoning to ascertain the emerging requirements for technology management. Results The results on the emerging requirements for technology management obtained from the scenario planning approach are provided according to the three sectors (healthcare, energy and higher education) investigated within this research study. (a). Healthcare sector There are a range of key drivers in the healthcare sector that have the capacity to influence emerging technology management requirements. The sector has for a number of years been generally subject to rising costs (Orszag and Ellis, 2007) driven by the complexity of healthcare provision and the availability of technologies, for example, the use of PET-CT imaging to support the therapeutic treatment for oncology patients (Boellaard, 2010). Therefore, rising costs are driving the need for more affordable approaches to the provision of healthcare, which includes diagnostic systems (Yager, 2006), medicines (Bjerrum, 2002) and other thera- Through consideration of these sector-based drivers for technology and the emerging requirements that may arise over the next 5-10 years, it is possible to identify both the technical and social considerations, which in turn contribute to specific technology management requirements for the healthcare sector. This analysis is provided in Table 1. (b).
Energy sector The key drivers for technologies related to the energy sector can potentially be viewed in the context of an increasing level of global demand for energy across industrial and domestic markets (and especially from Asia and other developing regions) along with pressure on the supply of both conventional and alternative energy sources. Although the global oil supply may have already reached or be approaching peak supply level (Lutz, 2012), there are still vast supplies of other forms of oil, e.g. heavy oil, tar sands and oil shale (in addition to natural gas and coal). However, the impact of carbon emissions on the environment and the spectre of global warming are leading to a position where such energy-rich resources will be increasingly subject to regulatory frameworks in order to limit environmental exposure. This will drive the need for new technologies, for example, in the area of carbon capture and storage (Anderson and Newell, 2004). Carbon capture and storage (CCS) technologies are targeted on the removal of carbon dioxide from either fossil fuel exploration or production sources through injection of the gas into either deep underground aquifers or other geological formations under the seabed. However, such ap-primary healthcare since medical practitioners would, for instance, be able to replace some follow-up visits with electronically enabled consultations (i.e. through an appropriate online portal), thereby freeing up physician time to consult with more patients as well as providing higher quality care to existing patients. Although these benefits are still to be proven through adoption of supporting business models, there do appear to be significant opportunities for technology to make a major impact in the delivery of healthcare. In regard to primary healthcare, the adoption of electronic medical/health records (EMRs/EHRs) is likely to be a major factor over the next 5-10 years that drives innovation in healthcare IT and in systems enabled to utilise increasingly complex medical data and information. Such records could potentially contain all the information related to a particular patient, including pathology, radiology and clinical data and information in a suitable digital format. Data storage, analysis and interpretation, transfer and communication as well as data security are just some of the technology challenges associated with implementation of these records. In this context Figure 2 highlights the increasing level of EMR/EHR usage by office-based physicians in the United States. It should be noted that the data from 2001-2009 are based on the US National Ambulatory Medical Care Survey interviews and mail survey sources, whereas data from 2010-2012 are preliminary and based on mail survey sources only. Nevertheless, these data clearly show the rapidly increasing level of EMR/EHR usage in the United States (from 18% in 2001 to 72% in 2012) and, as this technology becomes embedded into the fabric of healthcare delivery, the scope and provision of services is likely to be significantly changed. In this scenario there are a range of associated technology management requirements, such as the need for new business models to support online medical provision as well as the need for recognised IT systems standards to govern the technology implementation pathways. ency of policy frameworks as well as further substantial capital investment being required to support development of the sector. Solar power (Zweibel, 1990) through photovoltaics (PVs) offers a readily available solution in certain countries as well as very low environmental impact. However, again there are expected to be high capital outlay costs and the investment costs may not be covered by the lifespan savings unless a preferential feed-in tariff is offered by the local grid network. Other technological issues include a limited power density and, since the energy is not available at night and is less available in cloudy weather conditions, there can be a need for an electrical power storage system. Moreover, nuclear power (through nuclear fission processes) continues to be a major source of power in many countries although there are obvious environmental issues associated with this energy supply. In the future, nuclear fusion offers significant promise in regard to vast levels of energy potentially available although the technological obstacles here, such as those regarding materials and
engineering considerations, are immense (Zinkle, 2005). As can be observed, there continue to be many sources of energy, each with their own set of technology challenges, which will need to be addressed if the growth in demand and shortage in supply are to be met whilst mitigating the effects of climate change. Indeed, the total (world) net electricity generation for both OECD and non-OECD countries is expected to approximately double from the 2008 figure (19.1 trillion kilowatthours) to the expected level in 2035 (35.2 trillion kilowatthours) (U.S. Energy Information Administration, 2011). Moreover, this data indicates that coal, natural gas proaches are not without controversy and environmental, social, political and technical barriers all need to be overcome in order for CCS technologies to be implemented on, for example, coal or gas burning power stations. In regard to transport applications, petroleum and diesel are likely to remain in the short/medium term as major energy sources but science and technology offer the prospect to lower carbon emissions and also provide access to a larger section of available fossil fuels, e.g. heavy tar sands and oil shales. Gas to liquids (liquefied natural gas, or LNG) can potentially provide cleaner solutions than oil or coal but combustion still results in carbon dioxide emissions. Other benefits for LNG include the vast supplies and ease of transportation as well as the discovery of new sources of deep gas seams. Furthermore, coal gasification (Irfan et al., 2011), which involves oxidation of coal to produce syngas, a mixture of carbon monoxide and hydrogen, can be undertaken to enable the resulting syngas to be converted to petroleum via the catalytic Fischer-Tropsch process. These areas all require new technologies to be developed along with the corresponding management processes. Renewable approaches to energy supply and production offer environmentally sustainable solutions to meet the growing energy needs; however, there are many technology challenges that remain with renewable energy sources (Trainer, 2012), such as the variability of wind energy as well as the need for massive capital investment. There are also policy-related challenges (Peidong, 2009) and in China, for example, these include a need for more coordination and consist- support the introduction of these technologies as there could be the need for significant capital expenditure as well as uncertainty over rates of return and the long-term profitability of new technologies. Through consideration of these sector-based drivers for technology and the emerging requirements that may arise over the next 5-10 years, it is possible to identify both the technical and social considerations, which in turn contribute to specific technology management requirements for the energy sector. This analysis is provided in Table 2. (c). Higher education sector The key drivers related to technology development in the higher education sector span a broad range of areas.
The increasing costs of higher education have been report-and liquid fossil fuels (such as petroleum and kerosene) will continue to be in significant usage over this timeframe, with expected large increases in the use of both coal and natural gas (68% and 100% increases respectively from 2008 to 2035). In this period renewables are expected to increase by 122%, representing the largest proportional increase in usage, although by 2035 renewables will still only represent 23% of global net electricity generation, with nuclear being 14% and the remaining 63% from fossil fuels (coal, natural gas and liquids). Consequently, there will be a significant need to manage the prioritisation of technologies to support the growth of renewables and the implementation of technologies to mitigate climate change, e.g. through designing and introducing carbon capture facilities on existing coal-fired power stations and also on new gas-fired stations. Innovation in new business models will be required alongside development of financial instruments that addressed. Furthermore, there needs to be further development of the supporting business models to allow universities to better understand how such teaching provision can be integrated into the operations of universities; for instance, will such activities be viewed as revenue generating in their own right, or will they be more of a profile-raising initiative to effectively advertise the university more widely to students that may then wish to apply to study at the university? Nevertheless, MOOCs represent a highly innovative extension to education, which will further contribute to the globalisation of higher education through enabling students from developing nations to access high-quality teaching resources from developed countries. As the competition between universities increases, there will also be competitive pressure on research activities and not just education. Universities have been able to attract funding from industrial companies for many years, but such translation activities need to be fully aligned with the company's requirements for research if the technical outputs are to contribute to improved products, services or manufacturing processes. This will require improvements in the way such university-industry research collaborations are structured and managed, including the development of process models that allow both parties to secure the required benefits (Philbin, 2008). Consequently, there may be further polarisation of higher education, with greater levels of research funding (industrial, governmental and charitable) being concentrated at fewer research-intensive universities, with other academic institutions having to concentrate on teaching and translation being restricted to technology transfer through consultancy. Through consideration of these sector-based drivers for technology and the emerging requirements that may arise over the next 5-10 years, it is possible to identify both the technical and social considerations, which in turn contribute to specific technology management requirements for the higher education sector. This analysis is provided in Table 3.
Discussion The adoption of existing and emerging technologies is an important source of value for all organisations and especially industrial companies, which require access to leading technologies in order to remain competitive.Technology management provides the vehicle to allow the development, acquisition and deployment of technology to take place and it is therefore useful to consider the emerging requirements for technology management so that organisations are prepared for the future.There are a range of methodologies available to forecast technologies themselves, ranging from roadmapping and requirements capture through to trend analysis and scenario planning.This paper has described the results from the use of a sector-based approach to scenario ed for many years (Johnstone et al., 1998) as driving the need for greater efficiency and higher levels of quality at universities and tertiary education institutions.In order to improve the level of quality there needs to be robust quality assurance and supporting qualification systems introduced to ensure degree programmes provide students with the necessary knowledge, skills and competencies required for today's society.There also needs to be accountability for change at higher education institutions and this requires effective governance and leadership, which will be contingent on the quality and performance of university leaders and their staff.These matters will likely be under increased pressure as the university sector expands, due to the growth of developing countries and also from the globalisation of the university sector (Maringe and Foskett, 2010).These changes will provide opportunities for some universities to grow and prosper and yet others may struggle to meet the increased levels of competition in terms of securing funding, remaining attractive to increasingly mobile students and also being able to attract high-calibre academic faculty in a free labour market. The impact of ICT and digital technologies has the capability to significantly enhance the way in which education is delivered at universities although the provision of technology can also potentially disrupt conventional delivery methods (i.e.lectures, tutorial, etc.) and so greater attention is required on the complexities of introducing digital technologies at universities (Lea and Jones, 2011).Although university campuses are unlikely to disappear, there will be greater opportunities for technology-driven innovation in teaching practice, for example, the use of tablet computers and mobile applications ('apps') as well as activities such as podcasting and webinars.The effectiveness of e-learning practice will likely be dependent on the input of teachers, students and university administrators, and adoption of e-learning technologies will also be related to the demographic characteristics of the university (Nawaz and Kundi, 2011). 
The rise of online provision of courses at universities has been continuing to increase in recent years and data from the United States (see Figure 4) shows that participation by students at degree-granting postsecondary institutions in online courses as a percentage of total enrolment has increased from 10% in 2002 to 31% in 2010 (Allen and Seaman, 2011).An increasing level of online provision leads to a distributed approach to education and so called massive open online courses (MOOCs) have also been launched by often prestigious universities (such as Stanford University, USA), which provide entire courses online and free of charge (Cooper and Sahami, 2013).However, such innovative methods present a number of challenges in terms of whether certification will be provided and how assessment is carried out as well as validation and plagiarism issues to be alongside a deeper awareness of the barriers to technology adoption in all three of the sectors.The need to understand the emerging requirements for technology management is not restricted solely to industrial companies, as this research has outlined, not-for-profit organisations, such as universities, also face challenges if they are to benefit from technology opportunities and avoid the corresponding risks.Therefore, technology management as a discipline needs to develop and respond to the arising technology issues as well as societal and industrial needs.There should be a greater awareness of the future requirements for the management of technology and this paper has provided insights into the emerging requirements for technology management for three important sectors. The limitations of this paper lie with the research methodology employed.However, the findings have been established through a structured approach that draws on a comprehensive literature review supplemented by other data and information gathering activities.The use of a sector-based approach to scenario planning has provided a systematic and focused method to allow qualitative information and supporting quantitative data on sector trends to be analysed. Future work is suggested on refinement of the sector-based scenario planning approach through capturing additional technical inputs in order to improve the richness of the material provided, for example, the use of a survey instrument with a group of experts from the sector areas.The use of systems dynamics, such as cause and effect modelling, is also recommended in order to provide further mechanistic insights into the emerging requirements for technology management. planning in order to generate a view on the emerging requirements for technology management that will potentially need to be addressed over the next 5-10 year timeframe. Through a structured approach to information gathering based on literature reviews augmented by attendance at sector specific events, it has been possible to establish a qualified perspective on the drivers for technology across the healthcare, energy and higher education sectors.This has been extended through socio-techno analysis to enable identification of the emerging requirements for technology management for each of these three sectors. 
The healthcare, energy and higher education sectors all represent knowledge-intensive parts of the global economy, and they are being impacted by an increasingly dense and more diverse data and information landscape. The rapid development of ICT along with the pace of digitisation is driving new practices along with the challenges associated with 'Big Data', i.e. understanding how organisations can leverage the massive levels of data that are now becoming available. Therefore, technology management needs to provide the tools to support technology implementation and improved decision-making to meet societal needs, e.g. development of carbon capture technologies for clean power generation, or development of personalised medicine approaches through genomic technologies. The development of decision support tools that integrate technology availability factors with market analysis data will be useful in this regard, and such systems need to be made available to industry practitioners allowing implementation and subsequent benchmarking to take place. There is also a need to integrate technologies alongside existing delivery channels, e.g. adoption of communications and mobile technologies in healthcare and education. Understanding the process, structural and cultural implications of such integration will need to be undertaken. Figure 2. Level of office-based physicians with EMR/EHR systems in the United States from 2000 to 2012 (Source: Hsiao and Hing, 2012). Figure 4. Total and online enrolment in degree-granting postsecondary education institutions from 2002 to 2010 (Source: Allen and Seaman, 2011). Table 1: Results from healthcare sector scenario planning. Table 2: Results from energy sector scenario planning. Table 3: Results from higher education sector scenario planning.
6,589.8
2013-09-24T00:00:00.000
[ "Computer Science" ]
Artificial Intelligence in Public Health: Current Trends and Future Possibilities Artificial intelligence (AI) is a discipline that studies whether and how intelligent computer systems that can simulate the capacity and behaviour of human thought can be created [...]. 1. It must act in a similar way to human beings: the product of the procedure performed by the artificial intelligent system should be indistinguishable from the one followed by a human being. 2. It must think in a similar way to human beings: the sequential activity that leads the intelligent system to face and solve a problem is comparable with the one followed by a human being. 3. It must behave (act and think) in a rational way: the method that leads the intelligent system to solve a problem is a formal structured process following the logic. 4. It must obtain the best possible result: the process that leads the intelligent system to solve the problem is the one that allows it to obtain the best-expected outcome based on the provided information. AI has its roots in and includes scientific disciplines such as computational sciences and neural networks. Starting in the 1980s [2] we began to speak specifically of AI when Deep Blue-type system emerged and were found to be capable of dealing with the human beings in the game of chess and again later on, when NASA used dedicated applications such as Remote Agent to manage the activities of a spacecraft. From these first applications, it was clear that the use of AI has important implications for the user and on the environment; therefore, particular attention must be paid to the ethical, environmental, regulation, and social aspects of AI and to the need to increase transparency and responsibility in the process of use. In some previous studies focused on AI in digital radiology and digital pathology we have seen this; that is, the real integration of AI in the health domain cannot be based on scientific development alone and cannot ignore ethical, regulatory, and social aspects, such as the acceptance of citizens and professionals [3][4][5][6]. In 2017, following a conference of world artificial intelligence experts promoted by the Future of Life Institute, a vademecum with 23 principles was drawn up, with broad consensus to address the ethical, social, cultural, and military issues of AI. The document was immediately signed by over 800 experts and later by thousands more [7]. In the health domain, when dealing with the health of the individual, all of this is particularly felt. This Special Issue, "Artificial Intelligence in Public Health: Current Trends and Future Possibilities" [8], aims to act as a collector among scholars on issues related to both the development and integration of AI in the health domain. This is addressed both with reference to current trends and by trying to grasp future developments. This represents a new and hot research topic, as verified through a search on Pubmed, a reference database on issues related to biomedicine and the health domain. The high percentage of reviews around these issues certainly denotes a great interest in the cultural and scientific mediation actions dedicated to the intensification of research in this area aimed both at the development and present and future integration of AI on the part of scholars. This is comforting and corroborates the idea of developing a Special Issue dedicated to this [8]. 
Much is expected of AI both today and tomorrow, both from a development point of view and via integration with the health domain, which must also pass regulations, ethical scrutiny, social acceptance. Research in this area is strategic for the development of health systems and is inextricably linked to the development of digital health, both in regard to this collection, monitoring, and management of information and in regard to the management of hospital and connected government information systems. Think, for example, of the opportunities presented by wearable monitoring, big data, robotic assistance, rehabilitation, and surgery. The applications of artificial intelligence have received growing interest in many sectors, such as in those related to organ, functional tissue, and cell diagnostics [3][4][5][6]; care robotics, which assist in interventions, rehabilitation, and supporting the communication and assistance of disabled people [11][12][13][14]; the biomedicine sector, where it is implemented in applications from genetics to modelling [15][16][17][18]; and precision and personalized biomedicine [19][20][21][22][23][24]. The consolidation of technologies based on artificial intelligence in the health domain is intended to bring benefits to everyone, from the stakeholder to the patient, in the form of equity of care. In the future, artificial intelligence is expected to have a strong impact on: • The prevention of the onset of diseases in the individual and in society • The provision of personal care and assistance. • Society trends regarding diseases and the impact of biological and behavioural factors. • The organization of hospital activities with regard to treatment, diagnostics, and decision-making processes. Thanks to artificial intelligence, on the one hand, big data [25,26] will help us to predict diseases on an individual and collective basis and to identify and correct population behaviours; on the other hand, wearable technologies will allow us to monitor and collect individual medical information and to calibrate the care process. The integration of artificial intelligence with virtual reality and augmented reality [27,28] will allow us to create both virtual medicine services that citizens can access in a simple and direct way as well as robotic surgery applications that are increasingly effective and safe. Many professionals will be involved in the process of developing and integrating AI into the health domain in the present as well as in the future. The sectors of research, diagnosis, and clinical therapy will be called upon to offer ideas and experts in this area. The government, educational, and political sectors will also be called upon to make important decisions. Therefore, stakeholders will have to take on important responsibilities. The research sector, to which this editorial is addressed, will have a leading role since it is from this sector that not only algorithms and proposals for AI solutions but also concrete ideas of transfer to the health domain for a stable use in clinical/biomedical practice will emerge, accompanied by the actions of the regulators acting at all envisaged levels. This key role will also be assumed by the disciplines responsible for the certification of medical devices with artificial intelligence content, which will have to be harmonized and consolidated, particularly at the international level. 
It is hoped that this Special Issue [8] will be useful for this purpose and that it will be capable of making important contributions in the levels of intervention related to these issues. Funding: This study was supported and funded by the Italian Ministry of Health-Ricerca Corrente. Conflicts of Interest: The author declares no conflict of interest.
1,576.4
2022-09-21T00:00:00.000
[ "Computer Science" ]
Polyaniline cryogels: Biocompatibility of novel conducting macroporous material Polyaniline cryogel is a new unique form of polyaniline combining intrinsic electrical conductivity and the material properties of hydrogels. It is prepared by the polymerization of aniline in frozen poly(vinyl alcohol) solutions. The biocompatibility of macroporous polyaniline cryogel was demonstrated by testing its cytotoxicity on mouse embryonic fibroblasts and via the test of embryotoxicity based on the formation of beating foci within spontaneous differentiating embryonic stem cells. Good biocompatibility was related to low contents of low-molecular-weight impurities in polyaniline cryogel, which was confirmed by liquid chromatography. The adhesion and growth of embryonic stem cells, embryoid bodies, cardiomyocytes, and neural progenitors prove that polyaniline cryogel has the potential to be used as a carrier for cells in tissue engineering or bio-sensing. The surface energy as well as the elasticity and porosity of cryogel mimic tissue properties. Polyaniline cryogel can therefore be applied in bio-sensing or regenerative medicine in general, and mainly in the tissue engineering of electrically excitable tissues. 3D structures can involve, for example, conducting materials of a nanofibrous character prepared via electrospinning, which provide interconnected pores facilitating cell attachment and growth. The electrospinning of CP alone, however, is difficult because their backbones are rigid and they exhibit low solubility in solvents. Therefore, they are usually mixed with standard, thermoplastic polymers to achieve materials suitable for electrospinning 8,9 . The coating of independently prepared scaffolds by conducting polymer is another means of preparing conducting 3D structures 10,11 . All of the above-mentioned techniques, however, suffer from an inhomogeneous distribution of conducting components. Polyaniline-based 3D cryogels are materials swollen with water, which contain a conducting polymer together with a supporting polymer as their constituents 12 . The polymerization takes place in the frozen state and the prefix "cryo" thus refers to the method of preparation; hydrogels are obtained after thawing. Polyaniline is the conducting part of hydrogels, while various water-soluble polymers can be used as the carriers providing the mechanical properties 13 . The utilization of CP-based cryogels is a novel approach affording a solution to the problem of the inhomogeneous distribution of conducting components in the bulk of common conducting hydrogels. It should be noted, however, that research into CP-based 3D materials is in its infancy, and studies dealing with their biological properties are rare despite the fact that cell responses are known to differ on planar and 3D materials. Polyaniline is generally considered as a polymer with limited biocompatibility 14 . The term biocompatibility refers generally to the ability of a material to coexist with living organisms and tissues without harming them, and its testing can be conducted in a variety of ways. According to the prospective application of the material, defined sets of specific tests are used. As the prepared polyaniline cryogels are considered for applications in bio-sensing and tissue engineering, cytotoxicity, embryotoxicity, stem cell adhesion and growth, and the impact of CP on cardiomyogenesis and neurogenesis have been chosen in order to reveal the basic biocompatibility parameters of this novel and promising material. 
Experimental Section The interaction of any material with cells or tissues depends on its surface and bulk properties. The surface energy, pore-size distribution, and elasticity expressed by Young moduli were determined for polyaniline cryogel. In addition, impurities leaching from polyaniline cryogel were characterized by chromatography. These characteristics, together with biological properties, provide a comprehensive view on the applicability of polyaniline cryogel in a variety of applications. Preparation of Polyaniline cryogel. Polyaniline was prepared by the oxidation of the respective monomer with ammonium peroxydisulfate 15,16 . Aniline hydrochloride (2.59 g) was dissolved in a 5 wt.% aqueous solution of poly(vinyl alcohol) (PVAL; molecular weight 61,000; Sigma-Aldrich) to a 50 mL volume, and ammonium peroxydisulfate (5.71 g) was dissolved separately in the same solution to the same volume 12 . Both solutions were pre-cooled to 0 °C and mixed, then drawn into polyethylene syringes. The solutions were quickly frozen in a suspension of solid carbon dioxide in ethanol at −78 °C and subsequently placed in a freezer at −24 °C, and aniline was then left to polymerize for 7 days. The concentrations of reactants were 0.2 M aniline hydrochloride, 0.25 M ammonium peroxydisulfate, and 5 wt.% poly(vinyl alcohol) 12 . The originally white ice changed to dark green/black as polyaniline was produced. After thawing, the cryogels were removed 12 from the syringe and left in water for one week to remove any low-molecular-weight reactants or by-products. The cryogel was composed of ≈2 wt.% polyaniline, 5 wt.% poly(vinyl alcohol), and 93 wt.% water 12 . The cryogel thus has a composite nature, where polyaniline affords the conductivity and poly(vinyl alcohol) the mechanical integrity 12 . Material Properties. Thermal conductivity. The thermal conductivity was measured by a TCi Thermal Conductivity Analyzer (C-THERM Technologies, Ltd., Canada) with a heat conductivity range of 0.01-10 W m −1 K −1 , at 25 °C, using the instrument regime for porous materials. The cylindrical samples were prepared with a diameter of 1.5 cm and a thickness of 0.5 cm. Surface energy. Contact angle measurements and the determination of surface energy were conducted with the aid of the Surface Energy Evaluation System (Advex Instruments, Czech Republic). For polyaniline cryogel, deionized water, ethylene glycol, and diiodomethane (Sigma-Aldrich) were used as test liquids. The droplet volume of the test liquids was set to 2 μL in all experiments, which were conducted on a gently dried flat surface of polyaniline cryogel. Ten separate readings were averaged to obtain one representative contact angle. The substrate surface free energy was determined by the "acid-base" method and calculated according to the procedure described in the work of van Oss 17 . The acid-base theory enables the determination of the polar and dispersive contributions to the total surface free energy as well as the electron-donor and electron-acceptor components of the polar part of the surface free energy (equation 1): γ^TOT = γ^LW + γ^AB (1) where γ^TOT is the total surface energy, the superscript LW denotes the total dispersion Lifshitz-van der Waals interaction and AB refers to the acid-base interaction. According to Lewis, the acid-base interaction can be determined by equation (2): γ^AB = 2(γ^+ γ^−)^1/2 (2) where γ^+ is the electron-donor and γ^− is the electron-acceptor component of the acid-base part of the surface energy. The surface free energy γ^TOT can be calculated using the Young-Dupré equation (3): (1 + cos Θ) γ_i^TOT = 2[(γ_j^LW γ_i^LW)^1/2 + (γ_j^+ γ_i^−)^1/2 + (γ_j^− γ_i^+)^1/2] (3)
Here j refers to the studied material, i the testing liquid and Θ is the measured contact angle. Pore-size distribution. Pore size was estimated by analysis of scanning-electron micrographs of planar sections of freeze-dried polyaniline cryogel by the image analysis. The specific surface of the material was computed based on the assumption that the pores were closed; in reality, pores are partially open, and the real specific surface area is therefore slightly smaller than computed. It should be noted that the pore-size distribution in freeze-dried cryogels may differ somewhat from the pore-size distribution in native hydrogels. For calculation, method of chords recommended in ASTM E112-13 standard was employed. The system of random lines was drawn to planar section of image from which the lengths of individual chords were measured by automatic image analysis. From the mean chord length the mean pore size was subsequently estimated. For pore size estimation 10 images with magnification of 1000x were used and about 500 of chords were measured on each image. The porosity was computed as 1−V d /V w , where V d is the volume of the polymer estimated from the mass density of the polymer and the mass of the dry sample, V w is the total volume of the sample computed from the geometric parameters of the cylindrical sample and independently of the mass of the wet sample (difference was less than 5%). Young modulus. Young modulus was determined on a Shimadzu Autograph AG-X tensile tester. The analyses were performed on four cylindrical samples with a length 10 mm and a diameter 9.2 mm. Each sample was inserted between two horizontal plates and deformed with a rate of 1 mm min −1 . Measurement time was of 3 min and within this short period of time the humidity of the sample was considered constant. The Young modulus was determined as a slope of the liner part of the stress-strain curve. The experiments were performed in compression. Impurity profile. Concentrations of residual impurities were determined using a modular HPLC system consisting of a Waters 600E pump, a VD 040 vacuum degasser (Watrex, Czech Republic), and a UV200 ultraviolet detector (Watrex, Czech Republic). A reversed-phase C18 column X-select (300 mm × 7.8 mm; Waters) was employed. The analysis was performed in isocratic mode with an acetonitrile/acetate pH 4 buffer at a ratio of 60/40 (v/v) as the mobile phase. A flow rate of 0.8 mL min −1 and 20 μL injection volume were employed. Analytes were monitored at 235 nm by the UV detector. Data acquisition and analysis were performed using a Clarity Chromatography Station. For HPLC analysis, polyaniline samples were extracted in accordance with ISO 10993-12 in the ratio of 0.1 g of polyaniline cryogel per 1 mL of ultrapure water. Medicine Masaryk University. Used cell lines. Primary mouse embryonic fibroblasts (MEF) were used to determine the cytotoxicity of individual extracts. They were isolated from 13.5 days post coitum (dpc) mouse embryo, mouse strain C57BL/6. Mice were obtained from the Laboratory Animal Breeding and Experimental Facility of the Faculty of Medicine, Masaryk University, Brno, Czech Republic and kept under controlled conditions; standardized diet pellets and UV light-treated tap water were available ad libitum. Experiments were performed in the accordance with national and international guidelines on laboratory animal care and with the approval of the Institute Ethical Committee. 
Embryos were washed in Phosphate Buffered Saline (PBS) and decapitated, and the inner organs removed. For the separation of fibroblasts, the embryos were trypsinized and mechanically disrupted by pipeting. A single-cell suspension was seeded onto a tissue culture dish in complete high glucose Dulbecco's modified Eagle's medium (DMEM) supplemented with 100 U mL −1 of penicillin, 0.1 mg mL −1 of streptomycin, 15% Foteal Bovine Serum (FBS) (all from Invitrogen-Gibco) and 0.05 mM 2-mercaptoethanol (Sigma-Aldrich). The first seeded MEFs were designated as passage 0. In our experiments, only MEFs up to passage 3 were employed. The embryonic stem cell ES R1 line 18 was propagated in an undifferentiated state by culturing on gelatinized tissue culture dishes in complete DMEM media. The gelatinization was performed using 0.1 wt% porcine gelatin in water. Dulbecco's Modified Eagle's Medium containing 15% fetal calf serum, 100 U mL −1 penicillin, 0.1 mg mL −1 streptomycin, 100 mM non-essential amino acids (all from Gibco-Invitrogen; USA), 0.05 mM 2-mercaptoethanol (Sigma Aldrich) and 1 000 U mL −1 of leukemia inhibitory factor (Chemicon; USA) was used for the cultivation. Preparation of ESc R1 line-derived cardiomyocytes from the HG8 clone were described previously 19,20 . Purified cardiomyocytes were seeded onto reference tissue culture dishes and onto all tested polyaniline surfaces. DMEM:F12 (1:1) supplemented with 5% fetal calf serum, antibiotics as above, and insulin, transferrin, and selenium supplements (ITS; all from Gibco-Invitrogen) was used as the growth medium. Cytotoxicity and embryotoxicity. The cytotoxicity testing of extracts of polyaniline cryogel was performed according to ISO 10 993-5. Samples were extracted according to ISO 10993-12 in the ratio of 0.2 g mL −1 of relevant cultivation medium. Extraction was performed in chemically inert closed containers using aseptic techniques at 37 ± 1 °C under stirring for 24 ± 1 h. The parent extracts (100%) were then diluted in a culture medium to obtain a series of dilutions with concentrations of 50, 25, 10, 5 and 1%. All extracts were used within 24 h. Cells were pre-cultivated for 24 h and the culture medium was subsequently replaced with polyaniline extracts. As a reference giving 100% cell proliferation, cells cultivated in the pure medium were used. To assess cytotoxic effects, the MTT assay (Invitrogen Corporation, USA) was performed after one-day cell cultivation at 37 ± 0.1 °C. The absorbance was measured at 570 nm by an Infinite M200 PRO (Tecan, Switzerland). All tests were performed in quadruplicates. Dixon's Q test was used to remove outlying values. The morphology of the cells was observed using an inverted Olympus phase contrast microscope (Olympus IX81, Japan). Polyaniline cryogel embryotoxicity was analysed as the likelihood of the formation of beating foci within spontaneous differentiating ES R1 cells carrying Nkx2.5-GFP reporter construct (NKX2-5-Emerald GFP BAC reporter), which mediated cardiomyocyte specific expression of green fluorescent protein (GFP) was used 22 . An 5-day-old EBs 20,23 derived from ES R1 cells clone NK4 were seeded onto polyaniline cryogel or tissue culture plastics coated with gelatin. After 13 days (day 18 of overall differentiation), the appearance of both GFP positive cells and beating foci was checked. Cytocompatibility. Adhesion and growth of stem cells: ES R1 cells were seeded onto tissue culture plastics (reference) or onto polyaniline cryogel at a density of 4 × 10 4 cells per cm 2 . 
They were uploaded by calcein AM (10 μM; Invitrogene) or after ES R1 cell growth for two days, then fixed by 2% formaldehyde and visualized through nuclei staining by 4′,6-diamidine-2′-phenylindole dihydrochloride (DAPI; 10 ng mL −1 , Sigma). The number of ESc was quantified by counting of viable cells (counterstained by calcein) visible on photomicrographs. Adhesion and growth of cardiomyocytes: Purified ES-derived cardiomyocytes were seeded onto tissue culture plastics (reference) or onto polyaniline cryogel. After two days, cells were fixed by 2% formaldehyde and counterstained by antibody against cardiomyocite specific myosine heavy chain (MF20 antibody, developed by Donald and Fischman, was obtained from the Developmental Studies Hybridoma Bank developed under the auspices of the National Institute of Child Health and Human Development and maintained by the University of Iowa, Department of Biological Sciences). Anti-mouse IgG antibody conjugated to Alexa 568 fluorochrome (Invitrogen) was used as the secondary antibody. Cell nuclei were counterstained by DAPI (10 ng mL −1 , Sigma) 20 . When the visualization of viable cardiomyocytes was required, the ES R1 cell clone NK4 carrying Nkx2.5promotor-GFP reporter (RP11-88L12 NKX2-5-Emerald GFP BAC Reporter, from BACPAC Resources Children's Hospital and Research Center at Oakland), which is specifically expressed only in cardiomyocytes 24 , was used 20 . The number of cardiomyocytes was quantified by counting of viable cells (counterstained by calcein) visible on micrographs. Neural progenitors: Neural stem/progenitors cells (NSCs) were isolated from the embryonic ganglionic eminence (GE) of the forebrain of C57/BL6 mice at 13.5 dpc. C57BL/6 mice were obtained from the Laboratory Animal Breeding and Experimental Facility of the Faculty of Medicine, Masaryk University, Brno, Czech Republic. The mice were kept under controlled conditions; a standardized pelleted diet and HCl or UV light-treated tap water were available ad libitum. Experiments were performed in accordance with national and international guidelines on laboratory animal care and with the approval of the Institute's Ethical Committee conforming to the guidelines from Directive 2010/63/EU of the European Parliament on the protection of animals used for scientific purposes 21 . Three-day-old neurospheres were seeded onto tissue culture plastic with gelatin coating or onto polyaniline cryogel. Neurosphere differentiation was mediated by the withdrawal of neural supplements (B27, N2) and growth factors (EGF, FGF-2), and by supplementation with 2% bovine serum. Differentiating cells were stained by calcein AM (10 μM) or fixed by 2% formaldehyde, and F-actin was stained by phaloidin-FITC (Sigma-Aldrich). Nuclei were counterstained by DAPI (Sigma-Aldrich, 10 ng mL −1 ). Cell pictures were taken using an Olympus digital camera (E-450) mounted onto an Olympus inverted microscope (IX51). The number of neural progenitors was quantified by counting of viable cells (counterstained by calcein) present on micrographs. A fluorescent pictures or videos of cells were taken using an Olympus E-450 digital camera (photos) or INFINITYLite camera (videos) mounted onto an Olympus inverted epifluorescent microscope IX51. Results and Discussion Material Properties. Materials properties of cryogel can be classified into surface and bulk properties. 
While surface properties are crucial with respect to the first contact of a material with biological fluids and cells, bulk properties, such as porosity and elasticity, play a crucial role in its long-term interaction with cells and tissues. The coefficient of thermal conductivity for the swollen cryogel sample was (1.3 ± 0.1) W m−1 K−1. The surface energy, as a basic surface characteristic of polyaniline cryogel, is shown in Table 1 (which reports γ^tot, γ^LW, and γ^AB in mN m−1). The obtained values indicate that the surface of the cryogel is extremely hydrophilic, as the only contact-angle value that can be taken into consideration is that of diiodomethane (methylene iodide); the other test liquids did not form adequate sessile drops. This result is in agreement with the expected behaviour of PVAL systems. Cell adhesion is a process that is affected by the properties of the surfaces to which the cells adhere. These include a broad range of characteristics such as topography, porosity, wettability, softness, roughness, and microstructure, as well as the presence of characteristic functional groups on the material surface 25 . Protein adsorption plays a key role in the cell response, as it constitutes the first step preceding the attachment of cells. Thanks to their amphiphilic character, proteins can adsorb on both hydrophilic and hydrophobic surfaces. On hydrophobic surfaces, they adsorb through hydrophobic patches present on their surface; when the surfaces are hydrophilic, they interact with the polar and charged functional groups 26 . Hence, the composition and conformation of the adsorbed proteins are critical for subsequent cell adhesion and also influence which cells attach to the surface. A detailed description of the mechanism of protein adsorption on polymer surfaces is beyond the scope of the current manuscript; however, taking published studies into account, the hydrophilicity of the polymer is considered preferable for cell adhesion 27 . Porosity is a crucial factor, not only influencing the ability of cells to migrate and grow within the structure but also providing biomechanical stimuli and influencing the microenvironment (e.g., with respect to the release of biofactors or efficient nutrient exchange). Moreover, porosity affects vascularization and facilitates mechanical interlocking between scaffolds and surrounding tissue 28 . As can be seen from the scanning-electron micrographs (Fig. 1), polyaniline cryogel has a highly macroporous structure. The mean pore size obtained from the image analysis of planar sections of the cryogel was estimated at 159 μm and the corresponding specific surface area at 0.020 m2 cm−3. The pore-size distribution was not computed; however, from the images it can be concluded that it is quite narrow and covers sizes from tens to hundreds of microns. The porosity of the sample, estimated from a comparison of the swollen and dry masses of the cryogel, was 95.5 vol%. Therefore, polyaniline cryogel meets the criteria for scaffolds. Elasticity is the second important bulk property influencing tissue reactions to the scaffold. Metallic neural interfaces can serve as a good example, as their mechanical and structural properties are highly dissimilar to those of neuronal tissue, a situation which can lead to irritation and adverse immune responses 1 . Polyaniline cryogel has properties much closer to those of native soft tissues than, for example, the metallic devices mentioned above.
The mean value of Young modulus of polyaniline cryogel was determined to (9.7 ± 0.5) kPa; the example of stress-strain curve recorded on the cryogel is given in Fig. 2. With respect to obtained value of Young modulus and the fact that typical soft tissues have a Young modulus ≈1 MPa 29 or even lower, it can be concluded that the cryogel exhibits properties of elastic material. Moreover, the elasticity of swollen polyaniline cryogel is demonstrated in Fig. 3. Impurity Profile. With respect to the preparation procedure, residual aniline hydrochloride and ammonium peroxydisulfate are the main expected impurities in polyaniline cryogel. Both of these substances are considered as potentially harmful 30,31 . Chromatographic analyses revealed that the concentrations of residual aniline hydrochloride and ammonium peroxydisulfate in native polyaniline cryogel were 12.8 ± 0.5 μg g −1 and 2.3 ± 0.3 mg g −1 of the cryogel, respectively. In addition to these two impurities, two other unknown peaks were observed on chromatogram. With the aid of appropriate standards and their retention times, these two peaks were identified to be oxidation products of aniline, namely hydroquinone and p-benzoquinone 32 . Their concentrations were of 8.4 ± 0.3 μg g −1 and 36 ± 3 mg g −1 of the cryogel, respectively. Chromatogram of impurities detected in cryogel extract is presented in Fig. 4. It can be assumed that the cryogels also contain residual ammonium sulfate originating from polymer synthesis 11,15 . However, this impurity can't be detected by UV detector and, moreover it falls to a group of substances generally recognized as safe (GRAS) by U.S. Food and Drug Administration 33 and hence it is not expected to have any significant harmful effect to the cells. To reveal if the additional purification can lead to elimination of low molecular impurities the polyaniline cryogel was further purified by the repeated extraction of 5 g of cryogel with 50 mL ultrapure water for 24 h until a pH 7 of the extract was achieved. Additional purification of the sample through its extraction with ultrapure water further reduced the content of impurities leached from it. The concentrations of aniline hydrochloride and p-benzoquinone were lower than 1 μg g −1 of cryogel and ammonium peroxydisulfate and hydroquinone were not detected. Additionally purified cryogel was used only for determination of impurities and cytotoxicity. All the other tests, however, were performed on native cryogel without additional purification not to bias the biocompatibility of native polyaniline cryogel. Biocompatibility. Cytotoxicity and embryotoxicity. Cytotoxicity is the basic parameter of biocompatibility, which demonstrates the ability of cells to survive in the presence of foreign materials -in present case, the extracts of the tested polyaniline cryogel. It is known that polyaniline powder displays significant toxicity 14,19 . Cytotoxicity is mainly connected with the presence of low-molecular-weight impurities 34 . Cryogel contains of 2 wt.% of polyaniline in the matrix; thus its toxicity should be lower than in the case of polyaniline powders, which was confirmed in the current work. As described above, the cytotoxicity tests on MEF were performed not only on native but also on additionally purified polyaniline cryogels. Cell viability was similar for native and purified samples, and 5, 10, 25, 50 and 75% extracts caused no cytotoxicity (cell survival was higher than 80%, compared to the reference). 
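To make the read-out behind these survival percentages concrete, the short sketch below computes viability relative to the pure-medium reference from quadruplicate MTT absorbances, applies the Dixon's Q outlier screening mentioned in the Methods, and grades the result using the survival bands quoted in this work (>80% non-cytotoxic, 60-80% mild, 40-60% moderate, ≤40% severe). The absorbance values are hypothetical, and Q_crit = 0.829 is the commonly tabulated critical value for n = 4 at the 95% confidence level.

```python
import numpy as np

Q_CRIT_N4 = 0.829   # tabulated Dixon's Q critical value, n = 4, 95% confidence

def dixon_q_filter(values, q_crit=Q_CRIT_N4):
    """Remove at most one outlier (lowest or highest) if its Q statistic exceeds q_crit."""
    v = np.sort(np.asarray(values, dtype=float))
    spread = v[-1] - v[0]
    if spread == 0:
        return v
    q_low, q_high = (v[1] - v[0]) / spread, (v[-1] - v[-2]) / spread
    if q_low >= q_high and q_low > q_crit:
        return v[1:]
    if q_high > q_crit:
        return v[:-1]
    return v

def viability_percent(sample_abs, reference_abs):
    """Cell survival relative to cells grown in pure medium (100%)."""
    return 100.0 * np.mean(dixon_q_filter(sample_abs)) / np.mean(dixon_q_filter(reference_abs))

def grade(viability):
    if viability > 80: return "non-cytotoxic"
    if viability > 60: return "mild"
    if viability > 40: return "moderate"
    return "severe"

reference   = [0.82, 0.80, 0.84, 0.81]   # hypothetical quadruplicate, pure medium
extract_100 = [0.55, 0.58, 0.57, 0.20]   # hypothetical parent extract; 0.20 is an outlier
v = viability_percent(extract_100, reference)
print(f"viability = {v:.0f} %  ->  {grade(v)}")   # ~69 % -> mild
```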
Only the parent 100% extracts showed mild cytotoxicity (cell survival of 60 to 80%). The cytotoxicity of the extracts is clearly illustrated in Fig. 5. It can be concluded that, although purification leads to a decrease in the amount of low-molecular-weight impurities present in polyaniline cryogel, the impact of washing on cytotoxicity is negligible; in all the other tests, native cryogel was therefore used. The appearance of beating cardiomyocytes in EB outgrowths is used as one of the toxicological end-points to assess the embryotoxic potential of tested samples. In this experiment, beating foci within spontaneously differentiating ES R1 cells, formed through EBs, were observed both on tissue culture plastics and on polyaniline cryogel, with the foci beating in the same manner and at the same frequency. The beating is presented in micrographs of beating foci on polyaniline cryogel (Fig. 6) and in a supplementary video. Combining the results of cytotoxicity testing on differentiated fibroblasts with visual assessment of the morphology of beating cardiomyocytes in differentiating embryonic stem cells is considered a standard procedure for the evaluation of embryotoxicity 35 . It can therefore be concluded that polyaniline cryogels do not exhibit embryotoxicity.
Cytocompatibility. The application of any material in regenerative medicine, tissue engineering, or biomedicine presumes that its surface will be in contact with cells. Although the adhesion of cells on surfaces in vivo depends on the adhesion of proteins, which can significantly alter the surface properties of the material, the adhesion of cells under in vitro conditions is the technique generally accepted for evaluating the interaction between surfaces and cells. It is well known that different cell types require different surface properties for their growth. Therefore, the responses of cells related to electro-sensitive tissues were tested on polyaniline cryogel; namely, stem cells, embryoid bodies, cardiomyocytes, and neural progenitors were used. Stem cells are generally considered for application in regenerative medicine and tissue engineering. In general, ES R1 cells were able to adhere, grow, and form compact colonies on tissue culture plastics (Fig. 7A,B). A slightly different situation arose on the surface of polyaniline cryogel. Thanks to DAPI counterstaining, which visualizes the nuclei of cells (Fig. 7C,D), it can be seen that the number of stem cells on the cryogel surface was lower in comparison with the reference. This was confirmed by counterstaining with calcein, which stains only viable cells (Fig. 7). On the cryogel, ES R1 cells formed more compact colonies, probably as a result of their lower capacity to adhere to its surface; this was also reflected in the smaller number of viable cells. A similar situation was encountered after previously formed EBs were seeded onto the polyaniline cryogel (Fig. 8). After seeding, the EBs either attached to the surface or remained floating; the numbers of attached and floating EBs were similar on TC plastic and on polyaniline cryogel. The floating EBs were removed when the cultivation medium was changed. From the EBs that attached to the surface, cells could further migrate and proliferate. As can be seen from Fig. 8A,B, where only viable cells are visualized, the migration and proliferation of cells from EBs is more intense when the EBs are attached to TC plastic than to polyaniline cryogel.
This is depicted by the green spots visible outside the EBs, which are present in higher numbers on TC plastic than on the cryogel. EBs seeded onto polyaniline cryogel maintained their compact morphology better, and cell growth outside the EB boundaries was relatively rare (Fig. 8B). A comparison of the cell behaviour visualized in Fig. 8A and B indicates that polyaniline cryogel allows for the adhesion of EBs, but the migration is less intensive than on the reference TC plastics. The weakest interaction between the cryogel surface and cells was observed in the case of cardiomyocytes and neural progenitors. Cardiomyocytes seeded onto TC plastics adhered well to this surface; in contrast, they were not able to adhere to polyaniline cryogel (Fig. 9). The differences in the responses of ES R1 cells, EBs, and cardiomyocytes can be explained by the specific properties of cardiomyocytes. In comparison with the other types of cells used, cardiomyocytes were already beating when they came into contact with the surface of the cryogel. In general, such beating can have an impact on their ability to adhere to surfaces. The behaviour of the EBs, however, shows that cardiomyocytes can spontaneously differentiate within the EB on the cryogel. It can therefore be expected that spontaneously differentiating cardiomyocytes will probably be able to grow on the cryogel surface.
Figure 9. Isolated cardiomyocytes seeded onto TC plastics (A) and polyaniline cryogel (B). Cardiomyocytes were visualised using an antibody against cardiomyocyte-specific myosin heavy chain (red). Individual cells were visualised through nuclei counterstaining with DAPI (blue). The number of cells at the starting point is not presented, as the cardiomyocytes did not proliferate after seeding. The number of cardiomyocytes adhering to the cryogel was significantly smaller than on TC plastic. The micrographs were taken on day 2 after seeding.
Differentiating neural progenitors were seeded onto TC plastics or polyaniline cryogel and visualized using phalloidin-FITC (which binds F-actin) or calcein AM (which stains viable cells). The micrographs show that neural progenitors formed compact colonies of well-spread cells on tissue culture plastic both with and without gelatin (Fig. 10A,B). When seeded onto polyaniline cryogel, only rare colonies of poorly spread cells were observed (Fig. 10C,D). Corresponding results were also observed for cells stained with calcein AM, which visualizes only viable cells (not shown). Any polymer intended for application in regenerative medicine or biosensors must fulfil a number of criteria, including good biocompatibility, appropriate bulk and surface properties and, in particular cases, also more specific properties such as conductivity. Some of these properties can be achieved through the use of conducting polymers; however, such polymers do not possess appropriate bulk properties, as they can only be prepared in the form of thin films (in the case of polyaniline, 40-400 nm thick) 36 , powders, and colloidal dispersions, which all have only limited application potential. Due to these shortcomings, it is advantageous to combine conducting polymers with other polymer biomaterials. The efficacy of this strategy has recently been confirmed by a number of research studies 9,37-40 . Here, the bulk properties of the studied polyaniline cryogel correspond to those of hydrogels and are ideal for contact with soft tissues.
One of the advantages of using polyaniline is the ability to modify its surface properties by a variety of simple methods including reprotonation with various acids, the grafting of functional groups, and copolymerization with various co-monomers, etc. It is also well known that any type of cell requires specific surface properties, as was unambiguously confirmed by the different behaviours of PC12 cells on polyaniline-grafted with bioactive peptides 41 . The limited ability of ESc, EBs, cardiomyocytes, and neural progenitors to adhere, grow and proliferate on polyaniline cryogel corresponds to previously published findings 41 and confirms that polyaniline-based materials require post-preparation surface modification, which is tailored to the specific cells used in order to improve such materials' cyto-compatibility. Polyaniline cryogel is a new form of biomaterial. The purpose of present study is therefore to describe basic biological properties of its native form. In context of future studies, the cytotoxicity and embryotoxicity of native polyaniline cryogel is more important than surface properties influencing the cell adhesion, proliferation and migration, as surface properties can be easily modified by various techniques to achieve the desired interaction with concrete cell lines. Polyaniline cryogels combine poly(vinyl alcohol) and conducting polyaniline. To determine the cytotoxicity of polyaniline, it is best to study polyaniline powder, as it has the highest content of potentially hazardous components compared to thin films or colloidal dispersions. A previously published study dealing with the biocompatibility of standard powder polyaniline hydrochloride prepared by the IUPAC-approved procedure 15,16 through the oxidative polymerization of aniline hydrochloride with ammonium persulfate indicated that cytotoxicity can be related to low-molecular-weight compounds accompanying the polymer 14 . A reduction in cytotoxicity was observed after purification procedures aimed at the removal of impurities found in pristine powder polymer. Whether such procedures involved reprotonation/deprotonation 14 , reprecipitation 42 or Soxhlet extraction 34 , all such purification methods pointed to residual monomers or low-molecular-weight by-products as being responsible for cytotoxicity. In standard polyaniline powder, impurities related to residual precursors used for polymerization, i.e. aniline hydrochloride and ammonium peroxydisulfate, were determined in the respective extracts. HPLC analyses showed that polyaniline hydrochloride polymer leached out residual aniline hydrochloride and ammonium peroxydisulfate in concentrations of 0.95 ± 0.03 mg g −1 and 96.1 ± 1.9 mg g −1 of polymer powder, respectively. The sample exhibited cytotoxicity against two different cell lines, human immortalized non-tumorogenic keratinocyte cell line (HaCaT) and human hepatocellular carcinoma cell line (HepG2). In both cases, the cytotoxicity was dependent on the concentration of impurities in the extract and the type of cells to which the extract was applied. When graded according to the requirements of EN ISO 10993-5, the cytotoxicity of parent 100% extract of polymer was assigned as severe for HaCaT cells (a survival rate lower than or equal to 40%) and moderate for HepG2 (a survival rate of 40-60%). After the parent extract was diluted, the first entirely non-cytotoxic concentration appeared in the case of 1% extract, with cell survival higher than 80% for both cell lines. 
Interestingly, the impurity profile of polyaniline gel was completely different compared to standard polyaniline powder. The fact that polyaniline cryogel does not express significant cytotoxicity or embryotoxicity, that various cell types are able to adhere and grow on its surface, and that it can undergo simple surface modification in order to improve its biointerfacial cytocompatibility opens the door to its potential application in regenerative medicine and biosensing. Concluding remark. Electrical conductivity, based on the presence of conducting polymer, is an important parameter of cryogels. Though not reported or discussed in the present study, preliminary results suggest that the conductivity of native polyaniline/poly(vinyl alcohol) cryogel swollen with water is of the order of 10 −3 S cm −1 12 . In the solutions of electrolytes, especially of acids, such conductivity will be higher due to the contribution of ionic charge-transport. In contrast, polyaniline becomes non-conducting under alkaline conditions when the salt converts to a base, and the contribution of electronic conductivity becomes negligible. Conclusions Polyaniline cryogels supported by poly(vinyl alcohol) are novel macroporous soft conducting materials. They not only have good mechanical integrity represented by Young modulus of 9.7 ± 0.5 kPa but they are also macroporous and highly hydrophilic. All these properties are prerequisites for any application in tissue engineering or biosensing. On the basis of the results of cytotoxicity testing and stem cell differentiation, it can be concluded that polyaniline cryogel also has appropriate biological properties and is therefore suitable for application in tissue engineering and biomedicine in general, where the electrical monitoring or stimulation of tissue is required.
7,737.8
2018-01-09T00:00:00.000
[ "Biology", "Materials Science", "Engineering" ]
Mood Extraction Using Facial Features to Improve Learning Curves of Students in E-Learning Systems
Students' interest and involvement during class lectures are imperative for grasping concepts and significantly improve the academic performance of the students. Direct supervision of lectures by instructors is the main reason behind student attentiveness in class. Still, a considerable percentage of students tend to lose concentration even under direct supervision. In the e-learning environment, this problem is aggravated by the absence of any human supervision. This calls for an approach to assess and identify lapses of attention by a student in an e-learning session. This study was carried out to improve students' involvement in e-learning platforms by using their facial features to extract mood patterns. Analyzing the moods based on the emotional states of a student during an online lecture can provide interesting results which can be readily used to improve the efficacy of content delivery in an e-learning platform. A survey was carried out among instructors involved in e-learning to identify the most probable facial features that represent the facial expressions or mood patterns of a student. A neural network approach is used to train the system on facial feature sets to predict specific facial expressions. Moreover, a data association based algorithm specifically for extracting information on emotional states by correlating multiple sets of facial features is also proposed. This framework showed promising results in inciting students' interest by varying the content being delivered. Different combinations of interrelated facial expressions for specific time frames were used to estimate mood patterns and, subsequently, the level of involvement of a student in an e-learning environment. The results achieved during the course of the research showed that the mood patterns of a student correlate well with his interest or involvement during online lectures and can be used to vary the content to improve students' involvement in the e-learning system. More facial expressions and mood categories can be included to diversify the application of the proposed method. 
Keywords—Mood extraction; Facial features; Facial recognition; Online education; E-Learning; Attention state; Learning styles INTRODUCTION The main problem arises in E-Learning as there is no supervisor to assess how students are physically and emotionally responding to the delivered content.Usually when the students taking any course online, they may lose concentration and focus resulting in poor academic performance.Tackling this issue can advance the e-learning process many fold as each student"s interest can be assessed and necessary improvements can be made to the content to engage the user during the online lecture.In order to circumvent the problem of observing student on an e-learning platform, this research is conducted with a view to analyze the relationship between facial expressions of a student enrolled in an e-learning system and the ways to improve upon learning attitude of such students using information extracted from these features.E-learning is a medium for imparting education anytime and anywhere, and due to recent advances in information technology, online education systems can be considered as a blessing and an important information technology asset.Knowledge transfer via informational technology tools requires careful planning and execution as the learning environment provided to the student during e-learning offers complex insight on the student"s learning curve.In order to improve the e-learning experience, the process of learning becomes imperative as it majorly governs how much and how well a student can absorb knowledge during online lectures [1].Delivery of content, examinations and student feedback are important measures that have a direct effect on the learning curve of students as well as the e-learning objectives.Still the time frame required for relating and observing all these measures must be long enough to account for every possible detail [2]. These measures are also the same as that are used in traditional or on campus learning where teacher has a direct interaction with students.Initially computers and information technology was used as tools to improvise learning.This concept subsequently evolved to full-fledged e-learning systems.Universities have now started offering online courses and have developed e-learning platforms catering to the need of almost any student.E-learning has allowed off campus students to get educated at homes or simply anywhere in the world. Knowledge delivery through e-learning offers numerous advantages but most of its features can only be fully utilized if the student"s involvement and interest remains continuous throughout the course of online education [3].As a student has a personal preference for acquiring knowledge at one"s own time and pace, this allows people from all walks of life to have an opportunity to learn and educate themselves without any restrictions of time and space. With this evolution in the e-learning technologies and the increasing number of students, requirements for improving online education experience are getting more and more demanding.It"s understood that more in depth studies are www.ijacsa.thesai.orgneeded in order to ascertain the variables which can really affect online educational environment in a positive way [4]. 
Natural feedback on the content being delivered can be taken automatically from learners by using their facial expressions as a tool to measure interestingness of the content and engagement of student in the online lecture [5].Facial expressions can provide critical information on student"s interest and participation in online educational learning.Faces provide detailed information about an individual's state of mind, mood and also emotional state.Studies throughout history have shown that facial expressions are the prime representation of human emotions.Facial expressions can be considered as the main source of information, after words, in estimating an individual"s thoughts and state of mind [6]. Facial Recognition has proven to be an important tool in automate tutoring as it helps in the improvement of students learning outcomes as well as in the development of the learning experience [7] [8].In the end, this leads to improvement in the learner"s involvement in the learning environment. This research aims at enhancing students" learning outcomes while studying online courses.This can be considered as analyzing the real-time interaction between student and machine, and assessing student"s engagement during Elearning session, which is constantly changing over the passage of time.This variable of engagement can be plotted against time and can be considered as a function of time.This engagement function will be called a student"s learning curve in the rest of this paper as its variation as a function of time directly affects the learning aptitude and interest of the students. The basic claim made in this study is that lack of students" involvement/engagement during online classes due to the lack of the physical presence of teachers is the main factor that hinders learners from achieving on-line courses" learning outcomes.This is largely due to the absence of any direct teacher supervision of the students learning process who in such learning context may be distracted in many ways from what they are studying ,with there is no one present to supervise them in what they are learning. A student studying using online resources cannot participate in a verbal communication, then the major attributes that can be observed to ascertain a student"s mood and attitude are his facial features and body language [9]. The prime motive behind this study was to devise a methodology to identify major mood patterns with high probability in an e-learning environment.Data continuously pile up when visual data is recorded in real-time.The sample space becomes large and takes more computational power.To address this specific issue, the secondary goal of this research was to integrate a sequential mining technique which can identify mood patterns with high probability.Rules were extracted using Apriori algorithm to reduce the mood sample space by tagging frequent facial feature patterns into predefined five mood categories. In the subsequent sections, literature is reviewed followed by a discussion of research methodology for applying the proposed technique, and in the end the results are presented with concluding remarks. II. 
LITERATURE REVIEW E-learning presents a lot of learning opportunities for people unable to attend regular schools, colleges or universities.Given the importance of E-learning in this information age, a lot of research has been carried out to improve the performance and adaptability of e-learning.This section will present past, present and prospective studies undertaken for the purpose of improving the e-learning ecosystem. Online teaching and e-learning methodologies have transcended to new levels after the boom of information technology age.As a result, the quality of education and number of online learners has increased substantially.Still, the modernized way of e-learning creates problem that affects a student"s learning curve due to unavailability of any direct supervision [10]. An instructor can provide some insight into student"s satisfaction during lectures [11], therefore student"s involvement in class has direct correlation with the professional aptitude of the instructor [9].Direct supervision not only facilitates learning but also keeps the student synchronized with the course objectives due to instant communication with the instructor at any time during the lecture.Lack of communication has shown that affected students may experience high levels of frustration [11]. As supervised teaching is very critical to the learning curves of the students, online courses present a different set of challenges to instructors and students.Online students may never visit a physical campus location and may have difficulty establishing relationships with faculty and fellow students.Researchers who study distance learners must understand and account for these differences when investigating student satisfaction [12],mentioned three important types of interaction in online learning courses: (a) learner-content,(b) learnerinstructor, and (c) learner-learner.He emphasized that instructors should facilitate all types of interactions prompting attentiveness in their online courses as much as possible.E-learning requires use of video, audio, text to simulate the traditional class and learning environment as closely as possible.E-learning environments may be used for a numerous educational purposes.Modern trends indicate that e-learning based education will come at par with traditional education methods in the near future.In an e-learning environment, teacher and student are not in direct interaction and content is provided by the instructor thorough online platforms using multimedia and software interfaces. As there is no means of instant communication, machine can only understand what it records using standard man machine interfaces.As there is no verbal communication between the student and the e-learning platform, facial expressions are the only means that can provide concrete information about a student"s mood and involvement during the class [13].For example, when students show confused expressions, one of the common mood patterns may be one or a combination of the following facial features i.e. eyebrows www.ijacsa.thesai.org lowered or drawn together, vertical or horizontal wrinkles on the forehead, and inconsistent eye contact etc.In order to understand whether the student is grasping what is being delivered, a lecturer must sense the subtle nonverbal indicators exhibited by the expressions of the students [14]. 
Facial features and there relevance to emotions has been rigorously investigated by Ekman et.al [26][27] [28][29]in various publications and their work is regarded as one of the most significant contribution to facial attributes based emotion analysis.Facial acting coding system can provide information about instantaneous facial emotional reactions, but still the need to ascertain a complete mood based on various action units as they vary from person to person and situation to situation. Facial features (Forehead, eyes, nose, mouth, etc.) are the fundamental attributes that are extensively used in face recognition systems as their movements help determine the construction of expression on a human face [15]. Facial recognition can be efficiently used to identify and categorize facial expressions in real-time.Machine learning algorithms have also been employed for facial recognition to enhance accuracy and detection time [16].Facial expressions are basically emotional impulses translated into physical muscle movements such as, wrinkling the forehead, raising eyebrows or curling of lips.Authors in [17] presented the beneficial prospects of using intelligent methods to extract facial expressions to improve the processing speed of image analysis.Database of facial expressions have been populated in various studies to develop interesting algorithms for various applications. Emotion recognition study can be broadly categorized into three steps: Face detection, Facial feature extraction and Emotion classification.Detailed research has been carried out in each of these.These three categories are concerned with the central background pertaining to the issue of facial emotion recognition. In an image, detecting the presence of a human face is a complex task due to the possible differences attributed to different faces.The varying physical attributes of a face are the major cause for this variation.The emotions which are the combination of facial action units [31] in a human face also affect facial appearances. Neural networks can be actively used to classify a learner"s orientation in predetermined categories, which can be associated using Apriori algorithm to allow for real-time HMI intervention for improved involvement.The aim was to assess in real-time whether the e-learning systems can be improvised to recognize the facial expressions and attention state of a learner using classification and data association algorithms.These systems can then be used to improve content delivery of e-learning platforms through real-time mood extraction.Appropriate learning materials and activities for a learner can be incorporated to alter his mood state during e-learning activity. Appearance-based approach maps the human face in terms of a pixel intensities.Since only face patterns are used in its training process, the efficiency is not good.Even the time taken is lengthy, as the number of patterns which needs to be tested is large. 
A neural network was found to be quite effective in capturing complex facial patterns from facial images.Both supervised and unsupervised learning approaches are used to train the neural network.Since finding a sufficient training data set is questionable, unsupervised neural networks are more preferable.Apart from neural networks, Support Vector Machines (SVM) [37], eigenfaces, Distribution based approaches, Nave Bayes classifiers, Hidden Markov Models (HMM) [38] and Information theoretical approaches can also be used for face detection in the appearance-based approach [33][34] [35][36].Rather than minimizing the training error as in neural networks, SVM operate by minimizing the upper bound on the generalization error limit instead of minimizing training error as in neural networks.Eigen faces uses Eigen space decomposition and has proven an accurate visual learning method.Nave Bayes classifier is more efficient in estimating the conditional density functions in facial subregions.The HMM differs from template-based and appearance-based approaches as it does not require exact alignment used in these approached rather HMM constitutes a face pattern as a series of observation vectors. A student involvement in e-learning is directly based on how he can be engaged to focus and listen to the content being delivered.Facial expressions over short instants can be misleading and a time frame based analysis to ascertain emotional states can provide interesting results.For example, confusion and frustration was studied using temporal and order based patterns using continuous affect data [30].A similar study was carried out by Craig et al [32] which also included boredom.Authors reported that confusion is affiliated with indirect tutor dialogue moves and negative tutor feedback.Similarly, frustration was found to be affiliated with negative tutor feedback, and boredom did not appear to be detectable from the set of three dialogue features [32].Timing of an emotional state can also play an important role in automated tutoring as reported by authors in [31].This study investigated the relationship between affect and learning.However, identifying the exact places where emotion occurred during the learning process was not covered limiting the efficacy of Auto Tutor system. III. RESEARCH OBJECTIVES A coherent information exchange between learner and machine is imperative for effective E-learning and is based on the learning curve of the student.Research objectives in this study were formulated to develop a practical technique for understanding student interest during the E-learning session.A student interest can thus be enhanced vide engagement techniques.The research objectives for achieving this objective are listed as follows: www.ijacsa.thesai.orgFirst objective was to investigate whether facial expressions are the most pertinent means of nonverbal expression mode during e-learning and can in turn assist the e-learning system to identify the interest and comprehension level of the students. Second objective was to list most common facial features that describe the involvement of a student in a lecture.A list of 54 features were compiled and used in a survey to identify most pertinent facial features for describing student"s expression. The third objective was to develop a methodology to relate facial features to understand expressions of a student during various emotional states describing his involvement in the lecture with high probability in real-time. 
Next section consists of the methodology pursued to identify important facial features and will present details on how facial features recorded over certain time frames can provide sufficient information regarding moods of a student in real-time with reduce computational power and sample space. IV. RESEARCH METHODOLOGY Research methodology pursued in this research was conducted phase wise.First, a survey was carried out among instructors involved in e-learning to inquire and identify most probable facial features that represent the facial expressions, and over certain time, the mood patterns of a student.A neural network approach is then used to train the system using facial feature sets to predict specific facial expressions.Data association based algorithm was selected in proposed approach to extract information on emotional states by correlating multiple sets of facial features using support and confidence levels.This was done to improve the clustering of the relevant datasets.The methodology was designed to analyze a student"s interest by varying the content being delivered.Different combinations of inter-related facial expressions for specific time frames were used to estimate mood patterns and subsequently level of involvement of a student in an e-learning environment. A trained dataset of facial features representing student"s emotional state is the primary requirement to assess a student"s involvement in an e-learning environment.The data has to be collected first, correlated with emotional indicators and is then reused as training data to extract different expressions describing facial features.The data association algorithm is applied to relate features into expression over a time period to discover negative mood patterns of a student during online lecture.The methodology presented here in order to pursue above objectives consists of three major phases, which are as follows: A. Categorization of Facial Features Using a Survey Instrument: Before embarking on a detailed investigation into efficacy of observing facial features to asses a student"s interest during an online lecture.A survey is conducted to evaluate whether facial feature analysis is the most pertinent means of understanding a student"s behavior during e-learning.Secondly, the survey recorded observations from academics regarding which facial features partially describe mood or emotional state of a student. In order to construct a baseline for the facial features, a survey was conducted in 2014/2015 academic year and 198 academics from various universities were approached for their response.Experts in online instruction were approached with minimal 2 years of teaching experience at postsecondary level. Survey was forwarded with a brief explanation of the research objectives with two instrument questions which were,  Will the process of measuring the learners" degree of engagement/involvement during studying online courses help in learner to focus more and as a result improve the learning outcomes?Which facial expressions you think are most obvious and recurrent in lectures?  List the most pertinent facial features for eyes, eyebrows, lips and head that constitute facial expressions of student, representing his state of mind and involvement during a lecture. B. 
Training and Classification of Facial Features Dataset using Neural Networks: Classification algorithms make use of supervised learning techniques to predict the class of previously unobserved data by using a training model built from existing data [18]. An efficient way to define a classification model is to characterize it as a set of comprehensive classification rules that provide relevancy and accuracy simultaneously. The extensive "Cohn-Kanade" dataset was selected for the training of our neural network classification model. The Cohn-Kanade AU-Coded Facial Expression Database [19] is available online for research purposes and is used in facial image analysis and for perceptual studies. This database consists of 486 sequences from 97 faces. Each sequence starts with a neutral expression and gradually leads to the peak expression. Feature collection can be carried out using feature extraction algorithms presented in the literature. Recognition and interpretation of the mood or learning attitude of a student is carried out by analyzing facial features during the lecture. Neural networks were used to train our system on the "Cohn-Kanade" facial expression database, and the same system was used to identify a student's involvement state during an online lecture. Facial features were characterized into four main categories based on the responses to the survey: eyes, eyebrows, lips, and head (incl. hand/fingers on face). A radial basis function NN algorithm [20] was used in this study to classify facial expressions based on facial features (Table 1). Figure 1 shows the distance points for eyes, eyebrows and lips that were used to define facial features for the training of the NN algorithm. Any change in the distance metrics points to a certain facial feature instance, and combinations of these facial features can be used to classify the five facial expressions. As distance provides certain thresholds for making decisions related to facial expressions, these thresholds can be used to classify unknown patterns [25]. Figure 2 shows the higher-level decision model of the NN used in our proposed model for the classification of facial features. In this figure, the pixels and combinations of two distance attributes are shown, which provide information on eyebrow position relative to the eye. The matrix shown here can be used to locate a feature when D1 and D2 are plotted on a 2-D plot. The proposed classification model is already trained to identify the region to which the acquired D1 and D2 values belong, and it correlates the D1 and D2 values to a certain feature, i.e., in this case Feature "21", based on the prior training data. It is therefore necessary to train the proposed model with a larger database to improve the probability of detecting a certain feature. 
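To make the distance-based decision model concrete, the sketch below computes two distance attributes from facial landmark coordinates and assigns them to a feature class with a tiny radial-basis-function classifier. The landmark coordinates, prototype centres, feature labels, and the width parameter are hypothetical placeholders, not the trained model or the Cohn-Kanade data.

```python
import numpy as np

def distances(landmarks):
    """D1: eyebrow-to-upper-eyelid distance; D2: eye opening, both in pixels."""
    d1 = np.linalg.norm(landmarks["brow"] - landmarks["upper_eyelid"])
    d2 = np.linalg.norm(landmarks["upper_eyelid"] - landmarks["lower_eyelid"])
    return np.array([d1, d2])

class RBFClassifier:
    def __init__(self, centers, labels, sigma=3.0):
        self.centers = np.asarray(centers, dtype=float)
        self.labels, self.sigma = labels, sigma

    def predict(self, x):
        # Gaussian activation of each prototype; the closest prototype wins.
        act = np.exp(-np.sum((self.centers - x) ** 2, axis=1) / (2 * self.sigma ** 2))
        return self.labels[int(np.argmax(act))]

# Prototype (D1, D2) pairs for a few facial-feature classes (hypothetical values).
clf = RBFClassifier(centers=[[18, 9], [26, 9], [18, 4]],
                    labels=["brows neutral", "brows raised", "eyes narrowed"])

frame = {"brow": np.array([120.0, 80.0]),
         "upper_eyelid": np.array([120.0, 104.0]),
         "lower_eyelid": np.array([120.0, 112.0])}
print(clf.predict(distances(frame)))   # -> "brows raised"
```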
C. Mood Extraction Using Facial Features
Associative data mining is considered an important data mining technique and has been extensively researched and used by researchers. Data association helps in mining association-based rules between items from item-set transactions and is regarded as an important tool for rule discovery in very large datasets [21]. Data association can provide an estimate of unknown relationships and decision rules in a dataset, which can greatly improve the process of decision making and prediction [22]. A student's mindset can be well communicated through his facial expressions during an online lecture. A change in a student's mood can be observed using the following instruments: facial expressions, hands, and body language. These instruments can be observed individually or in combination; in both cases, data association patterns can be extracted to get a better understanding of the behavior of the student during online learning. This data association approach is very efficient and provides accurate results in instances where one category alone is not sufficient for an accurate understanding of the state of mind of a student [24]. A combination of facial feature categories with a high probability of occurrence reduces the decision space many-fold. The facial expressions recorded for each student over every online course can amount to a very large dataset. Therefore, a well-established algorithm is presented here to extract moods from a large set of facial features. The algorithm is modified to be used for identifying the moods of students and subsequently making accurate decisions about their interest level during the delivery of an online course. Mood extraction using data association is carried out in two stages [24] [25].
Fig. 2. A distance-based radial basis function decision approach for facial features.
Approximating categories or item-sets that occur frequently, and using data association to extract rules based on the relationships between these items, is the first step of classification. In this stage, items are evaluated to segregate sets of items that occur frequently and have a ratio of occurrence greater than the minimum support threshold [23]. In the second phase, all possible rules are extracted from the item set, and the number of rules depends upon all possible combinations of the items in a given item set; e.g., if an item set is of the form {a1, a2, a3}, then the rules that can be extracted are {a1→a2, a3}, {a2→a1, a3}, {a3→a1, a2}, {a1, a2→a3}, {a1, a3→a2}, {a2, a3→a1}, etc. A rule {X→Y}, where X and Y are sets of facial features, can be verified using confidence and support threshold levels. Support and confidence thresholds are used as constraints for rule extraction and provide a measure for pruning the rules that do not meet the threshold criterion. In a nutshell, associative data mining is used for mood extraction by employing user-specified support and confidence levels for related facial features, and this approach can be used as a gauge for assessing the extent of correlation between facial features in a dataset. 
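As a small illustration of the rule-enumeration step, the sketch below generates all candidate rules {X→Y} from one frequent feature-set and keeps those meeting a confidence threshold. The facial-feature names, the toy transactions, and the 0.4 threshold are hypothetical placeholders.

```python
from itertools import combinations

# Enumerate candidate rules {X -> Y} from a frequent feature-set and keep those that
# meet the confidence threshold. Transactions are hypothetical per-window feature sets.
transactions = [
    {"brows_lowered", "lips_tight", "gaze_down"},
    {"brows_lowered", "lips_tight"},
    {"brows_lowered", "gaze_down"},
    {"lips_tight", "gaze_down"},
]

def support(itemset):
    return sum(itemset <= t for t in transactions) / len(transactions)

def rules_from(itemset, min_conf=0.4):
    items = set(itemset)
    for r in range(1, len(items)):
        for antecedent in map(set, combinations(items, r)):
            consequent = items - antecedent
            conf = support(items) / support(antecedent)
            if conf >= min_conf:
                yield antecedent, consequent, conf

for x, y, c in rules_from({"brows_lowered", "lips_tight", "gaze_down"}):
    print(f"{sorted(x)} -> {sorted(y)}  (confidence {c:.2f})")
```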
The Apriori algorithm [24] is an efficient technique for data association which can be used to generate frequent feature sets from a database of facial features. The algorithm [25] iteratively estimates the most frequent items based on support and confidence metrics. Other metrics may also be used, but these are the standard metrics for assessing the frequency of an item set and the objectiveness of a student's mood during online learning. The support level is used to estimate how often a relationship is established between various facial features in a dataset, while the confidence level provides a measure of the frequency of facial feature "B" among the observed features that also contain feature "A" during time "t". The time period for observing a student's involvement and attentiveness depends upon the content being delivered and the significant parts of the content that require the students' complete attention for understandability. Support determines how frequently a distance attribute appearing in SET "A" also appears in SET "B" for a given number of samples, whereas confidence determines how frequently distance attributes from SET "B" occur in the samples that already contain the distance attributes from SET "A". The abovementioned support and confidence metrics can be mathematically represented as
Support: s(A→B) = σ(A ∪ B) / N
Confidence: c(A→B) = σ(A ∪ B) / σ(A)
where σ(·) denotes the number of observed samples (transactions) containing the given feature set and N is the total number of samples. The following steps are proposed for a robust mood extraction architecture to evaluate the interest and attention of a student towards educational content being delivered online. These steps form the basis of the proposed algorithm for mood extraction using facial features.
Frequent Feature-set Generation: Considering N transactions, all frequent feature-sets are estimated based on support levels. This is an iterative process to identify and generate candidate feature-sets, and it involves two phases. In the first phase, each feature-set is checked, starting from single facial features up to the maximum-size feature-set. In the second phase, new feature-sets are estimated from the previous iteration and their support is tested against the support threshold. The number of iterations in this step depends upon the maximum size of the item set, i.e., the total number of iterations is (kmax + 1), where kmax is the largest size of a frequently occurring feature-set.
Candidate Generation and Pruning: In this step, new candidate feature-sets are generated based on the (k−1)-size feature-sets found in the previous iteration, followed by pruning using support levels.
Support Counting: In this step, the occurrence frequency of the candidate feature-sets remaining after pruning is determined and the support levels are updated.
Mood Extraction: A level-wise approach is used to discover rules based on data association between consequent and antecedent facial features in the frequent feature-sets. At first, all the rules with a single consequent are selected to generate new candidate rules; the selection of these rules is based on their respective confidence levels. The rules generated by the Apriori algorithm can be large in number, depending upon the database being searched. 
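A minimal level-wise sketch of the frequent feature-set generation and support counting steps is given below. Each transaction is the set of facial features observed in one time window; the feature names and the minimum support of 0.5 are hypothetical placeholders, and rule extraction (the mood-extraction step) would then follow as in the previous sketch.

```python
from itertools import combinations  # noqa: F401  (kept for extending to rule extraction)

# Level-wise (Apriori-style) generation of frequent facial-feature sets.
def frequent_feature_sets(transactions, min_support=0.5):
    n = len(transactions)
    support = lambda s: sum(s <= t for t in transactions) / n

    items = sorted({f for t in transactions for f in t})
    level = [frozenset([f]) for f in items if support(frozenset([f])) >= min_support]
    frequent = {s: support(s) for s in level}

    k = 2
    while level:
        # Candidate generation: join frequent (k-1)-sets, then prune by support.
        candidates = {a | b for a in level for b in level if len(a | b) == k}
        level = [c for c in candidates if support(c) >= min_support]
        frequent.update({c: support(c) for c in level})
        k += 1
    return frequent

windows = [
    {"brows_lowered", "gaze_down", "lips_tight"},
    {"brows_lowered", "gaze_down"},
    {"gaze_down", "lips_tight"},
    {"brows_lowered", "gaze_down", "head_tilt"},
]
for fset, sup in sorted(frequent_feature_sets(windows).items(), key=lambda kv: -kv[1]):
    print(sorted(fset), f"support={sup:.2f}")
```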
As a test case, 30 students from a mathematics class were observed during a one-hour session of e-learning and expressions were extracted using the Apriori method explained above. The lecture session was divided into 10-minute sub-sessions, where each sub-session addressed a particular mathematics problem. Students with an average age of 15 years were selected for the study. All students were from grade 10 in a private school. Students were selected based on their academic performance and sufficient exposure to e-learning environments. No prerequisite information was provided to them regarding the nature of this exercise. A 35 mm digital camera was used with a 10 fps frame rate to record facial features. Using association between facial features, facial expressions were sought out to extract the mood of a student for each of the six 10-minute time frames during learning. A radial basis function based NN algorithm [20] was employed to classify moods based on facial features. The Apriori algorithm is subsequently used to create frequent feature sets, or mood sets, from which the most pertinent rules can be extracted to declare a mood pattern valid. Written feedback was acquired from every student after each 10-minute session, comprising the following two questions: 1) Mention in which parts of the lecture you were happy (and likewise for the other target moods). 2) Which parts of the 10-minute frame did you not understand or were inattentive to (1 to 10)?

Based on this topology, all three phases were executed sequentially, and results for the study are presented in the next section. In the survey, 102 out of 200 participants provided their feedback, based on which the results of the survey were formulated and used to categorize five major expressions and 23 facial features. These are listed in Table 2 and Table 3. An astounding 88 percent of the respondents agreed that facial expressions do reveal the involvement of a student in the class and can be used to assess a student's response to the content being delivered. This provided a strong basis for our subsequent analysis, which was carried out to classify facial features using neural networks.

Table 4 shows the comparative performance of the radial basis neural network model [20], the hidden Markov model (HMM) [38] and the support vector machine (SVM) [37] model tested in this study. All these algorithms were implemented in Matlab and integrated with the LabView vision module to process and classify image data. All the models were trained using facial feature distance attributes from the Cohn-Kanade database and also from a custom database which was populated using facial expressions of a sample space of 30 students. The NN model [20] used in this study outperformed HMM and SVM for the Cohn-Kanade database and proved to be the most reliable given a large sample space. Based on different distance sets, feature sets were populated and tested for reliability using the Cohn-Kanade dataset. Expressions were readily classified with high accuracy when test images were selected from the same database used for training our NN model. However, accuracy dropped, but to an acceptable level, when the custom image set of 30 students was used as a test data set. This problem can be circumvented by using a custom template for images and iterating the neural network training on new datasets. A similar trend was observed for the SVM and HMM classification carried out on the Cohn-Kanade dataset. For the custom dataset, SVM and HMM results showed random classification rates, which can be attributed to the small sample space of facial features.
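The classification step above relies on a radial basis function network applied to distance attributes. The sketch below is only an illustration of that idea, not a reproduction of the model of [20]: the Gaussian kernel width, the use of training samples as centres and the least-squares output layer are assumptions made here, and the data are random placeholders.

```python
import numpy as np

MOODS = ["happy", "sad", "confused", "disturbed", "surprised"]

def rbf_features(X, centres, gamma=1.0):
    """Gaussian RBF activations of samples X (n,d) with respect to centres (m,d)."""
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-gamma * d2)

def train_rbf(X_train, y_train, gamma=1.0):
    """Fit a linear output layer on RBF activations by least squares."""
    Phi = rbf_features(X_train, X_train, gamma)
    T = np.eye(len(MOODS))[y_train]            # one-hot targets, one column per mood
    W, *_ = np.linalg.lstsq(Phi, T, rcond=None)
    return X_train, W

def predict(model, X, gamma=1.0):
    centres, W = model
    scores = rbf_features(X, centres, gamma) @ W
    return scores.argmax(axis=1)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical distance attributes (e.g. brow-eye, lip-corner distances), 6 per face.
    X = rng.normal(size=(150, 6))
    y = rng.integers(0, len(MOODS), size=150)   # placeholder mood labels
    model = train_rbf(X[:120], y[:120])
    acc = (predict(model, X[120:]) == y[120:]).mean()
    print(f"hold-out accuracy on random placeholder data: {acc:.2f}")
```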
The response of the students was recorded by asking them to provide a score out of 10 for their attentiveness during each 10-minute session and, based on their feedback, attentiveness was correlated with the extracted facial expression sequences, or simply mood patterns. Mood extraction was carried out during every 10-minute session of the one-hour mathematics lecture for all 30 students. The total extracted mood patterns using Apriori and the correct patterns based on the correlation results are shown in Table 5. The results showed that the mood patterns extracted had a high correlation with the feedback provided by the students. In all the cases, our proposed algorithm showed a success rate of over 70% in assessing the student's mood. This showed that a student's expression over a 10-minute timeframe can be used to predict and extract a student's mood, which in turn can be used to assess the student's attentiveness in the class. The results showed that the proposed approach was robust due to the integration of neural network based classification and the Apriori algorithm for mood extraction. The difference in success rates for each mood can be related to the basic test settings, an incomplete database and simpler NN training settings.

The proposed classification and mood extraction methods do not attempt to address a complete theory of emotions in the context of e-learning; rather, the intention is to devise a methodology for identifying any persistent mood that is affecting a student's attention in an e-learning environment, which is an important consideration, as highlighted in [31][32], for boredom, confusion and frustration states. The results provided in this research show that the proposed technique is promising in assessing, in an active e-learning environment, the five moods that were selected using a survey. The success percentage for assessing each emotional state is above 70%. In future work, more emotional states can be tested; based on the results from this study, a similar success ratio is expected, given that an extensive facial feature database is used.

V. CONCLUSION

The art of understanding how different students comprehend educational content during an online study session requires detailed investigation of the behavior and emotional state of the student throughout the lecture [33][34]. This research was carried out to determine possible ways to observe and analyze the behavior of a student with an aim to understand the events triggering his emotional detachment during an online class.
Visual data acquired using high-definition cameras contains a lot of information when stored over a long period of time, and it needs to be continuously recorded, thus accumulating into very large data. Data mining approaches can help in mining patterns from such large data. Important rules based on correlation characteristics of classification attributes, such as distance, can be acquired to characterize the mood swings and changes that affect the learning curves of a student in an e-learning environment. The results can help in better understanding the complete eco-system of an e-learning environment, where active learner-machine interaction and a raised level of student engagement are of prime concern. The delivery of e-learning content as well as attitude discrepancies in a student can then be adequately addressed to enhance the student's involvement and attentiveness during e-learning. This can be done during or after the e-learning session, based on the preference of the student and/or the e-learning administrator. The main contribution of this research is the integrated approach combining neural network facial recognition and Apriori-based mood extraction, which showed a probability of over 70% for detecting 5 emotional states or moods.

Facial expressions describe the emotional state of the learner, and analysis of content and delivery methods can be carried out to achieve an optimal experience in an e-learning environment [35]. However, it is difficult to devise universal standard content delivery systems for every learner; therefore, specific testing sessions may be incorporated in an e-learning system to allow customization as per a student's learning curve.

The results assimilated using a survey response from various academics showed that facial features are the best method to observe changes in the mood of a student, and relevant causes can be extracted by relating the timeline of content delivery with the student's successive changes in facial expressions. The problem addressed in this study is limited to determining how a mood can be extracted by associating feature sets comprised of various facial expressions. The cause of alterations in mood and mental state is altogether another problem and is not discussed in this research.

The mental state of a student can be observed using his facial expressions, as facial features tend to change and provide the best depiction of what a student has in mind [36]. As the involvement of a student during unsupervised learning is critical in improving his learning potential, it is pertinent to learn the problems faced by students in an e-learning environment.

Finally, the main contribution of this research lies in the results, which showed that facial expressions extracted using neural networks, together with the reduction of the sample space using the Apriori algorithm, can be actively used to derive a student's emotional state during content delivery in an e-learning system. The proposed integrated approach showed a high probability of positive mood detection (>70%) for five moods: happy, sad, confused, disturbed, and surprised. For future work, the resultant data can be used to optimize e-learning content delivery to engage the learner more actively in real time when a mood leading to inattentiveness is detected.

Fig. 1. Distance attributes helping in measuring facial feature expression. This database provides a solid foundation for our trained NN model, speeding up the face recognition process.
The following facial expressions were targeted for training the proposed model: a) Happy, b) Sad, c) Confused, d) Disturbed, e) Surprised.

Fig. 4. Classification accuracy on the custom database.
Table 1 lists the 10-minute divisions of a one-hour lecture.
Table II. Facial expressions defining the mood of a student in a classroom.
Table IV. Classification accuracy in custom and existing databases.
Table V. Valid mood patterns extracted during the 6 x 10-minute time frames in a one-hour session for 30 students.
8,492.4
2016-01-01T00:00:00.000
[ "Computer Science" ]
Fast spatial behavior in higher order in time equations and systems In this work, we consider the spatial decay for high-order parabolic (and combined with a hyperbolic) equation in a semi-infinite cylinder. We prove a Phragmén-Lindelöf alternative function and, by means of some appropriate inequalities, we show that the decay is of the type of the square of the distance to the bounded end face of the cylinder. The thermoelastic case is also considered when the heat conduction is modeled using a high-order parabolic equation. Though the arguments are similar to others usually applied, we obtain new relevant results by selecting appropriate functions never considered before. Introduction Parabolic high-order (in time) equations arise in the study of viscoelasticity, fluid mechanics or heat conduction. We can cite the work of Lebedev and Gladwell [16], where the authors propose high order in time viscoelastic solids. We can also consider the generalized Burgers fluids [33], which correspond to a parabolic third order in time equation (anti-plane shear). Moreover, we can recall the recent theories concerning dual-phase-lag [34] and three-phase lag [4] for the heat conduction. In short, we can say that parabolic high-order equations model a big quantity of thermomechanical problems. The knowledge of the spatial behavior of the solutions for equations and systems is an important topic in mechanics and mathematics. From a mechanical point of view, it is related to the Saint-Venant principle and, from a mathematical point of view, with the Phragmén-Lindelöf principle. Mathematical studies about the spatial behavior have been proposed for elliptic, hyperbolic and parabolic equations [2,[5][6][7][8][11][12][13][14]19,20,[23][24][25]. The list of contributions in this theory is huge, but we want to focus our attention to the parabolic case. Perhaps, the first contribution in this line was done by Knowles [15], where the exponential decay for the solutions was obtained. However, it is worth recalling the work of Horgan et al [9] and extended by Horgan and Quintanilla [10] for functionally graded materials. These contributions provide spatial decay estimates of the kind of the exponential of the "square" of the distance to the boundary where the perturbations hold. They represent an improvement in the sense that the spatial decay for the transient classical heat equation is faster than the spatial decay for the static heat equation. Later, some extensions to these contributions were proposed [28,29]. Furthermore, the combination with the elastic equation has been also considered [17,31]. However, the first contribution concerning the spatial behavior for high order (n-order) of a partial differential equation was given in [32]. In this last contribution, the parabolic (and hyperbolic) transient problem was studied with the help of a weighted Poincaré inequality. In the parabolic case, an exponential decay (linear in the distance to the bounded boundary) was obtained. In this paper, we want to improve this last result. We are going to obtain a Phragmén-Lindelöf alternative for a function defined on the cross-section and we will prove that the decay is of the type obtained in [9,10]. We also study the thermoelastic problem when the heat conduction is determined by a high-order parabolic equation. It is worth recalling that in a recent paper [30] the author showed that the decay would be faster than any exponential of a linear expression of the distance. 
Here, we give a new precise decay estimate improving the ones presented previously. Although the arguments proposed have been considered in many other contributions by different authors, in this work we introduce new functionals which allow us to improve the knowledge of the decay. In the next section, we propose the parabolic high-order problem that we will study later. To this end, we need to recall several inequalities, which are summarized in the third section. In the fourth section, we obtain a Phragmén-Lindelöf alternative for a cross-sectional measure. In the fifth section, we prove a faster decay estimate. In the sixth section, we consider the thermoelastic problem and we prove a decay estimate of the type of the exponential of a second-order polynomial. Finally, we give some examples where the results obtained can be applied.

The problem

In this paper, we study the spatial behavior of the solutions of the problem determined by the higher-order equation (2.1), posed in a semi-infinite cylinder (or strip) B = [0, ∞) × D, with the boundary conditions (2.2) and the initial conditions (2.3). We refer the reader to Sect. 7 for some specific examples of these higher-order equations. As usual, a compatibility condition between the boundary and initial data must be imposed. In this paper, we assume that a_{n+1} > 0 and b_n > 0. Of course, the case a_{n+1} < 0 and b_n < 0 can be considered in a similar way. We note that the existence of the solutions to problem (2.1)-(2.3), as well as their regularity, can be obtained in view of the results in [21], once we combine these ideas with the ones presented in the appendix of [22].

Some useful inequalities

To obtain our results it will be useful to recall (and to deduce) several inequalities. First, we recall the weighted Poincaré inequality (3.1), which holds for ω > 0 and functions satisfying f(0) = 0. From this inequality we can deduce several further inequalities which will be useful in our approach. In view of inequality (3.1), the systematic use of the Hölder and arithmetic-geometric mean inequalities allows us to obtain estimate (3.2), where C_1 is a positive calculable constant. The next inequality we consider holds whenever ω is large enough and can be obtained in a similar way. We will also need further inequalities of the same kind, valid when ω is again large enough.

Phragmén-Lindelöf alternative

In this section, we obtain a Phragmén-Lindelöf alternative for the solutions to our problem (2.1)-(2.3) for a measure defined on the cross-section of the cylinder. We first define the cross-sectional function G(z, t). A direct computation shows that it involves, among other terms, b_n u^(n) (a_1 u' + ··· + a_n u^(n)), where primes and the superscript (n) denote time derivatives. Therefore, in view of the inequalities provided in the previous section, it follows, whenever ω is large enough, that estimate (4.2) holds. This leads to inequality (4.3), where λ² is the known Poincaré constant for the cross-section D. Inequality (4.3) has been previously studied in the context of spatial estimates (see [18]). From here we can obtain that either a growth estimate holds for every z ≥ z_0 whenever G(z_0, t) > 0, or the exponential decay is satisfied. So we can deduce the following alternative.

Fast decay

In this section, we prove a decay estimate for the solutions to problem (2.1)-(2.3) of the type of the exponential of the square of the distance to the part of the boundary where the perturbations are imposed. From (3.2) and (4.2) we also find estimate (5.1), where C is a computable positive constant depending on the constitutive coefficients and on ω. Inequality (5.1) is well known (see Equation (3.16) in [10]).
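For completeness, the weighted Poincaré-type inequality recalled in Section 3 above admits the following standard exponentially weighted form, obtained by integration by parts and the Cauchy-Schwarz inequality. The constant 4/ω² is the one produced by this elementary argument and is stated here as an assumption about the form intended; the authors' precise constant may differ.

```latex
% Sketch: exponentially weighted Poincare-type inequality for f with f(0)=0
% and sufficient decay of e^{-\omega s} f^2(s) as s -> infinity.
\int_0^{\infty} e^{-\omega s} f^2\,ds
  = \Big[-\tfrac{1}{\omega}e^{-\omega s} f^2\Big]_0^{\infty}
    + \frac{2}{\omega}\int_0^{\infty} e^{-\omega s} f\,\dot f\,ds
  \le \frac{2}{\omega}\Big(\int_0^{\infty} e^{-\omega s} f^2\,ds\Big)^{1/2}
      \Big(\int_0^{\infty} e^{-\omega s} \dot f^{\,2}\,ds\Big)^{1/2},
\qquad\Longrightarrow\qquad
\int_0^{\infty} e^{-\omega s} f^2\,ds
  \;\le\; \frac{4}{\omega^{2}}\int_0^{\infty} e^{-\omega s} \dot f^{\,2}\,ds .
```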
If we denote P(z, t) = G(z, t)^{1/2}, we can write a differential inequality for P. We also note that P(z, 0) = 0 for z ≥ 0 and that the boundary value at z = 0 is controlled by a function g(t) with g(0) = 0. Let P(z, t) = exp(-λ²Ct) Φ(z, t). An upper bound for Φ(z, t) then follows from the maximum principle, by comparison with the solution to the problem η_t = C η_zz with the initial condition η(z, 0) = 0 for z ≥ 0 and the corresponding boundary condition at z = 0. We know that P(z, t) ≤ exp(-λ²Ct) η(z, t). The function η(z, t) is well known (see Carslaw and Jaeger [3, p. 64]) and is expressed in terms of the complementary error function, so a decay estimate for P follows. We remark that we can choose ω large enough to guarantee that the decay is "almost" of the type exp(-a_{n+1} z² / (8 b_n t)).

Thermoelastic system

In this section, we extend the estimates obtained in the previous section to the thermoelastic case; that is, we prove a fast decay of the decaying solutions. Thus, we consider the thermoelastic system with its boundary conditions. It is worth noting that here u_i is the displacement vector, θ is the temperature, λ and μ are the Lamé constants, ρ is the mass density, c is the heat capacity and β is the coupling coefficient. To make the calculations easier we impose a further simplifying assumption, and we also assume that a_{n+1} > 0 and b_n > 0. In order to study the problem it is worth rewriting the displacement equation in a convenient form, and we can then define a cross-sectional function analogous to the one used before. In this section, we want to obtain a new spatial decay estimate; therefore, we restrict attention to decaying solutions. We obtain expressions containing terms of the form û_{i,j}(a_1 u'_{i,j} + a_2 u''_{i,j} + ··· + a_n u^(n)_{i,j}); in a similar way, we also have terms of the form û_{i,i}(a_1 u'_{j,j} + ··· + a_n u^(n)_{j,j}), together with c b_n θ^(n)(a_1 θ' + ··· + a_n θ^(n)) - c(b_0 θ' + ··· + b_{n-1} θ^(n)) θ'. By choosing ω large enough we can control the sign of these contributions. We consider now a second cross-sectional function, and we obtain the corresponding estimate. From here, the argument is again standard (see, for instance, [26,27]). We can obtain the existence of two positive constants β_1 and R such that the relevant differential inequality holds. An argument similar to the one proposed in the previous section gives a bound in terms of the function N(z, t) = erfc(z / (4Rt)^{1/2}). Therefore, we conclude a decay estimate whose amplitude A(t) is determined by the values G_1(0, s) for 0 ≤ s ≤ t. We remark that we can obtain upper bounds for this function A(t) in terms of the boundary conditions following the arguments already used in [17,27]. It is clear that these estimates imply that the decay at infinity is of the type of exp(-z² / (4Rt)), which we summarize as follows. We note that, for ω large enough, we can choose R as near as we want to the value 2 b_n / (c a_{n+1}). Therefore, asymptotically, the rate of decay that we have obtained for the function G_1 approaches exp(-c a_{n+1} z² / (8 b_n t)).

A few examples

In this section we give several elementary examples where the results obtained in this paper can be applied.

Parabolic equation

We give here several examples of parabolic equations of higher order. The first example corresponds to the linearized form of the generalized Burgers' fluid. From [33] we know that the system determining the evolution of this fluid is a third-order (in time) parabolic equation in which ρ, λ_1, λ_2, η_1, η_2 and η_3 are positive constants.
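The comparison argument used in Sections 5 and 6 rests on the explicit solution of the one-dimensional heat equation with step boundary data and on an elementary bound for the complementary error function. The following lines sketch that step; the constant boundary datum η_0 is an assumption introduced here for illustration, since the actual boundary condition involves the data of the problem.

```latex
% Comparison solution for \eta_t = C\,\eta_{zz}, \eta(z,0)=0 for z >= 0,
% with the illustrative boundary datum \eta(0,t)=\eta_0 (Carslaw & Jaeger):
\eta(z,t) \;=\; \eta_0\,\operatorname{erfc}\!\Big(\frac{z}{\sqrt{4Ct}}\Big),
\qquad
\operatorname{erfc}(x)\;=\;\frac{2}{\sqrt{\pi}}\int_x^{\infty} e^{-s^2}\,ds .

% Since erfc(x) <= exp(-x^2) for x >= 0, the maximum principle gives
P(z,t)\;\le\; e^{-\lambda^{2} C t}\,\eta(z,t)
        \;\le\; e^{-\lambda^{2} C t}\,\eta_0\,
                \exp\!\Big(-\frac{z^{2}}{4Ct}\Big),
% which is the announced decay of Gaussian type in the distance z.
```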
2,510.2
2022-04-27T00:00:00.000
[ "Mathematics", "Physics" ]
Single-step additive manufacturing of silicon carbide through laser-induced phase separation Omer Karakoc ( <EMAIL_ADDRESS>) Oak Ridge National Laboratory https://orcid.org/0000-0001-9512-6156 Keyou Mao Oak Ridge National Laboratory Jianqi Xi University of Wisconsin-Madison Takaaki Koyanagi Oak Ridge National Laboratory https://orcid.org/0000-0001-7272-4049 Jian Liu Polaronyx Company Izabela Szlufarska University of Wisconsin–Madison Yutai Katoh Oak Ridge National Laboratory https://orcid.org/0000-0001-9494-5862 Introduction Silicon carbide (SiC) has potential as a structural material for use in extreme environments such as space and nuclear applications owing to its strong corrosion resistance, high-temperature strength, excellent damage irradiation tolerance, adequate scattering cross-sections, and low neutron absorption 1,2,3,4 . Unlike metals or alloys 5,6 , however, machining, near net shaping of SiC via-conventional machining are impractical and extremely difficult due to their brittleness and chemical stability 7,8,9 . Conventional machining consumes large amount of energy to shape SiC due to high sintering temperatures above 2000 °C 9 . Additive manufacturing (AM) promises a cost-and energy-effective approach to solving these issues and is a strategy for developing nextgeneration parts for advanced nuclear applications 7 , because it significantly reduces the amount of waste produced in the process 10 and enables rapid prototyping and fabrication of parts with complex geometries. Thus, additive manufacturing of SiC is fast-growing technology for wide variety of applications 11 . AM technology of SiC will be revolutionary, but dense and high purity SiC part by AM have not been realized due to strong covalent nature of SiC, SiC sublimation rather than melting at high temperatures 7 . Full capability of SiC component is only achievable with nuclear-grade SiC, which is highly crystalline and dense and pure 12 . AM of SiC generally involves preforming a green body and densification step 7 . Most widely used processing options for AM of SiC are wet processing (sterolithography, gel casting, and direct ink writing) and dry processing (SLS, laminated object manufacturing, and binder jet printing) 7 . Another AM process is laser-induced chemical vapor deposition (LCVD), which uses reagent gases 13 . Heat of focused laser results in decomposition of reagent gases in which AM part is produced. Technological challenges lie in SiC densification process: part size for LCVD; high densification temperature for liquid-phase sintering; and volume shrinkage for powder sintering and pre-ceramic polymer pyrolysis 7 . AM of SiC by SLS involves reaction sintering of silicon and carbon, which results in formation SiO2 impurities 7 . Therefore, in the present study, single-step additive manufacturing of SiC has been developed using laser powder bed fusion (LPBF) without use of any sintering additives. Pulsed-LPBF joins materials by consolidating successive layers of powder and selectively sintering them using a highenergy pulsed laser to fabricate final components from 3D model data 14,15,16 . Thus, it is possible to make objects with arbitrary geometries without the need to adapt the conventional production process itself. This approach enables LPBF to fabricate complex 3D parts with high accuracy without extensive tooling and without the geometric limitations inherent in typical subtractive manufacturing processes 17,18 . 
The capability to process a wide variety of materials with a large range of mechanical and physical properties will enable a broad range of applications in the aerospace, nuclear, biology, and medical industries 17,18,19,20,21,22 . Despite its great advantages, concerns over AM object quality and consistency limit the widespread utilization of LPBF 23 . Large differences in the mechanical properties of AM objects pose challenges for certification authorities 24 and designers 25,26 . Sintering involves neck formation between adjacent powders to lower the free energy while powder particles grow. These regions can occur many times in a single AM part-typically close to the fusion of powder particles, where the influences on chemical, mechanical, and physical properties are the most pronounced. Thus, providing insight on the lasermatter interaction process in those regions could lead to remarkable outcomes for the quality and consistency of SiC parts fabricated by AM 27 . Nonetheless, there has not been extensive research to obtain significant surface information such as the microstructural evolution and binding mechanisms of single SiC powder particles under high-energy short-pulse laser irradiation. We conducted detailed microstructural characterization, which led to findings that explains the physical process of SiC AM and important laser-material interactions. Those topics are ideal for investigation by transmission electron microscopy (TEM), transmission Kikuchi diffraction (TKD), Raman spectroscopy, and scanning electron microscopy (SEM). In this study, XRD, TEM, TKD, SEM, and Raman spectroscopy are carried out to explore fiber laser-SiC powder particle interactions during AM processing and elucidate the binding mechanisms that result in the consolidation of SiC powders. Complementary microstructural characterization enables a deeper understanding of general trends in laser-SiC powder interactions and elucidates the binding mechanism and phase separations detailed in the experimental efforts. A successful mitigation has been implemented to consolidate SiC powders through phase separation of SiC (Fig. 1). The experimental observations, demonstrated herein, significantly improves the reliability of parts made by LPBF. Our work will lead to AM of SiC of unprecedented quality/performance and application of LPBF to refractory ceramics that are difficult to sinter. Also, this technique is very crucial for advances in the fabrication of SiC-based materials for various structural/thermal/medical applications and the semiconductor industry. Results Additive manufacturing of silicon carbide by LPBF. High-energy, short-pulse femtosecond fiber laser 28 (Supplementary Fig. 1) is used to fabricate dimensionally accurate SiC components from computer-aided designs ( Supplementary Fig. 2). The starting SiC powders consisted of polycrystalline 6H-SiC with particle sizes of 20-40 µm and 99% nominal purity, confirmed by xray diffraction (XRD) patterns and Raman spectroscopy (Fig. 2). To identify the appropriate processing parameter set, various laser powers and scan speeds were applied to produce 12 SiC tubes (Supplementary Table 1). The laser-sintered compounds were viewed by SEM to assess the material structure and porosity level ( Supplementary Fig. 2). As seen in Supplementary Fig. 2, the high-power femtosecond fiber laser fuses SiC powders. The structures of the top and surfaces of sintered objects appear very similar. Thus, one SEM image was selected to represent the surface structures of the others. 
This approach was extended to other figures throughout the paper. The AM objects had a high level of porosity in a random pattern. Buoyancy and caliper methods were applied to measure the porosity and density of the laser-sintered objects. The investigated AM objects indicated porosities from 49.8% to 53.2% (Supplementary Table 2). The AM objects investigated in this study had bulk densities from 1.50 g/cm3 to 1.61 g/cm3. All porosity values were derived from the measured bulk density and the theoretical density of SiC, 3.21 g/cm3. To evaluate the effect of laser power and scanning speed, these two parameters were varied and were found to have insignificant effects on the porosity level and density of the AM objects, implying that the different processing parameters used in the LPBF process likely induced the same effects on the powder surface. The porosity content was ascribed to incomplete sintering in the powder layer.

Fig. 2. Powder x-ray diffraction pattern of (a) a feedstock SiC powder and (b) laser-sintered SiC. (c) Powder Raman spectroscopy of feedstock SiC and laser-sintered SiC. Four phases (6H-SiC, 3C-SiC, silicon, and carbon) were identified, as marked by symbols. The probe size of the laser was between 500 nm and 1 µm. One representative spectrum is shown for the laser-sintered material to demonstrate the characteristic peaks of 3C-SiC and 6H-SiC, found separately in two different regions, R1 and R2.

XRD (Fig. 2a, b) and Raman spectroscopy (Fig. 2c) were utilized for the phase analysis. XRD provides structural analysis, i.e., information regarding how atoms or molecules are packed in the crystalline structure, while Raman analysis is designed to examine fine-structure electronic levels and vibrational modes present in a sample. Hence, the combination of XRD and Raman provided complete information regarding the structural aspects of the samples. The analysis allowed us to identify the phase separation of 6H-SiC into silicon (Si) and carbon (C) and the subsequent nucleation of 3C-SiC and spheroidal graphite (Fig. 2). XRD measurements determined the structural changes and phase separation following laser irradiation (Fig. 2a, b). The as-received SiC powders were mainly identified as hexagonal 6H-SiC crystal structures with much smaller phase fractions of rhombohedral 15R-SiC. There was no detectable Si phase or SiO2 (Fig. 2a). In addition to 6H-SiC peaks, coinciding cubic 3C-SiC and Si diffraction peaks emerged for powders obtained from laser-sintered objects (Fig. 2b). The XRD results were the first experimental evidence of phase separation during the laser-material interaction.

Fig. 3. Raman mapping of an additively manufactured SiC part showing phase separation induced by high-energy short-pulse laser irradiation. The Raman scanner is capable of carrying out rapid point-to-point mapping of (a) the laser-irradiated particle surface and (b) the polished cross-section of the neck region where particles bind. For the area shown by each color, the corresponding Raman spectrum is demonstrated. Univariate images were constructed by bracketing bands of ~520, 780, and 1350 cm-1 with cursors for Si, SiC, and C, respectively. The intensity between those cursors at each data point is demonstrated in the Raman map.

Raman spectroscopy measurements were carried out on the as-fabricated SiC powder and laser-sintered particles (Table S1). A univariate image was rendered using green brackets to enclose the area around 520 cm-1, red brackets around the area between 766 and 788 cm-1, and blue brackets to enclose the area around 1350 cm-1.
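As a quick consistency check of the porosity figures quoted above, the porosity can be recovered from the bulk and theoretical densities. The short sketch below assumes the simple relation porosity = 1 - ρ_bulk/ρ_theoretical, which reproduces the reported 49.8-53.2% range from the reported bulk densities.

```python
RHO_THEORETICAL = 3.21  # g/cm^3, theoretical density of SiC quoted in the text

def porosity(rho_bulk, rho_th=RHO_THEORETICAL):
    """Total (open + closed) porosity fraction, assuming porosity = 1 - rho_bulk/rho_th."""
    return 1.0 - rho_bulk / rho_th

for rho in (1.50, 1.61):  # reported bulk-density extremes of the AM tubes
    print(f"bulk density {rho:.2f} g/cm^3 -> porosity {100 * porosity(rho):.1f} %")
# Prints ~53.3 % and ~49.8 %, matching the reported 49.8-53.2 % range.
```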
The intensity between the bands selected by the cursors at each data point was calculated to construct the Raman image. Thus, the green, red, and blue areas in the Raman image predominantly correspond to Si, 6H-SiC, and C, respectively. Fig. 3a demonstrates a laserirradiated particle surface. Phase separation is clearly distinguishable on the particle surface. The green and blue areas in the Raman image are associated with strong Si and C Raman peaks, respectively. At some locations, the nucleation of the 3C-SiC polytype occurred primarily subsequent to the thermal decomposition of SiC. In Fig. 3b, Raman spectra in the red color area indicate unirradiated 6H-SiC powder with an accompanying intensity peak at 520 cm -1 that is the characteristic peak of Si. The Raman scattering efficiency of crystalline Si is about ten times greater than that of the crystalline SiC peaks 31,33 . Thus, Si content is negligible in the red area. The blue area predominantly consists of C and small amounts of 3C-SiC and Si. The green area is indicated by two separate Raman spectra. In addition to a Si signal at around 520 cm -1 , one spectrum has the characteristic peak of 3C-SiC and the second contains the Raman spectrum of 6H-SiC. The important takeaway point from the Raman data is that following the laser-driven solid-state phase separation of 6H-SiC, the solidification process favored the reaction of Si and C to form 3C-SiC and 6H-SiC, depending on the equilibrium conditions and temperature. Further characterizations were performed via SEM, TKD, STEM, and HRTEM to provide insight into the occurrence of phase separation and the nucleation of 3C-SiC, 6H-SiC polytypes, and graphite. The microstructural features after laser sintering were investigated to assess the binding mechanism and phases at junctions using backscattered-electron (BSE) imaging and energydispersive x-ray spectroscopy (EDS) elemental distribution mapping. Supplementary Fig. 3 indicates a mirror-polished cross-section of a laser-sintered part. The presence of different phases is clearly distinguishable, particularly at the locations where particles bind. Further composition analysis using SEM-EDS maps of elemental distribution illustrates the C and Si variation across the polished cross-section of the AM object (magnified in Supplementary Fig. 3b, c). It reveals that Si is enriched at the particle-particle interface, where C is depleted. The high-intensity red area is a consequence of using a C polymer during the polishing of the AM parts. Silicon enrichment at some locations was ascribed to laser-heating-induced SiC decomposition. The elemental distribution map revealed that a Si-rich phase played a significant role in the fusion of 6H-SiC powder particles. These regions occurred thousands of times throughout the laser-sintered parts and consolidated the SiC tubes. To obtain a better understanding of the underlying mechanism that governs phase separation and fusion of SiC particles, three TEM lamellas were prepared from three different locations where two SiC particles bind together and the Si phase exists as an interface (Supplementary Fig. 3). Fig. 4 demonstrates the results of TKD mapping and corresponding STEM-EDS analysis. The images at the top, middle and bottom in Fig. 4 represent region 1, 2 and 3, respectively. The XY-plane refers to the layer that is fabricated parallel to the building direction in the laser sintering process. 
Identification of cubic and hexagonal crystal structures and undetected region was carried out through TKD mapping, and spatial distribution mapping of Si and C was performed by STEM-EDS. The combination of TKD mapping and STEM-EDS enabled precise phase identification during laser-material interactions. The phase separation at the irradiated area is clearly distinguishable. Region 1 indicates that the reaction layer is composed of two crystal structure, cubic and hexagonal. The phase map exhibits the cubic phase connecting two hexagonal powders. Moreover, relatively large pockets of hexagonal phase grains are dispersed inside the cubic phase, with grain sizes ranging from 500 nm to 1.5 µm. The TKD phase map of region 2 and 3 shows that the reaction layer consists of cubic, hexagonal nano-precipitates and some undetected areas. To resolve the undetected region, a corresponding STEM-EDS analysis was performed in the same region where the TKD mapping was obtained. The intensity of silicon phase is relatively uniform across region 1, while carbon is depleted in some parts of the region investigated where two SiC powders are apparently joined by a cubic phase but is enriched in the relatively large pockets of hexagonal phase inside the Si interface and hexagonal powders. STEM-EDS analysis of region 2 and 3 indicated that areas undetected by TKD mapping were C phase. In the C-rich region, there is a lack of Si content. The analysis showed this region almost completely took the form of C. TKD mapping is unable to differentiate the cubic Si and 3C-SiC due to similarity in Kikuchi pattern of these phases, while hexagonal phase is identified as 6H-SiC. To differentiate cubic phases and provide understanding of the structural order of the C phase, TEM analysis was performed in the modes of STEM, HRTEM, and bright-field TEM (BFTEM). Our results also indicated that the 6H-SiC nano-precipitates was formed on the order of ~2-20 nm. We called these repetitive nanoscale 6H-SiC patterns "nanobreathing". The TKD phase map shown in Fig. 6a clearly demonstrates red dots (6H-SiC) in the yellow region (Si). Detailed examination of these small features was performed using HRTEM inside the cubic Si phase on the [101] zone axis (Fig. 6b, c). The TKD phase map exhibits densely populated, uniformly dispersed nanoscale 6H-SiC precipitates inside the cubic Si. The distribution of these nanoscale patterns appears homogenous across the Si phase. The white box in the TKD phase map denotes where HRTEM was performed. The 6H-SiC nanoprecipitates in the [101] zone are marked with arrows to indicate the difference in lattice spacing between Si and the incipient of 6H-SiC (Fig. 6c). The lattice spacing in the Si and 6H-SiC a-direction was 5.10 Å (RLS ~5.43 Å ) and 3.01 Å (RLS ~3.08 Å), respectively, which was near the Si and 6H-SiC a-direction lattice parameter 34 . Based on SADPs, simulated crystal models are constructed using CrystalMaker®, as overlays on the HRTEM images of silicon and 6H-SiC for better understanding (Fig. 6d, e). Femtosecond laser irradiation resulted in the formation of well-ordered and highly oriented PGSs through a solid-solid transformation (Fig. 7e, f, g). The diameter span range was 200-600 nm. The HRTEM images show the graphitic degree of the C materials. The interplanar spacing near the periphery of the PGSs is about 3.43 Å. The fringes in each sector near the periphery are mostly parallel straight lines, exhibiting a high graphitic degree. 
The PGSs were found to be dispersed nonuniformly in the region where the phase separation of SiC took place (Fig. 7a, b). In the STEM-HAADF images, the brightest areas correspond to heavy Si atoms and the darkest areas represent light C atoms; the contrast of the SiC grains lies between those of Si and C. The TEM-BF images indicate that the shapes, morphology, and structural order of the PGSs are quite similar (Fig. 7a). HRTEM images indicate that the peripheries of the PGSs exhibit a higher degree of graphitization than the central regions (Fig. 7f, g). The STEM-EDS results were highly consistent with the STEM-HAADF analysis (Fig. 7c, d): silicon is absent whereas C is enriched, which is a sign of PGS formation. These results confirmed the graphite synthesis whose characteristic peaks were also detected through the Raman analysis. Even though pyrolytic graphite is commonly obtained through gas-solid transformations such as chemical vapor deposition, this study proves that laser-induced solid-state disintegration of SiC can also be used to synthesize spheroidal pyrolytic graphite. Fig. 1 schematically illustrates the laser-induced phase separation of 6H-SiC into Si and C and the subsequent formation of 6H-SiC nanoprecipitates, spheroidal graphite and small pockets of 3C- and 6H-SiC.

Discussion

In this study, single-step AM of SiC via a powder sintering route is demonstrated, followed by extensive microstructural characterization. Fabrication of AM SiC was achieved without the use of sintering additives or binder elements. In addition, undesired SiO2 formation was not observed during the laser-material interactions (Fig. 2). The mechanism responsible for the consolidation of SiC powders may lie in inertial confinement fusion: a very short pulse delivered at high power may produce a highly localized high-pressure state during the process. Thus, highly volatile Si reacts with carbon rather than escaping under the vacuum environment, owing to the laser confinement. Juodkazis et al. showed the formation of nano-cavities in sapphire by single 800 nm, 150 fs, 120 nJ pulses 35. A single laser pulse (100 nJ, 800 nm, 200 fs) produced high temperature (5 x 10^5 K) and pressure (~10 TPa) 35. This supports the possibility of a high-pressure state during the laser-material interaction.

Fig. 8. (a) Temperature dependence of the stability diagram for many different SiC polytypes 36. (b) Temperature dependence of the decomposition free energy for the reaction 6H-SiC → Si + C at 1 atm. The decomposition reaction occurs when the free energy becomes positive, above ~2500 K.

There was a narrow process window that was satisfactory for the fabrication of SiC tubes as designed 7. In this narrow window, varying AM parameter sets, such as laser powers and scanning speeds, had insignificant impacts on the properties of the AM parts. Tubes made with different parameter sets were consolidated. Density measurements revealed almost equivalent porosity levels and densities for all AM parts. This equivalence can be ascribed to sufficient laser energy and scanning speed delivered to the powder bed to bind the SiC powder particles via the disintegration of SiC. The laser only connects neighboring powder particles, without much changing the particle shapes or how the particles are stacked together; there is very little or no displacement of particles. To adequately reveal the nucleation mechanism of the 6H and 3C polytypes after the phase separation, the temperature-dependent stability of the SiC polytypes should be considered (Fig. 8a) 36,37.
The high-energy short-pulse femtosecond fiber laser induced highly nonequilibrium cooling conditions, which yielded a rich variety of microstructures and often preferentially selected nonequilibrium growth modes (Fig. 3) 41. Thus, the results of the present study are highly consistent with the phase stability diagram of SiC (Fig. 8a) 36. Small quantities of impurities and non-stoichiometry also had a great impact on polytype stabilization. The partial pressure of Si vapor was several times higher than that of C 42. The multiplicity of low-energy surfaces and the high-symmetry nature of 3C-SiC may account for its occurrence in the initial stages of growth over a broad range of temperature (1400-2000 °C). These factors could have given rise to rapid growth and easy nucleation along several directions, which led to large crystals bounded by low-energy forms 37. While this kinetic argument can be linked to the occurrence of 3C-SiC over a large temperature range, its high symmetry presumably increased the vibrational entropy contribution to the free energy, hence contributing to the equilibrium stability of 3C-SiC at elevated temperatures. Besides that, the temperature-dependent free reaction energy for the decomposition reaction 6H-SiC → Si + C has been calculated through density functional theory (DFT), as shown in Fig. 8b. With the sign convention adopted here, a positive free reaction energy means that the decomposition reaction is energetically favorable. From Fig. 8b, we can see that the decomposition reaction occurs above ~2500 K. This is consistent with previous experiments, in which the decomposition of SiC into solid C plus liquid Si begins at ~2840 K 43. The slight difference between the DFT results and the experimental value could arise from the approximation of the anharmonic effect at high temperatures. This study establishes a fundamental understanding of the phase separation mechanism of a complex SiC compound material during high-energy short-pulse laser-material interactions. Extensive microstructural observation by XRD, Raman spectroscopy, SEM, TEM, and TKD revealed the decomposition and surface reconstruction of SiC. Thus, phase separation was confirmed by multiple characterization tools. It was found that femtosecond laser irradiation yielded a rich variety of microstructures and phases: thin Si and C nanomaterials, multiscale 6H- and 3C-SiC pockets, and highly ordered PGSs. The polytype 6H-SiC decomposed into Si and C, and subsequently Si(l) + C(s) → α- or β-SiC(s) reactions occurred to form multiscale 6H- and 3C-SiC pockets (Fig. 4). For the first time, densely populated, uniformly dispersed nanoscale (~2-20 nm) 6H-SiC precipitates ("nanobreathing") were formed inside a Si phase following the phase separation of 6H-SiC by laser irradiation. This remarkable discovery can be exploited in many different ways. To the best of the authors' knowledge, this is also the first time that highly oriented PGSs have been reported during the phase separation of SiC using high-energy laser irradiation. Fig. 7 shows the HRTEM analysis of the 002 fringes of a PGS. A high degree of graphitization occurring near the periphery of the graphite sphere can be deduced from the fringes, which are mostly aligned as parallel straight lines. Elemental mapping through the cross-section of the focused ion beam foil revealed the formation of a C sphere; Si was absent where C was enriched. Spheroidal graphite was produced through a solid-to-solid transformation.
High-energy short-pulse laser-derived graphite aggregates tended to extend in the c-direction rather than the a-direction, generating spheroidal (nodular) graphite. Graphite spheroids are widely found in spheroidal graphite cast iron 44,45. Spheroidal graphite acts as a "crack arrester" because its rounded shape induces fewer stress points.

Raman mapping. Raman maps were acquired at the powder neck region and at the powder surface, the latter over an area of 22 x 27 µm2. The Raman image was constructed using green-colored brackets enclosing 520 cm-1, red brackets enclosing the region between 766 and 788 cm-1, and blue brackets enclosing 1350 cm-1.

SEM. Laser-sintered components were analyzed by SEM (Tescan Mira3) to gain knowledge of the porosity level and binding mechanism of the SiC powders. The cross-sectional microstructure analysis at the powder neck region was carried out using BSE imaging at an accelerating voltage of 10 kV. Elemental distribution mapping was performed using EDS analysis to determine the distribution of Si and C. A thin foil with a high-quality polished surface was prepared using an FEI Quanta focused ion beam, with low accelerating voltages of 5 kV and 2 kV at the final thinning steps. TKD maps were generated using an Oxford Instruments Nordlys detector mounted on a Tescan Mira3 with an accelerating voltage of 20 kV in high-current mode. TKD mapping was conducted at a working distance of about 4 mm with a tilting angle of -20° and a step size of 20 nm.

TEM. Electron-transparent TEM lamellae were prepared using an FEI Quanta focused ion beam.

DFT calculations. The free energy includes a harmonic contribution from the 3N vibrational modes and an anharmonic contribution F_anh. In order to estimate the anharmonic free energy, we followed the approach of Wallace 47, who showed that the anharmonic part of the free energy can be written as F_anh = A_2 T^2. Experiments for different crystals showed that there is an empirical relation between the average Grüneisen parameter ⟨γ⟩ and A_2, which is given per atom by A_2 = (3 k_B / Θ_∞)(0.0078⟨γ⟩ - 0.0154) 48. The values of the Grüneisen parameter for 6H-SiC, Si, and C are 1.23 49 and 2.28 50, respectively. Θ_∞ is the high-temperature harmonic Debye temperature, defined by Θ_∞ = (ℏ/k_B)(5⟨ω^2⟩/3)^{1/2} 48. The settings of the DFT calculations have been discussed elsewhere 51.

Data availability. Supporting Information is available in the supplementary materials and more data can be obtained upon reasonable request from the corresponding author.
5,728.8
2021-11-09T00:00:00.000
[ "Materials Science", "Engineering", "Physics" ]
Investigation of suitable sites for wave energy converters around Sicily ( Italy ) An analysis of wave energy along the coasts of Sicily (Italy) is presented with the aim of selecting possible sites for the implementation of wave energy converters (WECs). The analysis focuses on the selection of hotspot areas of energy concentration. A third-generation model was adopted to reconstruct the wave data along the coast over a period of 14 years. The reconstruction was performed using the wave and wind data from the European Centre for Medium-Range Weather Forecasts. The analysis of wave energy allowed us to characterise the most energetic zones, which are located on the western side of Sicily and on the Strait of Sicily. Moreover, the estimate of the annual wave power on the entire computational domain identified eight interesting sites. The main features of the sites include relatively high wave energy and proximity to the coast, which makes them possible sites for the implementation of WEC farms. Introduction Currently, renewable energy supplies 20 % of the total world's energy demand, and this percentage continues to grow (IEA, 2014).Among various sources, wave energy has attracted the attention of the scientific community and the energy industry from 1973 due to its numerous advantages such as (i) greater high energy density than solar and wind energy (Falnes, 2007); (ii) the ability to reliably predict waves; (iii) wave energy travels with small losses in depth water; and (iv) minimal environmental impacts, especially for the offshore devices.Due to these advantages, wave energy con-verters (WECs) will likely become diffuse in the near future, thus impacting the further transformation of our coastal zones (Azzellino et al., 2013a, b).However, the costs to implement WECs are currently much higher than those of other renewable energy technologies.Therefore, a solution to reduce such costs is to move from stand-alone devices to hybrid systems embedded in other coastal or offshore structures (Kallesøe et al., 2009;Vicinanza et al., 2014).Today, more than 1000 WECs have been patented and approximately 170 companies are working to improve WEC technology (for a detailed description see www.emec.org.uk).An analysis of the location of these companies shows that 50 % are located in Europe.This is primarily due to the high amount of the wave energy that characterises the north and west sides of European coast.For example, in Galicia, the region in the NW of Iberia, the offshore wave power is approximately 22 kW m −1 (Iglesias and Carballo, 2010a). As shown in Fig. 1, waves around Italy have a relatively low energy.However, previous studies have shown that wave farms could be implemented at some sites.For example, Vicinanza et al. (2011) reported the offshore wave energy potentials of the Italian seas.This study was carried out using records from the buoys of the Italian National Wave Recording Network (NWRN), managed by the Agency for Environmental Protection and Technical Services.The results highlighted that the west coasts of both Sardinia and Sicily are the most energetic among the Italian coasts.Indeed, the highest energy values were obtained for the buoys of Alghero and Mazara del Vallo, which corresponded to 9.05 and 4.75 kW m −1 , respectively.In addition, Liberti et al. 
(2013) presented a high-resolution assessment of the wave energy resources in the Mediterranean Sea.In particular, a thirdgeneration model of the ocean waves was used to derive the wave climate over the entire Mediterranean Basin consistently with results of Vicinanza et al. (2011). The study of potential wave energy is important for selecting and designing WECs.It is necessary to understand how the energy is distributed with respect to wave height, period and direction.An appropriate wave climate analysis will reveal the best configuration of device and location to be selected.However, to this aim, a long period (not less than 10 years) of wave data is necessary.In general, it is better to utilise wave data gathered by buoys, as the data are of good quality with a low relative error.However, the Italian buoys are characterized by periods of lack of records.For this reason, it is useful to use data delivered by forecast centres, such as those of the European Centre for Medium-Range Weather Forecasts (ECMWF) or of the National Oceanic and Atmospheric Administration (NOAA).The data from these sources have high spatial and temporal resolution but underestimate peak events (Cavaleri, 2009).However, the nearshore wave data from these sources may not be used because the wave propagation was performed using the WAM (Hasselmann et al., 1988) or WAVEWATCH III (Tolman and Chalikov, 1996) models, which do not consider the phenomena as triad interactions.Moreover, the grid resolution of the wave model is too large to select suitable sites for locating wave energy converters; therefore, it is necessary to use advanced numerical codes that allow the wave propagation in intermediate-depth and shallow waters to be appropriately modelled.The use of such a model allows for the selection of sites, called hotspots (Iglesias and Carballo, 2010b), where energy is concentrated due to wave transformation phenomena, such as wave refraction. In this framework, starting with a large set of offshore wave and wind data, the present paper discusses results related to estimating nearshore potential wave energy around the coast of Sicily. This paper is organised as follows: the first part describes the adopted methodology selected to analyse wave propagation, and the second part focuses on the analysis of wave energy for the few selected sites along the coast of Sicily.The paper ends by summarising with some concluding remarks. 
Numerical model

The wave propagation is carried out using SWAN, a third-generation spectral model developed by Delft University of Technology (Booij et al., 1999). The model estimates the variations of the action density in space and time according to the following equation (expressed in Cartesian coordinates with the x axis directed toward the coast):

∂N/∂t + ∂(c_x N)/∂x + ∂(c_y N)/∂y + ∂(c_σ N)/∂σ + ∂(c_θ N)/∂θ = (S_in + S_nl + S_ds + S_bot + S_surf)/σ,   (1)

where N is the action density, equal to the energy density spectrum divided by the relative frequency σ. Equation (1) describes the evolution of N in five dimensions (geographic space x and y, relative frequency σ, direction θ and time). On the right-hand side, S_in represents the momentum transfer of wind energy to wave generation, S_nl is the energy transfer due to nonlinear wave-wave interactions, S_ds is the dissipation of energy due to white-capping (deep-water wave breaking), S_bot is the dissipation of wave energy due to bottom friction, and S_surf is the energy dissipation due to depth-induced wave breaking. In this study, S_bot was not considered: bottom friction was assumed negligible because the analyses mainly focus on depths greater than or equal to 10 m. Stationary simulations were conducted using the bathymetric, wave and wind data as inputs. The wave data are defined in terms of significant wave height H_s, peak period T_p and mean direction θ. The wind data are defined in terms of the wind velocity components.

Input data

The data used to reconstruct the morphology of the seabed were obtained from the charts of the Italian Navy Hydrographic Institute (NHI) and from the archive of the General Bathymetric Chart of the Oceans (GEBCO). The scale of the NHI charts is 1:1 000 000. GEBCO (released 2010) provides global bathymetry data sets for the world oceans with a resolution equal to 30 arcsec (equivalent to 8.33 x 10^-3 degrees, or approximately 1 km) (GEBCO, 1999). The NHI charts cover a limited area of the computational domain and for this reason the data were integrated with the information of the GEBCO archive. More precisely, the seabed data up to a depth of 100 m were extracted from the NHI charts, and the data for areas deeper than 100 m were extracted from the GEBCO archive. Figure 2 shows the final bathymetry used to simulate wave propagation.

Wind and wave input data were obtained from the ECMWF. The ECMWF is an independent intergovernmental organisation aimed at producing accurate climate data and medium-range forecasts, which are estimated using numerical models and validated against data acquired via satellites, ships, buoys, etc. The offshore wave data are obtained by integrating the atmospheric model with the two-dimensional spectral wave numerical model WAM. The resolution of the model in the Mediterranean Sea is equal to 0.25° in both latitude and longitude. The ECMWF operational archive starts in 1989 for wind data and 1998 for wave data, with a time resolution equal to 6 h.
Wave data were validated using records from the buoys of the Italian National Wave Recording Network (NWRN), managed by the Agency for Environmental Protection and Technical Services. The NWRN is composed of 15 buoys: eight of them were installed in 1989, while the remaining buoys were placed in the period 1999-2004. During the period 1989-2002, the buoys acquired data for 30 min every 3 h, and the measurement became continuous for storms characterized by a significant wave height above a threshold. Since 2002, buoy measurements are continuous and the data are produced every half hour. In the present study, the wave data available to validate the ECMWF data are those of the three buoys placed near Catania, Capo Gallo (Palermo) and Mazara del Vallo (Trapani) (see Fig. 1 for the buoy locations). For these buoys the recording periods are the following: (i) Catania, July 1989-October 2006.

The reliability of the ECMWF data was measured by evaluating the following parameters: bias (bias, mean error between model and measurement), root mean square error (RMSE, root mean square discrepancy between the two sets of data), scatter index (si, normalised root mean square deviation relative to one of the data sets), slope (slope of the best-fit line passing through the origin approximating the distribution of the two sets of data), Willmott index (Willmott, 1982) (d, ranging between 0 and 1, where 1 indicates a perfect match), and coefficient of correlation (R, a measure of the linear correlation between two sets of data). These parameters are defined by the relationships shown in the Appendix; a sketch of commonly used definitions of the same statistics is given below.

The values assumed by the parameters in the present comparison are shown in Table 1. Regarding bias and RMSE, the differences between the two data sets are relatively small. It was observed that the higher value of the significant wave height scatter index for the Catania buoy (si = 0.85) is most likely due to a poor ability of the ECMWF numerical model to reproduce waves generated by local winds coming from the northeast. The values of the parameter slope are less than 1, and thus the ECMWF data tend to underestimate the actual sea state; however, this is limited to only certain events. Generally, such an underestimation occurs in closed basins, as in the present case. In such areas, hindcast numerical models tend to underestimate the peak velocity of the wind and therefore lead to an underestimation of the significant wave height. The cause of this error is not fully understood but, as revealed by a study conducted as part of the WW-MEDATLAS project (Cavaleri and Bertotti, 2004), it could be related to the modelling of the orography and of the marine boundary layer. The values of the parameter d indicate generally good correspondence between the two data sets. The aim of this study was to estimate the average wave power; therefore, the analysis performed using the ECMWF data to estimate onshore wave energy can be assumed to be conservative.
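The exact formulas are those given in the paper's Appendix; the sketch below uses common textbook definitions of the same statistics (bias, RMSE, scatter index, origin-forced slope, Willmott index d and correlation R) and is intended only as an illustration of how such a model-versus-buoy comparison is computed. The normalisation chosen for the scatter index, and the synthetic data, are assumptions.

```python
import numpy as np

def validation_stats(model, obs):
    """Common definitions of model-vs-observation agreement statistics.

    model, obs : 1-D arrays of co-located values (e.g. ECMWF vs buoy Hs).
    """
    model, obs = np.asarray(model, float), np.asarray(obs, float)
    diff = model - obs
    bias = diff.mean()
    rmse = np.sqrt((diff ** 2).mean())
    si = rmse / obs.mean()                           # scatter index (assumed normalisation)
    slope = (model * obs).sum() / (obs ** 2).sum()   # best-fit line through the origin
    d = 1.0 - (diff ** 2).sum() / (
        (np.abs(model - obs.mean()) + np.abs(obs - obs.mean())) ** 2).sum()  # Willmott (1982)
    r = np.corrcoef(model, obs)[0, 1]
    return {"bias": bias, "RMSE": rmse, "si": si, "slope": slope, "d": d, "R": r}

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    hs_buoy = rng.gamma(shape=2.0, scale=0.5, size=500)          # synthetic buoy Hs (m)
    hs_model = 0.9 * hs_buoy + rng.normal(0.0, 0.15, size=500)   # synthetic, slightly low model
    for name, value in validation_stats(hs_model, hs_buoy).items():
        print(f"{name:>5s} = {value:6.3f}")
```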
Setting up the computational grid

The computational domain analysed here was discretised using an unstructured grid. For the present case, the computational domain around Sicily was discretised with 4700 nodes and 89 666 triangular elements. The grid resolution was assumed constant for depths shallower than 50 m and deeper than 100 m, while it varies linearly in the range 50-100 m. Accordingly, the mesh size is 400 m for depths shallower than 50 m, 1000 m for depths deeper than 100 m, and varies linearly between 400 and 1000 m for depths in the range 50-100 m (see Fig. 2). The domain boundary was chosen to coincide with the polyline passing through 34 ECMWF grid points around Sicily. The 34 grid points were selected at depths on the order of 100 m (see Fig. 3). The wave data at these points were used to define the boundary conditions of the computational domain. Furthermore, to estimate wave regeneration during propagation, 32 additional ECMWF grid points were selected to define the wind field over the entire computational domain (see Fig. 3). At each node of the SWAN domain, the wind data were defined by interpolation using the inverse distance weighted method. The wave input at each boundary segment was defined using the JONSWAP (Joint North Sea Wave Project) spectrum. The spectrum was discretised into 36 directions and 40 frequencies in the range 0.04-0.5 Hz, which corresponds to periods of 2-25 s.

Validation of the output data

Validation of the significant wave height estimated using the model was conducted by performing a comparison with data collected from several satellites and processed by the French Research Institute for Exploitation of the Sea (IFREMER). The IFREMER database provides wave heights at the global scale over the period 1991-2013. In particular, the wave heights are derived from measurements made by seven satellites (ERS-2, Envisat, Topex-Poseidon, Jason-1 and 2, GeoSat FO, CryoSat-2), calibrated according to the method developed by Queffeulou (2004). For additional details, interested readers are referred to Queffeulou and Croizé-Fillon (2013). Satellite data from Envisat, ERS-2, and Jason-1 and 2 were used for the operational assimilation of wave height data in the ECMWF model: data from ERS-2 were used over the period 1995-2003, from Envisat since 2003, from Jason-1 since 2006 and from Jason-2 since 2009. These assimilation periods were excluded from the validation of the SWAN data. The selected observation points are shown in Fig. 4. The validation of the data from the SWAN model against the satellite data was performed using the parameters defined in the Appendix, and the results are shown in Table 2. The comparison shows fairly good agreement. The values of RMSE do not exceed 0.5 m, with the maximum value of 0.50 m reached for the data acquired by the ERS-2 satellite. The values of slope are all less than 1, indicating that the model data tend to underestimate the values of the significant wave heights. This result is due to the boundary conditions derived from the ECMWF data, which tend to underestimate the peak events, as described above. Figure 5 shows a scatter plot of the output of the SWAN model and the significant wave heights estimated by the Jason-1 satellite.
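The wind field mentioned in the grid set-up above is defined at the mesh nodes by inverse distance weighting of the ECMWF point values. A minimal sketch of such an interpolation is given below; the exponent of the weighting and the handling of coincident points are assumptions, since the text only states that an inverse distance weighted method was used, and the coordinates in the example are hypothetical.

```python
import numpy as np

def idw_interpolate(grid_xy, station_xy, station_values, power=2.0):
    """Inverse distance weighted interpolation of a wind component onto mesh nodes.

    grid_xy        : (M, 2) coordinates of the SWAN mesh nodes
    station_xy     : (K, 2) coordinates of the ECMWF grid points
    station_values : (K,)   value at each ECMWF point (e.g. one wind component)
    """
    grid_xy = np.asarray(grid_xy, float)
    station_xy = np.asarray(station_xy, float)
    station_values = np.asarray(station_values, float)

    # Pairwise distances between mesh nodes and ECMWF points: shape (M, K)
    d = np.linalg.norm(grid_xy[:, None, :] - station_xy[None, :, :], axis=2)
    d = np.maximum(d, 1e-9)          # avoid division by zero at coincident points
    w = 1.0 / d ** power             # inverse distance weights
    return (w * station_values).sum(axis=1) / w.sum(axis=1)

# Interpolate one wind component onto two hypothetical mesh nodes (lon, lat).
nodes = np.array([[14.0, 37.5], [13.2, 38.1]])
ecmwf_pts = np.array([[13.0, 37.0], [14.0, 37.0], [13.5, 38.5]])
u10 = np.array([5.2, 6.1, 4.0])                 # hypothetical wind component (m/s)
print(idw_interpolate(nodes, ecmwf_pts, u10))
```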
3 Wave energy resource

Method

The components of the wave energy transport P are defined as

P_x = ρ g ∫∫ c_{g,x} E(σ, θ) dσ dθ,    P_y = ρ g ∫∫ c_{g,y} E(σ, θ) dσ dθ,   (2)

where E is the energy spectral density and c_{g,x}, c_{g,y} are the components of the group velocity, so that the magnitude of the wave energy transport is

P = (P_x² + P_y²)^{1/2}.   (3)

For deep waters, the total wave energy transport can be rewritten as

P = ρ g²/(64π) H_m0² T_e,   (4)

where ρ is the density of water, g is the acceleration due to gravity, H_m0 is the significant wave height, and T_e is the energy period. The significant wave height H_m0 and the energy period T_e are defined by the following relationships:

H_m0 = 4 √m_0,    T_e = m_{-1}/m_0,    with m_n = ∫∫ f^n S(f, θ) df dθ,   (5)

where S is the variance density spectrum and m_n represents the spectral moment of order n.

It was noted that, in some cases, to estimate the wave energy resources, Eq. (4) is used indiscriminately for both h/L > 1/2 (deep waters) and h/L < 1/2 (intermediate and shallow waters). However, Barbariol et al. (2013) reported that the use of Eq. (4) underestimates the value of the wave energy if it is applied in the case h/L < 1/2. Extending the analysis conducted in Barbariol et al. (2013), we compared the two methods assuming a TMA (Texel-Marsen-Arsloe) spectrum. According to a previously reported formulation (Tucker, 1994), the TMA spectrum can be expressed as follows:

S_TMA(σ, h) = S_J(σ) φ(kh),   (6)

where S_J(σ) is the JONSWAP spectrum, k is the wave number, h is the water depth and the function φ(kh) is defined as

φ(kh) = tanh²(kh) / [1 + 2kh/sinh(2kh)].   (7)

Figure 6 shows the relative difference ΔP between the wave energy transport estimated using Eq. (3) and that calculated using Eq. (4). The relative difference ΔP is defined by the following relationship:

ΔP = (P_dw − P_sw)/P_sw,   (8)

where P_dw and P_sw are the wave energy transports estimated according to Eqs. (4) and (3), respectively. In particular, in Fig. 6a the relative difference is plotted as a function of the peak period and the depth (the lines indicate the ratio between the depth and the wavelength), while in Fig. 6b the relative difference is plotted as a function of the peak period and the ratio h/L. For h/L greater than 0.4, there are no differences between the two methods. The difference approaches −15 % when h/L is within the range 0.12-0.26, whereas the difference increases for values of h/L lower than 0.12. For h/L less than 0.07, Eq. (4) overestimates the value of the wave energy transport. The graph cannot be generalised because it changes with the input spectrum; however, if the sea state corresponds to a value of h/L greater than 0.4, or approximately in the range 0.07-0.10, either method may be used. Conversely, if h/L falls outside these ranges, the maximum relative error is approximately −15 %, and it is recommended to use Eq. (3). Moreover, as shown in Fig. 6b, the peak period has only a minimal influence on the difference, except for peak periods near 2 s.

For the present study, Fig. 7 shows the comparison between the energy transport estimated according to Eqs. (3) and (4). The comparison was carried out along the red line indicated in Fig. 7, for a sea state with an offshore significant wave height of 2 m and a peak period of 10 s. Note that the maximum difference between the two formulas is approximately 10 %. Such a difference is relatively low and of the same order as the uncertainty present in the input wave data.
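A compact numerical illustration of the comparison between the deep-water formula and the finite-depth spectral estimate is sketched below. It assumes the Tucker (1994) form of the depth-attenuation function given in Eq. (7), a standard JONSWAP shape rescaled to the target H_m0, and illustrative values for the water density and the sea state; it is not the exact procedure of Barbariol et al. (2013), only a consistent instance of the two estimates and of the relative difference of Eq. (8).

```python
import numpy as np

RHO, G = 1025.0, 9.81        # assumed sea-water density (kg m-3) and gravity (m s-2)

def integrate(y, x):
    """Trapezoidal integration."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def jonswap(f, hm0, tp, gamma=3.3):
    """JONSWAP variance density spectrum S_J(f), rescaled so that 4*sqrt(m0) = hm0."""
    fp = 1.0 / tp
    sigma = np.where(f <= fp, 0.07, 0.09)
    peak = gamma ** np.exp(-((f - fp) ** 2) / (2.0 * sigma ** 2 * fp ** 2))
    s = f ** -5.0 * np.exp(-1.25 * (f / fp) ** -4.0) * peak
    return s * (hm0 / (4.0 * np.sqrt(integrate(s, f)))) ** 2

def wavenumber(f, h):
    """Solve the dispersion relation (2*pi*f)^2 = g*k*tanh(k*h) by Newton iterations."""
    omega = 2.0 * np.pi * f
    k = omega ** 2 / (G * np.sqrt(np.tanh(omega ** 2 * h / G)))   # Eckart-type first guess
    for _ in range(30):
        res = G * k * np.tanh(k * h) - omega ** 2
        dres = G * np.tanh(k * h) + G * k * h / np.cosh(k * h) ** 2
        k = k - res / dres
    return k

def wave_power(hm0, tp, h, nf=2000):
    """Deep-water estimate (Eq. 4) vs. finite-depth spectral estimate (Eq. 3) for a TMA spectrum."""
    f = np.linspace(0.04, 0.5, nf)                 # frequency range used in the SWAN runs
    k = wavenumber(f, h)
    kh = k * h
    phi = np.tanh(kh) ** 2 / (1.0 + 2.0 * kh / np.sinh(2.0 * kh))   # Eq. (7)
    s = jonswap(f, hm0, tp) * phi                  # TMA spectrum at depth h
    m0, mm1 = integrate(s, f), integrate(s / f, f)
    hm0_loc, te = 4.0 * np.sqrt(m0), mm1 / m0      # Hm0 and Te of the local spectrum
    p_dw = RHO * G ** 2 / (64.0 * np.pi) * hm0_loc ** 2 * te        # deep-water formula
    cg = 0.5 * (1.0 + 2.0 * kh / np.sinh(2.0 * kh)) * (2.0 * np.pi * f) / k
    p_sw = RHO * G * integrate(cg * s, f)          # spectral estimate with the full group velocity
    return p_dw, p_sw, (p_dw - p_sw) / p_sw

p_dw, p_sw, dP = wave_power(hm0=2.0, tp=10.0, h=15.0)
print(f"P_dw = {p_dw/1e3:.2f} kW/m, P_sw = {p_sw/1e3:.2f} kW/m, relative difference = {100*dP:+.1f} %")
```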
Analysis of results

For each sea state propagated up to the coast (one every 6 h from 1 January 1999 to 31 December 2012), the associated energy flux was obtained (see Fig. 8). On the boundary of the domain the wave energy flux is consistent with the results of Liberti et al. (2013). In detail, an energy flux close to 8 kW m−1 is observed on the western side, whereas in the Strait of Sicily a flux in the range 4-6 kW m−1 is detected. The wave energy flux is further reduced to 2-3 kW m−1 on the north and east sides of Sicily, respectively. As shown in Iuppa et al. (2014), where preliminary results of the present study are reported, the areas with the highest wave energy show a low variation in wave power over the period studied: for these zones, the ratio between the standard deviation and the average of the yearly mean wave power flux is below 0.35.

Figure 9 shows the seasonal distribution of the wave energy flux. The data are regrouped according to the following months: (a) December, January and February (DJF); (b) March, April and May (MAM); (c) June, July and August (JJA); and (d) September, October and November (SON). As expected, the energy flux in the DJF period is higher than in the other periods. The JJA period shows a significant reduction with respect to the DJF period, ranging approximately from 60 to 80 %.

Figure 10 shows the comparison of the average power estimates corresponding to the bathymetric lines at depths of 10, 20 and 50 m. According to a coarse analysis at the regional scale, we identified four zones with nearly homogeneous values: the first between Capo San Vito and Capo Granitola (zone I), the second between Capo Granitola and Capo Isola delle Correnti (zone II), the third between Capo Isola delle Correnti and Capo Peloro (zone III), and finally the fourth between Capo Peloro and Capo San Vito (zone IV). In the first zone, the energy flux does not vary substantially between the depths of 50 and 10 m, and the reduction is approximately 1-2 kW m−1. However, the presence of small islands provides coastal protection by reducing the nearshore wave energy. This part of the coast is characterised by waves that primarily come from the sector 260-290° N. Such waves are almost perpendicular to the coastline; therefore, when they travel from offshore to the shoreline, they undergo little energy dispersion (due to refraction). In the second zone the spatial dispersion of energy (due to refraction) is more pronounced and the values of the wave energy flux are lower. However, from the depth of 50 m to that of 10 m the energy reductions are smaller, approximately less than 1 kW m−1. In the third zone, the energy flux is lower because the wave heights are less than 0.5 m for most of the time (see the wave climate of the buoy of Catania in Fig. 1). However, this zone contains localised energy values close to 3.5 kW m−1. In the fourth zone, areas of high energy alternate with areas of low energy; in this case as well, there are points where the energy increases markedly.
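The seasonal regrouping and the interannual variability ratio quoted above can be obtained with a few lines of time-series bookkeeping, sketched below on a synthetic 6-hourly series; the series, node and values are purely illustrative stand-ins for the propagated data.

```python
import numpy as np
import pandas as pd

# Hypothetical 6-hourly wave power series (kW/m) at one node of the domain, 1999-2012.
times = pd.date_range("1999-01-01", "2012-12-31 18:00", freq="6H")
rng = np.random.default_rng(1)
power = pd.Series(rng.gamma(1.5, 3.0, len(times)), index=times)

# Seasonal means (DJF, MAM, JJA, SON), as used for Fig. 9.
season_of_month = {12: "DJF", 1: "DJF", 2: "DJF", 3: "MAM", 4: "MAM", 5: "MAM",
                   6: "JJA", 7: "JJA", 8: "JJA", 9: "SON", 10: "SON", 11: "SON"}
labels = [season_of_month[m] for m in power.index.month]
seasonal_mean = power.groupby(labels).mean()

# Interannual variability: ratio between the standard deviation and the mean of the
# yearly mean wave power flux (the quantity reported to be below 0.35 in the text).
yearly_mean = power.groupby(power.index.year).mean()
print(seasonal_mean.round(2))
print(f"std/mean of the yearly means = {yearly_mean.std() / yearly_mean.mean():.2f}")
```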
Hotspot selection

In this study we selected six sites characterised by a high energy content between zone I and zone IV, plus two additional sites near the islands of Favignana and Marettimo. Figure 11 shows the locations of the selected hotspots, and Table 3 presents their principal characteristics.

The sites were analysed to understand how the energy is distributed with respect to the significant wave height, the energy period, the direction and the seasons. Figures 12-15 show the wave energy distribution with respect to the energy period and the significant wave height for the selected sites. Figure 16 shows the wave climate. Table 4 shows the probability of occurrence of the T_e − H_m0 classes and of the directions in which the wave energy is concentrated, together with the probability of "no-calm" occurrences, i.e. of waves with a significant wave height greater than 0.5 m. Finally, Table 5 summarises the seasonal distribution of the average wave energy flux.

Table 4. Occurrence frequency of the most energetic T_e − H_m0 and direction intervals. For each site the frequency of "no-calm" conditions is also reported.

Table 5. Seasonal distribution of the average wave energy flux per unit crest length for the selected sites (DJF, MAM, JJA, SON).

The HS1 site is located in zone IV, near the port of Terrasini. Here, the power density is relatively lower than that of the other sites, although it is nearly equal to that observed offshore. The wave energy is concentrated in the classes 6-8.5 s for T_e and 1-3.5 m for H_m0, with an annual frequency of 12.55 % (approximately 46 days year−1). The percentage of "no-calm" conditions is approximately 47.72 %. Waves with a high energy content come from the sector 290-320° N, with a frequency of 49.81 %. The wave energy flux is slightly greater than 5 kW m−1 in the winter months, whereas in the summer months the value is reduced to almost 1 kW m−1.

The sites HS2 and HS3 are both located near the port of San Vito Lo Capo. However, they exhibit different energy distributions with respect to both H_m0 − T_e and the direction. At HS2, the waves tend to be aligned with the coast and the energy is focused in a more restricted range of H_m0 − T_e. The wave energy is concentrated in the range 6-8.5 s and 1-3.5 m, with an annual frequency of 10.74 % (approximately 39 days year−1). The percentage of "no-calm" conditions is approximately 56.98 %, with waves coming predominantly from the sector between 350 and 10° N and characterised by a frequency of 47.83 %. The wave energy flux is slightly greater than 10 kW m−1 in the winter months, whereas in the summer months the value is reduced to 2 kW m−1.

At HS3, the waves with more energy and higher frequency are concentrated in the range 6-8.5 s for T_e and between 2 and 4.5 m for H_m0, with a frequency of 7.05 % (approximately 26 days year−1). Here, the waves come predominantly from the sector between 310 and 330° N, with a frequency of 43.11 %.

Site HS4 is located near the port of Trapani. The wave energy is concentrated in the range 6.5-9 s for T_e and 2-3.5 m for H_m0, with an annual frequency of 6.0 % (approximately 22 days year−1). The percentage of "no-calm" conditions is approximately 49.53 %. The more energetic waves come from the sector 310-320° N, with a frequency of 32.93 %. The seasonal variation is fairly high (approximately 88 %).
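The class-by-class quantities quoted for each site (annual energy per T_e − H_m0 bin, hours of occurrence per year, percentage of "no-calm" conditions) can be assembled from the propagated sea states with a simple two-dimensional binning. A minimal sketch is given below; it uses the deep-water power expression only for compactness, and the 14-year synthetic series, bin edges and parameter values are illustrative assumptions rather than the data of the study.

```python
import numpy as np

RHO, G = 1025.0, 9.81

def hm0_te_matrix(hm0, te, hm0_edges, te_edges, dt_hours=6.0):
    """Annual energy (MWh per metre of wave front) and occurrence (hours per year)
    of each Te-Hm0 class, in the spirit of Figs. 12-15 and Table 4."""
    hm0, te = np.asarray(hm0, float), np.asarray(te, float)
    n_years = hm0.size * dt_hours / (365.25 * 24.0)

    power_kw = RHO * G ** 2 / (64.0 * np.pi) * hm0 ** 2 * te / 1e3   # kW/m of each sea state
    energy_mwh = power_kw * dt_hours / 1e3                            # MWh/m of each sea state

    counts, _, _ = np.histogram2d(te, hm0, bins=[te_edges, hm0_edges])
    energy, _, _ = np.histogram2d(te, hm0, bins=[te_edges, hm0_edges], weights=energy_mwh)

    hours_per_year = counts * dt_hours / n_years
    no_calm = 100.0 * np.mean(hm0 > 0.5)                              # "no-calm" percentage
    return energy / n_years, hours_per_year, no_calm

# Synthetic 14-year, 6-hourly series standing in for one hotspot (illustrative only).
rng = np.random.default_rng(2)
hm0 = rng.gamma(2.0, 0.5, size=14 * 365 * 4)
te = 4.0 + 2.0 * np.sqrt(hm0) + rng.normal(0.0, 0.3, hm0.size)
e, occ, nc = hm0_te_matrix(hm0, te, np.arange(0.0, 6.5, 0.5), np.arange(2.0, 13.0, 0.5))
i, j = np.unravel_index(np.argmax(e), e.shape)
print(f"most energetic class: Te in [{2.0 + 0.5*i:.1f}, {2.5 + 0.5*i:.1f}) s, "
      f"Hm0 in [{0.5*j:.1f}, {0.5*(j + 1):.1f}) m, {e[i, j]:.2f} MWh/m per year; no-calm = {nc:.1f} %")
```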
The HS5 site is located near the west coast of Favignana Island and is well exposed to energetic waves. The wave energy flux is approximately 6.88 kW m−1. The more energetic waves have a frequency of 25.97 % (approximately 95 days year−1). As observed for the HS2 site, the energy is concentrated in fewer bins than at the other sites. The dominant directions are in the sector 280-290° N, with a frequency of 31.93 %. The maximum seasonal variation between the winter and summer months is approximately 75 %.

The HS6 site is located near the west coast of Marettimo Island. The site has a different exposure from that of HS5. The wave energy is concentrated in the range 5.5-10 s for T_e and 1.5-4.5 m for H_m0, with an annual frequency of 19.89 % (approximately 73 days year−1). The percentage of "no-calm" conditions is approximately 67.42 %. The dominant directions are in the sector 270-280° N, with a frequency of 40.80 %. The wave energy flux is slightly greater than 10 kW m−1 in the winter months, whereas in the summer months a reduction to 1.87 kW m−1 is observed.

The HS7 site is located approximately 1.2 km from the city of Marsala. At this site, highly energetic waves come from directions in the range 260-290° N, and less energetic waves come from directions of 180-210° N. The wave energy flux is slightly greater than 7 kW m−1 in the winter months, whereas in the summer months a reduction to 1.29 kW m−1 is observed.

The HS8 site is located approximately 9 km from the city of Mazara del Vallo. The wave energy flux is approximately 5.4 kW m−1. The wave energy is concentrated in the range 5-9 s for T_e and 1-4 m for H_m0, with an annual frequency of 27.84 % (approximately 101.6 days year−1). The percentage of "no-calm" conditions is approximately 66.77 %. The dominant directions are in the sector 270-300° N, with a frequency of 40.35 %. The wave energy flux is slightly greater than 9.5 kW m−1 in the winter months, whereas in the summer months a reduction to 1.51 kW m−1 is observed.

Discussion and conclusions

The characterisation of hotspots is important for the appropriate location of a WEC farm, especially in the Mediterranean Sea, which includes sites where a wave energy concentration can be observed due to wave transformation. In the present study, the potential wave energy along the coasts of Sicily was investigated to identify possible sites for the installation of wave farms near the coast. The analysis was based on wave and wind data obtained from the forecast centre ECMWF, covering a period of 14 years (1999-2012) with a time resolution of 6 h. The wave data were propagated using the SWAN model, which allows wave propagation to be studied taking into account several phenomena such as white-capping, nonlinear wave-wave interactions, refraction, diffraction and wave regeneration due to wind. To validate the model, the significant wave height output was compared to data from several satellites, and good agreement was found between the two data sets.
The results obtained for the wave energy flux show that the most energetic areas are located on the western side of Sicily and in the Strait of Sicily. The offshore values of the observed energy flux are close to 8 kW m−1 on the western side, with a reduction to 4-6 kW m−1 in the Strait of Sicily. The wave energy flux is further reduced to 2-3 kW m−1 on the north and east sides of Sicily. Comparing the wave energy estimates along the bathymetric lines at −10, −20 and −50 m, eight hotspots were identified (Fig. 11 shows the locations of the sites). In particular, the HS3 site (near Capo San Vito) is the most energetic, although the analysis of the energy distribution showed that its wave energy flux is determined by events that have a high energy but a low annual frequency. Instead, the HS5 site (near the island of Favignana) is characterised by an average wave power lower than that of HS3, but the energy is concentrated in a limited range of H_m0 and T_e with an annual frequency of 25.97 %. An energy flux concentrated in a limited range of H_m0 and T_e and within a limited directional sector is an important characteristic for the productivity of WECs. Indeed, the devices are generally designed to guarantee good performance under average climate conditions; therefore, smaller variations of the wave climate with respect to the design conditions correspond to a greater production of energy from the device. A similar energy distribution was observed for the HS2 (near Capo San Vito) and HS4 (near the Trapani port) sites, although they exhibit a lower average energy than the HS5 site. The HS1 site (near the Terrasini port) does not provide sufficient energy to ensure an economic payback over a reasonable period of time: the percentage of calm events (significant wave height less than 0.5 m) is greater than 50 % and the annual average wave energy flux is approximately 3.3 kW m−1. For the HS6 (near the island of Marettimo), HS7 (near the Marsala port) and HS8 (near the Mazara del Vallo port) sites, the wave energy arrives not only from the dominant direction, as observed for the other sites, but also from secondary directions. Therefore, to better exploit the wave energy, it is preferable to use fixed unidirectional devices at the HS2-HS5 sites, whereas for the latter three sites it is more convenient to use directional devices.

These analyses show that profitable WECs could be realised at various sites around Sicily. However, currently, the majority of devices are designed for areas with high wave energy.

Figure 1. Location of the study area. The red values in the left picture indicate the yearly mean wave power flux (in kW m−1) estimated from ECMWF data.

Figure 2. On the left: the bathymetry used to simulate wave propagation. On the right: detail of the model grid.

Figure 4. Observation points from satellites, selected within the computational domain for validating the SWAN model results.

Figure 5. Comparison of the significant wave height evaluated by the SWAN model and by Jason-1 satellite data.

Figure 6. Comparison between Eqs. (3) and (4): (a) the relative difference ΔP plotted as a function of the peak period and the depth (the lines indicate the ratio between the depth and the wavelength); (b) the relative difference ΔP plotted as a function of the peak period and the ratio h/L.

Figure 8. Distribution of the average wave energy flux per unit crest length within the computational domain.
Figure 9. Seasonal distribution of the average wave energy flux per unit crest length within the computational domain: (a) December, January and February; (b) March, April and May; (c) June, July and August; (d) September, October and November.

Figure 11. Locations of the selected hotspots and of the nearest ports where WECs could be located.

Figure 12. Characterisation of the yearly average wave energy in terms of significant wave height H_m0 and energy period T_e: on the left site HS1 and on the right site HS2. The colour scale represents the annual energy per metre of wave front (in MWh m−1). The numbers within the graph indicate the occurrence of sea states (in number of hours per year).

Figure 13. Characterisation of the yearly average wave energy in terms of H_m0 and T_e: on the left site HS3 and on the right site HS4. The colour scale represents the annual energy per metre of wave front (in MWh m−1). The numbers within the graph indicate the occurrence of sea states (in number of hours per year).

Figure 14. Characterisation of the yearly average wave energy in terms of H_m0 and T_e: on the left site HS5 and on the right site HS6. The colour scale represents the annual energy per metre of wave front (in MWh m−1). The numbers within the graph indicate the occurrence of sea states (in number of hours per year).

Figure 15. Characterisation of the yearly average wave energy in terms of H_m0 and T_e: on the left site HS7 and on the right site HS8. The colour scale represents the annual energy per metre of wave front (in MWh m−1). The numbers within the graph indicate the occurrence of sea states (in number of hours per year).

Figure 16. Wave power climate for the selected sites.

Table 1. Performance indices of the ECMWF data: comparison between ECMWF data and buoy data.

Table 2. Performance indices of the SWAN model: comparison between the SWAN data and satellite data.

Table 3. Sites selected in proximity of the Sicilian coast. For each site the table shows the geographical coordinates, the depth, the annual average wave power, the annual average wave energy, the distance between the site and the coast D_c, the distance between the site and the nearest port D_p, and the name of the port.
Inferring work by quantum superposing forward and time-reversal evolutions

The study of thermodynamic fluctuations allows one to relate the free energy difference between two equilibrium states with the work done on a system through processes far from equilibrium. This finding plays a crucial role in the quantum regime, where the definition of work becomes non-trivial. Based on these relations, here we develop a simple interferometric method allowing a direct estimation of the work distribution and the average dissipative work during a driven thermodynamic process by superposing the forward and time-reversal evolutions of the process. We show that our scheme provides useful upper bounds on the average dissipative work even without full control over the thermodynamic process, and we propose methodological variations depending on the possible experimental limitations encountered. Finally, we exemplify its applicability by an experimental proposal for implementing our method on a quantum photonics system, in which the thermodynamic process is performed through polarization rotations induced by liquid crystals acting in a discrete temporal regime.

I. INTRODUCTION

While the microscopic dynamical laws of both classical and quantum physics are time-symmetric, and hence reversible, the dynamics of macroscopic quantities exhibit a preferred temporal direction. The physical law formalizing this concept is the second law of thermodynamics, whereby the "arrow of time" [1] is associated with a production of entropy [2]. According to this law, for instance, if we take a vessel divided by a wall, and put a gas in only one half of the vessel, when we remove the wall we will observe with near-unity probability the gas expanding and occupying the whole vessel. Because of its unidirectional temporal evolution, this phenomenon has often been used to differentiate between past and future. There is, however, a non-zero probability that at some time all the molecules may happen to visit one half of the vessel. In this regard, the development of so-called "fluctuation theorems", both for classical [3][4][5][6][7] and quantum [8][9][10][11][12][13][14][15][16][17][18] systems, has led to a sharpening of our understanding of the second law as a statistical law, in which the entropy of a system away from equilibrium can spontaneously decrease rather than increase with non-zero probability.
As specified by those theorems, the ratio between the probability of entropy-decreasing events and that of entropy-increasing ones vanishes exponentially with the size of the fluctuations, and can hence be neglected in the macroscopic limit [6].

The fundamental and empirical basis for the study of entropy production and thermodynamic irreversibility in driven systems is typically provided by the notion of dissipative work, W_diss ≡ W − ∆F (namely, the work invested in a thermodynamic transformation between equilibrium states having a free energy difference ∆F, which cannot be recovered by reversing the driving protocol) [4,5,[19][20][21]. The fluctuations of the dissipative work in the process can be characterized by constructing the work probability distribution, P(W), associated to the observation of a particular value of W in a single realization of the driving protocol. Such fluctuations are constrained by a refined version of the second law, namely Crooks' fluctuation theorem, according to which

P(W)/P̃(−W) = e^{β(W − ∆F)},   (1)

where P̃(−W) is the probability of performing a work −W in the time-reversal dynamics, β = 1/k_B T is the inverse temperature of the surrounding thermal environment, and k_B is the Boltzmann constant. According to Eq. (1), this probability ratio decreases exponentially with the amount of dissipative work, W_diss, in the realization. Furthermore, Eq. (1) implies the famous Jarzynski equality ⟨e^{−βW_diss}⟩ = 1, where the brackets denote the statistical average with respect to P(W). Jarzynski's equality has severe implications by itself, such as the exponential decay of the probability to observe negative values of W_diss in the forward dynamics (explicitly, P(W_diss < −ζ) ≤ e^{−βζ} for any ζ ≥ 0) [6]. Work fluctuations have been measured in small classical systems, leading both to tests of Crooks' theorem and the Jarzynski equality, and to the development of applications such as free-energy measurements [22][23][24][25][26][27].

In quantum physics, since work is not associated to any observable [28], its definition becomes more complex, and it usually demands the use of the so-called "two-point measurement (TPM) scheme" [9]. In the TPM scheme, work is represented as the difference between the initial and final energies of the system, obtained by performing two projective measurements of the Hamiltonian at the beginning and at the end of the forward as well as of the time-reversal process. Extensions to non-ideal measurements [29][30][31] and variants of the TPM scheme [32][33][34][35][36][37] have also been considered recently. The TPM approach has been directly implemented in several experiments [38][39][40][41][42]. However, since implementing projective energy measurements before and after an arbitrary process may be challenging in certain experimental scenarios, and the measurement might annihilate the system measured, alternative methods for extracting the work distribution were proposed to circumvent this requirement. For example, in Refs. [43,44], a scheme based on Ramsey interferometry using a single probe qubit was proposed, and subsequently implemented [45,46], to extract the characteristic function of work in an NMR platform. A similar method to sample the work probability distribution from a generalized measurement scheme was introduced in Refs. [47][48][49], and tested experimentally on an ensemble of cold atoms [50].
Despite their many advantages and proven efficacy, previous schemes often involve indirect measurements requiring post-processing of data, or experimentally demanding entangling operations. Developing new accurate and simple methods to directly estimate the work probability distribution and the irreversibility (thus refraining from the TPM scheme) is therefore of prime interest in quantum thermodynamics.

In this paper, we propose a simple interferometric method for quantifying the work distribution and the average dissipative work associated to a given driving protocol Λ during a thermodynamic process. The method enables one to directly read out the relevant transition probabilities between eigenstates of the initial and the final Hamiltonians, which are needed to build the work probability distribution and the relative entropy (or Kullback-Leibler divergence) between the density operators in the forward and time-reversal dynamics. Remarkably, our method requires no entangling operations with separate auxiliary systems, no measurement of the thermodynamic system, no data post-processing, and it runs twice as fast as running the complete protocol Λ. More precisely, in the proposed method we superpose two interferometric paths: along one path the system is driven following the first half of the driving protocol Λ (i.e., from t = 0 to t = τ/2), while along the other path the system is affected by the time-reversal version of the second half of the protocol (from t = τ/2 to t = τ). We show that the fringe visibility in the interferometer allows one to quantify both the full work probability distribution associated to an arbitrary protocol Λ and the relative entropy between the states in the forward and the time-reversal dynamics at any instant of time, assuming that the initial and final Hamiltonians are known. Moreover, in the case of limited control over the preparations, our scheme still provides useful upper bounds on the average dissipative work. Since single photons provide advantages in interferometric schemes due to their robustness, individual addressability and intrinsic mobility, we propose a photonic implementation of our scheme in which the thermodynamic system is encoded in the polarization of a single photon. Other platforms that may be used to realize the scheme include ultracold atoms [48,50], or NMR spectroscopy of nuclear spins [45,46]. A methodologically related scheme was also proposed recently to investigate the thermodynamic arrow of time in a quantum superposition of the forward and time-reversal processes [51].

II. PROCEDURE OVERVIEW

Consider a thermodynamic system S that is driven by a time-dependent Hamiltonian H(Λ(t)) depending on some control parameter Λ(t) which varies from t = 0 to t = τ, according to a protocol Λ = {Λ(t) : 0 ≤ t ≤ τ}. The system starts the evolution in a thermal state ρ_0^th = exp[−β(H_0 − F_0)] in equilibrium with a thermal reservoir at inverse temperature β, where F_0 is the free energy corresponding to the initial Hamiltonian H_0 ≡ H(Λ(0)). The system is then isolated from the environment, and the driving protocol Λ is applied, bringing the system to an out-of-equilibrium state ρ(t) = U(t, 0) ρ_0^th U†(t, 0) at intermediate times, where U(t, 0) = T exp[−(i/ℏ)∫_0^t H(Λ(s)) ds], with T being the so-called "time-ordering" operator resulting from the Dyson decomposition. Once the driving protocol is ended at time τ, the system may eventually equilibrate again from ρ_τ = ρ(τ) to the reservoir temperature, thereby reaching the thermal state ρ_τ^th = exp[−β(H_τ − F_τ)], corresponding to the final Hamiltonian H_τ ≡ H(Λ(τ)) and the free energy F_τ.
Together with the above thermodynamic process, we consider its time-reversal twin. In the reverse process, the system starts the evolution at time t = 0 with Hamiltonian ΘH_τΘ† in equilibrium with the thermal reservoir, that is, in the state ρ̃_0^th = exp[−β(ΘH_τΘ† − F_τ)] = Θ ρ_τ^th Θ†. Here, Θ is the (anti-unitary) time-reversal operator, responsible for changing the sign of observables with odd parity (such as momentum, or spin under time-reversal). The time-reversal operator fulfills Θ i𝟙 = −i𝟙 Θ and ΘΘ† = Θ†Θ = 𝟙. The system is then driven according to the time-reversal protocol Λ̃ = {Λ̃(t) ≡ Λ(τ − t) : 0 ≤ t ≤ τ}, corresponding to the inverse sequence of values of the control parameter. This brings the system out of equilibrium to the state ρ̃(t) = Ũ(t, 0) ρ̃_0^th Ũ†(t, 0) at intermediate times. After completing the protocol Λ̃, the system may return back to equilibrium at time t = τ, reaching the thermal state ρ̃_τ^th = Θ ρ_0^th Θ†.

We denote by |E_n^(0)⟩ the initial energy eigenstates of the system in the forward process, and by p_n^(0) the probability that the system has energy E_n^(0). Analogously, the initial eigenstates of the system in the time-reversal process read Θ|E_m^(τ)⟩. The work probability distribution in the TPM scheme then results [9]:

P(W) = Σ_{n,m} p_n^(0) p_{m|n} δ[W − (E_m^(τ) − E_n^(0))],   (2)

where we introduced the conditional probabilities p_{m|n} = |⟨E_m^(τ)|U(τ, 0)|E_n^(0)⟩|², i.e. the probability of finding the system in the eigenstate |E_m^(τ)⟩ in the second projective energy measurement after the unitary evolution U(τ, 0), given that it was found to be in |E_n^(0)⟩ in the first one.

Figure 1. Diagrammatic representation of the forward and time-reversal evolutions of the thermodynamic system. An initial thermal state ρ_0^th with Hamiltonian H_0 is driven into a final, non-equilibrium state ρ_τ. It then eventually equilibrates at the reservoir temperature, reaching the thermal state ρ_τ^th. (If the driving process were reversible, quasi-static, the system would have ended in the state ρ_τ^th immediately after the drive.) Along the driving process the Hamiltonian is changed from H_0 to H_τ. Analogously, in the process' time-reversal twin a thermal state ρ̃_0^th = ρ_τ^th with Hamiltonian H_τ evolves into a state ρ̃_τ and then eventually equilibrates to the state ρ̃_τ^th.

The micro-reversibility relation for non-autonomous systems [9] reads:

Ũ(t, 0) = Θ U†(τ, τ − t) Θ†.   (3)

Using the micro-reversibility relation (3), we obtain p̃_{n|m} = p_{m|n}. This relation is the key property needed to obtain Crooks' theorem in Eq. (1) [52]. Furthermore, we assume that the Hamiltonian is invariant under time-reversal [i.e., ΘH(t) = H(t)Θ]. As a consequence, the relations Θ|E_n^(0)⟩ = |E_n^(0)⟩ and Θ|E_m^(τ)⟩ = |E_m^(τ)⟩ are also verified.

In Refs. [19,20], the authors derived an important relation, closely connected to Crooks' theorem, linking the dissipative work produced during the protocol Λ with the relative entropy between the density operators in the forward and time-reversal dynamics at any intermediate instant of time t:

β⟨W_diss⟩ = β(⟨W⟩ − ∆F) = S(ρ(t) || Θ† ρ̃(τ − t) Θ),   (4)

where S(ρ||σ) := Tr[ρ ln ρ − ρ ln σ] ≥ 0 is the relative entropy between two generic states ρ and σ. Reversible processes, for which the state in the forward dynamics is statistically indistinguishable from the one generated in the time-reversal dynamics, do not dissipate work, ⟨W_diss⟩ = 0, and therefore all the work performed during the protocol Λ, ⟨W⟩ = ∆F, can be recovered by implementing the time-reversal protocol Λ̃. Importantly, the equality in Eq. (4) is obtained in the case of a closed system following unitary dynamics, as in the TPM scheme presented above. For open systems, the equality above is instead replaced by an inequality after tracing out environmental degrees of freedom [19,20].
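To make the quantities introduced above concrete, the short Python sketch below (a numerical illustration, not part of the proposal itself) evaluates the TPM construction for an illustrative driven qubit: a discretised protocol rotating H_0 ∝ σ_z into H_τ ∝ σ_x, with ℏ = 1 and arbitrary parameter values. It computes p_{m|n}, checks Crooks' relation (1) and the Jarzynski equality, and verifies the relative-entropy expression of Eq. (4) at t = τ. All names and parameter values are assumptions made only for the example.

```python
import numpy as np
from scipy.linalg import expm, logm

sz = np.array([[1, 0], [0, -1]], dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
hbar, omega, beta, N, tau = 1.0, 1.0, 1.2, 20, 2 * np.pi
dt = tau / N

H0, Htau = 0.5 * hbar * omega * sz, 0.5 * hbar * omega * sx     # initial / final Hamiltonians
protocol = [0.5 * hbar * omega * (np.cos(l) * sz + np.sin(l) * sx)
            for l in np.linspace(np.pi / (2 * N), np.pi / 2, N)]  # Lambda_k = k*pi/(2N)

def thermal(Hm):
    r = expm(-beta * Hm)
    return r / np.trace(r)

U = np.eye(2, dtype=complex)
for Hk in protocol:                       # U(tau, 0): time-ordered product of step unitaries
    U = expm(-1j * Hk * dt / hbar) @ U

E0, V0 = np.linalg.eigh(H0)
Et, Vt = np.linalg.eigh(Htau)
p0 = np.exp(-beta * E0) / np.exp(-beta * E0).sum()     # thermal populations of H0
pt = np.exp(-beta * Et) / np.exp(-beta * Et).sum()     # thermal populations of Htau
pmn = np.abs(Vt.conj().T @ U @ V0) ** 2                # p_{m|n} = |<E_m^(tau)|U(tau,0)|E_n^(0)>|^2
dF = 0.0                                               # H0 and Htau are isospectral here

# Crooks' theorem checked pair by pair: P(W)/P~(-W) = exp[beta(W - dF)], using p~_{n|m} = p_{m|n}.
for n in range(2):
    for m in range(2):
        W = Et[m] - E0[n]
        ratio = (p0[n] * pmn[m, n]) / (pt[m] * pmn[m, n])
        print(f"W = {W:+.2f}: ratio = {ratio:.4f}, exp[beta(W-dF)] = {np.exp(beta * (W - dF)):.4f}")

# Jarzynski equality and the relative-entropy expression of Eq. (4) at t = tau.
Ws = np.array([Et[m] - E0[n] for n in range(2) for m in range(2)])
Ps = np.array([p0[n] * pmn[m, n] for n in range(2) for m in range(2)])
rho_tau = U @ thermal(H0) @ U.conj().T
S_rel = np.trace(rho_tau @ (logm(rho_tau) - logm(thermal(Htau)))).real
print("<exp(-beta W_diss)> =", np.sum(Ps * np.exp(-beta * (Ws - dF))))
print("beta<W_diss> =", beta * (np.dot(Ps, Ws) - dF), "  S(rho_tau||rho_tau^th) =", S_rel)
```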
In the following, we present an interferometric scheme that allows us to directly measure the conditional probabilities p_{m|n} (and therefore p̃_{n|m}) without implementing the TPM scheme, but resorting to the visibility of the fringes in an interferometer. This enables us to construct P(W) and P̃(W), and hence the relative entropy S(ρ(t) || Θ† ρ̃(τ − t) Θ) in Eq. (4).

III. INTERFEROMETRIC SCHEME

The main idea of our scheme is to entangle the system of interest with a two-level "auxiliary system", and implement different dynamics (forward and time-reversal) on each of the two states of the auxiliary system. To fix ideas, we assume that the auxiliary system is the path of a single photon in a Mach-Zehnder interferometer such as the one depicted in Fig. 2, and denote by {|0⟩_A, |1⟩_A} the basis of the two possible paths. (We stress, however, that this auxiliary system does not have to be encoded in the path, but can be any degree of freedom which can be suitably controlled.) Suppose now that, in one of the two states of the superposition (say, |0⟩_A), the system is prepared in the state |E_n^(0)⟩, while in the other state of the superposition (|1⟩_A) the preparation is |E_m^(τ)⟩, for a certain choice of n and m. Consequently, the initial state is the pure state

|Ψ(0)⟩ = (1/√2)(|0⟩_A |E_n^(0)⟩ + |1⟩_A |E_m^(τ)⟩).   (5)

The operation U(τ/2, 0) is then applied to S in the path |0⟩_A, while, on the path |1⟩_A, the operation Ũ(τ/2, 0) is performed, followed by the time-inversion operation Θ†. The total evolution is given by

U_tot = |0⟩⟨0|_A ⊗ U(τ/2, 0) + |1⟩⟨1|_A ⊗ Θ† Ũ(τ/2, 0).   (6)

Therefore, at the time τ/2, the state of the system and the path degree of freedom will read:

|Ψ(τ/2)⟩ = (1/√2)[|0⟩_A |ψ_0(τ/2)⟩ + |1⟩_A |ψ_1(τ/2)⟩],   (7)

with |ψ_0(τ/2)⟩ = U(τ/2, 0)|E_n^(0)⟩ and |ψ_1(τ/2)⟩ = Θ† Ũ(τ/2, 0)|E_m^(τ)⟩. If we now marginalize on the path degree of freedom (i.e., we trace out the thermodynamic system), we obtain

ρ_A(τ/2) = (1/2)[𝟙_A + ⟨ψ_1(τ/2)|ψ_0(τ/2)⟩ |0⟩⟨1|_A + ⟨ψ_0(τ/2)|ψ_1(τ/2)⟩ |1⟩⟨0|_A].   (8)

Figure 2. Schematic representation of the interferometric technique to directly estimate the work dissipation. A thermodynamic quantum system S is prepared in the state |E_n^(0)⟩ along the path |0⟩_A and in the state |E_m^(τ)⟩ along the path |1⟩_A. The operation U(τ/2, 0) is then applied to S when traveling along the path |0⟩_A, while the operation Ũ(τ/2, 0) followed by the time-inversion operation Θ† is applied to S along the path |1⟩_A. The resulting state is as in Eq. (7). The two quantum superposed amplitudes are then interfered with each other, and the auxiliary system is measured in the (|0⟩_A ± |1⟩_A)/√2 basis. The same setup can be used in the case of a limited preparation, in which case the input state is the thermal state ρ_0^th along |0⟩_A and ρ_τ^th along |1⟩_A.

Similarly, if we trace out the auxiliary system, we obtain a mixture of the states of the driven system at time τ/2 in the forward and time-reversal processes:

ρ_S(τ/2) = (1/2)[ρ_n(τ/2) + Θ† ρ̃_m(τ/2) Θ],   (9)

i.e. the state of the system is a mixture of ρ_n(τ/2) (the state resulting from the forward evolution during a time interval τ/2 with initial condition |E_n^(0)⟩) and ρ̃_m(τ/2) (the state resulting from the time-reversal evolution during a time interval τ/2 with initial condition |E_m^(τ)⟩).

Ultimately, our aim is to relate the information gained by measuring the output ports of the interferometer to the work statistics and the "degree of reversibility" of the thermodynamic process. This degree of reversibility is related to the distinguishability of the two possible paths followed by the auxiliary system in the interferometer. If we measure the final state ρ_A(τ/2) in the basis (|0⟩_A ± |1⟩_A)/√2 (see Fig. 2), the probabilities of the two possible results are

p_± = (1/2)[1 ± Re⟨ψ_1(τ/2)|ψ_0(τ/2)⟩].   (10)

By adding a controllable relative phase φ between the two paths and scanning it, one obtains interference fringes; their visibility is related to our capacity to identify the path followed by the auxiliary system [53]:

V_{m,n} = |⟨ψ_1(τ/2)|ψ_0(τ/2)⟩| = |⟨E_m^(τ)| Ũ†(τ/2, 0) Θ U(τ/2, 0) |E_n^(0)⟩|.   (11)

Now, we crucially apply the micro-reversibility relation in Eq. (3) to realize that Θ† Ũ†(τ/2, 0) Θ = U(τ, τ/2). Inserting this into Eq. (11), and using U(τ, τ/2) U(τ/2, 0) = U(τ, 0), we obtain the main result of our proposal:

V_{m,n} = |⟨E_m^(τ)| U(τ, 0) |E_n^(0)⟩| = √p_{m|n},   (12)

where, in the last equality, we identified the expression of the conditional probabilities p_{m|n} of the TPM scheme. Running this scheme for the N² different initial states, n, m = 1, 2, ..., N (where N is the dimension of the system Hilbert space), and assuming that we know the eigenenergies E_n^(0) and E_m^(τ) of the initial and final Hamiltonians, we can construct the full work probability distribution

P(W) = Σ_{n,m} p_n^(0) V_{m,n}² δ[W − (E_m^(τ) − E_n^(0))]   (13)

and its time-reversal twin P̃(W). We notice that in practice only (N − 1)² of the N² initial preparations need to be considered, since the properties of the conditional probability imply Σ_m V_{m,n}² = 1 for all n = 1, ..., N, and, for any unital process, p_{m|n} becomes doubly stochastic, so that we also have Σ_n V_{m,n}² = 1 for all m = 1, ..., N, as also noticed in Ref. [50]. Furthermore, we can rewrite the r.h.s. of Eq. (4) in terms of known quantities:

⟨W_diss⟩ = Σ_{n,m} p_n^(0) V_{m,n}² (E_m^(τ) − E_n^(0)) − ∆F,   (14)

which can alternatively be obtained from the average of the work probability distribution in Eq. (13), ⟨W⟩ = ∫ W P(W) dW, and the free energy difference between the initial equilibrium states, ∆F = F_τ − F_0. As a consequence, this scheme allows, through Eqs. (13) and (14), the direct estimation of the work dissipation, and the testing of the Jarzynski equality.
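The equality V_{m,n} = √p_{m|n} of Eq. (12) can be checked numerically. The sketch below is only an illustration: it assumes ℏ = 1, a real Hamiltonian (so that Θ reduces to complex conjugation in the chosen basis) and an even number of protocol steps so that the protocol splits exactly into two halves. It builds the two branch amplitudes of Eq. (7) and compares the squared overlap with the TPM conditional probability of the full forward protocol.

```python
import numpy as np
from scipy.linalg import expm

sz = np.array([[1, 0], [0, -1]], dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
hbar, omega, N, tau = 1.0, 1.3, 8, 2 * np.pi           # N even so the protocol splits in halves
dt = tau / N
protocol = [0.5 * hbar * omega * (np.cos(l) * sz + np.sin(l) * sx)
            for l in np.linspace(np.pi / (2 * N), np.pi / 2, N)]

def ordered_product(hs):                                # latest step applied last (leftmost factor)
    U = np.eye(2, dtype=complex)
    for Hk in hs:
        U = expm(-1j * Hk * dt / hbar) @ U
    return U

U_full = ordered_product(protocol)                      # U(tau, 0)
U_half = ordered_product(protocol[:N // 2])             # U(tau/2, 0), forward branch
Ut_half = ordered_product(protocol[::-1][:N // 2])      # U~(tau/2, 0), time-reversal branch

E0, V0 = np.linalg.eigh(0.5 * hbar * omega * sz)        # |E_n^(0)>
Et, Vt = np.linalg.eigh(0.5 * hbar * omega * sx)        # |E_m^(tau)>

# For a real Hamiltonian, Theta is complex conjugation, so Theta^dag U~ |E_m> = conj(U~ |E_m>).
for n in range(2):
    for m in range(2):
        psi0 = U_half @ V0[:, n]                        # branch |0>_A at time tau/2
        psi1 = np.conj(Ut_half @ Vt[:, m])              # branch |1>_A at time tau/2
        visibility = abs(np.vdot(psi1, psi0))           # Eq. (11)
        p_cond = abs(Vt[:, m].conj() @ U_full @ V0[:, n]) ** 2
        print(f"n={n}, m={m}:  V^2 = {visibility**2:.6f},  p(m|n) = {p_cond:.6f}")
```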
IV. LIMITED PREPARATION AND BOUNDS ON WORK DISSIPATION

In the previous section, we assumed that we have the ability to prepare a superposition of pairs of energy eigenstates of the initial and final Hamiltonians of the system. Nonetheless, it could be the case that, due to technical limitations, one may not be able to prepare these pure states in the laboratory. For instance, if we do not have full control over the system in its preparation stage, and cannot isolate it from the reservoir, we may only be able to prepare the thermal states ρ_0^th and ρ_τ^th. In the following we explore what we can still learn about the work dissipation by exploiting our interferometric scheme in such a situation. We anticipate that, although the full work probability distribution is no longer recoverable in this case, we are still able to provide useful upper bounds on the dissipative work done in the process.

As before, we prepare our auxiliary degree of freedom in a quantum superposition (1/√2)(|0⟩_A + |1⟩_A) at t < 0. The initial states of the system in the two branches will now be, in general, the mixed thermal states ρ_0^th and ρ_τ^th. However, hereafter we will make use of their "purifications", which can be considered as useful mathematical tools, and may correspond physically to all the environmental degrees of freedom E, such that the overall joint state of the system and these degrees of freedom is pure. (Notice that here the environment includes, but is not limited to, the thermal reservoir. Furthermore, our scheme does not require access to the environmental degrees of freedom.) We denote the purifications of the thermal states, respectively, as |ψ^(0)⟩_{S,E} and |ψ̃^(0)⟩_{S,E}, and they verify Tr_E[|ψ^(0)⟩⟨ψ^(0)|_{S,E}] = ρ_0^th and Tr_E[|ψ̃^(0)⟩⟨ψ̃^(0)|_{S,E}] = ρ_τ^th. Again, we perform the operation U(τ/2, 0) in the path |0⟩_A according to the protocol Λ, and Ũ(τ/2, 0) in the path |1⟩_A according to Λ̃, followed by Θ†.
Notice that the unitaries U(τ/2, 0) and Ũ(τ/2, 0) only act on the system of interest, with no effect on the environment. We can then compute the global state of the system, the environment and the auxiliary system at τ/2 similarly as before, and obtain the marginal states of the auxiliary degree of freedom and of the composite system consisting of the system and the environment. For the latter, we obtain a mixture of the states of the system and the environment at τ/2 in the forward and time-reversal dynamics:

ρ_{S,E}(τ/2) = (1/2)[ρ^(+)_{S,E} + ρ^(−)_{S,E}],   (15)

where

ρ^(+)_{S,E} = U(τ/2, 0)|ψ^(0)⟩⟨ψ^(0)|_{S,E} U†(τ/2, 0),    ρ^(−)_{S,E} = Θ† Ũ(τ/2, 0)|ψ̃^(0)⟩⟨ψ̃^(0)|_{S,E} Ũ†(τ/2, 0) Θ.   (16)

The corresponding state of the system only will then be an equal-probability mixture of the states ρ_S(τ/2) = Tr_E[ρ^(+)_{S,E}] and Θ† ρ̃_S(τ/2) Θ = Tr_E[ρ^(−)_{S,E}]. The visibility, determined by the off-diagonal elements of the state of the auxiliary degree of freedom, reads in this case

V = |⟨ψ_1(τ/2)|ψ_0(τ/2)⟩|,   with |ψ_0(τ/2)⟩ = U(τ/2, 0)|ψ^(0)⟩_{S,E} and |ψ_1(τ/2)⟩ = Θ† Ũ(τ/2, 0)|ψ̃^(0)⟩_{S,E},   (17)

which can no longer be related to the different outcomes of a TPM scheme. This notwithstanding, as we will shortly see, one can still make use of this information in an alternative way.

From Ref. [53], we know that the visibility V of the interferometer fringes and the distinguishability D(ρ, σ) between two "which-path detector states" ρ and σ (i.e., two states from which we can optimally infer the which-path information, would we perform a measurement to distinguish between them) are mutually exclusive. In particular, it has been shown that these two quantities respect the complementarity relationship

V² + D² ≤ 1,   (18)

and that this relation becomes an equality if the "detector states" are pure, as is the case here. The distinguishability between the two states is given by the trace-norm distance between them, i.e., D(ρ, σ) := (1/2)||ρ − σ|| := (1/2)Tr|ρ − σ|. The distinguishability D(ρ^(+)_{S,E}, ρ^(−)_{S,E}) gives us an estimation of how well one can distinguish between the two paths of the interferometer by measuring the system and the environment. However, we are interested in the trace-norm distance between the marginal states of the system only. We can therefore use the fact that the trace distance is non-increasing under partial trace, i.e., D(ρ^(+)_{S,E}, ρ^(−)_{S,E}) ≥ D(ρ_S(τ/2), Θ† ρ̃_S(τ/2) Θ), to get:

D(ρ_S(τ/2), Θ† ρ̃_S(τ/2) Θ) ≤ √(1 − V²).   (19)

Finally, we relate the distinguishability between the system states at τ/2 in the forward and time-reversal dynamics to the relative entropy in Eq. (4), and hence to the average dissipative work during the protocol Λ. This can be done using the upper bounds obtained in Eqs. (17) and (19) of Ref. [54]. Minor manipulations of these equations lead to the formulation of the following theorem.

Theorem. Let ρ and σ be two strictly positive density operators in a finite-dimensional Hilbert space H. Then the relative entropy S(ρ||σ) can be upper bounded by a quadratic function of the Frobenius (or Euclidean) norm ||ρ − σ||₂ = (Tr[(ρ − σ)†(ρ − σ)])^{1/2} ≤ ||ρ − σ||, with a prefactor depending on the smallest eigenvalue α_σ ∈ (0, 1] of σ [Eq. (20)]. Furthermore, setting the dimension of the Hilbert space to dim(H) ≡ d, a second, logarithmic bound involving d and α_σ also holds [Eq. (21)].

Combining Eq. (19) and the bounds (20)-(21), we obtain the two bounds (22) and (23) for the dissipative work during the original thermodynamic process, denoted B_2 and B_log and expressed in terms of the measured visibility V and of α, where we have used the relation between the dissipative work and the relative entropy in Eq. (4). Additionally, we denoted α ≡ e^{−β(E^τ_max − F_τ)}, where E^τ_max is the maximum eigenvalue of the Hamiltonian H_τ. This follows from the fact that the states ρ̃_S(τ/2) and ρ̃_S(0) = ρ_τ^th have the same spectrum due to their unitary equivalence, that is, ρ̃_S(τ/2) = Ũ(τ/2, 0) ρ̃_S(0) Ũ†(τ/2, 0).

We notice that the bounds (22)-(23) cannot be saturated in general when the initial state of the system is mixed, due to the complementarity relation in Eq. (19), which involves a partial trace over the environmental degrees of freedom. (Conversely, saturation would require either measuring the whole environment or a pure initial state of the system, as in the previous sections.) Nevertheless, there is a single case where the bound B_2 in Eq. (22) is saturated, namely when the reversibility conditions (quasi-static evolution) are verified, where V → 1 and ⟨W_diss⟩ → 0. On the other hand, the bound in Eq. (23) is not saturated even in the reversible case, since it is designed to work better in irreversible conditions, for V < 1 and α → 0, where Eq. (19) becomes a strict inequality.
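The chain of inequalities leading to Eq. (19) can be illustrated numerically. The following sketch assumes a qubit with a real Hamiltonian (so that Θ reduces to complex conjugation), ℏ = 1, and a particular choice of purification of the two thermal states (an assumption made only for the illustration, since different purifications yield different visibilities). It evaluates the thermal-preparation visibility of Eq. (17), the trace distance between the reduced states at τ/2, and the relative entropy β⟨W_diss⟩ of Eq. (4) that the bounds B_2 and B_log aim to constrain.

```python
import numpy as np
from scipy.linalg import expm, logm

sz = np.array([[1, 0], [0, -1]], dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
hbar, omega, beta, N, tau = 1.0, 1.3, 1.2, 8, 2 * np.pi
dt = tau / N
protocol = [0.5 * hbar * omega * (np.cos(l) * sz + np.sin(l) * sx)
            for l in np.linspace(np.pi / (2 * N), np.pi / 2, N)]

def ordered_product(hs):
    U = np.eye(2, dtype=complex)
    for Hk in hs:
        U = expm(-1j * Hk * dt / hbar) @ U
    return U

def thermal(Hm):
    r = expm(-beta * Hm)
    return r / np.trace(r)

U_half = ordered_product(protocol[:N // 2])             # forward branch, U(tau/2, 0)
Ut_half = ordered_product(protocol[::-1][:N // 2])      # time-reversal branch, U~(tau/2, 0)

H0, Htau = 0.5 * hbar * omega * sz, 0.5 * hbar * omega * sx
E0, V0 = np.linalg.eigh(H0)
Et, Vt = np.linalg.eigh(Htau)
p0 = np.exp(-beta * E0) / np.exp(-beta * E0).sum()
pt = np.exp(-beta * Et) / np.exp(-beta * Et).sum()

# Visibility of Eq. (17) with purifications |psi^(0)> = sum_n sqrt(p_n)|E_n^(0)>|n>_E and
# |psi~^(0)> = sum_m sqrt(p~_m)|E_m^(tau)>|m>_E (this purification choice is an assumption).
overlap = sum(np.sqrt(p0[n] * pt[n]) *
              np.vdot(np.conj(Ut_half @ Vt[:, n]), U_half @ V0[:, n]) for n in range(2))
V = abs(overlap)

# Reduced system states at tau/2 in the two branches and their trace distance, Eq. (19).
rho_fwd = U_half @ thermal(H0) @ U_half.conj().T
rho_rev = np.conj(Ut_half @ thermal(Htau) @ Ut_half.conj().T)   # Theta^dag rho~_S Theta, Theta = K
D = 0.5 * np.sum(np.abs(np.linalg.eigvalsh(rho_fwd - rho_rev)))

S_rel = np.trace(rho_fwd @ (logm(rho_fwd) - logm(rho_rev))).real   # = beta*<W_diss>, Eq. (4)
print(f"V = {V:.4f}, D = {D:.4f}, sqrt(1 - V^2) = {np.sqrt(1 - V**2):.4f}  (D <= sqrt(1 - V^2))")
print(f"beta*<W_diss> = S(rho || Theta^dag rho~ Theta) = {S_rel:.4f}")
```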
(Conversely, saturation would require either measuring the whole environment or a pure initial state of the system as in the previous sections.) Nevertheless, there is a single case where the bound B 2 in Eq. (22) is saturated, namely, by verifying the reversibility conditions (quasi-static evolution) where V → 1 and W diss → 0. On the other hand, the bound in Eq. (23) is not saturated even in the reversible case, since it is designed to work better in irreversible conditions for V < 1 and α → 0, where Eq. (19) becomes a strict inequality. Further practical limitations on the ability to split the protocol or to implement the time reversal operation Θ are addressed in appendix A. Although in our discussion we supposed that the auxiliary degree-of-freedom is the path of the particle which encodes the system of interest, this is not a requirement of our proposal. The only three requirements on the auxiliary degreeof-freedom are the following. (1) The state in Eq. (5) should be initially prepared. (2) Depending on the state of this auxiliary degree-of-freedom, the forward and time-reversal evolutions should then be implemented. (3) Finally, the auxiliary degree-of-freedom should be measured in the basis while scanning the phase φ to estimate the visibility. This auxiliary degree-of-freedom could be encoded in the same particle, perhaps in additional energy levels of an atomic system, in which case the visibility measurement would take the form of atomic interferometry. Alternatively, a second particle could be used to condition the forward and time-reversal evolution. For example, if the target system is a qubit encoded in a single trapped ion, one could place a second ion in the trap and then couple the two via the collective vibrational mode. More concretely, Ref. [55] shows explicitly how one can implement the controlled evolution of different unitary operations using trapped ions. In this case, the initial state [Eq. (5)] would require entangling operations for its preparation, and the visibility could be easily measured on the internal degree-of-freedom of the second ion. V. EXAMPLE OF A PHOTONIC IMPLEMENTATION We apply our scheme to an illustrative experimental set-up in which the thermodynamic system is represented by a single qubit realized through the polarization degree of freedom of a single photon, its thermality is given by the degree of entanglement with an additional photon, the auxiliary qubit is encoded in its path, and the time-dependent thermodynamic process is performed in N discrete time-steps t k by sending the photon through a sequence of liquid crystal waveplates each executing a quench on the (time-independent) Hamiltonian H Λ(t k ) with k = 1, ..., N , as sketched in Fig. 3. The Hamiltonian of the qubit system can be defined as: where ω is the qubit's natural frequency, and the control parameter implements N sudden changes in the range Λ(0) = 0 to Λ(τ ) = π 2 . Consequently, the Hamiltonian is given by the spin operator within the x−z plane, which rotates by an angle of π 2N at each step around the y-axis. At the initial and final times of the protocol, the Hamiltonian is diagonal in the σ z and σ x bases, respectively. Therefore, |E In the k-th step, the control parameter takes a fixed value Λ k ≡ Ωτ k/N , where Ω = π 2τ is the angular frequency of the rotation. 
Therefore, any initial state |ψ_R(0)⟩ evolves according to

|ψ_R(t_k)⟩ = e^{−iH_k∆t/ℏ} ··· e^{−iH_1∆t/ℏ} |ψ_R(0)⟩,   (25)

where ∆t = π/(2NΩ) and, for each step k = 1, ..., N,

H_k = (ℏω/2)[cos Λ_k σ_z + sin Λ_k σ_x].   (26)

The Hamiltonian at each step, H_k, induces a rotation of the system state by an angle θ = (π/2N)(ω/Ω) around the axis whose direction d_k = (sin(kπ/2N), 0, cos(kπ/2N)) changes from step to step. This evolution can be implemented by means of a sequence of N liquid crystal wave-plates (LCWPs). The k-th LCWP rotates the photon's polarization about the axis d_k, and the angle of rotation is given by the retardance, which we can change through an externally applied voltage. Hence, to implement the full evolution we can use a series of N LCWPs, each with an optic axis set at ϑ_k = kπ/4N ∈ [0, π/2], and with the same retardance θ for all LCWPs.

Our scheme can be executed, following the set-up sketched in Fig. 3, with a Mach-Zehnder interferometer. In particular, we take N = 7, and apply the discretised Hamiltonian H_k for k = 1, 2, 3 along the path |0⟩_A, while along the path |1⟩_A we perform H_k for k = 6, 5, 4. In this case, we recover the whole work probability distribution, together with the average dissipative work during the process, which can be used to test the fluctuation relations. We note that the work evaluated does not correspond to the intrinsic photonic energy (which is given by its frequency) but to the generator of the evolution, Eq. (24).

Figure 3. To give rise to pure (mixed) states, one of the two photons is detected with (without) polarization resolution. The remaining single photon is sent through the set-up to realize the quantum measurement scheme. In one arm of the Mach-Zehnder interferometer, the state |E_n^(0)⟩ is prepared in polarization (via a quarter- (QWP) and a half- (HWP) waveplate), and the unitary U(τ/2, 0) is then applied to such a state by means of a sequence of liquid crystal waveplates (LCWPs). In the other arm of the interferometer, the photon is prepared in the state |E_m^(τ)⟩ and is then subjected to the unitary Θ† Ũ(τ/2, 0). After the two paths are recombined on a beam splitter (BS), the interference fringes are measured by varying the length of the trombone delay-line positioned along one of the two interferometric paths. By preparing the initial state of the photon from an entangled state in polarisation, the present scheme can be adapted to operate with initial thermal states (as in Sec. IV).

In addition, our scheme can also be used to test the upper bounds on the dissipative work obtained in Eqs. (22) and (23), by inserting the thermal states of the two Hamiltonians. More precisely, we may insert a single photon from a pair of photons in a partially entangled state |ψ_0^th⟩ = a|z_+⟩|z_+⟩ + b|z_−⟩|z_−⟩, where a, b ∈ C. The state of the injected photon is obtained by tracing out the second photon, ρ_0^th = |a|²|z_+⟩⟨z_+| + |b|²|z_−⟩⟨z_−|. This corresponds to a thermal state for the choice |a|² = exp(−βℏω)/Z_0 and |b|² = 1/Z_0, with Z_0 = 1 + exp(−βℏω).

In Fig. 4(a), we show the expected work probability distribution associated to our discretized protocol with N = 7 steps, for a fixed inverse temperature β = 1.2(ℏΩ)^{−1}, three values of the frequency ω = {0.5, 1.5, 3.0}Ω (light blue, dark blue, yellow) and a fixed duration τ = 2π/Ω. Since the eigenvalues of the Hamiltonian H(Λ) are constant, the work probability distribution consists of three peaks placed at the work values W = {−ℏω, 0, ℏω}. Low temperatures favor an asymmetric distribution, with a higher peak for positive work W = ℏω with respect to W = −ℏω, while for higher temperatures the two lateral peaks approach equal heights.
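Before discussing the dependence on the ratio ω/Ω in more detail, the short sketch below reproduces qualitatively this three-peak structure for the N = 7 step rotation described above, with each step modelled as a rotation by the retardance θ about the axis d_k. It is a numerical illustration only, with ℏ = 1 and the parameter values of Fig. 4(a) taken as assumptions.

```python
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
hbar, Omega, N = 1.0, 1.0, 7
beta = 1.2 / (hbar * Omega)

for omega in (0.5 * Omega, 1.5 * Omega, 3.0 * Omega):
    theta = (np.pi / (2 * N)) * (omega / Omega)             # rotation angle per LCWP
    U = np.eye(2, dtype=complex)
    for k in range(1, N + 1):
        axis = np.sin(k * np.pi / (2 * N)) * sx + np.cos(k * np.pi / (2 * N)) * sz
        U = expm(-1j * 0.5 * theta * axis) @ U               # k-th wave-plate unitary

    E0, V0 = np.linalg.eigh(0.5 * hbar * omega * sz)         # |z_->, |z_+>
    Et, Vt = np.linalg.eigh(0.5 * hbar * omega * sx)         # |x_->, |x_+>
    p0 = np.exp(-beta * E0) / np.exp(-beta * E0).sum()
    pmn = np.abs(Vt.conj().T @ U @ V0) ** 2                  # measurable as squared visibilities

    peaks = {-1: 0.0, 0: 0.0, 1: 0.0}                        # W = {-h*omega, 0, +h*omega}
    for n in range(2):
        for m in range(2):
            peaks[int(round((Et[m] - E0[n]) / (hbar * omega)))] += p0[n] * pmn[m, n]
    print(f"omega/Omega = {omega/Omega:.1f}:  P(-hw) = {peaks[-1]:.3f}, "
          f"P(0) = {peaks[0]:.3f}, P(+hw) = {peaks[1]:.3f}")
```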
Moreover, as we observe, for faster protocols the three peaks in the distribution are comparable, while slower protocols (ω ≫ Ω) lead to the suppression of the lateral peaks in favor of a high central peak at W = 0. In this case, we approach an adiabatic evolution where the initial populations of the Hamiltonian eigenstates remain almost constant in time, hence leading to zero energy changes and zero work. On the contrary, in the opposite limit (ω ≪ Ω), we approach a sudden quench of the Hamiltonian. In this case, the state of the system remains unchanged by the evolution in Eq. (25), and the lateral peaks associated with the overlaps |⟨x_−|z_+⟩| and |⟨x_+|z_−⟩| become maximal.

In Fig. 4(b) we show the performance of the bounds for the dissipative work in Eqs. (22) and (23). We assume the equality in Eq. (19), and take a rotation frequency Ω = 1.5ω. As can be appreciated in the plot, in the low-temperature regime (right side) the logarithmic bound B_log becomes the best option, while B_2 diverges due to the exponential decrease of α. When the temperature is increased (left part), B_2 starts to perform better as soon as k_BT becomes higher than the system energy splitting (k_BT > ℏω). When increasing Ω (not shown in the figure), the logarithmic and quadratic bounds become tighter in their respective temperature regimes of performance. In the opposite limit of a near-adiabatic process (where the dissipative work vanishes), the quadratic bound still performs well for high temperatures but, contrary to the previous cases, the logarithmic bound becomes worse even in the limit of small temperatures. Nevertheless, the bounds do not appear to be saturated in any parameter regime.

Figure 4. In all cases, the difference in probability distributions between the discrete and continuous systems is shown to decrease as the number of steps increases. In this sense, the quench generated by a discrete series of time-independent Hamiltonians is an approximation to the quench generated by a continuous time-dependent Hamiltonian.

In the limit of many steps, N ≫ 1, the discrete rotation protocol can be approximated by a continuous rotation, with Λ(t) = Ωt for arbitrary Ω and t ∈ [0, τ]. Experimentally, this could be realized using "twisted nematic liquid crystals" (TNLCs). These are devices in which the optic axis is continuously rotated (typically by 90°) along the beam propagation [56]. For our proposal, we would require two devices with a 45° rotation, one for the forward arm and one for the time-reversed arm. Note that one could directly implement the final unitary operation using a set of three waveplates [which can implement an arbitrary SU(2) operation]. However, this is not a faithful implementation of the time-dependent Hamiltonian, as the polarization state will not evolve correctly as it traverses these waveplates. In the continuous case, the unitary evolution reads (see App. B for details)

U(t, 0) = e^{−iΩtσ_y/2} e^{−it(ωσ_z − Ωσ_y)/2}.   (27)

The differences in the work probability distribution between the discrete and continuous versions of the protocol decrease as the number of steps N increases [Fig. 4(c)]. As before, the ratio ω/Ω determines the adiabaticity of the realized process. Using TNLCs, the length of the liquid crystal cell sets Ω [56]. For optical wavelengths, standard TNLCs operate in the adiabatic regime, using long enough cells with a length of ≈ 10 µm. Reaching the non-adiabatic regime would require cells that are shorter than 2 µm, which should also be achievable with current technology [57].
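The convergence of the discretised protocol to the continuous rotation can be checked against the closed-form unitary quoted above. The sketch below assumes ℏ = 1, the rotating-frame form of Eq. (27) for H(t) = (ℏω/2)[cos(Ωt)σ_z + sin(Ωt)σ_x], and arbitrary illustrative parameter values; it evaluates the operator-norm difference between the N-step product and the continuous solution as N increases.

```python
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
hbar, omega, Omega = 1.0, 1.5, 1.0
tau = np.pi / (2 * Omega)                                   # Lambda goes from 0 to pi/2

def U_continuous(t):
    # Rotating-frame solution of Eq. (27) for the continuous rotation Lambda(t) = Omega*t.
    return expm(-1j * 0.5 * Omega * t * sy) @ expm(-1j * 0.5 * t * (omega * sz - Omega * sy))

def U_discrete(N):
    # N-step staircase protocol with Lambda_k = k*pi/(2N), as in Eqs. (25)-(26).
    dt = tau / N
    U = np.eye(2, dtype=complex)
    for k in range(1, N + 1):
        Hk = 0.5 * hbar * omega * (np.cos(k * np.pi / (2 * N)) * sz +
                                   np.sin(k * np.pi / (2 * N)) * sx)
        U = expm(-1j * Hk * dt / hbar) @ U
    return U

for N in (3, 7, 15, 50):
    err = np.linalg.norm(U_discrete(N) - U_continuous(tau), 2)
    print(f"N = {N:3d}:  ||U_N - U_cont|| = {err:.4f}")
```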
In the limit ω/Ω ≫ 1, we obtain a fully adiabatic process, where the populations of the Hamiltonian eigenstates remain constant throughout the entire evolution (see App. B for a detailed analysis). Moreover, since the Hamiltonian H(Λ(t)) has the same eigenvalues at all times, we conclude that, under adiabatic evolution, a system starting in a thermal state at t = 0 will remain in equilibrium at the same temperature at all later times.

VI. CONCLUSIONS

In this work, we have developed a new method based on interferometric tools to measure the work probability distribution and the thermodynamic irreversibility of a generic driving process acting on a quantum system. The method exploits the interference between two paths, along one of which the system is driven out of thermal equilibrium in the forward process, while along the other it is driven in the time-reversal process. We demonstrated that inserting the energy eigenstates of the initial and final Hamiltonians of the system in the two paths of the interferometer and measuring the fringe visibility enables us to directly reconstruct the work distribution and the average dissipative work. The latter is known to be equal to a positive average entropy production, and it is a measure of the thermodynamic irreversibility. Our proposal offers a faster implementation than TPM schemes, as it halves the duration of each execution. A speed enhancement in each run is a considerable advantage since, in TPM schemes, sufficient statistics must be acquired to reconstruct on the order of N² instances (i.e., probabilities) of the work probability distribution from the results of projective measurements. Furthermore, in the TPM scheme the results of the projective measurements are randomly sampled. Due to finite-size effects in sampling, the TPM scheme can suffer a significant delay in acquiring sufficient data, especially for low-probability instances of the work distribution. In contrast, in our scheme one can control which instance of the work distribution to measure by choosing the appropriate input states in the forward and time-reversal amplitudes, making the scheme much less affected by finite-size statistics. Our scheme also offers advantages over existing alternatives to the TPM scheme [44][45][46], as it enables a direct measurement of the conditional probabilities that make up the work probability distribution. For example, the scheme proposed in Refs. [44][45][46] measures the characteristic function of work, i.e., the Fourier transform of the work probability distribution, from which the work probability distribution must then be recovered indirectly. In the implementations of Refs. [45,46], this required a large sampling of a continuous function (the characteristic function) to recover a discrete probability distribution with only a few peaks. In our proposal, these drawbacks are overcome by directly obtaining the conditional probabilities associated with the peaks. In the case of limited experimental control, when only the thermal states of the initial and final Hamiltonians of the system can be prepared, our method provides useful upper bounds on the average dissipative work. The scheme involves no entangling operations with external auxiliary systems and no energy measurements, and thus offers an accessible and versatile playground for studying the thermodynamics of quantum processes.
To provide a concrete example of implementation, we have developed an experimental proposal of our scheme using an all-optical platform and standard tools for single- and entangled-photon manipulation. The out-of-equilibrium quantum dynamics is realized via a series of liquid-crystal wave plates splitting the thermodynamic process into a series of discrete time steps t_k, each represented by a liquid crystal with its optical axis set at a different angle of rotation ϑ_k. Although here we have focused for simplicity on the case of initial equilibrium states, we stress that our method can be used to determine the work probability distribution for generic nonequilibrium initial states. Systems with initial coherence in the energy basis or composite systems sharing quantum correlations can also be handled within our method by measuring transitions from arbitrary eigenstates and using extended-trajectory (Bayesian network) techniques [37,58] to infer the work probability distribution. Finally, work probability distributions using collective measurements might instead be reproduced following the proposal in Ref. [42] by augmenting the number of paths and using them to encode other system degrees of freedom. In this section, we consider the situation where the ability to control the application of the protocol Λ is heavily affected by experimental limitations, such as (i) the impossibility of splitting the protocol Λ into two halves and inverting the second half, or (ii) difficulties in applying the time-reversal operation Θ† at the end of the second branch of the interferometer. If either of these circumstances applies, the requirements for the usability of the interferometric scheme proposed above may not be met. In light of this, here we propose an alternative set-up to be applied in such situations. The main price to pay is that the time needed to run the scheme for any initial state is doubled. In this alternative scheme, we take advantage of the unitary equivalence of the system states in the forward and time-reversal dynamics. In addition, the relation between the dissipative work and the relative entropy in Eq. (4) is verified for any intermediate instant of time t ∈ [0, τ]. As a consequence, we can observe interference between the states in the forward and time-reversal dynamics also at the extremes of the interval, where one of the two states is thermal. In the following, we present the scheme in the case of interference at time τ in the forward dynamics (corresponding to t = 0 in the time-reversal dynamics), but an analogous scheme can be developed for interference at time t = 0 in the forward dynamics (corresponding to t = τ in the time-reversal one). As in the previous case, we start by preparing the auxiliary degree of freedom in the quantum superposition (|0⟩_A + |1⟩_A)/√2 at t < 0. Once again, the initial states of the system in the two branches may be either the pure states |E_n^(0)⟩ along the path |0⟩_A and |E_m^(τ)⟩ along |1⟩_A, or the mixed thermal states ρ_0^th and ρ_τ^th, respectively, depending on whether we have full control over the system in the preparation stage. However, in contrast to the previous case, we implement the whole protocol Λ on the system in the path |0⟩_A, while the branch |1⟩_A remains unaffected.
Assuming, for concreteness, initial pure states, the global state of the system and of the auxiliary degree of freedom after time τ can be evaluated and, tracing out the system degrees of freedom, one finds that in this case the visibility directly gives us the conditional probabilities entering the work probability distribution, so that we recover Eqs. (13) and (14). Likewise, when the initial states in the two interferometer paths are the mixed thermal states, the resulting expression for the visibility is equivalent to Eq. (17). Consequently, the bounds developed in Eqs. (22) and (23) for the dissipative work also apply in this situation.
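As a sanity check of the statement that the fringe visibility encodes the conditional probabilities, the toy example below (same assumed qubit protocol as in the earlier sketch; an illustration, not the optical set-up) interferes the evolved state U|E_n^(0)⟩ with the reference eigenstate |E_m^(τ)⟩ and verifies that the squared fringe visibility equals p(m|n) = |⟨E_m^(τ)|U|E_n^(0)⟩|².

```python
import numpy as np

sz = np.array([[1, 0], [0, -1]], dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)

def protocol_unitary(omega=1.0, Omega=2.0, steps=2000):
    # Same illustrative qubit rotation protocol as in the previous sketch.
    tau = (np.pi / 2) / Omega
    dt = tau / steps
    U = np.eye(2, dtype=complex)
    for k in range(steps):
        t = (k + 0.5) * dt
        H = 0.5 * omega * (np.cos(Omega * t) * sz + np.sin(Omega * t) * sx)
        e, v = np.linalg.eigh(H)
        U = v @ np.diag(np.exp(-1j * e * dt)) @ v.conj().T @ U
    return U

omega = 1.0
U = protocol_unitary(omega)
_, V0 = np.linalg.eigh(0.5 * omega * sz)       # initial energy eigenstates
_, Vt = np.linalg.eigh(0.5 * omega * sx)       # final energy eigenstates
n, m = 0, 1                                    # chosen transition |E_n(0)> -> |E_m(tau)>

# Branch |0>_A carries U|E_n(0)>, branch |1>_A carries the reference |E_m(tau)>.
psi0 = U @ V0[:, n]
psi1 = Vt[:, m]

# Interference pattern as a function of the relative phase on the ancilla.
phases = np.linspace(0, 2 * np.pi, 400)
intensity = np.array([np.linalg.norm(psi0 + np.exp(1j * ph) * psi1) ** 2 / 4
                      for ph in phases])
visibility = (intensity.max() - intensity.min()) / (intensity.max() + intensity.min())

p_cond = abs(psi1.conj() @ psi0) ** 2          # conditional probability p(m|n)
print(round(visibility ** 2, 6), round(p_cond, 6))   # the two values coincide
```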
9,962.4
2021-07-05T00:00:00.000
[ "Physics" ]
Outlier Removal Approach as a Continuous Process in Basic K -Means Clustering Algorithm : Clustering technique is used to put similar data items in a same group. K-mean clustering is a commonly used approach in clustering technique which is based on initial centroids selected randomly. However, the existing method does not consider the data preprocessing which is an important task before executing the clustering among the different database. This study proposes a new approach of k-mean clustering algorithm. Experimental analysis shows that the proposed method performs well on infectious disease data set when compare with the conventional k-means clustering method. INTRODUCTION Data analysis techniques are necessary on studying actually increasing huge range of large sizing data.Regarding the same edge, cluster analysis (Hastie et al., 2001) tries to pass through data easily to achieve 1st structure experience by dividing data items straight into disjoint classes in a way that data items owned by identical cluster are the same whereas data items owned by another clusters tend to be different.Among the significant well known as well as effective clustering techniques is known as the K-means technique (Hartigan and Wang, 1979) utilizing prototypes (centroids) so as to signify clusters through perfecting the error sum squared operation.(The specifics report for K-means as well as relevant techniques has been provided in (Jain and Dubes, 1988). The computational difficulty with traditional Kmeans algorithm is extremely large, specifically with regard to huge data units.Moreover the amount of distance computations rises greatly with the increase with the dimensionality of the data.When the dimensionality increases usually, just a few dimensions are highly relevant to specific clusters, however data on the unimportant dimensions may possibly generate extremely very much noise and also conceal the true clusters that will possibly be observed.Furthermore whenever dimensionality elevates, data normally turn out to be extremely short, data elements positioned on separate measurements may be regarded virtually all equally distanced as well as the distance amount, that, primarily for grouping exploration, turns into useless. Therefore, feature reduction or just dimensionality lessening is the central data-preprocessing approach regarding cluster analysis for datasets which has a huge number of features. However, huge dimensional data are sometimes enhanced into reduce dimensional data through Principal Component Analysis (PCA) (Jolliffe, 2002) (or singular value decomposition) whereby coherent patterns could be detected more easily.This type of unsupervised dimension reduction is commonly employed in tremendously broad areas which includes meteorology, image processing, genomic analysis and information retrieval.It is additionally well-known that PCA can be used to project data into a reduced dimensional subspace and then K-means will then be applied to the subspace (Zha et al., 2002).In other instances, data are embedded in a low-dimensional space just like the eigenspace from the graph Laplacian and K-means will then be employed (Ng et al., 2001). 
A very important reason for PCA reliant dimension lowering is that often it holds the dimensions considering the main variances.This is the same with locating the optimal low rank approximation (in L2 norm) for the data employing the SVD (Eckart and Young, 1936).Also, the dimension lowering property on its own is actually inadequate in order to elucidate the potency of PCA. On this study, we take a look at the link concerning both of these frequently used approaches and also a data standardization process.We show that principal component analysis and standardization approaches are basically the continuous solution for the cluster membership indicators on the K-means clustering technique, i.e., the PCA dimension reduction automatically executes data clustering in line with the K-means objective function.This gives an essential justified reason of PCA-based data reduction. The result also provides best ways to address the K-means clustering problem.K-means technique employs K prototypes, the centroids of clusters, to characterize the data.These are determined by minimizing error sum of squares. K-means clustering algorithm: A conventional procedure for k-means clustering is straightforward.Getting started we can decide amount of groups K and that we presume a centroid or center of those groups.Immediately consider any kind of random items as initial centroids or a first K items within the series which can also function as an initial centroids. After that the K-means technique will perform the 3 stages listed here before convergence.Iterate until constant (= zero item move group): • Decide the centroid coordinate • Decide the length of every item to the centroids • Cluster the item according to minimal length Principal component analysis: PCA can be looked at mathematically as the transformation of the linear orthogonal of the data to a different coordinate so that the largest variance of any of the data projections lie on the first coordinate (known as the first principal coordinate), the next largest on the second coordinate and so on.It transforms a numerous possibly correlated variables into a compact quantity of uncorrelated variables called principal components.PCA is a statistical technique for determining key variables in a high dimensional dataset which accounts for differences in the observations and is very important for analysis and visualization where information is very little lacking. Principal component: Principal components can be determined by the Eigen value decomposition of a data sets correlation matrix/covariance matrix or SVD of the data matrix, normally after mean centering the data for every feature.Covariance matrix is preferred when the variances of features are extremely large on comparison to correlation.It will be best to choose the type of correlation once the features are of various types.Likewise SVD method is employed for statistical precisions. LITERATURE REVIEW Many efforts have been made by researchers to enhance the performance as well as efficiency of the traditional k-means algorithm.Principal Component Analysis by Valarmathie et al. (2009) and Yan et al. 
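The three-step iteration just described can be written in a few lines. The following is a minimal numpy sketch of basic K-means (random initial centroids, nearest-centroid assignment, centroid update, repeated until no item changes cluster); the synthetic data and the reported error sum of squares are for illustration only and do not correspond to the infectious-disease data set used in the experiments.

```python
import numpy as np

def kmeans(X, k, max_iter=100, seed=0):
    """Basic K-means: pick k initial centroids at random, then alternate
    (1) compute the distance of every item to the centroids, (2) assign each
    item to the nearest centroid, (3) recompute the centroids, until no item
    changes cluster."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    labels = np.full(len(X), -1)
    for _ in range(max_iter):
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        new_labels = dists.argmin(axis=1)
        if np.array_equal(new_labels, labels):      # convergence: zero items moved
            break
        labels = new_labels
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    sse = ((X - centroids[labels]) ** 2).sum()      # error sum of squares (WCSS)
    return labels, centroids, sse

# Small synthetic example with two well-separated groups
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (20, 2)), rng.normal(5, 1, (20, 2))])
labels, centroids, sse = kmeans(X, k=2)
print(labels, round(sse, 2))
```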
(2006) is known as an unsupervised Feature Reduction technique meant for projecting huge dimensional data into a new reduced dimensional representation of the data that explains as much of the variance within the data as possible with minimum error reconstruction.Chris and Xiaofeng (2006) Proved that principal components remain the continuous approaches to the discrete cluster membership indicators for K-means clustering and also, proved that the subspace spanned through the cluster centroids are given by spectral expansion of the data covariance matrix truncated at K-1 terms.The effect signifies that unsupervised dimension reduction is directly related to unsupervised learning.In dimension reduction, the effect gives new insights to the observed usefulness of PCA-based data reductions, beyond the traditional noise-reduction justification.Mapping data points right into a higher dimensional space by means of kernels, indicates that solution for Kernel K-means provided by Kernel PCA.In learning, final results suggest effective techniques for K-means clustering.In (Ding and He, 2004), PCA is used to reduce the dimensionality of the data set and then the k-means algorithm is used in the PCA subspaces.Executing PCA is the same as carrying out Singular Value Decomposition (SVD) on the covariance matrix of the data.Karthikeyani and Thangavel (2009) Employs the SVD technique to determine arbitrarily oriented subspaces with very good clustering.Karthikeyani and Thangavel (2009) extended Kmeans clustering algorithm by applying global normalization before performing the clustering on distributed datasets, without necessarily downloading all the data into a single site.The performance of proposed normalization based distributed K-means clustering algorithm was compared against distributed K-means clustering algorithm and normalization based centralized K-means clustering algorithm.The quality of clustering was also compared by three normalization procedures, the min-max, z-score and decimal scaling for the proposed distributed clustering algorithm.The comparative analysis shows that the distributed clustering results depend on the type of normalization procedure.Alshalabi et al. (2006) designed an experiment to test the effect of different normalization methods on accuracy and simplicity.The experiment results suggested choosing the z-score normalization as the method that will give much better accuracy. Removal of the weaker principal components: The transformation on the data set to the new principal component axis provides the number of PCs same as the number in the initial features.Although for various data sets, the first few PCs mention most of the variances and so the others can easily be eliminated with minimum loss of information. MATERIALS AND METHODS Let Y = {X 1 , X 2 , …, X n } imply the d-dimensional raw data set.Then the data matrix is an n×d matrix given by: The z-score is a form of standardization used for transforming normal variants to standard score form.Given a set of raw data Y, the z-score standardization formula is defined as: where, ‫̅ݔ‬ j and σ j are the sample mean and standard deviation of the j th attribute, respectively.The transformed variable will have a mean of 0 and a variance of 1.The location and scale information of the original variable has been lost (Jain and Dubes, 1988). One important restriction of the z-score standardization Z is that it must be applied in global standardization and not in within-cluster standardization (Milligan and Cooper, 1988). 
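A compact illustration of the global z-score standardization defined above (each attribute rescaled with the mean and standard deviation computed over the whole data set, so that the transformed attributes have mean 0 and variance 1); the toy matrix is illustrative only:

```python
import numpy as np

def zscore(X):
    """Global z-score standardization: each attribute is rescaled using the
    mean and standard deviation computed over the whole data set."""
    mean = X.mean(axis=0)
    std = X.std(axis=0)
    std[std == 0] = 1.0          # guard against constant attributes
    return (X - mean) / std

X = np.array([[1.0, 200.0], [2.0, 180.0], [3.0, 220.0], [4.0, 240.0]])
Z = zscore(X)
print(Z.mean(axis=0).round(6), Z.var(axis=0).round(6))   # ~[0, 0] and [1, 1]
```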
Principal component analysis: Let v = (v_1, v_2, …, v_d)′ be a vector of d random variables, where ′ is the transpose operation. The first step is to find a linear function a_1′v of the elements of v that maximizes the variance, where a_1 is a d-dimensional vector (a_11, a_12, …, a_1d)′, so that a_1′v = a_11 v_1 + a_12 v_2 + … + a_1d v_d has maximum variance. Proceeding in the same way, at each step we look for a further linear function with maximum variance, so that after d steps we have found d such linear functions. The j-th derived variable a_j′v is the j-th PC. In general, most of the variation in v will be accounted for by the first few PCs. To find the form of the PCs, we need to know the covariance matrix Σ of v. In most realistic cases, the covariance matrix Σ is unknown and it will be replaced by a sample covariance matrix. For j = 1, 2, …, d, it can be shown that the j-th PC is z_j = a_j′v, where a_j is an eigenvector of Σ corresponding to the j-th largest eigenvalue λ_j. In fact, in the first step, z_1 = a_1′v can be found by solving the following optimization problem: maximize var(a_1′v) subject to a_1′a_1 = 1, where var(a_1′v) is computed as var(a_1′v) = a_1′Σa_1. To solve the above optimization problem, the technique of Lagrange multipliers can be used. Let λ be a Lagrange multiplier. We want to maximize a_1′Σa_1 − λ(a_1′a_1 − 1). (4) Differentiating Eq. (4) with respect to a_1, we have Σa_1 − λa_1 = 0, i.e., (Σ − λI_d)a_1 = 0, where I_d is the d×d identity matrix. Thus λ is an eigenvalue of Σ and a_1 is the corresponding eigenvector. Since var(a_1′v) = a_1′Σa_1 = λ, a_1 is the eigenvector corresponding to the largest eigenvalue of Σ. In fact, it can be shown that the j-th PC is a_j′v, where a_j is an eigenvector of Σ corresponding to its j-th largest eigenvalue λ_j (Jolliffe, 2002). Singular value decomposition: Let D = {x_1, x_2, …, x_n} be a numerical data set in a d-dimensional space. Then D can be represented by an n×d matrix X = (x_ij), where x_ij is the j-th component value of x_i. Let μ̄ = (μ̄_1, μ̄_2, …, μ̄_d) be the column mean of X, and let e_n be a column vector of length n with all elements equal to one. Then SVD expresses X − e_n μ̄ as X − e_n μ̄ = U S V^T, (5) where U is an n×n column-orthonormal matrix, i.e., U^T U = I is an identity matrix, S is an n×d diagonal matrix containing the singular values and V is a d×d unitary matrix, i.e., V^H V = I, where V^H is the conjugate transpose of V. The columns of the matrix V are the eigenvectors of the covariance matrix C of X; precisely, C = (1/n)(X − e_n μ̄)^T (X − e_n μ̄). (6) Since C is a d×d positive semi-definite matrix, it has d nonnegative eigenvalues and d orthonormal eigenvectors. Without loss of generality, let the eigenvalues of C be ordered in decreasing order: λ_1 ≥ λ_2 ≥ … ≥ λ_d. Let σ_j (j = 1, 2, …, d) be the standard deviation of the j-th column of X, i.e., σ_j² = (1/n) Σ_{i=1}^{n} (x_ij − μ̄_j)². The trace of C is invariant under rotation, i.e., Σ_{j=1}^{d} σ_j² = Σ_{j=1}^{d} λ_j. Noting that e_n^T X = n μ̄ and e_n^T e_n = n, from Eqs. (5) and (6) we have C = (1/n) V S^T S V^T. (7) Since V is an orthonormal matrix, from Eq. (7) the singular values are related to the eigenvalues by s_j² = n λ_j, j = 1, 2, …, d. The eigenvectors constitute the PCs of X and uncorrelated features will be obtained by the transformation Y = (X − e_n μ̄) V. PCA selects the features with the highest eigenvalues. K-means clustering: Given a series of observations (x_1, x_2, …, x_n), in which each observation is a d-dimensional real vector, k-means clustering is designed to partition the n observations into k sets (k ≤ n), S = {S_1, S_2, …, S_k}, so as to minimize the Within-Cluster Sum of Squares (WCSS): WCSS = Σ_{i=1}^{k} Σ_{x_j ∈ S_i} ||x_j − μ_i||², where μ_i is the mean of the items within S_i.
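The relations derived above (the link s_j² = nλ_j between singular values and covariance eigenvalues, the trace invariance, and the uncorrelated scores Y = (X − e_nμ̄)V) can be verified numerically. The sketch below uses synthetic data for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 7
X = rng.normal(size=(n, d)) @ rng.normal(size=(d, d))   # synthetic correlated data

mu = X.mean(axis=0)                 # column mean
Xc = X - mu                         # X - e_n * mu

# SVD of the centred data: Xc = U S V^T
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

# Covariance matrix C = (1/n) Xc^T Xc and its eigenvalues in decreasing order
C = Xc.T @ Xc / n
lam = np.sort(np.linalg.eigvalsh(C))[::-1]

# Relation between singular values and eigenvalues: s_j^2 = n * lambda_j
print(np.allclose(s ** 2, n * lam))                       # True

# Trace invariance: sum of column variances equals sum of eigenvalues
print(np.isclose(Xc.var(axis=0).sum(), lam.sum()))        # True

# Principal-component scores Y = (X - e_n mu) V: columns are uncorrelated
Y = Xc @ Vt.T
print(np.allclose(np.cov(Y, rowvar=False, bias=True), np.diag(lam), atol=1e-8))
```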
RESULTS AND DISCUSSION The presence of noise in a large amount of data is easily filtered out by the normalization and PCA/SVD preprocessing stages, especially since such a treatment was specifically designed to denoise large numerical values while preserving edges. In this section, we examine as well as evaluate the tasks for the approaches below: conventional k-means with the original dataset, k-means with normalized dataset, k-means with PCA/SVD dataset and k-means with normalized and PCA/SVD dataset seeing as methods of response to the goal intent behind the kmeans technique.The level of a particular clustering are as well be evaluated, whereby level is analyzed with the error sum of squares for the intra-cluster range, that is a range among data vectors in a group as well as the centroid for the group, the lesser the sum of the Fig. 1: Basic K-means algorithm differences is, the better the accuracy of clustering and the error sum of squares. Figure 1 presents the result of the basic K-means algorithm using the original dataset having 20 data objects and 7 attributes as shown in Table 1.Two points attached to cluster 1 and four points attached to cluster 2 are out of the cluster formation with the error sum of squares equal 211.21. The number of PCs found is in fact same with the actual number of initial features.To remove the weakened components out of the PC set we worked out the corresponding variance, its percentage and cumulative percentage, shown in Table 2 and 6.There after we considered the PCs with variances lower than the mean variance, disregarding others.The lessened PCs are shown in Table 3 and 7. Table 2 presents the variances, the percentage of the variances and cumulative percentage which corresponds to the principal components. Figure 2 explained the pareto plot of for the variances percentages against the principal component for the original dataset having 20 data objects and 7 variables. The improve matrix using lessened PCs has been made this also transformed matrix is simply employed on the initial dataset to generate a different lessened estimated dataset, that will be utilized for the remaining data exploration and also reduced dataset containing 4 attributes is also shown in Table 4. Figure 3 presents the result of the K-means algorithm applying principal component analysis to the original dataset.The reduced datasets containing 20 data objects and 4 attributes as shown in Table 4 and all the points attached to both cluster 1 and 2 are within the cluster formation with the error sum of squares equal 143.14. Figure 4 presents the result of the K-means algorithm using the rescale dataset with z-score standardization method, having 20 data objects and 7 attributes as shown in Table 5 attached to both cluster 1 and 2 are within the cluster formation with the error sum of squares equal 65.57.Table 6 presents the variances, the percentage of the variances and cumulative percentage which corresponds to the principal components. The improve matrix using lessened PCs (Table 7) manufactured this also transformed matrix simply employed on a standardized dataset so as to generate different lessened estimated dataset, that will be utilized for the remaining data exploration and the lessened dataset containing 4 attributes shown in Table 8. 
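The four variants compared above (raw data, standardized data, PCA/SVD-reduced data, and standardized plus reduced data) can be reproduced in outline with standard tools. The sketch below is illustrative only: it uses scikit-learn's KMeans for brevity, synthetic data instead of the infectious-disease data set, and a simple variance-based cut for discarding weak components, so the SSE values it prints are not comparable with those reported in the tables.

```python
import numpy as np
from sklearn.cluster import KMeans

def pca_reduce(X):
    """Centre the data and keep the principal components selected by a simple
    variance-based cut (components whose variance exceeds the mean variance)."""
    Xc = X - X.mean(axis=0)
    lam, V = np.linalg.eigh(Xc.T @ Xc / len(Xc))
    order = np.argsort(lam)[::-1]
    lam, V = lam[order], V[:, order]
    return Xc @ V[:, lam > lam.mean()]

rng = np.random.default_rng(2)
X = np.hstack([rng.normal(0, 1, (40, 3)),      # informative attributes
               rng.normal(0, 20, (40, 4))])    # noisy, large-scale attributes
X[20:, :3] += 4.0                              # two underlying groups

Z = (X - X.mean(axis=0)) / X.std(axis=0)       # global z-score standardization
variants = {"raw": X, "PCA": pca_reduce(X),
            "z-score": Z, "z-score + PCA": pca_reduce(Z)}

for name, data in variants.items():
    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(data)
    print(f"{name:14s} SSE = {km.inertia_:.2f}")   # within-cluster sum of squares
```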
Figure 5 presents the result of the K-means algorithm when standardization and principal component analysis are applied to the original dataset. The reduced dataset contains 20 data objects and 4 attributes, as shown in Table 8, and all the points attached to clusters 1 and 2 lie within the cluster formation, with an error sum of squares equal to 51.26. CONCLUSION We have proposed a novel hybrid numerical algorithm that draws on the speed and simplicity of k-means clustering.
3,896.2
2014-01-02T00:00:00.000
[ "Computer Science" ]
Pattern selection in a lattice of pulse-coupled oscillators We study spatio-temporal pattern formation in a ring of N oscillators with inhibitory unidirectional pulselike interactions. The attractors of the dynamics are limit cycles where each oscillator fires once and only once. Since some of these limit cycles lead to the same pattern, we introduce the concept of pattern degeneracy to take it into account. Moreover, we give a qualitative estimation of the volume of the basin of attraction of each pattern by means of some probabilistic arguments and pattern degeneracy, and show how they are modified as we change the value of the coupling strength. In the limit of small coupling, our estimative formula gives a perfect agreement with numerical simulations. 05.90.+m; 87.10.+e; 05.50.+q; 87.22.As I. INTRODUCTION The study of the collective behavior of populations of interacting nonlinear oscillators has attracted the interest of physicists and mathematicians for many years, since such populations can be used to model several chemical, biological and physical systems [1,2]. Among them, we should mention cardiac pacemaker cells [3], integrate-and-fire neurons [4] and other systems made of excitable units [5]. Most of the theoretical papers that have appeared in the scientific literature deal with oscillators interacting through continuous-time couplings, allowing them to describe the system by means of coupled differential equations and apply most of the modern nonlinear dynamics techniques. More challenging from a theoretical point of view is to consider a pulse coupling, or in other words, oscillators coupled through instantaneous interactions that take place at a very specific moment of the period. The richness of behavior of these pulse-coupled oscillatory systems includes synchronization phenomena [6], spatio-temporal pattern formation [7] (we could mention, for instance, traveling waves [9], chessboard structures [7], and periodic waves [10]), rhythm annihilation [11], self-organized criticality [8], and more. Most of the work on pattern formation has been done in mean-field models or populations of just a few oscillators. However, such restrictions do not allow one to consider the effect of certain variables whose influence can be crucial for realistic systems. The specific topology of the connections or the geometry of the system are typical examples which usually induce important changes in the collective behavior of these models. Pattern formation usually takes place when oscillatory units interact in an inhibitory way, although it has also been shown that the shape of the interacting pulse, when the spike lasts for a certain amount of time, or time delays in the interactions can lead to spatio-temporal pattern formation also in the case of excitatory couplings [14,15].
Only recently, general solutions for the general case, where the patterns existence and stability is proved, have been worked out [12,13]. The aim of this paper is to study some pattern properties and get a quantitative estimation of the probability of pattern selection under arbitrary initial conditions or, in the language of dynamical systems, the volume of the basin of attraction of each pattern. Keeping this goal in mind, we will use the general results given in [13] where assuming a system defined on a ring the authors developed a mathematical formalism powerful enough to get analytic information of the system. Not only about the mechanisms which are responsible for synchronization and formation of spatiotemporal structures, but also, as a complement, to proof under which conditions they are stable solutions of the dynamical equations. Despite the apparent simplicity of the model, some ring lattices of pulse-coupled oscillators are currently used to modelize certain types of cardiac arhythmias where there is an abnormally rapid heartbeat whose period is set by the time that an excitation takes to travel the circuit [16]. Moreover, there are experiments where rings of a few R15 neurons from Aplysia are constructed and stable patterns are reported [17]. Our 1d model allows us to study analytically the most simple patterns and understand their mechanisms of selection. The structure of this paper is as follows. In Sec II we review the model introduced in [13] as well as set the notation used throughout the paper. In Section III we study some pattern properties which will be useful for, in Section IV, propose an estimation of the probability of selection of each pattern. In the last section we present our conclusions. II. THE MODEL Our system consists in a ring of (N + 1) pulse-coupled oscillators. The phase of each oscillator φ i evolves linearly in time until one of them reaches the threshold value φ th = 1. When this happens the oscillator fires and changes the state of its rightmost neighbor according to subjected to periodic boundary conditions, i.e. N +1 ≡ 0, and where ε denotes the strength of the coupling and µ = 1 + ε. Where we have assumed that, from an effective point of view, the pulse-interaction between oscillators, as well as the state of each unit of the system, can be described in terms of changes in the phase, or in other words, in terms of the so called phase response curve (PRC), εφ in our case. A PRC for a given oscillator represents the phase advance or delay as a result of receiving an external stimuli (the pulse) at different moments in the cycle of the oscillator. We will assume ε < 0 througout the paper, as we are only interested in spatio-temporal pattern formation and ε > 0 always leads to the globally synchronized state [13]. This linear PRC has physical sense in some situations. For instance, it shows up when we expand the non-linear PRC for the Peskin model of pacemaker cardiac cells [3] in powers of the convexity of the driving or in neuronal modelling [18]. In practice, however, this condition can be relaxed since a nonlinear PRC does not change the qualitative behavior of the model provided the number of fixed points of the dynamics is not altered. Moreover, a linear PRC has the advantage of making the system tractable from an analytical point of view. Let us describe the notation used in the paper. 
The population is ordered according to the following criterion: The oscillator which fires will be always labeled as unit 0 and the rest of the population will be ordered from this unit clockwise. After the firing, the system is driven until another oscillator reaches the threshold. Then, we relabel the units such that the oscillator at φ = 1 is now unit number 0, and so on. This firing + driving (FD) process for N + 1 oscillators can be described through a suitable transformation where M k is a N ×N matrix, φ is a vector with N components, 1 is a vector with all its components equal to one and k stands for the index of the oscillator which will fire next. We call this kind of transformation a firing map, and we have to define as many firing maps as oscillators could fire, that is, index k must run from k = 1 (φ 1 fires) to N (φ N fires). For example, in the N + 1 = 4 oscillators case we have that the firing map corresponding to the FD process where φ 2 is the next oscillator which do fire, and so on. Once we have defined all possible firing maps for a given number of oscillators we can proceed to deal with the attractors or fixed points of the system dynamics. As has been proved in [13] these fixed points must be cycles of N + 1 firings. We define a cycle as a sequence of consecutive firings where each oscillator fires once and only once. Mathematically, each cycle is described by means of a return map. The return map is the transformation that gives the evolution of φ during a cycle and is the composition of all firing maps involved in the firing sequence of that cycle where T ci • T cj (φ) is the usual composition operation T ci (T cj (φ)) and Note that not all possible combinations of firing maps are allowed, just those ones whose indices c i sum p(N + 1) without any partial sum equal to q(N + 1), where p > q are positive integers. As all firing maps are linear transformations, return maps are also linear. There are N ! possible cycles in the N +1 oscillators case (all permutations of firing sequences with the initial firing oscillator φ 0 fixed). Following our previous example, for the four oscillators case all possible firing sequences and their associated return maps are Now, in order to find the attractors of the dynamics, we must solve the fixed point equation for every cycle c. Formally, As was shown in [13], there are N different stable solutions to the whole set of fixed point equations. Their stability is assured by the fact that ε < 0, since it guarantees that all eigenvalues of M c lie inside the unit circle for all cycles c. In our four oscillators example these solutions are Which are a kind of four-oscillators traveling wave, chessboard and inverse traveling wave structures. From now on we will label such solutions with index m (m = 1...N ) since their first component always satisfy Therefore, in the example, we relabel patterns φ * A as m = 3, φ * B , φ * C , φ * D , φ * E as m = 2 and φ * F as m = 1. Since there are N ! possible cycles and N solutions to Eq. (7) there will be some fixed points or patterns which will appear more than once, so, we shall use C(N + 1, m) to characterize these degeneracies. In the example, the values of the degeneracies are C(4, 1) = C(4, 3) = 1 and C(4, 2) = 4. 
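To make the firing-and-driving dynamics and its limit cycles concrete, the following is a minimal event-driven simulation of the ring. It assumes the standard reading of the model: the firing oscillator is reset to zero and the phase of its rightmost neighbour is multiplied by μ = 1 + ε (the linear PRC εφ); the pattern reached from random initial conditions is then identified by the counting property discussed in the next section (the number of oscillators that fire after having already received a pulse within the cycle).

```python
import numpy as np

def simulate_ring(N_plus_1, eps, n_fires=2000, rng=None):
    """Event-driven simulation: phases grow linearly; when an oscillator
    reaches the threshold it fires, is reset to zero, and the phase of its
    rightmost neighbour is multiplied by (1 + eps), i.e. phi -> phi + eps*phi."""
    rng = rng or np.random.default_rng()
    phi = rng.uniform(0, 1, N_plus_1)
    firing_order = []
    for _ in range(n_fires):
        i = int(np.argmax(phi))                  # next oscillator to reach threshold
        phi += 1.0 - phi[i]                      # drive all phases until it fires
        phi[i] = 0.0
        phi[(i + 1) % N_plus_1] *= 1.0 + eps     # inhibitory pulse (eps < 0)
        firing_order.append(i)
    return firing_order

def pattern_index(firing_order, N_plus_1):
    """Identify the selected pattern m from the last cycle starting with a
    firing of oscillator 0: count the oscillators that fire after their left
    neighbour has already fired within that cycle."""
    starts = [k for k, j in enumerate(firing_order)
              if j == 0 and k + N_plus_1 <= len(firing_order)]
    cycle = firing_order[starts[-1]: starts[-1] + N_plus_1]
    assert sorted(cycle) == list(range(N_plus_1)), "not yet on a limit cycle"
    return sum(1 for p, j in enumerate(cycle) if (j - 1) % N_plus_1 in cycle[:p])

rng = np.random.default_rng(0)
N_plus_1, eps = 6, -0.05
counts = np.zeros(N_plus_1, dtype=int)
for _ in range(500):
    counts[pattern_index(simulate_ring(N_plus_1, eps, rng=rng), N_plus_1)] += 1
print(dict(enumerate(counts)))       # empirical frequency of each pattern m
```

For small |ε| the empirical frequencies are close to the degeneracy distribution C(N+1, m) discussed below.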
In general, patterns which are solutions of cycle consisting in the iterative application of the same firing map (like A and F in our example) have no periodicities whereas the ones solution of mixtures of differents firing maps (B,C,D and E) have some periodic structure that are also solution of Eq. (7) for a case with less oscillators. In Fig. 1 we can visualize the solutions for N + 1 = 2, 3 and 4 oscillators and realize that solution m = 2 for the four oscillators case is a periodic composition of solution m = 1 for the two oscillators case. III. PATTERN PROPERTIES As we have seen, the stability of all patterns solution of Eq. (6) is guaranteed by the fact that ε < 0, but the existence of such solutions is not ensured. In fact, for small values of the coupling strength |ε| all patterns do exist, but, as we increase it, some patterns disappear. The reason is that the solution loses its physical meaning because φ * 1 > 1. Their first component is always the one that becomes larger than unity earlier and this happens, for each m and according to Eq. (9), when Our coupling strength range of interest ends at ε = −1, since at ε ≤ −1 we always find the same pathological dynamics which does not have any physical or biological sense. Realistic couplings never reach such higher values. Therefore, as ε runs from 0 to −1, all patterns whose m satisfy m > N +1 2 , disappear. There is another interesting pattern property which has to do with the calculation of the pattern degeneracy C (N + 1, m). In principle, to calculate such degeneration, we should solve fixed point Eq. (6) for all possible cycles and count how many of them lead to the same pattern. Although for few oscillators the problem is quite straightforward, as we deal with higher and higher number of oscillators, the number of cycles increases (it grows as N !) and solving Eq. (6) becomes more difficult. Fortunately, there is another way of calculating C(N + 1, m) which reduces the problem to a combinatorial question. Lets show it through an example, in the previous four oscillators case, if we count, for each firing sequence, the number of oscillators which have received the pulse before firing, we can easily realize that this number is the same as its value of m Here an upper bar means that the oscillator has already received a pulse during the cycle. The point is that it turns out that every pattern m corresponds to a sequences of firings involving exactly m oscillators that, when they do fire, had already received a pulse from their leftmost neighbor. Therefore, this property (we have checked for several values of N + 1) allows us to associate every cycle with the pattern it leads to, just by counting these kind of firings. Now, calculating C(N + 1, m) becomes a straightforward matter. In Table I we have computed C (N + 1, m) for several values of N + 1. Apart from brute force counting, degeneracy distribution C (N + 1, m) can also be determined from the following relation N + 1, m). First column stands for the number N + 1 of oscillators and first row for m. Another interesting property is the period ∆ N +1 m of each spatio-temporal pattern m. Since all oscillators are in a phase-locked state, they must oscillate with the same period. Then, as the intrinsic period of each oscillator is one, and when any oscillator receives the delaying pulse from its neighbor it has a phase equal to φ * 1 , one can easily realize that the effective period is Therefore, the larger the value of m, the longer the period of its associated pattern. 
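The combinatorial counting rule for the degeneracy C(N+1, m) can be checked by brute force: enumerate all cycles (firing sequences starting with oscillator 0) and, for each, count how many oscillators fire after their left neighbour has already fired within the cycle. A short sketch:

```python
from itertools import permutations
from collections import Counter

def degeneracies(N_plus_1):
    """Count C(N+1, m) by enumerating all cycles (firing sequences starting
    with oscillator 0) and counting, for each, the oscillators that fire
    after their left neighbour has already fired within the cycle."""
    C = Counter()
    for rest in permutations(range(1, N_plus_1)):
        cycle = (0,) + rest
        m = sum(1 for p, j in enumerate(cycle)
                if (j - 1) % N_plus_1 in cycle[:p])
        C[m] += 1
    return dict(sorted(C.items()))

print(degeneracies(4))   # {1: 1, 2: 4, 3: 1}, matching the four-oscillator example
print(degeneracies(6))   # degeneracy distribution for N + 1 = 6
```

For N + 1 = 4 this reproduces the values C(4,1) = C(4,3) = 1 and C(4,2) = 4 quoted above.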
It is important to notice that we have not fixed the value of such periods (each pattern has its own which is different from the others), since there are some authors who fix all periods equal to some constant, and use it as a condition to find the structures [17]. IV. PATTERN SELECTION Once we have characterized all spatio-temporal patterns, we proceed to find some general formula which give us some estimation of the probability of each pattern to be selected, or in other words, an estimation of the volume of its basin of attraction. In order to achieve this objective, we should understand the mechanism which lead to the selection of a certain spatio-temporal structure and how is it modified as the parameters of the model (ε in our case) change. There is an easy and straightforward way to get the essential features of this mechanism assuming that the probability of one oscillator to fire next is, basically, proportional to its phase (that is, if it has a phase slightly below 1 it has a higher probability to be the next firing oscillator, whereas if it has a smaller phase, it will rarely fire next). Imagine the phases of all oscillators randomly distributed over the interval (0, 1). Then we let the system evolve till one of the oscillators reaches a phase φ i = 1 and emits a pulse that is received by its rightmost neighbor which lows its phase by an amount εφ i+1 . Now we assume that all phases are again randomly distributed over (0, 1) except the one which received the pulse whose phase is distributed over (0, 1+ε). So, we get rid of memory effects (we know the oscillator that has fired should, now, have a phase equal to zero) and just keep in mind if each oscillator has received a pulse or has not. Therefore, the point is that under this conditions, the probability that one oscillator which has still not received a pulse do fire is some constant and, on the other hand, for the ones which had, is this constant times the factor (1 + ε). Then, we can characterize the probability of having some cycle just by recalling how many oscillators do fire having previously received a pulse during that cycle. Basically, this probability is proportional to (1+ε) n where n stands for the number of oscillators which do fire having already received a pulse (the product of all constant terms will be absorbed in a normalization factor). This approach, where we assume all firings as almost-independent events, can be viewed as a kind of mean-field approximation. Then,as has been shown before, since cycles leading to the same pattern m always exactly have m oscillators that do fire having received the interacting pulse, we can give an estimation of the probability for pattern m selection in the N + 1 oscillators case Here N (ε) is chosen so that summation of the probabilities over m gives 1 In the limit of small coupling strength ε → 0, which is the more interesting case for the majority of physical and biological systems, one can assume that interaction plays almost no role when pattern selection takes place. That is, the fact that one oscillator has received the pulse from its neighbor does not low its probability to fire as the pulse does not modify appreciably its phase. 
Then, we can consider that all cycles have approximately the same probability to be selected, (1 + ε) m → 1, and only pattern degeneracy has to be considered to get a good estimation of p N +1 The dominant pattern, that is, the one which has the larger probability to be selected coincides with the mean value of m (due to the symmetric behavior of C(N + 1, m)). For an odd number of oscillators < m > N +1 does not exist and we have a competition between the two closest patterns m = N/2 and m = (N + 2)/2. Recall that the most probable patterns turn out to be the ones with "shortest wavelengths", a fact that was already reported in simulations of these sort of systems [7]. In Figs. 2 and 3 we check this new approximation for the N + 1 = 10 and 9 case and realize that expected results are in good agreement with simulations data. There also is the interesting question of how does this probability distribution modifies when the number of oscillators increases. In Fig. 4 We could not prove this without an explicit expression for C(N + 1, m) but we have checked it N up to 170. Therefore It turns out that for a large number of oscillators almost all initial conditions lead to a pattern whose m approximately falls in the interval < m > N +1 ± √ < m > N +1 . In order to compare it for different number of oscillators we have to normalize m dividing by N + 1. In that case, one observes that σ 2 N +1 ∼ 1/ √ N + 1 so that as we increase N + 1, the spread of p N +1 m diminishes getting the distribution sharpened. As Eq. (14) does not take into account the disappearance of the different patterns m at the different values of ε * m predicted by Eq. (9), it can not give a good quantitative estimation of pattern selection for higher coupling values. Nevertheless we can expand Eq. (14) to the leading order in ε. For small ε, p N +1 m are approximated by . We can realize that the smaller |ε| is, the more accurate our estimations are. The most probable pattern is m = (N + 1)/2 and the probability for the patterns near the extremes is almost zero due to the fast decay of pm there. Fig. 2 but now for an odd number of oscillators N + 1 = 9. We can realize that there is not a peak anymore, instead, almost all probability is concentrated in the two competing patterns m = N/2 and m = (N + 2)/2. In Fig. 5 we compare this approximation with simulated data. The slopes near ε = 0 do agree with Eq. (21). In our simulations we calculate the probability of each pattern to be selected just by counting how many realizations (with φ 0 = 1 and the rest of oscillators with random initial conditions) lead to each pattern m and divide over the total number of realizations. Although we only have a good quantitative estimation of p N +1 m for small values of ε, Eq. (15) catches the two basic mechanisms responsible of pattern selection. On the one hand, it is clear that for higher values of the coupling strength |ε|, when one oscillator receives a pulse, it lows its phase to almost zero and, consequently, its firing probability also does. Therefore pattern selection probability p N +1 m (ε) is strongly controlled by the number of oscillators which have to fire having already received a pulse, that is, the probabilistic factor (1 + ε) m . As a consequence, p N +1 m begin to decrease sooner when |ε| increases, the larger m is. On the other hand, for small values of the coupling strength, interaction plays almost no role and p N +1 m (ε) is dominated by the degeneracy factor C(N + 1, m). 
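The resulting estimate, p_m^{N+1}(ε) ∝ C(N+1, m)(1+ε)^m with a normalization N(ε) fixed by Σ_m p_m = 1, is easy to evaluate numerically. The sketch below computes it for a small ring; as noted above, the formula ignores the disappearance of the patterns with large m at the values ε*_m, so it is only meaningful for small |ε|.

```python
from itertools import permutations
from collections import Counter

def degeneracies(N_plus_1):
    # Same brute-force count as in the previous sketch
    C = Counter()
    for rest in permutations(range(1, N_plus_1)):
        cycle = (0,) + rest
        C[sum(1 for p, j in enumerate(cycle)
              if (j - 1) % N_plus_1 in cycle[:p])] += 1
    return C

def selection_probabilities(N_plus_1, eps):
    """Estimate p_m(eps) ~ C(N+1, m) * (1 + eps)**m, normalized over m."""
    C = degeneracies(N_plus_1)
    weights = {m: C[m] * (1 + eps) ** m for m in C}
    Z = sum(weights.values())
    return {m: w / Z for m, w in sorted(weights.items())}

for eps in (-0.01, -0.3, -0.7):
    probs = selection_probabilities(6, eps)
    print(eps, {m: round(p, 3) for m, p in probs.items()})
```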
Therefore p N +1 m (ε) for the different values of m are basically ordered as C(N + 1, m). In Fig. 6, 7 and 8 we show results from simulations of p N +1 m (ε) for different number of oscillators. V. CONCLUSIONS In this paper we have studied some properties of the spatio-temporal patterns that appear in a ring of pulsecoupled oscillators with inhibitory interactions. We have focused our attention in estimating the probability of selecting a certain pattern under arbitrary initial conditions and have shown the two basic mechanisms responsible of that: the degeneracy distribution C (N + 1, m), for small values of ε, and m, the number of oscillators that do fire having already received a pulse, for higher values of ε. According to this, the different probabilities of selecting pattern m start being distributed following the degeneracy distribution C (N + 1, m), and, as ε decreases, these probabilities diminish in a hierarchical way: the larger the value of m, the sooner its selection probability is going to decrease, so that only patterns with smaller m will survive for higher values of ε. Moreover, some of the structures disappear, at the different values of ε * m , during this process. We have found out an approximation formula for p N +1 m (ε) which takes into account all these mechanisms and gives us a quantitative estimation of the different selection probabilities for small ε. The estimation of the volume of the basin of attraction of each spatio-temporal pattern m also gives us an idea of the stability of the different structures with respect to additive noise fluctuations (for instance, we can add some random quantity η to all phases after each firing event or a continuous-time η(t) in the driving). Simulations of arrays of noisy pulse coupled oscillators showed that our most probable patterns were also the most stable [7]. The present paper only concerns spatio-temporal pattern formation in a ring of oscillators, nevertheless, all results are trivially generalized to bidirectional couplings. Although the question of what happens when dealing with higher dimension lattices remains opened, some simulations results in 2d [7] showed that almost all realizations lead to a chessboard pattern in analogy with our results in the ring. That makes us believe we have caught the basic features of the problem in our 1d model.
5,629.4
1999-06-29T00:00:00.000
[ "Mathematics" ]
A Hybrid Stochastic Approach for Self-Location of Wireless Sensors in Indoor Environments Indoor location systems, especially those using wireless sensor networks, are used in many application areas. While the need for these systems is widely proven, there is a clear lack of accuracy. Many of the implemented applications have high errors in their location estimation because of the issues arising in the indoor environment. Two different approaches had been proposed using WLAN location systems: on the one hand, the so-called deductive methods take into account the physical properties of signal propagation. These systems require a propagation model, an environment map, and the position of the radio-stations. On the other hand, the so-called inductive methods require a previous training phase where the system learns the received signal strength (RSS) in each location. This phase can be very time consuming. This paper proposes a new stochastic approach which is based on a combination of deductive and inductive methods whereby wireless sensors could determine their positions using WLAN technology inside a floor of a building. Our goal is to reduce the training phase in an indoor environment, but, without an loss of precision. Finally, we compare the measurements taken using our proposed method in a real environment with the measurements taken by other developed systems. Comparisons between the proposed system and other hybrid methods are also provided. Introduction Currently, sensor networks are the main part of many monitoring and control systems. Many of them tend to be wireless because it allows them to be spatially distributed. Wireless Sensor Networks (WSNs) [1] are formed dynamically because the connectivity between nodes depends on their position and their position variation over the time. These kinds of networks are easy to be deployed and are self-configuring. A sensor node is a transmitter, a receiver, and it offers services of routing between nodes without direct vision, as well as recording data from other sensors. Since the 1950s, location systems have been incorporated into our lives. In the 1980s the Global Position System (GPS) [2] began as a location method for outdoor environments. This system is based on a triangulation system of variables, where we know the position of a device thanks to the existing satellite network. Now, this system remains the most used because of its good performance and the low price of the required devices. At the end of the year 2000 other location systems appeared, based on cellular networks [3]. These systems are designed for emergency situations. In this case, the base stations are used as a reference point and the location is made through the distance and the angle of the signal. These systems are developed to work in outdoor environments and the devices must also have several communication skills. Previous technologies are not adequate for indoor environments. This is mainly due to the signal characteristics. Other wireless technologies, such as IEEE 802.11a/b/g [4], Radio Frequency IDentification (RFID) [5], Ultra Wide Band wideband (UWB) [6], Bluetooth [7] or Zigbee [8], must be used for indoor locations. The main problem in these environments is the multipath effect and signal variability. The location in these types of environments can be centralized [4] or distributed [9]. The centralized location uses reference devices. 
These devices tend to have a greater capacity and they are located in a fixed position, for example a base station (BS) or an Access Point (AP). In contrast, in the distributed location there are no reference devices. The devices interact with their neighbors in order to know their position. A study about the oscillation of the received signal is shown in reference [10]. The RSS variation is introduced in our approach in order to obtain smaller degree errors in the location service. The three main issues that make variations in the RSS are the following ones:  Temporal variations: when the receiver remains in a fixed position, the signal level measured varies as time goes by.  Small-Scale variations: the signal level changes when the device is moving over small distances (less than the wavelength). In IEEE 802.11 b/g technologies the wavelength is 12.5 cm.  Large-Scale variations: the signal level varies with the distance due to the attenuation that the radio frequency (RF) signal suffers with the distance. Besides these typical variations of the RF signal together with the receiver mobility, we have also considered the temperature and humidity variations, the effect of opening and closing doors, the changes in the localization of the furniture, and the presence and movement of human beings, which are all characteristics of indoor environments. These variations have already been analyzed in [11]. All these systems can be used in different applications. Localization in sensor networks has attracted a large research effort in the last decade. Some WSNs location systems application areas are:  Emergencies: When we want to locate an individual in the case of an emergency (injury or criminal attacks) or in a life-threatening situation. Both can be located using the positioning capability of the mobile device.  Information: It can be used in public places such as swimming pools, museums, conferences, etc. in order to provide information service about this place to the user depending on his position.  Navigation: When the user needs to meet the situation of addresses, positions, directions in an indoor places such as big supermarkets, commercial centers, etc.  Discovery: When it is necessary to find or locate things or persons in indoor places. It is very useful to locate people with Alzheimer or to locate disabled people with very little motion.  Security: It can be used to avoid theft, to move unwanted items, etc. Wireless sensors would be in specific places, when the sensors transfer a position threshold they send an alarm.  Tracking: When it is required to track a device or person inside a building. There are two main methods to estimate the position in indoor environments. On the one hand, there are the so-called deductive methods. These take into account the physical properties of signal propagation. They require a propagation model, topological information about the environment, and the exact position of the base stations. On the other hand, there are the so-called inductive methods. These require a previous training phase, where the system learns the signal strength in each location. The main shortcoming of this approach is that the training phase can be very expensive. The complex indoor environment makes the propagation model task very hard. It is difficult to improve deductive methods when there are many walls and obstacles because deductive methods work estimating the position mathematically with the real measures taken directly from environment in the training phase [12]. 
In this work we present a hybrid location system using a new stochastic approach which is based on a combination of deductive and inductive methods. This system has been developed for wireless sensor networks using the IEEE 802.11b/g standard in order to use a deployed wireless access network that is also used for internet access and data transfer. On the other hand, the aforementioned technology allows us to cover a hard indoor environment without many base stations. The goal of this work is to reduce the training phase without losing precision. The remainder of this paper is organized as follows. Section 2 presents the best known related work on location methods in sensor networks. Our hybrid location system is described in Section 3. Section 4 shows the efficiency of our system. It shows real measurements and compares our proposal with other systems. A comparison between our proposal and other hybrid systems proposed in the literature is shown in Section 5. Section 6 concludes the paper and discloses our planned future work. Related Works The main location systems in WSNs are based either on the GPS [2], on localization algorithms based on different measurement techniques [13] or systems based on known sensor positions [14]. The system proposed in reference [14] uses the physical layer and the Medium Access Control (MAC) to transmit location information between pairs of sensors using IEEE 802.15.4. The well-known analytical techniques used in localization algorithms are the angle of arrival (AOA), the time of arrival (TOA) and the time difference of arrival (TDOA) [15][16][17], relative distance [18] and the RSS [13]. This section discusses some works related with deductive and inductive location systems and other hybrid location systems. Deductive Methods As shown in reference [4], the effectiveness of the measurement techniques based on location algorithms in indoor environments is limited by multiple reflections. That paper describes the RADAR system which is based on RF. The system uses the received signal strength information for a trilateration system and signal propagation models to locate the device. While the empirical model has higher precision, the signal propagation method is easier to use. In [14], measurement based statistical models of TOA, AOA and RSS are presented and used to generate localization performance boundries. Such boundries are useful as design tools to help choose among measurement methods, select neighborhood size, set minimum reference node densities, and compare localization algorithms. R. Schmidt proposed, in reference [15], a system that processes the signals received from the emitter on an array of sensors. The system considers sensors with arbitrary locations and arbitrary directional characteristics (gain/phase/polarization) in a noise interference environment of arbitrary covariance matrix. The author proposes the multiple signal classification (MUSIC) algorithm to determine the parameters of multiple wave fronts arriving and then the method calculates the position. A paper where the authors explain another trilateration based location system is [19]. Niculescu and Nath present an indoor positioning architecture that does not require a signal strength map, simply requiring the placement of special VOR (VHF Omnidirectional Ranging) Base Stations (VORBA). The accuracy of the positions obtained by their system, of 2.1 meter median error, is comparable to the original RADAR. 
Its basic idea is to find the strongest maximums in the signal strength, and use them as the most likely directions in which the device can be placed. Reference [20] describes two approaches developed by the same authors of this paper, where wireless sensors could find their position using WLAN technology. The scenario is an indoor environment that contains walls, several sources of interferences, multipath effects, humidity and temperature variations, etc. Both approaches are based on the RSS. The first approach uses a training session and the position is calculated based on a heuristic model using the data obtained from the training session. The second approach uses a triangulation model with some fixed access points and a signal propagation model based on wall looses to calculate the position. The techniques based on RSS are easier to implement. This is due to the fact that standard wireless devices possess features for measuring this value. As indicated in [21], we must take into account that the location error in triangulation systems is very low when the number of base stations is higher than seven and the triangulation analysis is in three dimensions. As we have said previously, when the number of base stations is high, the location system provides more precision, but it makes the sensor estimate more distances (for every base station), so the sensor could need more processing time, diminishing the overall system performance [20]. Inductive Methods Inductive methods use location techniques based on RSS profiles. This technique consists of building a map according to the signal strength behavior with respect to the coverage area. A sensor location can be estimated with the gathered information from several base stations. These access points or base stations work with the signal strength vector. The vector is obtained from the RSS model and probabilistic techniques or various methods based on neighbors. With this information the system can estimate the possible area where the sensor could be located. This method uses two parameters: a) the likelihood that an object is in this area and b) the precision of the signal strength. This second parameter depends on, for example, the size and the type of the location area. With these types of systems, the final user does not require any additional hardware for the localization process. These algorithms give very low localization errors using the IEEE 802.11 technology [21]. According to reference [21], we can consider three different techniques used in algorithms based on patterns or areas: a) single point adaptation, b) likelihood based on areas and c) Bayesian networks. There are statistical models based on the signal strength, where the distance between different sensors is obtained by the calculation of a Cramér-Rao bound (CRB) on the location estimation precision possible for a given set of measurements (see reference [14] for more details). This is a useful tool to help system designers and researchers select measurement technologies and evaluate localization algorithms. Reference [22] shows a localization system based on the RSS, which considerably reduces the number of broadcasting stations. This system is called Location Estimation Assisted by Stationary Emitters (LEASE) for indoor RF wireless networks and uses a few stationary emitters and sniffers in a novel way to solve the location estimation problem. 
The estimation engine uses non-parametric modeling techniques that automatically capture the anisotropy of the RSS encountered in indoor environments. Siddiqi et al. present in [23] another work where we can see the use of Bayesian networks. In this paper, the authors use a robot to retain samples. These samples are used to know the location by means of a probability density function. Each time the robot moves or senses the signal strength of an AP, a Bayes filter is used to recursively update the belief function (Monte Carlo localization (MCL) algorithm). Their results show that accurate localization ( 2 m) is achieved in most test cases and the average localization error decreases with time. Another important inductive location system is LANDMARC (see reference [5]). It is a localization prototype that uses RFID technology to locate objects inside the buildings. Although RFID is not designed for indoor location, the authors demonstrate that active RFID is a viable and cost-effective candidate for indoor location. References [4,13,20,21] can be consulted to examine some inductive methods in depth. Hybrid Methods There are other papers where the authors propose hybrid systems. In [24], the authors propose a radiolocation scheme based on the AOA and the TOA in multipath environments with a single base station. This scheme is used in macrocellular networks (such as the Code Division Multiple Access cellular network) and Global System for Mobile communications (GSM) networks. In [25], the authors combine RSS measurements and the TDOA measurements. This model is sturdy to variations of measurement noise and quantization. The error is lower than ones based on individual measures. There are few works related with the hybrid location techniques in sensors networks. In reference [26] Cramer-Rao Bound (CRB) is used for location estimation using of two different hybrid schemes: TOA/RSS and TDOA/RSS. These techniques provide improved location accuracy with respect to TOA and TDOA schemes for networks with devices having communications ranges of 30 meters or less. In [27] Sahinoglu and Catovic developed a hybrid location estimation scheme for heterogeneous WSNs with unsynchronized short range simple relays and mobile sensor nodes, and synchronized stations. In this work, the authors use RSS measurements as well as TOA and TDOA measurements. These measurements are used to filter out the clock offset that appears due to the lack of synchronization. They quantify the estimation accuracy of the scheme by deriving the Cramer-Rao bound (CRB), and discuss the performance trade-off between the number of synchronized and nonsynchronized devices involved. This work takes into account the heterogeneity of sensor networks, in terms of communication range, time synchronization and routing capabilities of network devices. If we analyze the hybrid location systems in WLAN, regardless of whether they are sensor networks or any network, there are several proposed systems. In [28] the authors propose a hybrid location system based on three stages. Firstly, it establishes a database that contains a distance-signal strength map. Next, the system uses the database to obtain the distances between mobile terminal and base stations. Thirdly, this proposal applies trilateration to calculate the mobile terminal position. In this case, we see that the hybrid modeling has better accuracy than propagation modeling. Finally, another hybrid location method is proposed in [29]. This hybrid method has two stages. 
In the first stage, it uses the fingerprinting method with a fast training phase to obtain an estimate of the mobile user (MU) position. In the second stage, trilateration is used to compute the MU location more accurately. The result shows their proposed method is better than the simple trilateration method based on general propagation mode, but worse than the fingerprinting method with a medium training phase. The Euclidean models are optimum when there are multiple access points. Although some works show that the statistical properties of the RSS signal is stationary under certain circumstances, the distribution of the RSS is not usually Gaussian, it is often left-skewed and the standard deviation varies according to the signal level. Signals from multiple APs are mostly independent and the interference from other APs using the same frequency does not have a significant impact on the RSS pattern. Consequently, the coverage areas can be grouped together as a group of clusters. More than one cluster may represent one location because of the multimodal distribution of the RSS. In such a case, using a simple Euclidean distance to determine the location may easily classify some patterns into a wrong location. Our proposal combines the advantages of the deductive and inductive methods in order to provide more accurate measurements in hard environments (few base stations and/or few trained points). Hybrid Stochastic Approach to Location Estimation In this section we explain the mathematical assumptions used in our proposal. We analyse the inductive and deductive methods from a statistical point of view. In this way, we can describe our hybrid model. Table 1 shows the variables used in the analysis. Stochastic Approach for Location Estimation The location estimation problem can be statistically stated as follows. For simplicity, the true distribution Pr(X = x) and Pr(X = x | Y = y) are denoted as Pr(x) and Pr(y). The model parameters are denoted by p( ). Let b be the number of base-stations. We denote o as an observation. The observation variable is a b-dimensional vector; one for each signal strength from each base-station. We denote as o j the signal strength from base-station j for j  {1,…,b}. We have a location l associated to each observation. In this work we use bi-dimensional locations for simplicity, but it can be used in three dimensions easily. The methodology used is based on the definition of a function Pr(l|o) that returns the probability of the location l, given the observation o. This nomenclature has been used in other proposals [22,28,29]. Once this function is estimated, the problem can be formulated to find the location l that maximizes the probability Pr(l|o) for a given observation o. Using Bayes' theorem, we can write: The denominator in equation (1) does not depend on the location variable l. And, therefore, the location estimation problem can be presented as: where Pr(l) is the prior probability of the location l, knowing the observation. This probability can be used to incorporate information such a more training locations [22] or tracking [29] to our statistical model. The tracking information will not be taken into account in this work, so, for the prior probability, we use the uniform distribution. In equation (1), Pr(o|l) is the so called likelihood function. It estimates the probability of one observation given a location. In the literature, we find two main approaches to estimate this function: inductive approach and deductive approach. 
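Before turning to those two likelihood models, the maximum a posteriori search behind equation (2) can be sketched as follows. This is only an illustrative outline in Python, not the authors' implementation: the candidate grid, the pluggable likelihood callable and the uniform default prior are assumptions made for the sketch.

```python
import numpy as np

def map_location_estimate(observation, candidate_grid, likelihood, prior=None):
    """Return the candidate location l that maximizes Pr(o|l) * Pr(l).

    observation    -- 1-D array of RSS values, one per base station (the vector o)
    candidate_grid -- (N, 2) array of candidate (x, y) locations
    likelihood     -- callable(observation, location) -> Pr(o|l); any of the
                      inductive, deductive or hybrid models discussed in the text
    prior          -- optional 1-D array of Pr(l); uniform when omitted
    """
    if prior is None:
        prior = np.full(len(candidate_grid), 1.0 / len(candidate_grid))
    scores = np.array([likelihood(observation, loc) for loc in candidate_grid]) * prior
    return candidate_grid[int(np.argmax(scores))]
```

The two likelihood models described next simply provide different implementations of the likelihood argument in such a search.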
The next two subsections explain these methods analytically in order to propose our approach. Inductive Approach for Location Estimation On the one hand, the inductive approach estimates the likelihood function measuring directly the signal strength in each place. That is, several measurements are taken for each training place; then, the function p(o|l) is estimated. The main drawback of this approach is the time consuming nature of the training phase. We denote as T the set of training data, formed by t observations with their respective locations. Each training data, T i , it is represented as (l i , o i ), where i can be from 1 to t. Several alternatives has been proposed in the literature to estimate p(o|l) from T: the histogram method [30,32], the Bayesian method [33] or the kernel method [30,34]. Another drawback is that this model only returns one of the locations from the training set. In order to solve this problem, several proposals can be found in reference [35]. Deductive Approach for Location Estimation On the other hand, the deductive approach estimates the likelihood function by using empirical formulas about the signal propagation in an indoor environment. In this approach, we need to know the location of each base-station, the described map of the environment (walls, obstacles, etc.) and a propagation model. Several propagation models can be seen in references [35,36] If we assume that each observation o j , from the vector o, is mutually independent, we can write: where b is the number of base-stations; B j is the base-station j; and, o j is the observation signal strength from base-station j. In our study the base-station B j is characterized by two variables (l j , o 0 j ). The variable l j denotes the location of base-station j, and o 0 j denotes the mean signal strength measured to d 0 distance from base-station j. In this work, we assume that (o j |l,B j ) follows a Gaussian distribution with standard derivation σ. The following empirical propagation model, which supposes that signal strength is measured in dB, is used [36,37]: where, d j is the Euclidian distance between the observation location (l) and the base-station j (l j ), n is the attenuation variation index (n value depends on the specific propagation environment) and L w j is the attenuation caused by the obstacles. In our study the value of L w j depends on the number of walls that the line of sight crosses from the base-station j to the location. We will use L w j = wL 0 . Where w is the number of wall crossed and L 0 is the wall average attenuation. Finally, let N(µ, σ²) be the normal distribution with mean µ and variance σ². Stochastic Hybrid Approach for Location Estimation In the inductive approach we assume that the signal distribution for each training sample location is known in advance. Taking a sample for all possible locations is not a realistic assumption. However, for a given location we can have several training samples near to our location. In the hybrid approach we are interested in combining the information of both previous approaches to improve the system. That is, we know the signal distribution for several training samples near our location and we know how the signal is attenuated from the location of these samples to our actual location. Without loss of generality, we can write: We assume that Pr(T i |l) is uniformly distributed. Then, we are only interested in defining the second term. 
Using the same assumption as in equation (3), we can write: Now, we define the random variable (o_j | l, B_j, T_i) in the same manner as (o_j | l, B_j) was defined in equation (4). But, instead of o_0j (the signal strength measured at the reference distance d_0), we use o_ij (the signal strength measured at the location of training sample i), where o_ij is a random variable that represents the signal strength of training sample i from base-station j, d_ij is the distance from training sample i to base-station j, and L_wij is the wall attenuation from training sample i to base-station j. Note that in this equation X_σ has been eliminated because the variability is included in the random variable o_ij. Implementation Details From equation (7), the random variable o_ij can be expressed as shown in equation (8). In the training phase, we have estimated p(o_ij | l_i, B_j). At this stage several methods, such as the histogram or kernel methods, can be used. Then, using equation (8), we can write equation (9), where n = 2 in free space. In order to obtain the optimal location for equation (2), the proposed algorithm is written in pseudocode in Figure 1. Its explanation is as follows: given the input signal strength, the location probability is evaluated for each point of a 0.5 meter grid. For each point the k nearest samples are taken. The probability of this location is calculated using equation (5), but using only these k samples instead of the whole training set; farther samples would distort the results. For each of the k nearest samples, we first use equation (9) to apply the deductive approach, which takes into account the shift from the actual location to the sample location, and then the inductive approach, which gives the signal probability at a well-known place. Experimental Results This section shows the results obtained from a real environment to test our proposal. First, we test the errors based on the number of samples and on the number of base stations. Then, we compare our proposal with other commercial and implemented location systems. Test Bench To assess our proposal, we have deployed the approach in an indoor wireless environment. This place is located on the first floor of the "A building" in the "Campus Gandia" of the Polytechnic University of Valencia. The distribution of this floor is shown in Figure 2. There are 10 access points acting as base stations. These base stations have a fixed position and their transmission power is known in advance. [Figure 1 pseudocode fragment: 1. get the vector of signal strength, o; 2. for each possible location l on a 0.5 meter grid; 3. search the k nearest neighbour samples from l using the Euclidean distance; 4. …] Error Measurement Based on the Number of Samples Our proposal takes into account the k nearest neighbour samples from a position using the Euclidean distance (see Figure 1). This experiment gives us the optimum number of samples in order to obtain the lowest error. In order to test our approach we took 56 samples spread evenly throughout the floor of the building. The floor was split into a grid where sampling points are placed every 2 meters. Figure 3 shows that the error is not reduced linearly with the number of samples. It has an inflection point at which the error changes its trend. In other words, the error decreases until the three closest samples are used; then, the error begins to increase.
This happens because the method obtains relative distances from the samples to the sensor and when the method begins to use measures that are not close to the sensor the error increases. Obviously, the smaller the area is, where the sensor can be found, the lower the error in its location will be. More samples will give higher relative distances and therefore the error of location will be greater. Our first conclusion, based on the previous graph, is that given a fixed number of samples, there will be a value of number of samples where the location error will be the optimal. Then, if the number of samples used to train the system is greater, the estimated position will be more accurate because there will be closer samples. Error Measurement Based on the number of APs (Base Stations) In order to test the influence of the number of APs in our proposal, we measured the error of the approach adding access point one by one in each location (in the same place of the 56 samples previously taken). In Figure 4 we can observe that the localization error tends to decrease exponentially (blue line with squares). Therefore, with higher number of APs we obtain lower error values in the sensor location estimation. This tendency is given because one of the methods used in our hybrid system is based on the triangulation method. This method uses the distance from the sensor to various access points based on RSS. Once the sensor obtains the value of at least three distances, between the sensor and the APs, the sensor estimates its position. Therefore, the more distances the sensor to different APs has, the higher the accuracy of the localization sensor will be, in other words, the error of location will be lower. where x is the number of APs in the indoor environment. Equation (10) is shown in figure 4 with the black-thin line. We can see that it fits the error tend quite well. It should be noted, that when there are more than five APs the improvement appreciation in terms localization error is minimal when a new AP is added to the indoor environment. Comparative Measurement with others Existing Location Systems In order to compare our proposal with others, we have evaluated five wireless sensor location systems: a) Inductive 1. This is an inductive location system which has enough a number of samples for an adequate training. b) Inductive 2. It is also an inductive location system, but in this case, the number of samples is very low. c) Deductive. This system uses the method based on the equation of spread that we have seen in subsection 4.3. d) The Hybrid method. This is our proposed method. e) Ekahau, which is the basis of many currently used location systems [38]. For the inductive methods we used a system described in our previous work [11]. As has been previously mentioned, inductive methods need a training phase. For the Inductive 1 and Ekahau methods, we collected 312 samples spread equitably throughout the floor of the building. The floor was split in a grid where sampling points are placed every 2 meters. Thirty observations were taken from each training point; 15 of them were taken one day and the other 15 were taken one week later. In contrast, for the Inductive 2 and hybrid methods we used a subset of 56 samples. For the hybrid method we estimated p(o i |l) using the histogram method shown in references [30,32]. 
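As an illustration of that training step, the sketch below builds a simple per-(training point, base station) histogram model of the RSS. It is a minimal stand-in for the histogram method of references [30,32]; the 2 dB bin width and the small probability floor used for unseen RSS values are assumed values, not parameters taken from those references.

```python
import numpy as np

class RssHistogram:
    """Histogram estimate of p(o_j | l_i, B_j) for one training point and one AP."""

    def __init__(self, samples_dbm, bin_width=2.0, floor=1e-6):
        samples = np.asarray(samples_dbm, dtype=float)
        lo, hi = samples.min() - bin_width, samples.max() + bin_width
        self.edges = np.arange(lo, hi + bin_width, bin_width)
        counts, _ = np.histogram(samples, bins=self.edges)
        # Normalise counts to a probability per bin.
        self.prob = counts / counts.sum()
        # Floor avoids assigning exactly zero probability to unseen RSS values.
        self.floor = floor

    def p(self, rss_dbm):
        """Binned probability of observing rss_dbm at this training point."""
        idx = int(np.searchsorted(self.edges, rss_dbm)) - 1
        if idx < 0 or idx >= len(self.prob):
            return self.floor
        return max(float(self.prob[idx]), self.floor)

# Usage: one model per (training point, AP), e.g. from the observations per point.
# model = RssHistogram([-61, -63, -62, -60, -64]); model.p(-63.0)
```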
In the test phase, all these systems were tested for 40 locations (all these locations were different that the training ones, they were randomly placed and they were not inside the training grid). For each location we gathered a mean of 15 RSS consecutive values. This let us take into account the signal variability in the measurements. Each one of the test samples has been applied to the different location methods. Then, we estimate the error measuring the Euclidean distance between the output of the method and the real location of the sample. Figure 5 shows the results obtained for all the location systems as a function of the number of APs. Their graph follows an exponential tends approximately. We note that the Inductive 2 method has a higher localization error than the others. This method had 56 training samples. They were few compared with the Inductive 1 method (312 samples). This difference gives considerably more accuracy in the Inductive 1 than the Inductive 2 method. With regards to the deductive model, we note that it did not give good results because the floor where the measurements were taken had many walls, so there was very little accuracy when we estimated their loss. The hybrid model proposed in this paper has a stable and optimal graph compared to the rest of systems (with few training measures low errors were obtained). As noted in Figure 5, for a certain number of AP (five APs) its average error remains the second best. Finally, the Ekahau system together with the Inductive 2 system are the methods with the worst results. In Table 2 we can see the average error and the standard deviation of the approached compared in our experiment. We see that the method with less error is the Inductive 1 (1.23 m), this is because the number of samples is adequate. The model with the worst behaviour is the Inductive 2 (3.02 m). The proposed hybrid method has an average error of 1.80 m, with the advantage that training is minimal. A statistical significance test has been calculated using a paired t-test (the hybrid approach is used as reference). A result labelled with a " ▲ "means statistical confidence of 99%. " ∆ " means statistical confidence of 90%. Hybrids Methods Comparison This section compares our proposal with the hybrid methods found in the literature. In Table 3 we can see the performance analysis. First, we analyzed the analytical techniques used. All cases use multiple parameters, except in [28,29] and our proposal. The next feature compared was their working environment (indoor or outdoor environments). Our proposal and the one in reference [28] support both environments. The systems used in WSNs are [26,27] and our application; the others are used for other purposes. Our location system has a better accuracy (1.8 m) than other works, although the systems in [27, 28 and 29] have good features too. The next analyzed feature is the number of stages to ascertain the final position. In this case, our system has two stages. The best solution is one stage because of simplicity. As we can see in Table 3, only the systems in [26] and [27] have one stage, but these systems need extra messages to estimate the location. But, on the other hand there are several works that demonstrate that sending messages Our system uses a small set of training samples (inductive information). Given the actual signal strength, we use the closest training samples as a starting point. Then, the deductive propagation model is used to obtain the shift from the training samples. 
A stochastic approach is used whereby the optimal location is estimated as the point that maximizes the product of the probabilities obtained from each of the closest training samples. Our proposal combines the advantages of the deductive and inductive methods in order to provide accurate measurements in hard environments (few base stations and/or few trained points). The goal of this work has been to reduce the training phase without losing precision. We are now trying to refine the proposed model by adding other methods in order to obtain more accurate results. The proposed method is useful in cases where a good training phase is not practical (very few samples can be taken in advance) and the precise location of some access points is not known. These environments could be military, such as troop deployments inside buildings or discovery squads in hard environments; environments where the radio coverage is not known in advance (unknown deployments); or even environments where the APs can be switched on or off at any time (dynamic environments). We are currently working on enhancing the precision of the proposed model. In future work we will evaluate the performance of our proposal in hard environments.
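To make the preceding summary concrete, the following Python sketch puts the pieces together: the k nearest training samples are selected around a candidate location, each sample's RSS is shifted with the log-distance propagation model of the deductive approach, and a Gaussian likelihood is accumulated per base station. It is only an outline of the idea under simplifying assumptions: the path-loss exponent n, the per-wall loss, the Gaussian σ, k = 3 and the wall_counter helper are all illustrative choices, and a Gaussian replaces the histogram or kernel estimate actually used for the training samples.

```python
import numpy as np

def path_loss_shift(sample_rss, d_sample, d_query, walls_sample, walls_query,
                    n=2.0, wall_loss_db=3.0):
    """Shift a training-sample RSS (dB) to a candidate location using the
    log-distance model of the deductive approach (illustrative parameters)."""
    return (sample_rss
            - 10.0 * n * np.log10(d_query / d_sample)
            - wall_loss_db * (walls_query - walls_sample))

def hybrid_log_likelihood(obs, candidate, training, base_stations,
                          k=3, sigma=4.0, wall_counter=None):
    """log Pr(o | l): combine the k nearest training samples (inductive part)
    with the propagation shift to the candidate location (deductive part).

    obs           -- {ap_id: measured RSS in dB} at the unknown position
    candidate     -- (x, y) candidate location on the evaluation grid
    training      -- list of {"pos": (x, y), "rss": {ap_id: mean RSS}}
    base_stations -- {ap_id: (x, y)}
    wall_counter  -- optional callable(p1, p2) -> walls crossed (assumed helper)
    """
    walls = wall_counter or (lambda p1, p2: 0)
    cand = np.asarray(candidate, float)
    nearest = sorted(training,
                     key=lambda s: np.linalg.norm(np.asarray(s["pos"]) - cand))[:k]
    loglik = 0.0
    for s in nearest:
        spos = np.asarray(s["pos"], float)
        for ap_id, ap_pos in base_stations.items():
            ap = np.asarray(ap_pos, float)
            d_s = max(np.linalg.norm(spos - ap), 1e-3)
            d_q = max(np.linalg.norm(cand - ap), 1e-3)
            mu = path_loss_shift(s["rss"][ap_id], d_s, d_q,
                                 walls(spos, ap), walls(cand, ap))
            # Gaussian likelihood of the observed RSS around the shifted mean.
            loglik += -0.5 * ((obs[ap_id] - mu) / sigma) ** 2 - np.log(sigma)
    return loglik
```

Scanning a 0.5 m grid of candidate locations with this function and keeping the maximum reproduces the search loop described for Figure 1.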
8,192.8
2009-05-15T00:00:00.000
[ "Computer Science", "Engineering" ]
User Preferences-Based and Time-Sensitive Location Recommendation Using Check-In Data Location-based social networks have attracted increasing users in recent years. Human movements and mobility patterns have a high degree of freedom and provide us with a lot of trajectory to understand the activity of users. In this paper, we present a user preferences and time sensitive recommender systems that offer an appropriate venue for a user when he appears in a special time at a particular location. The system considering the factors are: 1) the popularity of a location; 2) the preferences of a user; 3) social influence of the friends of the user and the friends who are check-in at the same location with the user; and 4) the time feature of the location and the user visiting. We evaluate our system with a large-scale real dataset from a location-based social network of Gowalla. The results confirm that our method provides more accurate location recommendations compared to the baseline. Introduction Location based on services applications, such as Foursquare, Facebook Place and Whrrl, becomes popular among more and more users.Such applications allow users to check in their own Point of Interests (POIs), share activities and post comments about their experiences, so that they can share their real-life with their friends.For example, a user checks in his visit to a fast food or theater in Foursquare, and at the same time posts his comment about it, then the friends can refer to his comment to choose their favorite food or theater.The check-in data reflect users' interests and habits and can help social networks to better understand the intent of their behavior.A check-in data usually contains timestamps, geographic information expressed by latitude and longitude generally and even textual information.It is a popularity research using those data to recommend suitable path and POIs to users [1] [2]. The timestamp of check-in records the activities of the user for a period of one day or a point of time.Figure 1 shows two locations' check-in data frequency distribution at different time point of all days which the locations are randomly selected from our check-in data.From these two plots we know that the numbers of users to visit these locations are very much different at different time.The recommended system should aware these diversities and recommend appropriate locations according to the user's current time.For instance, the time is 15:00 now and a user will visit a venue, then the system should recommend for him of location 153,505 instead of location 14,710. Figure 2 is check-in distributions of two different users in different locations.From the figure we can observe that for most locations, the user may only visit for a day or two, but some locations user check-in frequently.The recommended system should recommend the locations of their interests. As above discussion, in this paper, we argue that a high quality location recommendation has to simultaneously consider the following factors. 
The first is the popularity of the location.Location popularity depicts the ability of a location to attract the users to visit it.The higher the popularity of a location, the more users check-in in it.The second is user preferences.We mine user's preferences through analyzing the check-in data from the users' history, and then recommend them with their favorite venue.For example, if a user often visits location l where has lower popularity, This indicate the user may visit this location again based on his preferences.The third is the time of user visits and the time feature of locations.From the above analysis, we know that the number of days The duration for a user visiting the same location varies a lot, and each location the day at different time users visited vary a great deal.How to recommend a reasonable location according the both time is a problem to be solved in the paper.The fourth is current location of a user.In the mobile Internet, most users will want to visit the venue where is near to their present location, but not far away venue of thousands of miles, when they expect the system recommended a location for them.Taking account of a hunger user who is now in Beijing wanting to visit a noodle house, if the system recommends all the noodle houses in New York, the user should not be satisfied.The fifth is the views of the user's friends.If many of the user's friends have visited a location, it is likelihood that the user will visit this location.Or if two users are not friends, but they often visit the same location, their preferences may be the similarity. Filtering-Based Recommended Systems Collaborative filtering-based methods make use of the check-in histories of a group of similar users or a set of similar locations to generate location recommendations [3]. Leung et al. [3] propose a Collaborative Location Recommendation framework (CLR) for location recommendation.CLR employs a dynamic clustering algorithm to cluster the trajectory data into groups of similar users, similar activities and similar locations efficiently for new update in order to improve the efficiency of CLR.Zheng et al. [1] [4] mine the knowledge of location feature, activity-activity and location-activity correlations from history GPS data, and then apply a collective matrix factorization method to mine interesting locations and activities, and use them to recommend to the users where they can visit if they want to perform some specific activities and what they can do if they visit some specific places.Zheng et al. [5] employ a hierarchical-graphbased similarity measurement (HGSM) to uniformly model each individual's location history and effectively measure the similarity among users.Then incorporate a content-based method into a user-based collaborative filtering algorithm, which uses HGSM as the user similarity measure, to estimate the rating of a user on an item.Ye et al. [6] develop a collaborative recommendation algorithm fusing geographical influence and social influence. In [7] [8], they merge a data from activity-activity and location-feature with location-activity and then fuse matrix factorization with geographical and social influence for POI recommendation in LBSNs.Huang et al. [9] [10] annotate user profiles with context, measuring similarities between contexts and similarities between users, and incorporating context information into the CF process for POI recommendation.Gao et al. 
[11] [12] employ the social network information with a geo-social correlation model to capture social correlations on LBS and recommend venues to users. Personalized Location Recommendations Bao et al. [13] model each individual's personal preferences with a weighted category hierarchy and, in the offline part, infer the expertise of each user in a city with respect to different categories of locations according to their location histories using an iterative learning model. Then, in the online part, the system selects candidate local experts in a geospatial range to recommend locations to users. In [14], Ying et al. integrate user preferences and location properties simultaneously for recommending urban POIs to users, and in [15], they take into account the user movements, online texting and social information to discover the relationship between users' information needs and the provided information for followee recommendation. Hsieh et al. [16] try to recommend time-sensitive routes to users, but they do not consider the users' own preferences or the influence of the users' friends in the recommendation. Gao et al. [17] explore the effects of temporal features on location recommendation, and offer an overview of personalized location recommendation with location-based social networks [18]. Chen et al. [19] develop a greedy algorithm to optimize point-of-interest recommendation by information coverage for location category. Popularity and Temporal Characteristics of Location The higher the popularity of a location, the more people visit it and, accordingly, the more users check in at that location in the data. Most users prefer the popular locations among the recommended results, because locations with high popularity can provide a better service and are more worthy of trust. The popularity of location l_i used for recommending to user u_k is defined from the check-in counts, where U and L are the sets of users and locations in the data and the definition uses the check-in count of user u at location l_j. Such a way of measuring the popularity of a location has a weakness: although a location (e.g. a home) may have a high check-in frequency, if all of the check-ins come from a few or even a single user, the actual popularity of the location is low. We use the entropy of the check-in distribution over users to compute the location popularity and avoid the aforementioned case. From the above analysis, we know that the system should consider the visiting time of users when recommending locations for them. If the user's current time is not within the period in which the location is popular, then the location should not be recommended to the user, or its recommended probability should be minimal. So, we modify the equation using the visiting time of the user, where t is the current time of the user, t_u is the check-in time of user u, and θ is a threshold. It is difficult to determine the value of θ for each location because the locations differ. So, we use the following formula instead: according to the nature of the exponential function e^(−|t − t_u|), the closer the check-in time is to the user's visiting time, the higher the similarity between them. User Preference The users' historical check-in data imply their interests. We learn the users' preferences from their historical check-ins in order to recommend appropriate locations to them.
Figure 3(a) and Figure 3(b) show the check-in probability distributions of two users randomly selected from the data and the probability distribution over all of their locations. From the figure we can see that at some locations the two users check in frequently while other users check in less often, and at other locations the situation is the opposite. Accordingly, we use a "TF-IDF" style measure to characterize the user's preferences, where U_i denotes the set of users who have checked in at location l_i. By analysing the users' check-in histories we found that 26.7% of users visit the same location at least twice in one day. If a user visits a location very frequently within a short period, we cannot conclude that the user must be interested in this location. Conversely, if a user keeps visiting a location over a long period, it is likely that he is interested in that location. So we fuse the time factor into the user preferences, using the number of days on which user u_k has checked in at location l_i. Social Influences of Friends Friends usually have similar preferences and habits; they may check in at the same location at the same time, and may also draw on each other's experience when deciding to visit a location. The similarity of two users is defined using the meet/min coefficient [20], where F_u is the set of friends of user u. The standard meet/min coefficient counts the number of common friends of the two users and scales by the size of the smaller of the two friend sets. There is a further relationship between friends in the check-in data, namely that two users may check in at the same location. This situation is better able to show that the two users have the same preferences. So we also calculate the similarity of two users using their common check-ins, where L_u is the check-in set of user u. Adamic et al. [21] measure the similarity of two users through the items which they share. We use this idea to calculate the similarity between two users who check in at the same locations: it is computed from the inverse log frequency of their common check-in locations. Friend-based recommendation then uses a collaborative filtering algorithm [6]. Fusing Model The performance of the system can be improved by using a variety of features [3] [6] [14]. So we integrate all of the features mentioned above and select the best algorithm for each feature to recommend. Before fusion, we normalize each feature's score by its maximum score. The fusion function uses two weighting parameters α and β, which are in the range [0, 1] with α + β = 1. Experimental Data We employ the Gowalla dataset [22] of half a year of check-ins, which has been used for location-based analysis, as the experimental data. The Gowalla dataset contains 4,396,820 check-in records from May to Oct. of 2010. We only use the locations with more than one check-in, because if a location has only one check-in, none of the methods can be used to recommend it. So in practice the dataset contains 86,802 users, 612,544 locations and 3,908,034 check-ins. We randomly select 10,000 users from the Gowalla dataset as the test users and hold out their last check-in for estimation; all of the remaining data are training data. Our goal is to recommend the location that the user expects to visit according to the users' historical check-ins.
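A small sketch of the fusing step described above may help: each feature's scores are first normalised by their maximum and then combined linearly. The particular features, example venues and weight values below are illustrative; the paper only specifies that the two weighting parameters α and β lie in [0, 1] with α + β = 1, so the exact mapping of weights to features here is an assumption of the sketch.

```python
def fuse_scores(per_feature_scores, weights):
    """Fuse per-location scores: normalise each feature by its maximum score,
    then sum the weighted, normalised scores per candidate location."""
    fused = {}
    for scores, w in zip(per_feature_scores, weights):
        top = max(scores.values()) or 1.0
        for loc, s in scores.items():
            fused[loc] = fused.get(loc, 0.0) + w * (s / top)
    return fused

# Illustrative candidate scores for three venues.
popularity = {"cafe": 120.0, "museum": 40.0, "gym": 15.0}   # time-aware popularity
preference = {"cafe": 0.2, "museum": 0.9, "gym": 0.1}       # TF-IDF style preference
social     = {"cafe": 3.0,  "museum": 1.0, "gym": 0.0}      # friend-based CF score

ranked = sorted(fuse_scores([popularity, preference, social],
                            weights=[0.4, 0.4, 0.2]).items(),
                key=lambda kv: kv[1], reverse=True)
```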
Evaluated Recommendation Approaches We want to observe the performance of each recommended method through the experiment.Simultaneously, looking at while it can improve the recommendation accuracy or not when we consider the user's visiting time and the time characteristics of the locations.In LBS applications, user accepting the recommendation list is real-time, and most of them hope that the recommended locations are nearby them and they can arrive in a short time.Therefore, we refer to literature [21] just utilizing the locations which are 5 kilometers away from the user's current location as a candidate recommendation set.We compared the method proposed in our paper with the baseline of presenting in [6], and meanwhile compared influence of the methods between the within time feature and without time factor.USG: It is a collaborative recommendation algorithm which fuses the social influence and geographic influence base on naïve Baycsian [6].LPm: using the popularity of the location which it is calculated by the users' check-in frequency recommend; LPe: the popularity of location is calculated using the entropy; LPmt/LPet: considering the visiting time of the users and the time feature of the location to recommend when using the popularity of the location; UP: according to the user's preferences to recommend that learned from the users' history check-in data; UPt: considering the number of the day which user check-in in the location when using the preferences of user to recommend; Cf: recommending location for user according to the collaborative filtering of friends of the user; Cl: recommending location for user according to the collaborative filtering of friends who are check-in at least one same location with the user; Clog: the fewer users of one location check-in which of two users visit, the higher similarity of the two users; LUt: fusing the popularity of a location, the preferences of user and time features to recommend; LUCt: fusing the popularity of a location, the preferences of user, and the influence of the friends of user and time features to recommend. Performance Comparison and Analysis In this section we evaluate and compare the performance of each method.First, let's look at the two methods LPm and LPe which base the popularity of the location, the performance of LPe which measures the popularity of a location using the entropy is not as good as the method of LPm which measures the popularity of a location using the location popular.We observer the results and find that some location are the private location (i.e.home), when using the entropy to recommend, the similarity is zero.When considering the time feature of a location, the performance of the LPet is lower than the LPe.The reason is that in some location, there are several users check-in in them and the time of check-ins is scattered, if consider the time in these location, the evaluation precision is decline.However, the performance of the LPmt is increasing because of many locations' active time is inconsistent with the user's visiting time, when taking into account the time feature, these locations are filtered.Similarly, the performance of the method based user preferences also can improve in some extend when augments the time factor. 
In the user-based collaborative filtering recommendation, the methods Cl and Clog, which recommend using the friends who have checked in at the same locations as the user, are better than the method Cf, which recommends using all of the user's friends. Intuitively, the friends who visited the same locations as the user are likely to have the same interests as the user. However, the performance of all the user-based CF methods is not as good as that of the methods based on location popularity, because many of the candidate locations have not been checked in by any of the user's friends, and the similarity is then zero. Among all the methods based on a single feature, the performance of the user-preference methods UP and UPt is the best. The results indicate that the interests of the majority of users remain unchanged for a long time, and they are more likely to visit the locations which they have visited before. Continuing to observe Figure 4, we find that when the user preferences and the popularity of the location are considered at the same time, the precision and the recall are better than for any single-feature method and the performance improves on average by 25%. When the user-based CF is fused in as well, the performance of the system increases slightly, but the computational complexity increases a lot. Finally, the method LUPt proposed in this paper outperforms the baseline method USG presented in [6]. The reason is that USG does not consider the user's preferences, the time features of the user or the current location of the user, whereas we consider all of these factors. The Influence of the Recommendation Range In general, the current location of the user has an impact on whether the user will visit the recommended location, and users are more inclined to visit locations that are closer to their current location [6] [13]. We study the performance of the different methods at different distances from the user's current location. Observing Figure 5, we find that the precision and recall decrease monotonically as the distance increases, except for the baseline method USG, which does not consider the user's current location. When the distance increases to 10 km, the performance of the methods based on location popularity and of the user-based CF is lower than the USG baseline; the performance of the fusing models LUt and LUCt decreases more because of the location-popularity factor, but it is still better than the USG baseline. The Impact of the Number of the User's Check-Ins The number of a user's historical check-ins has an important impact on the recommendation system. We divide the users into nine intervals according to the number of their check-ins; the fewer the check-ins, the more users there are, and so the interval is smaller. Observing Figure 6, first of all, we see that the methods that change the most are USG, UP and UPt. When the number of a user's check-ins is lower than ten, the performance of the USG baseline is better than that of all the methods except the fusing methods LUt and LUCt. The performance is best when the number of a user's check-ins is in the range of two to five hundred; if the number of check-ins continues to increase, the precision and the recall decline. There are two reasons for this phenomenon: the first is that it is difficult to capture the preferences of users who have a lot of check-ins, because of the wide range of their interests.
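The evaluation protocol used in this section can be sketched as follows: candidate venues are restricted to those near the user's current position (the 5 km cut mentioned earlier), and precision and recall are computed for the top-N recommendations against each test user's held-out last check-in. The haversine distance, the top-N formulation and N = 10 are assumptions made for the sketch; the paper does not spell out these implementation details.

```python
import math

def haversine_km(p1, p2):
    """Great-circle distance between two (lat, lon) points in kilometres."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*p1, *p2))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2.0 * 6371.0 * math.asin(math.sqrt(a))

def candidate_set(user_pos, venues, radius_km=5.0):
    """Keep only venues within radius_km of the user's current position."""
    return [v for v, pos in venues.items() if haversine_km(user_pos, pos) <= radius_km]

def precision_recall_at_n(recommended, held_out, n=10):
    """Precision/recall of the top-n recommendations against held-out check-ins."""
    hits = len(set(recommended[:n]) & set(held_out))
    precision = hits / n
    recall = hits / len(held_out) if held_out else 0.0
    return precision, recall
```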
Conclusion In this paper, we present a user-preference-based and time-sensitive recommender system to recommend appropriate locations for users in location-based social networks. We primarily consider five factors when recommending locations for a user. According to the experimental results, our approach significantly outperforms the USG baseline and the other recommendation methods, including the collaborative filtering methods and the user-preference method. The results show that the performance of the recommender system can be improved by considering the time features of the locations and the users' visiting times. Figure 3. The distribution of (a) user 0 and all other users at the locations which user 0 visited, and (b) user 273 and other users at the locations which user 273 visited. Figure 4. (a) Precision and (b) recall of the experimental results for all the methods. Figure 5. The impact of the recommendation range on (a) precision and (b) recall. Figure 6. The impact of the check-in count on (a) precision and (b) recall. […] Natural Science Foundation of Liaoning Province, China (No. 201202031, 2014020003), State Education Ministry and the Research Fund for the Doctoral Program of Higher Education (No. 20090041110002).
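For reference, the user-similarity measures described in the "Social Influences of Friends" subsection can be written compactly as below. The function names are ours, and the inverse-log-frequency variant is a sketch of the Adamic/Adar-style idea attributed to [21] rather than a verified reproduction of the paper's exact formula.

```python
import math

def meet_min_similarity(friends_u, friends_v):
    """meet/min coefficient: common friends scaled by the smaller friend set."""
    common = len(friends_u & friends_v)
    return common / (min(len(friends_u), len(friends_v)) or 1)

def common_checkin_similarity(locs_u, locs_v):
    """Same idea applied to the two users' check-in location sets."""
    common = len(locs_u & locs_v)
    return common / (min(len(locs_u), len(locs_v)) or 1)

def inverse_log_similarity(locs_u, locs_v, visitors_per_location):
    """Inverse-log-frequency weighting of shared locations: rarely visited
    common locations contribute more to the similarity."""
    score = 0.0
    for loc in locs_u & locs_v:
        n_visitors = max(len(visitors_per_location.get(loc, ())), 2)
        score += 1.0 / math.log(n_visitors)
    return score
```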
4,793.6
2015-09-01T00:00:00.000
[ "Computer Science" ]
An overview of negative hydrogen ion sources for accelerators An overview of high current (>1 mA) negative hydrogen ion (H–) sources that are currently used on particle accelerators. The current understanding of how H– ions are produced is summarised. Issues relating to caesium usage are explored. The different ways of expressing emittance and beam currents are clarified. Source technology naming conventions are defined and generalised descriptions of each source technology are provided. Examples of currently operating sources are outlined, with their current status and future outlook given. A comparative table is provided. Introduction Over the last five decades, the negative hydrogen (H -) ion has become the particle of choice to inject into high power proton accelerator facilities. This is because the ion's charge polarity can be inverted by removing two electrons when it passes through a thin stripping foil, leaving a bare proton. This conversion from Hion to proton is known as charge exchange. Charge exchange is employed in tandem accelerators to double the accelerating voltage; in cyclotrons to allow simple and effective beam extraction; and in storage rings and rapidcycling synchrotrons to accumulate high current proton beams. Although using Hions brings many advantages to a proton accelerator facility as a whole, the ion source technology required to produce an effective Hion beam is more complicated than to produce a proton beam. A variety of operational Hion source technologies are discussed in this paper. Ion sources for accelerator facilities must produce high current beams with low emittance, be highly reliable and have lifetimes compatible with the operating schedule of the accelerator they serve. Depending on the application, the ion beam may either be 'always on' continuous wave (CW) or be pulsed with a range of different pulse lengths, l (in seconds) and repetition rates, f (in Hz). For a pulsed beam, the duty cycle, D=l·f describes the proportion of time the ion beam is being produced. Generally, sources operated at lower duty cycles will produce higher pulsed beam currents. For example, pulsed Hsources typically produce beam currents around 10-80 mA, whereas CW sources typically produce beam currents of 0.5-25 mA. The present drive in Hresearch and development is focused on increasing the duty cycle whilst maintaining high beam currents. In addition, caesium has a major role to play in H − ion production, as will be discussed later, but it brings with it several problems. Therefore another major R&D activity is to reduce the reliance on-or even remove the need for-caesium in H − ion sources [1][2][3]. There have been several excellent review papers on the subject of H − ion sources for accelerators over the years [4,5]. This paper provides an updated overview of high current (>1 mA) H − ion sources that are currently providing beam for accelerator facilities or accelerator test stands. The negative hydrogen ion production mechanisms, as they are currently understood, can be separated into two main branches: surface and volume production. Surface production requires a positive ion or an energetic (or a hyper-thermal) neutral hydrogen atom to impact a low work function material. At the surface, the impacting particle may capture electrons and form a negative ion. 
The rate of electron capture becomes high enough at short distances that the incoming particle loses memory of its original charge, so it suffices to assume that surface-produced negative ions are always formed from neutral atoms. Similar to the ionisation energy required to remove an electron from a neutral atom, there exists an affinity energy, which is the energy released when an extra electron is added to an already neutral atom. Generally, the electron affinity is much lower than the ionisation energy, so negative ions are very fragile and easy to strip. In the moments before impact, the wave functions of the surface and impacting atom overlap, resulting in the electron affinity level of the atom smoothly shifting downwards toward the valence band of the surface. Valence electrons may then tunnel onto the atom with an exponentially higher likelihood as the atom moves closer to the surface [6,7]. If the surface material has a low work function, the affinity does not need to shift down so far before electron capture occurs. The low work function affects H − formation probability not only due to the required affinity shift but also due to increased tunnelling probability of the electron through the surface potential barrier. The material with the lowest work function is caesium. Therefore, H − ion sources dominated by surface production generally involve the introduction of caesium to reduce the surface work function as much as possible [8]. Volume production of H − ions is a two-step process: The first step is to create population of highly excited ro-vibrational hydrogen molecules. These are made both by colliding ground-state H 2 with fast electrons and also at the walls of the plasma chamber. The second step is dissociative attachment of slow (∼1 eV) electrons with the H 2 * . The cross section of the dissociative electron attachment process depends strongly on the vibrational level of the H 2 molecule and the energy of the impacting electron. This is because the probability of the compound state to dissociate into H and H − without auto ionisation depends strongly on the inter-nuclear distance which is affected by the vibrational energy. To successfully form H − ions using the volume technique, a magnetic filter field is required inside the ion source which divides the plasma into two distinct regions: a high-temperature region to create excited molecules and a low-temperature region near the outlet aperture to create and quickly extract the H − ions [9]. A filter field of around 10 mT is sufficient to reflect high energy electrons but allow slow electrons and hydrogen to diffuse across toward the extraction region. Until recently, H − ion sources were categorised as purely surface or volume sources. For example, Penning [10], magnetron [11] and surface-converter [12] sources primarily create H − ions from caesiated cathodes; whereas filament [13] and RF [14] sources use the volume technique. However for high power applications, new ion sources increasingly combine both techniques [15][16][17]. They achieve this in a low-pressure plasma with an optimised filter field for favourable volume production, but enhance the H − yield with a caesiated extraction aperture. Excellent source cleanliness and a well-understood caesium conditioning process are vital for operation of these hybrid sources. There are many processes that can destroy H − ions. The cross section of the H − ion in these destruction processes is also larger compared to the H 0 atom. 
As well as being fragile and easily destroyed, it is more likely to be hit. The aim of the H − ion source designer is to minimise the H − destruction processes by controlling the geometry, temperature, pressure and fields in the source. On caesium handling The use of caesium (Cs) is itself an interesting topic. Due to its low work function, Cs is often added to H − ion sources to enhance the rate of surface production. As Cs atoms are deposited on (typically) a molybdenum (Mo) surface, the work function decreases from 4.6 eV for Mo toward 2.1 eV for bulk Cs. When the surface is partially covered, however, the work function actually has a minimum of 1.5 eV: lower than that of Cs or Mo. This minimum occurs at a Cs coverage factor somewhere between 0.5 and 0.7 monolayers [18]. Maintaining the optimal surface coverage is of paramount importance to an H − ion source operator [19]. Several techniques are in use. At its most simple, elemental Cs with a purity of at least 99.99% may be housed in a heated oven (often the misnomer 'boiler' is used for this device: although the Cs is fully melted in the oven, it is far from its boiling point). Caesium is in the liquid state at 28.4°C. Operating the oven at around 150°C yields a significant flux of Cs vapour from the oven into the ion source via a heated transport tube. The transport tube is maintained at around 300°C so that it is the hottest point in the transport system, preventing Cs accumulating [20], possibly leading to a blockage. Caesium is highly reactive to oxygen and water vapour in the atmosphere, so care must be taken especially if elemental Cs is used. The Cs may be installed into the oven sealed inside a glass ampoule, which is cracked under an inert gas at atmospheric pressure after the ion source is installed on the accelerator. This prevents any risk of personnel handling exposed Cs. Alternatively the ampoule may be heated (to melting point) and cracked inside a secure and inert gas glove box [21], then carefully poured into the oven before being attached to the ion source. Although more difficult and with higher associated risks, this method ensures that no partially-cracked glass blocks the flux of Cs into the ion source [22]. In either method, the oven must be well cleaned and baked out to remove any impurities which could affect the Cs. The oven and transport tube should be well insulated to contain the required heat. This can be done either by wrapping insulating tape around the assembly, or by constructing a bespoke insulating jacket. The oven and transport tube may be separated with an all-metal shut-off valve which can survive the high temperatures. With the valve closed, the oven can be removed, thus allowing separate filling or maintenance of either the oven or the source. However the valve does add complexity to the caesium transport system and may not be suitable for every application. Alternative methods of dispensing Cs are also well practised. Caesium-chromate cartridges may be used instead, which are very stable even at elevated temperatures. The cartridges contain a Zr-Al getter, which reduces the Cs 2 CrO 4 to Cs, Al 2 O 3 , Cr 2 O 3 , and ZrO 2 when heated to >500°C without the emission of any gaseous products. The cartridges emit a known flux of Cs when heated, albeit at a much higher temperature than elemental Cs [23]. Caesium-chromate cartridges may be installed directly inside the ion source [24], without the need of a remote heated transport system. 
In this manner, the Cs is already located near where it needs to be on the cathode surfaces and so much less excess vapour is wasted or emitted into locations where it would have an adverse effect. An alternative method is to pass an electrical current of several Ampères through a solid Bi 2 Cs dispenser which releases a Cs on demand [25]. The Cs flux can be measured directly at the dispenser opening using a surface ionisation detector for highly accurate control of the Cs evaporation rate. Regardless of how Cs is injected into the ion source, it must usually be replenished because the plasma sputters away Cs atoms deposited on the cathode. The replenishment is also required to bury 'impurities' (sputtered metal from the walls, metal eroded from the filaments etc) under a fresh Cs layer on the H − production surface. The sought-after minimum work function is actually difficult to achieve in test stands [26], while in real ion sources with plasma this minimum is often reached by varying the temperature of the H − emitting surface. Since the Cs must usually be continuously injected into the ion source, one might naturally ask where it ends up. For example, typical elemental Cs ovens hold several grams of caesium, which evidently does not remain on the source cathode. In fact usually the Cs escapes the ion source and enters the high voltage extraction region, covering it and the rest of the vacuum vessel. Caesium depositing on negative high voltage electrodes drastically enhances the rate of sparking, so a careful control of Cs flux is required, as well as shielded insulators and heated electrodes to prevent Cs accumulation [27]. One noteworthy exception is the SNS ion source which has a wellrehearsed caesiation procedure to ensure the surface-production collar is extremely clean and free of impurities. This allows Cs atoms to bind sufficiently strongly to the surface to prevent sputtering [28]. In this manner, no additional Cs needs to be injected into the source after the initial caesiation on start-up: a significant achievement. Variation in caseation can have a significant effect on source performance. It should also be noted that the total Cs consumption throughout the source lifetime varies hugely depending on the type of source technology: Penning and magnetron surface plasma sources can use >10 g, whereas the SNS source only uses a few mg. Beam emittance As well as the headline figures of beam current, duty-cycle and lifetime, the transverse emittance is a very important number to quote for each ion source. An ion source may be able to produce 100 mA of beam current, but if it has such a large emittance that it cannot be transported through the rest of the accelerator, then it is of no practical use. Unfortunately, the emittance often causes confusion and makes comparison between sources difficult as there are many ways to quantify it. H − ion beams can seldom be defined with simple Gaussian distributions, since the beam current density is often perturbed by magnetic filter fields, electron-dumping fields or other asymmetries in the extraction system. Kapchinskij-Vladimirskij (KV) [29], bi-Gaussian or waterbag distributions may be used to analytically describe the more complicated beams. Often there are halo particles, nonlinear tails or 'ghost beams' overlapping the main beam distribution in trace space. Outlying particles have a disproportionally large influence on the overall measured emittance. 
Therefore a threshold is sometimes chosen to cut the lowest few percent of the data out of the emittance calculation. Then one must be careful to indicate exactly what fraction of the beam is included [30]; for example 90% or 95% emittances are often quoted. Conversion factors can be calculated for different emittance values; for example the 95% emittance is equal to six times the RMS emittance, whereas the full emittance of a KV distribution is equal to four times the RMS. The four-RMS emittance has become somewhat of a standard in the accelerator community after the proposal by Lapostolle [31], so it is adopted in this paper. For a statistical ensemble $I(x, x')$ of particle intensities passing through positions $x$ at angles $x'$, the 4·RMS emittance is defined by the second moments of the distribution: $\epsilon_{4\,\mathrm{rms}} = 4\sqrt{\langle x^{2}\rangle\langle x'^{2}\rangle - \langle x x'\rangle^{2}}$, where $\langle x^{2}\rangle = \sum I(x,x')\,x^{2} / \sum I(x,x')$, $\langle x'^{2}\rangle = \sum I(x,x')\,x'^{2} / \sum I(x,x')$ and $\langle x x'\rangle = \sum I(x,x')\,x x' / \sum I(x,x')$. Before calculating the second moments, the distribution should be normalised to its centroid to eliminate the first moments. That is, each position $x$ and angle $x'$ in the equations above are equal to $x = x_{0} - \langle x_{0}\rangle$ and $x' = x'_{0} - \langle x'_{0}\rangle$ respectively, where $x_{0}$ and $x'_{0}$ are the raw measured positions and angles, whereas $\langle x_{0}\rangle$ and $\langle x'_{0}\rangle$ are the first moments, or means. Since all ion sources produce beams of different energies, the 4·RMS emittance values quoted herein are normalised to the particle velocity for easy comparison, thus: $\epsilon_{4\,\mathrm{rms},n} = \beta\gamma\,\epsilon_{4\,\mathrm{rms}}$, where $\beta$ and $\gamma$ are the usual relativistic velocity factors (a short computational sketch of this calculation is given below). The self-consistent un-biased elliptical exclusion analysis [32] is a useful technique to accurately calculate the 4·RMS emittance even in the presence of the low-level background or ghost particles discussed above, without having to take arbitrary thresholds. The 4-RMS emittance is a meaningful value to quote because it contains around 90% of the beam (89%-100%), regardless of the distribution (Gaussian, KV, waterbag, s-shapes etc.), whereas the 1-RMS emittance contains 25%-40% of the beam, depending on the distribution. This is a much more significant variance, which can give a misleading impression of the beam's quality. For clarity, the 4-RMS emittance is a value which, when multiplied by π, is an area in the phase space which contains 90% of the beam. By explicitly including π in the units, there is no confusion that it is an emittance area. Similarly, some people fold the milliradian (a simple ratio equalling 0.001) into the mm unit, compressing mm mrad down to μm, which can also cause confusion. Beam current As well as being careful to indicate the correct emittance, the beam current can also cause confusion. For example, the ion source may produce a beam current which is not fully transported through the subsequent low energy beam transport (LEBT) and later accelerating stages. Therefore one might ask whether to report only the transportable beam current, or the current which fits inside the accelerator acceptance. This is usually difficult to determine online in an operational facility, as opposed to on a test stand with multiple diagnostic devices. Moreover, transportation problems are usually attributed to unknown LEBT space-charge compensation or a poorly performing radio frequency quadrupole (RFQ), not to the ion source itself. Therefore, to ensure consistency, all values reported in this paper refer to the H− beam current delivered to the entrance of the LEBT by the ion source. The actual measurement of beam currents from negative ion sources is also fraught with problems because of electrons. Electrons co-extracted from the source can be inadvertently transported to the measurement device.
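As a supplement to the second-moment definitions above, the following is a minimal sketch (not from the original article) of how the 4·RMS and normalised emittances could be computed from a measured intensity map I(x, x′); the grid, the toy Gaussian beam and the 65 keV example energy are invented for illustration only.

```python
import numpy as np

def four_rms_emittance(I, x, xp):
    """4*RMS emittance from an intensity map I[i, j] sampled at
    positions x[i] (mm) and angles xp[j] (mrad)."""
    X, XP = np.meshgrid(x, xp, indexing="ij")
    w = I / I.sum()                          # normalise intensities to weights
    # subtract the first moments (centroid) before taking second moments
    Xc, XPc = X - (w * X).sum(), XP - (w * XP).sum()
    x2 = (w * Xc**2).sum()
    xp2 = (w * XPc**2).sum()
    xxp = (w * Xc * XPc).sum()
    return 4.0 * np.sqrt(x2 * xp2 - xxp**2)  # in pi mm mrad

def normalised(eps, kinetic_energy_eV, mass_eV=939.294e6):
    """Multiply by beta*gamma; the default mass is the approximate
    H- ion rest mass energy in eV."""
    gamma = 1.0 + kinetic_energy_eV / mass_eV
    beta = np.sqrt(1.0 - 1.0 / gamma**2)
    return beta * gamma * eps

# hypothetical beam measured on a 101 x 101 grid
x = np.linspace(-20, 20, 101)      # mm
xp = np.linspace(-40, 40, 101)     # mrad
X, XP = np.meshgrid(x, xp, indexing="ij")
I = np.exp(-(X**2 / 50 + XP**2 / 200))   # toy Gaussian intensity map
eps = four_rms_emittance(I, x, xp)
print(eps, normalised(eps, 65e3))        # raw and normalised (65 keV example)
```

In practice a measured map would first be background-subtracted or processed with an exclusion analysis such as that of [32] before the moments are taken, since, as noted above, outlying counts dominate the second moments.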
If the pressure is high enough electrons from background gas stripping can be measured. Secondary electrons from beam lost on structures in the vessel can be measured. Beam current toroids can read high or low depending on what direction these stray electron currents are flowing through them. Interceptive Faraday cups need careful design and setup with secondary electron suppression and adequate electrical screening. Classification Source technology is classified by how the primary plasma discharge is driven and by the dominant H − production processes. Surface, volume and charge exchange processes will occur in all sources to differing degrees. The terminologies shown in bold are used to classify the different source technologies. All accelerator facility H − ion sources generate their primary plasma by electron impact ionisation driven by an electrical power source. The electrical power can be electrostatically coupled to the plasma via the electric field or inductively coupled by the magnetic field. The power source can be a current regulated power supply that applies a voltage between the cathode and anode electrodes, or power regulated amplifier that couples to the plasma via an antenna or waveguide. The electrodes can be cooled, or run hot, or be heated to visibly high temperatures as in a cathode filament. The electrodes form the walls of the plasma chamber in the Penning and magnetron geometries. The other sources have a plasma-confining multicusp-magnetic-field surrounding the plasma chamber, this allows the walls of the chamber to be constructed of conducting or non-conducting material. The multicusp field also helps to improve the power efficiency by confining the energetic electrons. The power supply must be capable of applying enough voltage to overcome the strike potential and then to be able to regulate with a non-Ohm's law load. The power supply must also be able to withstand short circuits. For RF (MHz) or microwave (GHz) sources, the amplifier driving the plasma is power regulated and it must be able to withstand reflected power. The antenna or waveguide can be immersed in the plasma, internal to the plasma chamber; or it can be outside the plasma, external to the plasma chamber. Keeping the antenna out of the plasma reduces sputtering erosion problems and can yield sources with very long lifetimes, however it can require higher output power from the amplifier due to lower coupling efficiency to the plasma. The antenna geometry is most commonly a solenoidal helix or a planar spiral. Volume production requires the presence of magnetic filter fields to stop fast electrons. Surface production becomes significant when caesium is present in the source. Surface Plasma is an accepted term for the surface and resonant charge exchange processes that occur in Penning and magnetron sources. Surface Converter sources require a caesiated surface biased at a negative voltage. The following sections provide generalised descriptions for each of the source technologies and give current examples of their implementation. Generalised description The filament driven volume source has a cylindrical plasma discharge with a magnetic dipole filter field near the emission aperture as shown in figure 1. The dipole filter field creates a region of low electron temperature near the emission aperture allowing H − volume production. Hydrogen is fed into the discharge through an opening. The plasma chamber wall is the anode and a heated filament is the cathode. 
The plasma discharge is created by applying power between the anode and cathode using either a voltage- or current-regulated power supply. For a plasma driven with tungsten (W) or tantalum (Ta) filaments, a voltage-regulated power supply is used, since the discharge current is set by operating the W or Ta filaments at a sufficiently high temperature in the space-charge-limited electron-emission mode. On the other hand, a current-regulated power supply is essential for a plasma driven with a lanthanum hexaboride (LaB6) filament, since the LaB6 filament is operated at a temperature much lower than that required for the space-charge-limited electron-emission mode. The walls of the vessel are lined with a multicusp magnetic field arrangement to confine the plasma. Caesium can be added to this source to make it a combined Volume and Surface source. The addition of caesium increases the H− beam current and reduces the co-extracted electron current. D-Pace filament source, Canada Based on the source design developed at the TRIUMF cyclotron in Canada [13], this source is commercially available to purchase from D-Pace as a complete 'turnkey' system [33]. Shown in figure 2, it is now in use at many accelerator facilities. When configured with four 1.6 mm diameter tantalum filaments arranged in concentric half circles and 5 kW of discharge power, this source is able to produce up to 18 mA of CW H− beam. Sumitomo Heavy Industries caesiated filament-driven multicusp source, Japan A caesiated filament-driven multicusp H− source is being developed for medical cyclotrons at Sumitomo Heavy Industries. The source can produce 23 mA CW beams [34]. 3.3. External planar RF antenna driven volume sources 3.3.1. Generalised description The external planar RF antenna driven volume source has a cylindrical plasma discharge with a magnetic dipole filter field near the emission aperture. The plasma discharge is coupled to a flat spiral RF antenna behind a dielectric RF window as shown in figure 3. Hydrogen is fed into the discharge through a hole. The plasma discharge is created by applying power via the antenna using a power-regulated amplifier. The walls of the vessel are lined with a multicusp magnetic field arrangement to confine the plasma. D-Pace RF source, Canada This source is based on the filament-driven source described in the previous section 3.2.2, fitted with an external planar RF antenna developed by the University of Jyväskylä, Finland [35]. This source is also commercially available to purchase from D-Pace as a complete 'turnkey' system [36]. It operates with a 3 kW, 13.56 MHz RF discharge. Both the D-Pace RF and filament sources can be run with deuterium or acetylene for D− and C2− beams, respectively [37]. 3.4. Microwave driven volume sources 3.4.1. Generalised description The microwave driven volume source (figure 5) has a cylindrical plasma discharge with a magnetic dipole filter field near the emission aperture. The microwave power is coupled to the plasma discharge via a waveguide, either with a stepped matching section or through a dielectric window. Hydrogen is fed into the discharge through a hole. The walls of the vessel are lined with a multicusp magnetic field arrangement to confine the plasma. Peking University (PKU) microwave source PKU is currently developing a microwave driven volume source with a cooled RF window. On an ion source test stand, they have recently reported over 25 mA of CW H− beam current without the addition of caesium [38].
The highest previously reported CW currents for this type of source are 5 mA at Argonne National Laboratory [39] and 3.8 mA at CEA Saclay [40]. If PKU can demonstrate that 25 mA of CW H− beam current can be transported to an accelerator, this will become the leading long-lifetime source technology. 3.5. Magnetron surface plasma sources 3.5.1. Generalised description The magnetron surface plasma source, shown in figure 6, has a discharge that twines around a central reel-like cathode like a belt. The cathode is held inside the anode body using a ceramic insulator. Caesium vapour from an external oven and hydrogen are fed into the discharge through holes in the anode body. A magnetic field is applied parallel to the axis of the reel-like cathode; this causes the electrons to propagate around the belt-like discharge. Power is applied to the discharge between the anode and cathode. Beam is extracted through a hole in the anode. Often there is an indent (dimple) in the cathode opposite the extraction hole that increases the output current by focusing cathode-produced H− towards the extraction hole. An extraction electrode is used to create the electric field that extracts and shapes the beam from the extraction aperture. A large proportion of the H− beam is directly extracted from the focusing dimple at cathode potential without undergoing resonant charge exchange with slow H0; this means magnetrons have somewhat higher beam noise and energy spread than the Penning ion sources discussed later. Fermi National Accelerator Laboratory (FNAL), USA The FNAL magnetron surface plasma source shown in figure 7, which was developed based on the original 1970s magnetron design from the Soviet Union [41], has provided beams for accelerator operations for over 30 years. When operated with a 15 A discharge current, over 80 mA of H− beam current is reliably produced at low duty cycles (0.3% @ 15 Hz) with lifetimes exceeding 6 months. This source is currently delivering beams to two accelerator test stands. FNAL copied the Brookhaven National Laboratory (BNL) developments and made several improvements to the hydrogen and caesium delivery systems to minimise sparking. They have also introduced a solid state high voltage extraction power supply. Magnetron surface plasma source - BNL, USA BNL optimised the FNAL design [42]. They reduced the discharge current, increased the extraction voltage and used permanent magnets. This highly reliable, very efficient magnetron is shown in figure 8. This source still regularly provides beam for accelerator operations. A solid state high voltage extraction power supply is also being developed. 3.6. Penning surface plasma sources 3.6.1. Generalised description The Penning surface plasma source has a small brick-shaped discharge, bounded by a window-frame anode and two opposing cathodes. Caesium vapour from an external oven and hydrogen are fed through holes in the anode as shown in figure 9; however, it is also possible to feed the discharge through holes in the cathode. A magnetic field is applied perpendicular to the cathode surfaces. The magnetic field confines the electrons to oscillate between the two cathodes. Power is applied to the discharge between the anode and cathode using a current regulated power supply. Beam is extracted through a hole in the anode. An extraction electrode is used to create the electric field that extracts and shapes the beam from the extraction hole.
H − ions desorbed from the cathode have no direct line of sight to the outlet aperture, so must undergo resonant charge exchange with slow neutral hydrogen atoms to reach the extraction hole. This process yields a beam with low energy spread and emittance, making a high quality beam at high duty factors. ISIS accelerator at the STFC Rutherford Appleton Laboratory (RAL), UK The ISIS source plasma chamber geometry shown in figure 10, is essentially unchanged from the LANL Penning source of the 1980s [43], itself derived from the original Penning ion source developed by Dudnikov in the 1970s [44]. With 30 years of operational experience, the ISIS design has been replicated in several facilities [45,46] due to its high emission current density, low emittance, reasonable lifetime, simple operation, rapid replacement and relatively low cost. The ISIS ion source has undergone significant development over the last ten years, primarily to deliver a 60 mA, 10% beam duty cycle, low emittance H − beam for the Front End Test Stand (FETS) project [47]. The source is currently limited to producing a 60 mA, 5% duty factor beam at 65 keV. To achieve the full 10% duty cycle, a 2X Scaled Penning source is being developed at RAL [48]. Scaling of Penning Surface Plasma Sources was investigated at Los Alamos National Laboratory [49] in the 1990s. . Generalised description The filament-driven surface converter source shown in figure 12, has a cylindrical plasma discharge and a concave converter surface on which the H − ions are produced. The converter surface is located inside the plasma and is biased negatively at a few hundred volts. The converter surface is concave so the H − ions produced there are focused toward the extraction aperture. Caesium vapour (from an external oven) and hydrogen are fed into the discharge through holes. The plasma chamber wall is the anode and the cathode is a heated filament. The plasma discharge is created by applying power between the anode and cathode. The walls of the vessel are lined with a multicusp magnetic field arrangement to confine the plasma. Los Alamos Neutron Science Center (LANSCE), USA The LANSCE source shown in figure 13, routinely produces a 16 mA, 60 Hz H − beam with a lifetime of 35 d. Like all filament-driven discharge sources it suffers from lifetime limitations due to filament erosion. A 6 kW pulsed discharge is ignited from a specially shaped tungsten filament. The set-up and stabilisation time of this source takes around 36 h but it can operate at a wide variety of repetition rates [54]. 3.8. Internal RF solenoid antenna driven volume and surface sources 3.8.1. Generalised description The internal RF solenoid antenna driven volume and surface source shown in figure 14, has a cylindrical plasma discharge with a magnetic dipole filter field near the emission aperture. The plasma discharge is coupled to a solenoidal helix RF antenna housed inside in the plasma chamber. The antenna is coated with several thin layers of porcelain, to provide around 0.5 mm of insulation from the plasma. Hydrogen is fed into the discharge through a hole in the back wall. The plasma discharge is created by applying power to the antenna using a power regulated amplifier. After extended high-power, high-duty-factor operation, the ultra-high purity gas can no longer be ignited with just pulsed RF, so a low power, high frequency RF signal is constantly applied in parallel to maintain a dim plasma between high power pulses [55]. 
The walls of the vessel are lined with a multicusp magnetic field arrangement to confine the plasma, whilst separate water-cooled filter magnets are immersed inside the plasma. This source relies on both volume and surface production processes. Caesium is introduced into the source either by heated caesium chromate (Cs 2 CrO 4 ) cartridges installed in a collar near the emission aperture, or by an external caesium oven. Caesium covers the surfaces near the emission aperture enabling surface production of H − ions to supplement those produced by the volume process. figure 15, a 2.5-turn RF antenna is held inside the plasma. The source operates at high duty factors (6%), with a pulsed discharge power of 50-60 kW, and with a temperature-controlled outlet aperture. By using caesium-chromate cartridges housed in the outlet aperture and a precise start-up procedure, this ion source can maintain high beam currents without the need for continual caesium injection. The ion source injects in excess of 60 mA of beam into a compact electrostatic LEBT. The LEBT incorporates two Einzel lenses to match the beam into the following RFQ. The second Einzel lens is split into four quadrants to provide for steering and chopping. The LEBT has no room available for a direct measurement of beam current, for example with a Faraday cup or toroid. Instead, the entrance plane of the RFQ is at floating potential such that beam may be steered onto it using the chopper and steerers as a method of measuring the LEBT output and the RFQ input beam current [56]. Around 35 mA of beam exits the RFQ, whose transmission is sensitive to ion source and LEBT alignment. This is much less than the 56 mA exiting in 2008 [28] before the RFQ transmission started to deteriorate around 2011 [57]. Beam is extracted from the source biased to suit the 65 keV RFQ input energy. To reduce heat loads electrons are dumped immediately outside the source at 6.2 keV, which yields a uniform extraction field and optimises the transmission through the RFQ [28]. Recent performance improvements have focused on reducing electron dump and Einzel lens sparking, strict antenna selection procedures, and ensuring perfect source cleanliness and removal of impurities which would sputter the caesium coverage on the plasma electrode. These improvements have led to long lifetimes of up to 96 d [58], which is world-leading for such a high power, high duty-factor H − source. Japan Proton Accelerator Research Complex (J-PARC), Japan Originally operating with a lanthanum-hexaboride double spiral filament, the J-PARC ion source (figure 16) has been fitted with SNS internal RF antennas. Operating with a thick tapered plasma electrode with a thickness of 16 mm, an external caesium oven, carefully tuned filter magnets and a sophisticated computer-controlled feedback system, the J-PARC source is now producing 45 mA of H − beam current in user operations, with the ability to increase to 66 mA during machine physics periods. The magnetic filter field is produced with two water-cooled rod magnets housed inside the plasma chamber. Beam is produced with a two-stage extraction system, whereby the beam is extracted and co-extracted electrons dumped at 10 keV, then the H − beam is postaccelerated up to the 50 keV RFQ input energy. Two neighbouring pairs of permanent magnets are housed inside the extraction electrode to produce two consecutive dipole magnetic fields with opposite polarity. The first dipole field dumps the co-extracted electrons and bends slightly the H − beam. 
The second dipole field bends back the H − beam on the beam axis. The ejection angle error of the H − beam is corrected by an electromagnet, which is located just behind the grounded electrode. A comprehensive improvement campaign involving the emittance reduction by the low plasma electrode temperature operation with slight water feeding into hydrogen plasma, the optimisation of the rod filter magnets, and careful emittance analysis has led to over 66 mA of H − beam produced in 1000 μs pulses at 25 Hz [59]. 3.9. External RF solenoid antenna driven surface and volume sources 3.9.1. Generalised description The external RF solenoid antenna driven volume source shown in figure 17, also has a cylindrical plasma discharge with a magnetic dipole filter field near the emission aperture, but the plasma discharge is coupled to a solenoidal helix RF antenna outside of the plasma chamber. The antenna is wrapped around the plasma chamber which is made of a dielectric material such as alumina. For higher power operation, aluminium-nitride may be used instead as it has a higher thermal conductivity; however it is more expensive, difficult to machine, and difficult to make a vacuum seal. External antenna RF sources can operate caesium free [14], but for high current/duty cycle operation they rely on both volume and surface production processes. Caesium vapour is fed into the source from an external oven, and covers the surfaces near the emission aperture, enabling surface production of H − ions. Because the RF must be coupled (with associated losses) into the plasma through ceramic walls, a higher RF power may be required than for internal RF antennas. Another issue is the high inductance of the large solenoid, which can lead to high voltage discharges for high power operations. A separate low power plasma ignition power supply may be required for increased stability. CERN Linac4, Switzerland An upgraded copy of the highly successful DESY external antenna source [14], the CERN Linac4 source ( figure 18) is designed to operate at a higher duty cycle whilst producing more beam [60]. It uses an octopole cusp field and a five-turn external RF antenna to feed 40 kW of 2 MHz RF power into the plasma. Originally intended to be operated without caesium, as at DESY, the required operational parameters could only be met with the introduction of caesium. A two-day stabilisation period is required after installing the ion source, however persistent beam currents have been demonstrated for seven weeks. Unlike the SNS and J-PARC sources which use a continuous low power RF signal to maintain plasma between high power pulses, the CERN source does not need continual low power injection. Instead, because of its low repetition rate of 0.8 Hz, it is possible to provide a sufficiently high pressure hydrogen gas pulse to initiate plasma breakdown each pulse. Since the pulsed hydrogen pressure varies during the plasma pulse, a sophisticated feedback loop is implemented to adjust the RF power during the pulse such that the resultant beam current has a flat top. The downstream LEBT residual gas pressure is also closely monitored as it affects the beam space-charge neutralisation process, which in turn affects beam transmission through the RFQ. Several iterations of post-extraction optics involving an electron dump and einzel lens have been implemented to enhance the beam transport. 
Recent research has focused on understanding the plasma pulse through impressive modelling and spectroscopy measurements, as well as investigating the cause of a higher-than-expected beam emittance. The aim is to push the beam current towards 80 mA from its present value of 45 mA, whilst maintaining a low emittance. Table 1 summarises the performance of the sources detailed in this paper. Discussion Accelerator facilities require the ion source to be very reliable. The ion source should not be the main cause of failure for the accelerator facility as a whole; otherwise questions will be asked. This has caused a large amount of ion source development activity to be focused on long-lifetime sources. Source availability is simply the percentage of time that a source delivers beam when it is scheduled to do so. By scheduling suitably timed maintenance days in an accelerator operations schedule, even sources with relatively short lifetimes can deliver 100% availability. For example, ISIS user cycles are between 4 and 8 weeks in length, with a scheduled maintenance day approximately every 2 to 3 weeks. By changing the source during the maintenance day, when the source is only at around half of its maximum lifetime (∼6 weeks), source availabilities of 100% for a user cycle are often achieved. Although the maintenance days were primarily introduced to change the source, they have also proved useful for overall machine operations because they provide an opportunity for other equipment to be repaired or inspected. The downsides of this approach are: more sources need to be refurbished; no long-term source lifetime statistics can be generated because the sources are never allowed to run to failure; and this approach is only feasible where regular access is available. The requirements of the accelerator facility dictate which source should be used. For DC beams up to 8 mA the obvious choice is an external planar RF antenna volume source because of its 'maintenance free' operation. However, the RF amplifier adds significantly to the cost of the source, so where the accelerator schedule allows it, a filament-driven volume source is a more cost-effective solution for DC beams up to 15 mA. Filaments can be quickly, easily and cheaply replaced. With high discharge powers the volume process can produce at most 40 mA of beam suitable for accelerator applications. The only way to exceed 40 mA of beam current is to add caesium. This instantly adds handling, management and cleaning complications, resulting in higher costs and complexity. With the addition of caesium and a suitable surface production region near the extraction aperture, beam currents of 80 mA can be extracted from combined volume and surface sources. This makes the RF volume and surface sources the best choice for 40-80 mA operation. The highest brightness beams are produced by the Penning and magnetron surface plasma sources. With current densities measured in the A cm−2 range, they are the only sources that can produce beam currents in excess of 100 mA. However, surface plasma sources are fundamentally lifetime-limited by the sputtering rate of the molybdenum electrodes, yielding short lifetimes when operated at high currents and duty factors.
Biosynthesis of dendroketose from different carbon sources using in vitro and in vivo metabolic engineering strategies Background Asymmetric aldol-type C–C bond formation with ketones used as electrophilic receptor remains a challenging reaction for aldolases as biocatalysts. To date, only one kind of dihydroxyacetone phosphate (DHAP)-dependent aldolases has been discovered and applied to synthesize branched-chain sugars directly using DHAP and dihydroxyacetone (DHA) as substrate. However, the unstable and high-cost properties of DHAP limit large-scale application. Therefore, biosynthesis of branched-chain sugar from low-cost and abundant carbon sources is essential. Results The detailed catalytic property of l-rhamnulose-1-phosphate aldolase (RhaD) and l-fuculose-1-phosphate aldolase (FucA) from Escherichia coli in catalyzing the aldol reactions with DHA as electrophilic receptors was characterized. Furthermore, we calculated the Bürgi–Dunitz trajectory using molecular dynamics simulations, thereby revealing the original sources of the catalytic efficiency of RhaD and FucA. A multi-enzyme reaction system composed of formolase, DHA kinase, RhaD, fructose-1-phosphatase, and polyphosphate kinase was constructed to in vitro produce dendroketose, a branched-chain sugar, from one-carbon formaldehyde. The conversion rate reached 86% through employing a one-pot, two-stage reaction process. Moreover, we constructed two artificial pathways in Corynebacterium glutamicum to obtain this product in vivo starting from glucose or glycerol. Fermentation with glycerol as feedstock produced 6.4 g/L dendroketose with a yield of 0.45 mol/mol glycerol, representing 90% of the maximum theoretical value. Additionally, the dendroketose production reached 36.3 g/L with a yield of 0.46 mol/mol glucose when glucose served as the sole carbon resource. Conclusions The detailed enzyme kinetics data of the two DHAP-dependent aldolases with DHA as electrophilic receptors were presented in this study. In addition, insights into this catalytic property were given via in silico simulations. Moreover, the cost-effective synthesis of dendroketose starting from one-, three-, and six-carbon resources was achieved through in vivo and in vitro metabolic engineering strategies. This rare branched-chain ketohexose may serve as precursor to prepare 4-hydroxymethylfurfural and branched-chain alkanes using chemical method. Electronic supplementary material The online version of this article (10.1186/s13068-018-1293-7) contains supplementary material, which is available to authorized users. Background Directed aldol reaction is one of the most powerful carbon-carbon bond-forming procedures in synthetic organic chemistry and enables the concomitant creation of functionalized stereogenic centers and construction of chiral complex polyhydroxylated molecules [1,2]. Catalytic asymmetric addition of carbon nucleophiles (donor) to ketones (acceptor) is a fundamental approach to construct new tetrasubstituted stereogenic carbon centers. This reaction is synthetically efficient to synthesize chiral tertiary alcohols, which are important building blocks of naturally occurring and artificial biologically active molecules [3]. Aldol addition with ketones as electrophilic receptors is extremely challenging compared with the catalytic enantioselective aldol reaction to aldehydes. 
Few successful examples have been shown in catalytic aldol reaction to ketones, and they rely on metal catalysts or highly reactive trichlorosilyl enolate of methyl acetate [4][5][6]. However, all these reactions have a narrow substrate scope and depend on activated ketone acceptors or chiral auxiliaries. Biocatalyzed aldol additions are attractive because this type of reaction occurs under mild conditions. As such, aldolases are particularly compatible as catalysts in the production of chiral compounds due to high selectivity and catalytic efficiency [7,8]. A number of aldolases for catalyzing enantioselective aldol additions taking aldehydes as acceptors have been developed [9]. However, to date, only two aldolases that can use ketones as acceptors have been reported. One is pyruvate-dependent aldolase from Pseudomonas taetrolens, which exhibits the catalytic ability in aldol addition of pyruvate to a ketone acceptor indole-pyruvic acid and has been used in the stereoselective synthesis of a precursor of monatin [10,11]. The other one is l-rhamnulose-1-phosphate aldolase (RhaD) from Bacteroides thetaiotaomicron. This enzyme catalyzes the aldol reaction between DHAP and several ketones (hydroxyacetone, 1-hydroxybutanone, hydroxypyruvate and l-erythrulose) and gave four branched-chain sugars by coupling an acid phosphatase [12]. The development of several kinds of aldolases that can tolerate ketones as electrophilic receptors is necessary in the asymmetric catalysis field. Branched-chain sugars, e.g., dendroketose, which belong to a class of rare sugars, are monosaccharides, which rarely exist in nature [13]. Dendroketose was obtained from the polymerization of two molecules of dihydroxyacetone [14,15]. Typically, dendroketose that contains a tertiary alcohol moiety can be chemically dehydrated to furfural derivatives 4-hydroxymethylfurfural (4-HMF) which showed broad application prospects in preparing fine chemicals [16]. Such conversion is similar to that of fructose to 5-hydroxymethylfurfural (5-HMF) [17]. The synthesis of 2,5-dimethylfuran (DMF) from 5-HMF is a highly attractive route to a renewable fuel [18]. The feasibility of producing 2,4-dimethylfuran (2,4-DMF) or C 9 -C 15 branched-chain alkanes as liquid transportation fuels from 4-HMF has also been demonstrated [19]. Therefore, developing methods that can synthesize branched-chain sugar is meaningful. The asymmetric aldol addition reactions via DHAP-dependent aldolases are particularly striking strategies in preparation of innovative branched-chain sugars due to the direct and rapid creation of molecular complexity in benign environments [7]. DHAP-dependent aldolases exhibit a strict specificity for the donor DHAP [20]. However, compound DHAP is unstable [21] and currently very expensive. These conditions limit their large-scale application. In this work, we investigated the catalytic properties of two DHAP-dependent aldolases with ketones as electrophilic receptors. Moreover, we calculated the Bürgi-Dunitz trajectory using molecular dynamics (MD) simulations to reveal the original sources of the catalytic efficiency. Finally, an in vitro multi-enzyme system was designed, and two in vivo artificial pathways were constructed in Corynebacterium glutamicum to synthesize dendroketose from one-, three-, and six-carbon resources. 
Results and discussion Aldol reactions to DHA catalyzed by DHAP-dependent aldolases DHAP-dependent aldolases have been widely investigated for the synthesis of several new deoxy or phosphorylated sugars and iminocyclitols [22][23][24]. Naturally, this class of enzyme utilizes DHAP as the donor substrate and accepts a broad range of acceptor aldehydes. Well-known members of this class include FucA, RhaD, fructose 1,6-diphosphate aldolase (FruA), and tagatose 1,6-diphosphate aldolase (TagA). To date, only one kind of DHAP-dependent aldolase has been reported to catalyze the aldol reaction between DHAP and ketones and thereby synthesize branched-chain sugars [12]. To investigate whether other kinds of DHAP-dependent aldolases show this catalytic property, RhaD, FucA, TagA and FruA from Escherichia coli were used as candidates to catalyze the aldol reaction between DHAP and DHA. As expected, both FucA and RhaD, in combination with acid phosphatase (AP), enabled the direct aldol addition of DHAP to DHA to form a new compound, with conversions of 17% and 73%, respectively (Fig. 1 and Table 1). RhaD also accepts DHA as the nucleophilic donor [25]. Therefore, we carried out the aldol reaction with DHA as the sole substrate. An identical product was obtained, albeit with a low conversion (7%) and a longer reaction time of 96 h. Naturally, DHAP-dependent aldolases create two new stereogenic centers at C-atoms 3 and 4 [22]. In our work, nuclear magnetic resonance (NMR) analysis showed that the products obtained from both RhaD and FucA have two hydroxymethyl groups at the C-4 carbon, indicating a branched-chain sugar. RhaD and FucA from E. coli have been crystallized and show a strict 3R-stereoselectivity due to mechanistic requirements [26,27]. Therefore, the two enzymes share an identical product, termed dendroketose, which can also be obtained by self-aldolization of DHA using a chemical method [19]. Enzymes TagA and FruA from E. coli failed to catalyze the aldol addition to DHA. Given that RhaD and FucA are class II aldolases, we further measured the catalytic aldol addition to DHA using class I aldolases, namely fructose 6-phosphate aldolase (FSA) and FruA from rabbit muscle (RAMA). No product was detected when using RAMA or FSA. Molecular dynamics simulations provide insights into the catalytic properties of RhaD and FucA To gain insights into the catalytic properties of RhaD and FucA from E. coli, we determined the apparent steady-state kinetic parameters for the reactions of the nucleophile DHAP with the acceptor DHA. Substrate l-glyceraldehyde (l-GAL) was evaluated as the control to compare the catalytic properties when using an aldehyde or a ketone as acceptor. Despite the K_M value of RhaD for DHA (23.4 mM) being lower than that for l-GAL (40.8 mM), the k_cat/K_M value of RhaD for DHA was threefold lower than that for l-GAL (Table 1). These results indicated that DHA binds to RhaD with higher affinity but is turned over with lower catalytic activity than l-GAL. In the case of FucA, the k_cat/K_M value of FucA for DHA was 65-fold lower than that of FucA for l-GAL. This result indicated that the catalytic activity of FucA toward DHA was significantly lower than that toward l-GAL. In terms of k_cat/K_M, the value of RhaD for DHA was 350-fold higher than that of FucA for DHA. This finding suggested that RhaD is much more efficient in catalyzing the aldol reaction between DHAP and DHA than FucA (an illustrative sketch of how such parameters are fitted is given below).
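The apparent K_M and k_cat values discussed above are typically obtained by fitting initial-rate data to the Michaelis-Menten equation. The sketch below is purely illustrative and is not taken from the original study: the rate data, enzyme concentration and units are invented, and only the fitting procedure itself is the point.

```python
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(s, vmax, km):
    """Michaelis-Menten rate law: v = Vmax * [S] / (Km + [S])."""
    return vmax * s / (km + s)

# hypothetical acceptor (DHA) concentrations in mM and measured initial
# rates in mM/min at a fixed, saturating DHAP concentration
s = np.array([2, 5, 10, 20, 40, 60, 80])                   # mM
v = np.array([0.08, 0.17, 0.28, 0.41, 0.52, 0.58, 0.61])   # mM/min (invented)

(vmax, km), _ = curve_fit(michaelis_menten, s, v, p0=(1.0, 20.0))

enzyme_conc_mM = 0.01                 # hypothetical active-site concentration
kcat = vmax / enzyme_conc_mM          # per minute
print(f"Km ~ {km:.1f} mM, kcat ~ {kcat:.1f} min^-1, "
      f"kcat/Km ~ {kcat / km:.2f} mM^-1 min^-1")
```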
According to the K_M value of FucA in Table 1, the failure of FruA and TagA to catalyze the aldol addition to DHA was probably due to very low affinity and catalytic efficiency toward this substrate. Four dimer models of enzyme-substrate complexes (dubbed "FucA-DHAP-DHA", "FucA-DHAP-GAL", "RhaD-DHAP-DHA" and "RhaD-DHAP-GAL") (Fig. 2) were correspondingly subjected to MD simulations at 300 K (to mimic the experimental conditions) for 100 ns to clarify the possible reasons for the distinct kinetic parameters of FucA and RhaD at the molecular level. Root-mean-square deviation (RMSD) and root-mean-square fluctuation (RMSF) analyses suggested that the simulated trajectories of all four systems were stable and reliable (Additional file 1: Figures S1 and S2). In organic chemistry theory, the Bürgi-Dunitz angle (α_BD) describes the trajectory of approach of a nucleophile to an electrophile (Fig. 3a). In addition, the value of α_BD determines whether enzymatic reactions form reactive Michaelis complexes or are arrested. α_BD analysis has proved to be a very helpful method for evaluating whether enzymatic reactions are active or not [28,29]. The ideal α_BD observed in enzymatic reactions involving carbonyl groups (e.g., in transaldolase [30], protease [31], and alcohol dehydrogenase [32]) may well differ from the value determined and calculated in organic chemistry (approximately 105 ± 5°) but will certainly be > 90°. In our study, we analyzed the calculated α_BD values between the nucleophilic DHAP C-atom (C_nu) and the electrophilic sp2 carbonyl (C_el-O) in all four systems (FucA-DHAP-DHA, FucA-DHAP-GAL, RhaD-DHAP-DHA, and RhaD-DHAP-GAL). The catalytic efficiency of FucA and RhaD depends on how often the nucleophile and electrophile are present in properly positioned poses, which is reflected in the calculated α_BD values. The average α_BD value (90.85°) in the FucA-DHAP-DHA system was lower than that with l-GAL as acceptor for FucA (average α_BD = 103.25°) (Fig. 3b, c). Moreover, the average α_BD value (97.43°) in RhaD-DHAP-DHA was higher than that (90.85°) of FucA-DHAP-DHA but lower than that of RhaD-DHAP-l-GAL (Fig. 3d, e). These in silico results were supported by the k_cat values. Construction of an in vitro multi-enzyme system to produce dendroketose from formaldehyde One-carbon compounds, as a low-cost and abundant feedstock, have recently been drawing attention in the energy and chemical fields [33]. The bioconversion of one-carbon compounds into high-value products is under investigation [34,35]. Here, we attempted to construct a multi-enzyme system to synthesize dendroketose in vitro with formaldehyde (FALD) as substrate. This system comprised five enzymes: formolase (FLS), DHA kinase (DhaK), RhaD, fructose-1-phosphatase (YqaB), and polyphosphate kinase (PPK) (Fig. 4). The enzyme FLS is a computationally designed enzyme with benzaldehyde lyase from Pseudomonas fluorescens as the starting point. FLS catalyzes the continuous carboligation of FALD to DHA with thiamine pyrophosphate (TPP) as cofactor [36]. DHA phosphorylation, catalyzed by DhaK with adenosine triphosphate (ATP) as cofactor, yields DHAP. To recycle ATP, an ATP regeneration system based on PPK and polyphosphate was introduced into the reaction system. Enzyme RhaD catalyzed the aldol addition of the resulting DHAP to DHA to give dendroketose-1-phosphate. The latter was then dephosphorylated by YqaB to give dendroketose (Fig. 4). Along this line, the FLS, DhaK from Citrobacter freundii [37], RhaD, YqaB from E.
coli [38], and PPK from Rhodobacter sphaeroides [39] were chosen to construct this multi-enzyme reaction system. The five enzymes were individually expressed in E. coli BL21(DE3). The key information for these enzymes, including the UniProt/GenBank numbers and enzyme activities, is summarized in Table 2. The specific activities of the purified FLS, RhaD and PPK were 0.09, 0.27 and 18.2 U/mg, respectively. In our multi-enzyme system, the conversion of FALD to DHA was the key step. When FLS catalyzes this reaction, the two-carbon intermediate glycolaldehyde (GA) can also be formed alongside DHA [36]. The aldolase RhaD not only converts DHAP and DHA to dendroketose but also catalyzes the aldol addition of DHAP to the substrate FALD and to the intermediate GA, giving d-erythrulose and l-xylulose, respectively [40]. We initially mixed the purified enzymes FLS, DhaK, RhaD, YqaB, and PPK in a one-pot reaction medium. However, a mixture of l-xylulose (68%), d-erythrulose (11%) and dendroketose (31%) was obtained (data not shown). In our previous study, a one-pot, two-stage reaction process was used to decrease byproduct formation and increase the product yield during a multi-enzyme cascade reaction with FALD as the sole substrate [34]. Here, this one-pot, two-stage reaction process was again employed. In the first stage, a high conversion of FALD to DHA catalyzed by FLS should be achieved. In the second stage, the four other enzymes convert the resulting DHA to dendroketose (Fig. 4). Accordingly, in the first stage we mixed FLS (1.5 U, 16 mg) and 0.1 mM TPP with different concentrations of FALD (20, 40, 60, 80, 100, and 200 mM) in the reaction medium and performed the reaction at 30 °C for 16 h. The conversion rate to DHA remained high (> 90%) at FALD concentrations of 20 or 40 mM. However, this rate decreased when the FALD concentration was equal to or higher than 60 mM, and no conversion was observed at 200 mM FALD (Fig. 5a). We further measured the enzyme activity of FLS at different FALD concentrations. The catalytic ability of FLS indeed decreased when the FALD concentration was higher than 60 mM, and the enzyme was completely inactivated at 200 mM. These results indicated that higher FALD concentrations inhibited the enzyme activity of FLS and decreased the formation of DHA. When 40 mM FALD was used, the system produced 12.1 mM DHA with a conversion rate of 91%. This value was calculated from the ratio of DHA formed to the initial concentration of FALD (three FALD equivalents per DHA molecule: 3 × 12.1 mM/40 mM ≈ 91%). GA was not detected in this system. This result was consistent with a previous study in which DHA was the primary product at high concentrations of FALD (> 10 mM) [41]. Table 2. The key information of the enzymes used in the multi-enzyme system. (a) The enzyme FLS is a mutant of benzaldehyde lyase (BAL) (UniProt P51853) carrying the A28I, A394G, G419N, and A480W substitutions. Fig. 5. (a) The influence of FALD concentration on FALD conversion and FLS enzyme activity. The conversion rate was calculated as the ratio of DHA formed to the initial FALD concentration. One unit of enzyme activity was defined as the amount of enzyme catalyzing the formation of 1 μmol of total DHA and GA per min. (b) Time course of the one-pot, two-stage cascade reaction process to synthesize dendroketose from FALD; means and error bars were calculated from triplicate experiments. In the second stage, the individually purified enzymes DhaK, RhaD, YqaB, and PPK, together with 10 mM polyphosphate, 0.5 mM ATP and 5 mM MgSO4, were added into the reaction system.
After reaction for another 24 h, this system produced 5.7 mM (1.03 g/L) dendroketose with a conversion of 86%, calculated from the ratio of dendroketose formed to the initial concentration of FALD (six FALD equivalents per dendroketose molecule: 6 × 5.7 mM/40 mM ≈ 86%) (Fig. 5b). We also detected a small amount of d-erythrulose (~ 0.5 mM), which was derived from the aldol reaction between DHAP and FALD. In this multi-enzyme system, the activity of FLS toward FALD was low, limiting DHA production. A combination of mutational hot-spot analysis and site-saturation mutagenesis has previously increased the catalytic efficiency of FLS toward acetaldehyde by 72.9% [42]. This protein engineering strategy shows potential for improving the catalytic efficiency of FLS toward FALD and thereby further increasing the production efficiency of dendroketose in the multi-enzyme system. Pathway design and strain engineering to produce dendroketose from glycerol Glycerol, a major byproduct of the biodiesel industry, is considered an abundant and cost-effective feedstock for the production of value-added bioproducts [43][44][45]. This carbon resource is especially suitable for the production of dendroketose because the conversion of glycerol to DHAP and DHA requires only two enzymatic reactions, catalyzed by glycerol dehydrogenase (GDH) and DhaK, respectively (Fig. 6). Here, we investigated the production of dendroketose with glycerol as feedstock through metabolic engineering of C. glutamicum, a Gram-positive soil bacterium with generally-recognized-as-safe status. In a previous study, the aldol reaction pathway (pXRTY) based on RhaD and YqaB [33] and the glycerol assimilation pathway (pEFDK) based on GlpF, glycerol dehydrogenase from Klebsiella pneumoniae (DhaD) and DhaK had been combined in the wild-type strain, resulting in strain WT(pXRTY/pEFDK) [46]. This strain was initially cultured in CGXII minimal salt medium containing 220 mM glycerol and 110 mM DHA. After fermentation for 48 h, 5.6 g/L (31.1 mM) dendroketose was produced and the DHA was completely consumed. We still detected 174.5 mM glycerol in the medium. As a result, the dendroketose yield was 0.68 mol/mol glycerol, calculated as the ratio of dendroketose formed to glycerol consumed (Table 3). When this strain was cultured with 220 mM glycerol as the sole feedstock, the production decreased sevenfold, probably because more carbon flux was directed into biomass and less into the desired product. The enzyme triosephosphate isomerase (TPI) catalyzes the isomerization of DHAP to glyceraldehyde 3-phosphate (Ga3P) [47] and thereby directs carbon flux to biomass. Blocking this reaction would increase DHAP accumulation and decrease biomass formation (Fig. 6). Along this line, we constructed strain SY6(pXRTY/pEFDK) by transferring plasmids pXRTY and pEFDK into strain SY6, in which the gene tpi had been eliminated. When this strain was cultivated in CGXII medium with 220 mM glycerol as the sole carbon source, 1.2 g/L (6.7 mM) dendroketose was produced. This value was 50% higher than that of strain WT(pXRTY/pEFDK). However, the production was still far from satisfactory. A second carbon source permitting the generation of ATP and nicotinamide adenine dinucleotide for cell growth should be co-utilized to increase the efficiency of glycerol assimilation. Accordingly, strain SY6(pXRTY/pEFDK) was cultured in BHI-rich medium containing the same concentration of glycerol.
After fermentation for 48 h, this strain produced 6.4 g/L (35.6 mM) dendroketose, while 141 mM glycerol remained in the medium (Fig. 7a and Table 3). The maximum theoretical yield for dendroketose production with glycerol as substrate is 0.5 mol/mol, since two glycerol molecules are required per dendroketose molecule. In this work, the yield reached 0.45 mol/mol glycerol (35.6 mM dendroketose per (220 − 141) = 79 mM glycerol consumed), representing 90% of the maximum theoretical value. We also detected a small amount of DHA (4.6 mM) in the medium. Metabolic engineering of dendroketose production from glucose Glucose, the most commonly used feedstock, can be converted to DHAP in vivo via the glycolytic pathway. The enzyme HdpA from C. glutamicum is an HAD-superfamily phosphatase and catalyzes the dephosphorylation of DHAP to DHA [48]. In this case, both DHAP and DHA could be obtained from glucose. In this study, we attempted to synthesize dendroketose with glucose as the carbon resource through strain engineering of C. glutamicum (Fig. 6). First, TPI had to be inactivated to efficiently accumulate DHAP in vivo. Second, the gene hdpA had to be overexpressed. Third, the artificial aldol pathway combining the aldolase RhaD and YqaB from E. coli had to be introduced to convert the accumulated DHAP and DHA to dendroketose. Accordingly, we constructed an engineered C. glutamicum strain, SY6(pXRTYH), wherein the gene tpi was deleted and the genes of RhaD, YqaB, and HdpA were overexpressed via plasmid pXRTYH (Table 4). Similarly, strain SY6(pXFucTYH) was constructed by replacing RhaD with FucA. We did not detect dendroketose production in strain WT(pXRTYH) because of the scarce accumulation of DHAP in the wild-type strain [33]. Fermentation of strain SY6(pXRTYH) gave 36.3 g/L of dendroketose with a yield of 0.46 mol/mol glucose within 24 h (Fig. 7b and Table 3), whereas strain SY6(pXFucTYH) only produced 8.2 g/L of dendroketose (Additional file 1: Figure S4). The production could be further increased by fermentation optimization. Fig. 6. Pathway design and strain engineering in C. glutamicum to synthesize dendroketose from glycerol and glucose. glpF glycerol facilitator, dhaD glycerol dehydrogenase, dhaK ATP-dependent dihydroxyacetone kinase, G6P glucose 6-phosphate, F6P fructose 6-phosphate, F16P fructose 1,6-bisphosphate, Ga3P d-glyceraldehyde 3-phosphate, pgi encoding glucose-6-phosphate isomerase, pfkA encoding 6-phosphofructokinase, fba encoding fructose-bisphosphate aldolase, tpi encoding triosephosphate isomerase, hdpA encoding an HAD superfamily phosphatase, rhaD encoding l-rhamnulose-1-phosphate aldolase, fucA encoding l-fuculose-1-phosphate aldolase, yqaB encoding fructose-1-phosphatase. Conclusion In summary, the detailed catalytic properties of RhaD and FucA from E. coli in direct aldol addition using DHA as the electrophilic acceptor were characterized. The Bürgi-Dunitz trajectory calculated by MD simulations was applied to reveal the difference in catalytic efficiency between these two aldolases. Furthermore, we provided a green and environment-friendly biocatalytic approach to synthesize the rare branched-chain sugar dendroketose directly from FALD with a high conversion rate. The synthesis of dendroketose from the renewable feedstocks glucose or glycerol using engineered strains was also achieved with high titre and yield. The routes developed in this study represent a low-cost way of producing dendroketose. Chemical dehydration of dendroketose yields 4-HMF, which serves as a potential platform molecule for preparing certain biofuels and fine chemicals.
Bacterial strains, plasmids and materials Compounds DHAP, DHA, l-GAL, AP from potato, isopropyl-β-d-thiogalactopyranoside (IPTG), polyphosphate, ATP, l-fructose, l-tagatose, glycerol, glucose, and antibiotics were purchased from Sigma-Aldrich. All restriction enzymes and DNA ligase were purchased from Novagen (Darmstadt, Germany). The Ni-NTA affinity chromatography column was purchased from QIAGEN. The yeast extract and tryptone were purchased from OXOID Ltd, and brain heart infusion (BHI) was purchased from Becton, Dickinson and Company. All bacterial strains and plasmids are listed in Table 4. Fig. 7. (a) Fermentation of strain SY6(pXRTY/pEFDK) to produce dendroketose from glycerol. The strain was cultivated in BHI-rich medium containing 220 mM glycerol. (b) Fermentation of strain SY6(pXRTYH) to produce dendroketose from glucose. The initial glucose concentration was 220 mM, and an additional 220 mM glucose was supplemented into the reaction medium after 6 h to increase the production. Means and error bars were calculated from triplicate experiments. Vectors and strains construction The genes fucA from E. coli and dhaK from Citrobacter freundii were amplified from genomic DNA and cloned into pET-21a(+) to obtain pET21-FucA and pET21-DhaK, respectively. The plasmid pET21-PPK containing the gene ppk from Rhodobacter sphaeroides was kindly provided by Professor Chun You in our institute. For the construction of plasmids pXRTYH and pXFucTYH, the gene hdpA was amplified from the C. glutamicum 13032 genome. The amplified fragments were ligated into the previously constructed plasmids pXRTY [40] and pXFucTY [46] at the SmaI and SacI sites to obtain pXRTYH and pXFucTYH, respectively. The constructed plasmids were then electroporated into the recombinant strain SY6, in which the gene tpi had been eliminated, to generate strains SY6(pXRTYH) and SY6(pXFucTYH). Plasmid pEFDK, containing the genes glpF, dhaD and dhaK [46], and pXRTY were co-transformed into strain SY6 to obtain SY6(pXRTY/pEFDK). Recombinant protein expression and purification Escherichia coli BL21(DE3) strains harboring expression plasmids were cultured at 37 °C in 1 L LB medium containing 100 mg/L ampicillin to an optical density (OD600) of 0.6. Then, 0.5 mM IPTG was added to the culture to induce protein expression, and the temperature was lowered to 16 °C to avoid inclusion body formation. After incubation for an additional 20 h, cells were harvested, washed twice and suspended in 50 mM triethanolamine (TEA) buffer (pH 7.5). The suspended cells were then lysed by sonication and centrifuged at 14,000×g and 4 °C for 10 min. The clear supernatant was collected and loaded onto a Ni2+-NTA agarose column pre-equilibrated with binding buffer (50 mM TEA buffer, 300 mM NaCl, 20 mM imidazole, pH 7.5). The retained proteins were recovered with elution buffer (50 mM TEA buffer, 300 mM NaCl, 300 mM imidazole, pH 7.5). The eluted fraction containing purified protein was dialyzed to remove buffer, salt and imidazole. The purified enzymes were freeze dried using a vacuum pump and stored at − 20 °C. Enzyme activity assay The activity of PPK was assayed in a reaction mixture. After the reaction at 30 °C for 45 min, the reaction was stopped by the addition of 10% H2SO4 (0.5 μL). One unit of enzyme activity was defined as the amount of enzyme catalyzing the formation of 1 μmol of total DHA and GA per min. Steady-state kinetic parameters of RhaD and FucA toward DHA and l-GAL Reaction: aldol addition of DHAP to l-GAL.
To a solution containing freshly neutralized DHAP (60 mM) and RhaD (0.05 mg powder) in 50 mM TEA buffer pH 7.5 at 25 °C, different amounts of l-GAL (0.2, 0.5, 2, 5, 10, 20, 40, 60 mM) were added. The final volume was 400 μL. Samples (40 μL) were withdrawn at different times (0, 2, 5, 10, 20, 30 min) and the reaction was stopped by the addition of 10% H 2 SO 4 (0.5 μL). Samples were then analyzed by HPLC to measure the l-GAL consumption. One mmol of l-GAL consumed was equivalent to 1 mmol of l-fructose-1-phosphate formed. Aldol reactions with DHAP and DHA as substrates The reaction mixture (1 mL) contained freshly neutralized 50 mM DHAP solution, 50 mM DHA, 50 mM TEA buffer (pH 7.5) and RhaD (1 mg) or FucA (2 mg). The reaction mixture was transferred to a 1.5-mL Eppendorf tube and shaken at 25 °C and 120 rpm for 24 h. Then, the pH of the mixture was adjusted to 4.5-5.5 using 10% H 2 SO 4 , and 2 U AP was supplemented. The dephosphorylation reaction was performed at 30 °C for another 24 h. Molecular modeling Models of the dimer structures of FucA complex and RhaD complex were generated as follows: the monomer structure of FucA is derived from the previous study (PDB code 4FUA). However, there is no available polymer structure of FucA by searching the PDB database. TM-align program [49] was used to search for FucA homologies; in the top 10 hits ranked by TM-score, 2OPI crystalized in polymers was used as template. In the case of RhaD, the X-ray structure (PDB code 1GT7) was directly used as the basis for dimer creation of RhaD. The coordinates of the donor DHAP and the acceptor DHA/ L-GAL in constructed dimers were superimposed with those from the PDB codes 1OJR and 4FUA. Based on the catalytic mechanism of class II Aldolase, a specific residue in the adjacent monomer (Tyr113′ in FucA, and Glu171′ in RhaD) plays a key role on the protonation of the carbonyl oxygen of ketone acceptors. Therefore, dimer models of enzyme-substrate complexes (dubbed as "FucA-DHAP-DHA", "FucA-DHAP-GAL", "RhaD-DHAP-DHA" and "RhaD-DHAP-GAL") were built to reflect the catalytic mechanism. During the simulations, a constant force of 10 kcal/mol between the nucleophile C-atom and the electrophile C-atom was constructed via the consideration of Van der Waals' force. To estimate the stability of 100 ns trajectories of the four systems, root-mean-square deviation (RMSD) for all C α atoms was analyzed and no significant structure difference was observed. Furthermore, rootmean-square fluctuation (RMSF) which could reflect the stability of individual residue of protein was also evaluated; most residues are stable except for the terminations between two monomers. Molecular dynamics (MD) simulations The initial structures used for MD simulation were obtained from modeling analysis. Each apo-protein was protonated at pH 7.5 using H++ webserver. The Amber ff14SB force field was employed for the protein in all the MD simulations [50]. Na + ions were added to neutralize the system, and the TIP3P water model was used to solvate each system, ensuring a solvent layer of at least 10 Å from any point on the protein surface. Charges and parameters for ligands were generated with the Antechamber module using the AM1-BCC charge model along with the amber GAFF force field. The force field of zinc ion and its neighboring atoms (cutoff was set as 2.8 Å) were parameterized using 'MCPB.py' modeling, using a hybrid bonded/restrained nonbonded model. 
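The article reports average α_BD values from these trajectories but does not spell out the geometric calculation in the Methods. Below is a minimal, hypothetical sketch of how the Bürgi-Dunitz angle could be evaluated per frame once the relevant atomic coordinates (the nucleophilic DHAP carbon and the acceptor carbonyl C and O) have been exported from the production trajectories, e.g. with a tool such as cpptraj; the coordinate array shown is fabricated for illustration.

```python
import numpy as np

def burgi_dunitz_angle(c_nu, c_el, o_el):
    """Angle (degrees) at the electrophilic carbonyl carbon between the
    nucleophilic carbon and the carbonyl oxygen, i.e. the Nu...C=O angle."""
    v1 = c_nu - c_el
    v2 = o_el - c_el
    cosang = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

# coords[frame] = (xyz of DHAP nucleophilic C, xyz of acceptor carbonyl C,
#                  xyz of acceptor carbonyl O); here a two-frame stand-in
coords = np.array([
    [[0.0, 0.0, 3.2], [0.0, 0.0, 0.0], [1.2, 0.0, -0.4]],
    [[0.3, 0.1, 3.0], [0.0, 0.0, 0.0], [1.2, 0.1, -0.3]],
])
angles = [burgi_dunitz_angle(*frame) for frame in coords]
print(f"mean alpha_BD over {len(angles)} frames: {np.mean(angles):.1f} deg")
```

Averaging such per-frame angles over the 500 analysis frames mentioned below would give values directly comparable to the average α_BD numbers quoted in the Results.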
As a result, the three histidine residues (H92, H94 and H155 in FucA, and H141, H143 and H212 in RhaD) were attached to zinc ion by coordinate bonds, whereas the two oxygen atoms of donor were attached to zinc ion by applying harmonic restraint (100 kcal/mol). After proper parameterizations and setup, the resulting system's geometries were minimized (5000 steps for steepest conjugate and 5000 steps for conjugate gradient) to remove poor contacts and relax the system. The systems were then annealed from 0 to 300 K (≈ 27 °C) to mimic experimental temperature under the constant amount of substance (N), volume (V) and temperature (T) (NVT ensemble) for 50 ps. Subsequently, the systems were maintained for 25 ps of density equilibration under constant amount of substance (N), pressure (P) and temperature (T) (NPT ensemble) at constant temperature of 300 K and pressure of 1.0 atm using Langevin-thermostat (ntt = 3) with collision frequency of 2 ps −1 and pressure relaxation time of 1 ps. The heating and density equilibrations were carried out with a weak restraint of 20 kcal mol −1 Å −2 performed on all the residues. The systems were further equilibrated for 250 ps to get well settled pressure and temperature for conformational and chemical analyses. After proper minimizations and equilibrations, a productive MD run of 100 ns was performed for each system. During all MD simulations, the covalent bonds containing hydrogen were constrained using SHAKE algorithm [51], with a MD time step of 2 fs. The trajectory file was written every 1000 steps. All the above MD simulations were performed with GPU version of Amber 16 package. The generated trajectories (interval = 200, a total of 500 frames for each case) were used for the relative binding energy evaluation. Shake flask scale cultivation For precultivation of recombinant strain, a single clone was grown in 5 mL of BHI medium. After incubation for approximately 15 h, cells were inoculated into a 500-mL shake flask containing 100 mL BHI medium and cultivated at 25 °C in a rotatory shaker at 220 rpm. When the cell OD 600 reached 0.8, 1 mM IPTG was added to induce enzyme expression. Subsequently, the cells were harvested by centrifugation (8000×g, 10 min, 4 °C) and were suspended in CGXII medium [52]. Then, 50 mL cells were transferred into a 250-mL shake flask with an initial OD 600 of approximately 30. When appropriate, 10 mg/L chloramphenicol and 25 mg/L kanamycin were added. The fermentation process was carried out at 30 °C and 200 rmp. If glycerol and DHA were used as substrates, the concentration of glycerol and DHA was assigned to 220 mM and 110 mM, respectively. To produce dendroketose from glucose, 220 mM glucose was supplemented into the medium. For the fed-batch fermentation, 220 mM glucose was supplemented again into the medium after fermentation for 6 h. Samples were collected every 2 h and centrifuged at 14,000×g for 20 min. The resulting supernatants were analyzed by HPLC. The desired product was separated by a chromatographic column filled with Ca 2+ ion exchange resin, identified by a refractive index detector and then collected by a fraction collector. The purified products were analyzed by NMR. Analytical methods Cell density was determined by measuring the optical density at 600 nm (OD 600 ) with a UV-Vis spectrophotometer (TU-1901, Persee, Beijing, China). Cell dry weight (CDW, g/L) of E. 
coli was calculated from OD 600 values using the experimentally determined correlation factor of 0.25 g cells (dry weight [DW]) per liter for an OD 600 of 1. Protein concentrations were determined by the Bradford method using bovine serum albumin as a standard. An HPLC system (Agilent 1100 series, Hewlett-Packard) equipped with a refractive index detector and fitted with a chromatographic column (Bio-Rad Aminex HPX-87H or Waters Sugar-Pak I) was used for qualitative and quantitative analysis of substrates and products.
Additional file
Additional file 1: Figure S1. Root mean square deviations (RMSD) measured during 100 ns MD simulation. Figure S2. Root mean squared fluctuations (RMSF) measured during 100 ns MD simulation. The terminations
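As a side note to the analytical methods above, the OD 600-to-biomass conversion and the substrate feed concentrations are simple linear conversions. The short Python sketch below uses the 0.25 g/L-per-OD 600 factor and the 220 mM glucose feed quoted in the text; the glucose molar mass is the standard value, and the OD 600 = 30 example mirrors the initial cell density used for fermentation.

def cdw_from_od600(od600, factor_g_per_l_per_od=0.25):
    """Convert an OD600 reading to cell dry weight (g/L).
    The default factor (0.25 g DW per litre per OD600 unit) is the
    experimentally determined correlation quoted in the text for E. coli."""
    return factor_g_per_l_per_od * od600

def substrate_g_per_l(conc_mM, molar_mass_g_per_mol):
    """Convert a substrate concentration from mM to g/L."""
    return conc_mM / 1000.0 * molar_mass_g_per_mol

print(f"CDW at OD600 = 30: {cdw_from_od600(30):.1f} g/L")                 # 7.5 g/L
print(f"220 mM glucose feed: {substrate_g_per_l(220, 180.16):.1f} g/L")   # ~39.6 g/L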
Some Remarks on Energy inequalities for harmonic maps with potential In this note we discuss how several results characterizing the qualitative behavior of solutions to the nonlinear Poisson equation can be generalized to harmonic maps with potential between complete Riemannian manifolds. This includes gradient estimates, monotonicity formulas and Liouville theorems under curvature and energy assumptions. Introduction and Results One of the most studied partial differential equations for a scalar function u : R n → R is the Poisson equation, that is ∆u = f, where f : R → R is some given function. If one allows the function f to also depend on u, that is ∆u = f (u), one calls the equation nonlinear Poisson equation. Following the terminology from the literature we call a solution u ∈ C 3 (R n ) of the nonlinear Poisson equation entire. For entire solutions of the nonlinear Poisson equation the following results characterizing its qualitative behavior have been obtained: (1) Suppose that F ∈ C 2 (R) is a nonnegative function that is a potential for f , that is F ′ (u) = f (u). Let u be an entire, bounded solution of the nonlinear Poisson equation. Then the following energy inequality holds [16] |∇u| 2 ≤ 2F (u). (1.1) Such kind of inequalities became known as Modica-type estimates. (2) Making use of the Modica-type estimate (1.1) the following Liouville theorem was given [16,Theorem 1]: Suppose F ∈ C 2 (R) is a nonnegative function that is a potential for f and u an entire, bounded solution of the nonlinear Poisson equation. If F (u(x 0 )) = 0 for some x 0 ∈ R n , then u must be constant. (3) Again, making use of the Modica-type estimate, the following monotonicity formula has been established in [1]. Let u : R n → R be an entire, bounded solution of the nonlinear Poisson equation. Then the following monotonicity formula holds d dr where B r (x) denotes the ball around the point x ∈ R n with radius r. (4) Another kind of Liouville theorem was achieved in [19]: Suppose that u is an entire solution of ∆u = f (u, Du). If ∂f ∂u ≥ 0 and both u and Du are bounded, then u must be constant. (5) Recently, a maximum principle has been established for solutions of ∆u = ∇F (u) in the vector valued-case [3,2], that is u : A ⊂ R n → R m , where A ⊂ R n is some domain. Here it is assumed that the potential F vanishes at the boundary of a closed convex set. In this note we focus on the study of a geometric generalization of the nonlinear Poisson equation, which leads to the notion of harmonic maps with potential. To this end let (M, h) and (N, g) be two Riemannian manifolds, where we set n = dim M . For a smooth map φ : M → N we consider the Dirichlet energy of the map, that is M |dφ| 2 dM . In addition, let V : N → R be a smooth scalar function. We consider the following energy functional The Euler-Lagrange equation of the functional (1.3) is given by where τ (φ) ∈ Γ(φ * T N ) denotes the tension field of the map φ. Note that in contrast to the Laplacian acting on functions the tension field of a map between Riemannian manifolds is a nonlinear operator. Solutions of (1.4) are called harmonic maps with potential. We want to point out that motivated from the physical literature one defines (1.3) with a minus sign in front of the potential. Harmonic maps with potential have been introduced in [9]. It is shown that due to the presence of the potential, harmonic maps with potential can have a qualitative behavior that differs from the one of harmonic maps. 
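For orientation, it may help to display the scalar estimate (1.1) and one common way of writing the energy functional for harmonic maps with potential. In the second line the sign in front of the potential is an assumption on our part (the text remarks that the physical literature places a minus sign there), and the Euler-Lagrange equation shown is the one belonging to that choice:

\[
|\nabla u|^{2} \le 2F(u) \qquad \text{for entire bounded solutions of } \Delta u = f(u), \; F' = f, \; F \ge 0,
\]
\[
E_{V}(\varphi) = \int_{M} \Big( \tfrac{1}{2}|d\varphi|^{2} - V(\varphi) \Big)\, dM,
\qquad
\tau(\varphi) = -(\nabla V)\circ\varphi .
\]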
Existence results for harmonic maps with potential have been obtained by the heat flow method [10], [6] under the assumption that the target has negative curvature. In addition, an existence result for harmonic maps with potential from compact Riemannian manifolds with boundary was obtained in [5], where it is assumed that the image of the map lies inside a convex ball. Besides the aforementioned existence results there also exist several Liouville theorems for harmonic maps with potential. For a compact domain manifold M these were derived by the maximum principle under curvature assumptions in [9,Proposition 4]. A Liouville theorem for harmonic maps with potential from a complete noncompact Riemannian manifold and the assumption that the image of the map φ lies inside a geodesic ball is given in [4]. A monotonicity formula for harmonic maps with potential together with several Liouville theorems was derived in [14]. For functions on Riemannian manifolds several generalizations of the Modica-type estimate (1.1) have been established, see [15], [17]. These results hold under the assumption that the manifold has positive Ricci curvature. However, is was also noted that estimates of the form (1.1) do not hold if we consider vectorvalued functions [11], [13]. It is the aim of this article to discuss if the results obtained for the nonlinear Poisson equation stated in the introduction still hold when considering harmonic maps with potential. This article is organized as follows: In Section 2 we discuss in which sense the Modica-type estimate (1.1) for solutions of the nonlinear Poisson equation can be generalized to harmonic maps with potential between complete Riemannian manifolds. In the last section we will give a Liouville theorem for harmonic maps with potential under curvature and boundedness assumptions. Energy inequalities for harmonic maps with potential Before we turn to deriving energy inequalities let us make the following observation: If we want to model the trajectory of a point particle in a curved space, we can make use of harmonic maps with potential from a one-dimensional domain, which are just geodesics coupled to a potential. To this end we fix some interval I and consider a curve γ : I → N that is a solution of (1.4), which in this case reads Here ′ represents the derivative with respect to the curve parameter, which we will denote by s. For a curve γ satisfying this equation the total energy is conserved, that is This can easily be seen by calculating d ds where we used the equation for harmonic maps with potential in the last step. This fact is well-known in classical mechanics, that is the mechanics of point particles governed by Newton's law. The total energy consists of the sum of the kinetic and the potential energy and it is conserved when the equations of motion are satisfied. However, if the dimension of the domain M is greater then one, we cannot expect that a statement about the conservation of the total energy will hold in full generality. We will make of the following Bochner formula for a map φ : M → N , that is Here e i , i = 1, . . . , n is an orthonormal basis of T M . Throughout this article we make use of the Einstein summation convention, that is we sum over repeated indices. In addition, by the chain rule for composite maps we find where we used that φ is a solution of (1.4) in the second step. 
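Before turning to the P-function technique, the one-dimensional conservation computation sketched above can be written out explicitly. This is a sketch under the Newtonian sign convention ∇_{γ'}γ' = −(∇V)∘γ for the geodesic-with-potential equation; with the opposite sign convention the conserved quantity is ½|γ'|² − V(γ) instead:

\[
\frac{d}{ds}\Big( \tfrac{1}{2}|\gamma'|^{2} + V(\gamma) \Big)
= \langle \nabla_{\gamma'}\gamma', \gamma' \rangle + \langle (\nabla V)\circ\gamma, \gamma' \rangle
= \langle -(\nabla V)\circ\gamma + (\nabla V)\circ\gamma, \gamma' \rangle = 0 .
\]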
In order to obtain the Modica-type estimate (1.1) for solutions of the scalar nonlinear Poisson equation one makes use of the so-called P-function technique, which heavily makes use of the maximum principle. The generalization of the P -function to harmonic maps with potential is given by Unfortunately, it turns out that the P-function does not satisfy a "nice" inequality in the case of harmonic maps with potential. Lemma 2.2. Let φ : M → N be a smooth harmonic map with potential. Then the P -function (2.3) satisfies the following inequality Proof. Using the Bochner-formulas (2.1), (2.2) a direct calculation yields In addition, we apply the Kato-inequality and find yielding the result. Let us make some comments about (2.4): (1) If the target has dimension one, then the last two terms on the right hand side in (2.4) cancel each other. In this case one can successfully apply the maximum principle under the assumption that the domain has positive Ricci curvature giving rise to the Modica-type estimate (1.1). (2) If dim N ≥ 2, then the last two terms in (2.4) will no longer cancel each other. Moreover, it is well known by counterexamples, see [20,Section 2] and references therein, that one cannot expect to obtain a Modica-type estimate in the case that dim N ≥ 2. Since we cannot derive energy inequalities by making use of the techniques that were developed for solutions of the scalar nonlinear Poisson equation, we will apply ideas that were used to derive gradient estimates and Liouville theorems for harmonic maps between complete Riemannian manifolds [8]. Here, one assumes that the image of the map φ lies inside a geodesic ball in the target. 2.1. Gradient estimates for harmonic maps with potential. In the following we will make use of the following Proof. This follows from the Bochner formula (2.1) and the identity d|dφ| 2 2 ≤ 4|dφ| 2 |∇dφ| 2 . Now we fix a point x 0 in M and by r we denote the Riemannian distance from the point x 0 . Let η : N → R be a positive function. On the geodesic ball B r (x 0 ) in M we define the function Clearly, the function F vanishes on the boundary B a (x 0 ), hence F attains its maximum at an interior point x max . We can assume that the Riemannian distance function r is smooth near the point x max , see [7,Section 2]. In the following we will apply the Laplacian comparison theorem, see [12, p. 20], that is with some positive constant C L . Moreover, we make use of the Gauss Lemma, that is |dr| 2 = 1. Proof. At the maximum x max the first derivative of (2.6) vanishes, yielding Applying the Laplacian to (2.6) at x max gives Squaring (2.8) we find (2.10) Inserting (2.5) and (2.10) into (2.9) and using the Gauss Lemma we get the claim. To obtain a gradient estimate from (2.7) for noncompact manifolds M and N we have to specify the function η. First, we choose a function η that is adapted to the geometry of the target manifold motivated by a similar calculation for harmonic maps between complete manifolds [8]. Let ρ be the Riemannian distance function from the point y 0 in the target manifold N . We define with some positive number √ d to be fixed later, where B R (y 0 ) denotes the geodesic ball of radius R around the point y 0 in N . We will assume that R < π/(2 √ d), thus 0 < ξ(R) < √ d on the ball B R (y 0 ). Lemma 2.6. On the geodesic ball B R (y 0 ) we have the following estimate 12) where g denotes the Riemannian metric on N . 
We will also make use of the following fact: If c 1 x 2 − c 2 x − c 3 ≤ 0 for c i > 0, i = 1, 2, 3, then the following inequality holds x ≤ max{2c 2 /c 1 , 2 c 3 /c 1 }. (2.14) where the positive constant C 2 depends on the geometry of N . Proof. We choose the function ξ defined in (2.11) and insert it for η in (2.7). By the Hessian comparison theorem (2.12) we find where we also used that φ is a harmonic map with potential. In addition, making use of the assumption on the image of φ(M ), there exists a positive constant C 2 such that holds. Inserting this into (2.7) we find The claim then follows from (2.13). Corollary 2.8. Under the assumptions of Theorem 2.7 we can take the limit a → ∞ while keeping the point x 0 in M fixed and obtain the estimate If M has positive Ricci curvature and if the potential V (φ) is concave, then the following inequality holds which can be interpreted as a Modica-type estimate for harmonic maps with potential. There is another way how we can obtain a gradient estimate from (2.7), by assuming that the potential V (φ) has a special structure. More precisely, we have the following Theorem 2.9. Let φ : M → N be a smooth harmonic map with potential. Suppose that the Ricci curvature of M satisfies Ric M ≥ −A and that the sectional curvature K N of N satisfies K N ≤ B. Moreover, assume that the potential V satisfies Then the following energy estimate holds (2.15) The constant C 3 depends on the geometry of N . Proof. We make use of the formula (2.7), where we now choose η(φ) = V (φ). Making use of the assumptions on the potential V (φ) we note that In addition, again by the assumptions on the potential V (φ), we get for some positive constant C 3 . Inserting into (2.7) then yields The statement follows from applying (2.13) again. If M has nonnegative Ricci curvature then φ is trivial. 2.2. Generalized Monotonicity formulas. In the following we will make use of the stressenergy-tensor for harmonic maps with potential, which is locally given by The stress-energy-tensor is divergence-free, when φ is a smooth harmonic map with potential [14], that is ∇ i S ij = 0. Let us recall the following facts: A vector field X is called conformal if where L denotes the Lie-derivative of the metric h with respect to X and f : M → R is a smooth function. Lemma 2.11. Let T be a symmetric 2-tensor. For a conformal vector field X the following formula holds div(ι X T ) = ι X div T + 1 n div X Tr T. (2.17) By integrating over a compact region U and making use of Stokes theorem, we obtain: Lemma 2.12. Let (M, h) be a Riemannian manifold and U ⊂ M be a compact region with smooth boundary. Then, for any symmetric 2-tensor and a conformal vector field X the following formula holds where ν denotes the normal to U . We now derive a type of monotonicity formula for smooth solutions of (1.4) for the domain being R n . Lemma 2.13. Let φ : R n → N be a smooth harmonic map with potential. Let B r (x) be a ball with radius r in R n . Then the following formula holds r ∂Br(x) Proof. For M = R n we choose the conformal vector field X = r ∂ ∂r with r = |x|. Note that div X = n. The statement then follows from (2.12) applied to (2.16). Making use of the coarea formula we obtain the following Theorem 2.14. Let φ : R n → N be a smooth harmonic map with potential. Let B r (x) be a ball with radius r in R n . Then the following formula holds Corollary 2.15. Let φ : R n → N be a smooth harmonic map with potential. Suppose that V (φ) ≤ 0. 
Then we have the following monotonicity formula d dr Note that this monotonicity formula is different from the one for solutions of the nonlinear Poisson equation (1.2) since we do not have a Modica-type estimate for harmonic maps with potential. Monotonicity formulas for harmonic maps with potential with the domain being a Riemannian manifold have been established in [14]. The results presented above also hold for harmonic maps with potential that have lower regularity. To this end we need the notion of stationary harmonic maps with potential. Definition 2.16. A weak harmonic map with potential is called stationary harmonic map with potential if it is also a critical point of the energy functional with respect to variations of the metric on the domain M , that is Here k ij is a smooth symmetric 2-tensor. Every smooth harmonic map with potential is stationary, which is due to the fact that the associated stress-energy-tensor is conserved. However, a stationary harmonic map with potential can have lower regularity. For stationary harmonic maps with potential we have the following result generalizing [1, Theorem 3.1]: N ) be a harmonic map with potential. Suppose that M = R n , H n with dim M ≥ 3 and M (|dφ| 2 + |V (φ)|)dM < ∞, then the following inequality holds In particular, this implies that φ is constant when V (φ) ≤ 0. Proof. We will prove the result for the case that M = R n . Let η ∈ C ∞ 0 (R) be a smooth cut-off function satisfying η = 1 for r ≤ R, η = 0 for r ≥ 2R and |η ′ (r)| ≤ C R . In addition, we choose Y (x) := xη(r) ∈ C ∞ 0 (R n , R n ) with r = |x|. Hence, we find Inserting this choice into (2.19) we obtain We can bound the right-hand side as follows Making use of the properties of the cut-off function η we obtain Taking the limit R → ∞ and making use of the assumptions we find which finishes the proof for the case that M = R n . Making use of the Theorem of Cartan-Hadamard the proof carries over to hyperbolic space. Remark 2.18. The last Theorem can be interpreted as an integral version of (1.1) for bounded harmonic maps with potential. Now we derive a generalized monotonicity formula for harmonic maps with potential, where we take into account the pointwise gradient estimate (2.15). Theorem 2.19. Let φ : R n → N be a smooth harmonic map with potential. Suppose that the Hessian of the potential V satisfies − Hess V ≥ −A V and that the sectional curvature where the positive constant C depends on B. Proof. Throughout the proof we set Making use of the coarea formula and rewriting (2.18) we find Applying (2.15) we obtain the following inequality from which we get the claim. Let us make several comments on Theorem 2.19: Remark 2.20. (1) The monotonicity type-formula (2.20) can be interpreted as the generalization of (1.2) to harmonic maps with potential. (2) It is straightforward to generalize (2.18) to the case of the domain being a Riemannian manifold. A Liouville theorem In this section we derive a Liouville theorem for harmonic maps with potential from complete noncompact manifolds with positive Ricci curvature. Our result is motivated from a similar result for harmonic maps, see [18,Theorem 1]. In addition, this result also generalizes the Liouville theorem for solutions of the nonlinear Poisson equation [19], which is stated in detail in the introduction. Making use of the curvature assumptions and the fact that the potential is a concave function, (3.1) yields ∆e(φ) ≥ |∇dφ| 2 . 
We therefore obtain We set B′_R := B_R \ {x ∈ B_R | e(φ)(x) = 0} and find Letting ε → 0 we get Now, letting R → ∞ and under the assumption that the energy is finite, we have hence the energy e(φ) has to be constant. If e(φ) were a nonzero constant, then the volume of M would have to be finite. However, by [22, Theorem 7] the volume of a complete and noncompact Riemannian manifold with nonnegative Ricci curvature is infinite. Hence e(φ) = 0, which yields the result.
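The concluding step of this argument can be summarized along the following lines (a sketch assuming that the finiteness hypothesis is ∫_M e(φ) dM < ∞, as the statement of the theorem suggests): if e(φ) ≡ c for some constant c > 0, then

\[
c \cdot \operatorname{vol}(M) \;=\; \int_{M} e(\varphi)\, dM \;<\; \infty ,
\]

so M would have finite volume, contradicting [22, Theorem 7]; hence c = 0 and φ is constant.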
The Structure, Activity, and Function of the SETD3 Protein Histidine Methyltransferase SETD3 has been recently identified as a long sought, actin specific histidine methyltransferase that catalyzes the Nτ-methylation reaction of histidine 73 (H73) residue in human actin or its equivalent in other metazoans. Its homologs are widespread among multicellular eukaryotes and expressed in most mammalian tissues. SETD3 consists of a catalytic SET domain responsible for transferring the methyl group from S-adenosyl-L-methionine (AdoMet) to a protein substrate and a RuBisCO LSMT domain that recognizes and binds the methyl-accepting protein(s). The enzyme was initially identified as a methyltransferase that catalyzes the modification of histone H3 at K4 and K36 residues, but later studies revealed that the only bona fide substrate of SETD3 is H73, in the actin protein. The methylation of actin at H73 contributes to maintaining cytoskeleton integrity, which remains the only well characterized biological effect of SETD3. However, the discovery of numerous novel methyltransferase interactors suggests that SETD3 may regulate various biological processes, including cell cycle and apoptosis, carcinogenesis, response to hypoxic conditions, and enterovirus pathogenesis. This review summarizes the current advances in research on the SETD3 protein, its biological importance, and role in various diseases. Introduction One of the most common posttranslational modifications that modulates the physicochemical properties of proteins and determines their functional diversity, is the transfer of a methyl group from S-adenosyl-L-methionine (AdoMet) to their specific amino acid residues [1]. The primary target sites of methylation are lysine and arginine. However, this process may also occur on other amino acids, namely, cysteine, glutamate, glutamine, and histidine [2]. Decades of research into lysine and arginine methylation on histone tails have led to a fairly good understanding of the importance of such modifications in the epigenetic regulation of gene expression. Furthermore, it has become clear over time that a large number of nonhistone proteins may also be methylated at lysine and arginine residues, which may affect cellular physiology in mammals [2]. On the other hand, our knowledge about the mechanisms and biological significance of methylation on "noncanonical" amino acids has remained surprisingly limited. This seems particularly true for protein histidine. Histidine methylation on the Nπ or Nτ atom of the imidazole ring has been known for many years, but the process has so far been studied in greater detail only for a few proteins, including actin [3], S100A9 [4], myosin [5], MLCK2 [6], and RPL3 [7] (Figure 1). This fact is also indicated by the slow progress of research on actin histidine methylation. Reactions catalyzed by protein histidine N-methyltransferases. At pH ≈ 7, two neutral tautomers of histidine residues may exist in proteins: the N1-protonated π-tautomer and the N3-protonated τ-tautomer. Data show that different protein histidine methyltransferases catalyze the transfer of a methyl group from S-adenosyl-L-methionine (AdoMet) to specific nitrogen of the imidazole ring. HPM1, SETD3, METTL9, and METTL18 are the only enzymes characterized with this activity so far. AdoHcy-S-adenosyl-L-homocysteine. 
The actin cytoskeleton, which is involved in a variety of central cellular processes, such as cell growth, division, and motility, has long been known to undergo different posttranslational modifications [8]. In 1967, Johnson and colleagues isolated actin from various vertebrate species, and demonstrated that Nτ-methylhistidine is a natural component of this protein and a product resulting from enzymatic methylation [9]. A similar finding was reported by Asatoor and Armstrong [10]. Later, attempts were made to determine the amino acid sequence around methylhistidine in skeletal muscle actin [11] and establish the biochemical importance of methylation in actin functions [12]. By the late 1970s, it was confirmed that only a single histidine residue in actin is Nτ-methylated, and the residue is located precisely at position 73 of the amino acid sequence [13]. However, it was only in 1987 that the presence of actin histidine methyltransferase in the myofibrillar fraction of rabbit muscle was shown for the first time [14]. The advent of recombinant DNA technology allowed better characterization of a partially purified rabbit enzyme by using nonmethylated recombinant actin which was heterologously expressed in Escherichia coli and a synthetic peptide corresponding to residues 69-77 of actin [15]. In addition, it was also proved that rabbit skeletal muscle is a source of two different histidine methyltransferases. The first of these enzymes was specific for actin, while the second one-carnosine N-methyltransferase-converts carnosine (β-alanyl-L-histidine) into anserine (β-alanyl-Nπ-methyl-L-histidine) dipeptides, which are abundantly present in mammalian skeletal muscle. The carnosine-methylating enzyme was later identified as the UPF0596 protein, in eukaryotes [16]. Finally, pioneering studies carried out in 2002, employing actin monomers in methylated or nonmethylated forms, revealed that the methylation of actin at histidine 73 (H73) may facilitate its polymerization [3]. However, since these results were based on a functional comparison of actin monomers isolated from two different species-Saccharomyces cerevisiae and cow-their interpretation was difficult and the biological significance of such modification was uncertain. Only recently, a putative histone lysine methyltransferase, SETD3, has been identified as actin specific histidine N-methyltransferase, and shown to regulate cytoskeleton assembly and modulate smooth muscle contractility [17,18] (Figure 1). This finding encouraged the scientific community to conduct more systematic searches for novel protein histidine methyltransferases and their substrates. Indeed, it was recently found that METTL9 methyltransferase acts as a broad specificity enzyme, catalyzing the formation of the majority of Nπ-methylhistidine residues in the human proteome, including S100A9 and NDUFB3 proteins [19]. This was also confirmed by Lv and colleagues, who established that METTL9 recognizes an xHxH motif in substrate proteins [20], whereas proteomic studies indicated that the motif is mainly present in human proteins that are methylated at histidine residues [21]. Moreover, the human METTL18 enzyme was shown to Nτ-methylate histidine 245 in ribosomal protein RPL3 [22,23], and, thus, resembles its yeast homolog HPM1 protein [7,24]. Histidine methylation has now been found to be prevalent in human cells, involving hundreds of intracellular proteins, which implies that the human proteome may contain several unidentified protein histidine methyltransferases [21]. 
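Taking the xHxH motif mentioned above at face value (any residue, histidine, any residue, histidine), scanning a protein sequence for it is straightforward; the snippet below is a minimal Python sketch, and the example sequence is a made-up placeholder rather than a real METTL9 substrate.

import re

def find_xhxh(sequence):
    # Return 0-based start positions of overlapping x-H-x-H matches
    # (any residue, His, any residue, His).
    return [m.start(1) for m in re.finditer(r"(?=(.H.H))", sequence)]

print(find_xhxh("MAHSHGHAHDKL"))   # -> [1, 3, 5]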
In this review, we discuss the current advances in research on the SETD3 protein that were stimulated by its identification as the first protein histidine N-methyltransferase in metazoans and the renewed interest in histidine methylation as an important mechanism regulating protein functions. The Structural Features of SETD3 SETD3 has a core SET domain (Su(var)3-9, Enhancer-of-zeste (E(z)), and Trithorax (Trx)), which is found in various proteins. In Drosophila melanogaster, all these genes code for proteins engaged in posttranslational modifications of histone H3 and transcriptional regulation: (i) Su(var)3-9 encodes [histone H3]-lysine(9) N-methyltransferase (EC 2. . The SET domain is typical for enzymes exhibiting methyltransferase activity, and, as indicated by the names of the above mentioned enzymes, the presence of this domain is often associated with methyltransferase activity on lysine residues within the protein substrate. Indeed, SETD3 was initially identified as histone lysine N-methyltransferase [25,26], although the enzyme was shown to function as an actin specific histidine N-methyltransferase [17,18]. Interestingly, a follow up study by Dai et al. [27] demonstrated that the substitution of histidine by methionine in the actin derived peptide increases its affinity for the SETD3 protein by 76-fold. On the other hand, the substitution of lysine with methionine at K27 and K36 residues was found in histone H3.3 [28,29]. At present, the oncogenic effects of these substitutions are primarily linked with the perturbation of proper lysine methylation [30]. However, the results of Dai et al. [27] suggest that SETD3 in vivo may act as a methionine methyltransferase. Domain Architecture The human SETD3 protein (NCBI Protein: NP_115609.2) consists of 594 amino acid residues and has a molecular weight of 67.26 kDa. In addition to the well characterized isoform 1, there are two isoforms containing 296 and 286 amino acids, respectively. The structural characteristics described hereafter refer to isoform 1. SETD3 has a 250-residue long SET domain (residues 80-329) which ensures specific recognition of the actin derived peptide, and most probably, the actin molecule itself. This domain is larger than a typical SET domain due to the presence of an inserted region (residues 131-254), designated as iSET. The regions that are responsible for AdoMet binding are located within the SET domain (residues 105-106, 275-279, and 313). Structural studies conducted in recent years have revealed the actual interactions occurring between SETD3 and S-adenosyl-homocysteine (AdoHcy), which is a product of AdoMet demethylation [18,31], or sinefungin (SFG; adenosyl ornithine), which is an AdoMet analog lacking the ability to transfer a methyl group [32,33] and anticipated as a binding site of AdoMet. The residues 350-475 of SETD3 are folded into a domain that structurally resembles the RuBisCO LSMT (large subunit methyltransferase) substrate binding domain [31]. In LSMT, the substrate binding domain interacts specifically with the RuBisCO large subunit [34,35]. Thus, it seems that the LSMT substrate binding domain present in the SETD3 protein may be involved in the recognition and binding of protein substrates, although experimental data supporting this hypothesis are scarce. The N-terminal and C-terminal regions (residues 1-22 and 549-594, respectively) of the SETD3 protein are considered to be disordered ( Figure 2A). 
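The domain boundaries given above lend themselves to a simple annotation table. The following Python sketch encodes the residue ranges quoted in the text; the lookup helper and the example queries are only illustrative.

# Residue ranges for human SETD3 isoform 1 (594 aa), as described above.
SETD3_DOMAINS = {
    "N-terminal disordered region": (1, 22),
    "SET domain":                   (80, 329),
    "iSET insertion":               (131, 254),   # inserted region within the SET domain
    "RuBisCO-LSMT-like domain":     (350, 475),
    "C-terminal disordered region": (549, 594),
}

# AdoMet/AdoHcy-binding residues listed in the text (all within the SET domain).
ADOMET_BINDING = [105, 106, *range(275, 280), 313]

def domains_at(residue):
    """Return the names of all annotated regions containing a residue (1-based)."""
    return [name for name, (start, end) in SETD3_DOMAINS.items() if start <= residue <= end]

print(domains_at(200))   # ['SET domain', 'iSET insertion']
print(domains_at(313))   # ['SET domain']; 313 is also listed in ADOMET_BINDING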
Several single amino acid substitutions can significantly influence the catalytic activity and/or specificity of SETD3. For example, Guo et al. [31] reported that R215A and R316A reduced the affinity of protein histidine N-methyltransferase for the actin derived peptide substrate, and decreased the enzyme activity. A similar effect of diminished affinity to the actin derived peptide and lower enzyme activity was also found to be triggered by N256A and N256V substitutions [31,32], although the lowest binding affinity was observed with N256D substitution [31]. This finding suggests that the presence of a negative charge at this position may have a detrimental effect on substrate binding. However, the mentioned substitutions allow SETD3 to bind the variants of actin derived peptides with amino acid substitutions within the target sequence, and catalyze the methylation of lysine or methionine, as indicated above [27]. A different substitution at the same amino acid residue (N256F), in combination with W274A substitution, was also shown to trigger protein lysine methyltransferase activity to an actin derived peptide variant containing lysine, instead of histidine, in the target sequence [27]. Wilkinson et al. [18] observed that Y313A substitution affected the activity of SETD3 protein histidine N-methyltransferase, while Y313F substitution, which only removed the hydroxyl group present in the ortho position on the benzene ring, strongly decreased the binding of protein histidine N-methyltransferase to the actin fragment, as well as the enzyme activity [31]. This implies that the hydroxyl group of Y313 is critical for the proper recognition of the substrate by the SETD3 protein, and its catalytic activity. Structure The 3D structures of SETD3 in complex with an unmethylated or methylated actinderived peptides were successfully determined by applying the X-ray diffraction crystallography technique. Both structures were solved using crystals containing AdoHcy, which was added to the buffer to prevent methylation of a peptide substrate. AdoHcy is one of the products of this reaction and occupies the catalytic pocket of the enzyme, thus preventing the binding of AdoMet [18,31]. Another approach involves the use of SFG, which fits into the catalytic pocket as AdoMet but does not transfer the methyl group [32,33]. AdoHcy (and also most probably AdoMet) interacts with SETD3 in a cleft formed by the SET domain, which is additionally supported by a fragment of the iSET domain ( Figure 2B). Its adenine ring is located between the side chain of E104 and the aromatic ring of F327. The AdoHcy N6 and N7 atoms are supported by hydrogen bonds formed with the main chain carbonyl and amide groups of H279, respectively, while its C8 atom forms a hydrogen bond with the hydroxyl group of Y313 [31]. The mode of interaction of AdoHcy with SETD3 is analogous to that observed in other SET containing enzymes, such as LSMT [34] and SETD6 [39]. The peptide substrate derived from β-actin interacts with SETD3 in a narrow cleft formed by the SET domain including the iSET region-in the same cleft where AdoHcy is located. However, the peptide substrate for histidine methylation is located at the lowest part of a wider cleft on the surface of SETD3. This spacious cleft might serve as an interaction site for larger unidentified protein substrates, together with the RuBisCO LSMT substrate binding domain ( Figure 2B). 
The methylated H73 residue of β-actin fits into a hydrophobic pocket formed by W274, I311, and Y313 of SETD3 [31] (Figure 3). The imidazole ring of H73 is aligned parallel to the aromatic ring of tyrosine 313. Its orientation is determined by two hydrogen bonds-one formed between the N1 and N3 atoms of the imidazole ring and another between the guanidino group of R316 and the carbonyl group in the main chain of N275 [31]. According to a recent study, the substrate binding pocket of SETD3 is charged in a way that corresponds to the surface charge of the actin fragment fitting to it, which also contributes to the proper alignment of the substrate to the enzyme [33]. Interestingly, the β-actin derived peptide adopts a 3 10 helix at its C-terminus only when H73 is methylated. However, the overall structure of the complex is very similar to that before methylation, which is confirmed by a root mean square deviation of 0.19 and 0.32 Å over protein and peptide Cα atoms, respectively [31]. SETD3 structural investigations support the notion that the enzyme is primarily a histidine N-methyltransferase [17,18], and not a lysine N-methyltransferase, as it was initially classified [25,26]. The key argument for this is that the substrate-binding site of the SETD3 protein fits very well to the β-actin peptide, but it might be too shallow for the stable binding of the aliphatic side chain of a lysine residue. On the other hand, the wide cleft present above the substrate-binding pocket may allow the interaction of SETD3 with other protein substrates. It is worth noting, though, that substitutions of N256 in SETD3 to other amino acid residues influence the substrate-binding affinity and/or specificity. Importantly, in the case of structurally similar SET-domain-containing (SETD) enzymes, such as LSMT or SETD6, this position may contain a phenylalanine residue, which is responsible for enzyme interaction with the lysine side chain present in substrate proteins [31]. These findings substantiate the reclassification of SETD3 as a histidine N-methyltransferase. Paralogs The existence of SETD3 paralogs is still unknown. However, based on the amino acid sequence, it can be suggested that the SETD4 protein, with 40% similarity and 24% identity, may be considered as a potential paralog. SETD4 is a histone lysine Nmethyltransferase (EC 2.1.1.364), which catalyzes the methylation of histones H3 and H4 at K4 and K30 residues, respectively. It was reported that this enzyme regulates cell proliferation, differentiation, inflammatory response, and heterochromatin formation [40]. The domain structure of SETD4 resembles that of SETD3. Although the amino acid sequence of SETD4 is shorter than that of SETD3 and contains only 440 residues, the SET domain consisting of 226 residues is in the central part of the protein (residues 48-273). The N-terminus of SETD4 is also disordered (residues 1-24), similar to SETD3. In order to analyze the potential structural and functional convergence of SETD3 and SETD4, we predicted the structure of human SETD4 using the AlphaFold algorithm [41]. Interestingly, three out of the five residues participating in substrate binding in SETD3 (described below) are conserved in SETD4. Moreover, the Y313 residue, which ensures the appropriate alignment of the imidazole ring of histidine substrate in SETD3, is structurally conserved in SETD4 as Y272 ( Figure 4). 
This may signify that SETD4 shows potential SETD3-like protein histidine N-methyltransferase activity, although no experimental evidence is available to confirm this hypothesis. . Structural alignment of SETD3 amino acid residues interacting with H73 of β-actin and conserved residues of SETD4. The image was created in UCSF Chimera 1.15 software utilizing the coordinates deposited in Protein Data Bank file 6ICV and the SETD4 structure predicted by AlphaFold [41] using UniProt Q9NVD3 record as an input. Structural alignment was calculated using the MatchMaker tool in UCSF Chimera 1.15 software [36]. Notably, the overall fold of SETD3 is highly similar to that of RuBisCO LSMTs and SETD6, both of which are validated protein lysine methyltransferases. However, SETD3 has low sequence identity with RuBisCO LSMTs and SETD6 (24-25%) [31]. Therefore, these enzymes cannot be listed as closely related paralogs of SETD3, but it can be concluded that the fold of SETD3 is not unique. The Biochemical Features of SETD3 For many years, SETD methyltransferases were exclusively considered as enzymes responsible for the methylation of specific lysine residues at histone proteins and thereby for maintaining and altering the histone code [42]. Nevertheless, this viewpoint gradually changed as more number of nonhistone substrates for SETD methyltransferases were discovered [43]. Not surprisingly, SETD3 was initially thought as an enzyme that catalyzes the modification of histone H3 at K4 and K36 residues and regulates muscle cell differentiation in mice [26]. This was later confirmed by Chen and colleagues [44], who, however, also suggested that SETD3 might act on other nonhistone substrates in the cytoplasm, as the enzyme contains RuBisCO LSMT substrate-binding domain. Once the consensus on its role as a lysine-methylating enzyme began to take shape, SETD3 was identified as a long sought, actin specific histidine N-methyltransferase that catalyzes H73 methylation in the actin protein of metazoans [17,18] (cf. Figures 1 and 5). This discovery was made by two independent research groups with their own dedicated research strategy. Studies performed in our laboratory [17] were based on the extensive purification of the native rat enzyme from leg muscles, using different chromatographic methods, and the subsequent molecular identification of the enzyme by tandem mass spectrometry. After two independent and slightly different rounds of purification, SETD3 methyltransferase was found as the only logical candidate for the enzyme. This discovery was then confirmed by generating recombinant homogenous rat and human SETD3 and determining their actin histidine-methylating activity. Finally, an analysis of SETD3 deficient D. melanogaster larvae and the human HAP1 knockout (KO) cell line proved that actin did not undergo histidine methylation in both the examined sources [17]. At the same time, Wilkinson and colleagues [18] analyzed previous evidence supporting the substrate specificity of SETD3 and questioned whether histones were appropriate substrates for this enzyme. To identify the proteins that are methylated by SETD3, recombinant human wild-type and catalytically inactive variants of SETD3 were prepared and incubated with a total cytoplasmic extract of human HT1080 cells in the presence of [ 3 H]AdoMet. Autoradiography analysis revealed that the only detected band corresponded to a protein with a molecular weight of ≈42 kDa. Then, using mass spectrometry, the potential substrates were purified and identified. 
The most likely candidates were produced in E. coli and tested as SETD3 substrates in vitro. It was observed that only actin was methylated by the enzyme. The specific actin residues modified by SETD3 were identified by tandem mass spectrometry. Unexpectedly, no lysine methylation events were detected on the actin protein, and instead, the H73 residue was unambiguously identified as the sole target of SETD3 [18]. The actin molecule consists of small and large domains (red and blue, respectively), and each one is divided further into two subdomains: 1, 2, and 3, 4, respectively. ATP (or ADP) binds to the cleft between subdomains 2 and 4. The methyl-accepting H73 is located in a sensor loop spanning P70 to N78 (green). This residue is exposed to the surface of the actin monomer and seems to be easily accessible for SETD3. The model was prepared using UCSF Chimera [36] from the Protein Data Bank structures of β-actin (2BTF). Actin In vitro and in vivo experiments have proven that actin is the only known bona fide substrate of SETD3. There are three main isoforms of this protein-α, β, and γ-which differ only by a few amino acids at their N-terminus [45]. Under physiological conditions, actin exists as a 42-kDa monomeric globular protein (G-actin) that binds ATP and spontaneously polymerizes into relatively stable filaments (F-actin). The G-actin molecule consists of small and large domains, which are further subdivided into subdomains 1, 2, and 3, 4, respectively ( Figure 5). The cleft between subdomains 2 and 4 is occupied by ATP or ADP. The methyl-accepting H73 residue is located in a sensor loop (P70 to N78), inserted between subdomains 1 and 2. The residue is exposed to the surface of the actin monomer and can thus be easily accessed by SETD3 ( Figure 5). The activity of SETD3 on actin has, so far, been studied using two different substrates: homogenous recombinant human β-actin produced in E. coli and an array of synthetic peptides of varying lengths, corresponding to the sensor loop of actin. Of note, full length recombinant actin monomers were purified from bacterial inclusion bodies in denaturing conditions and refolded into a nucleotide free state that represents a quasinative and nonphysiological form of this protein [17]. As actin requires eukaryotic chaperonins for correct folding, it cannot be produced in its native form in bacteria [46]. Radiochemical studies employing quasinative actin and [ 3 H]AdoMet revealed the high affinity of human SETD3 toward both substrates with at least 60-and 300-fold lower K M values (≈0.8 and ≈0.1 µM) than their intracellular concentrations, respectively [17]. The enzyme was also found to exhibit slow activity with a k cat value of about 0.7 min −1 , which seems to be typical for methyltransferases acting on protein residues [47]. More interestingly, a comparison of the activity of SETD3 on either recombinant actin produced in E. coli or protein produced in S. cerevisiae, indicated that the enzyme catalyzed the methylation of only nucleotide free actin from bacteria. Thus, the yeast produced protein, which was nonmethylated due to the lack of SETD3 homolog in S. cerevisiae and expected to have a native conformation, could not serve as a substrate for SETD3 unless it was purified in the nucleotide free form [17]. Based on these results, it was interpreted that SETD3 may act on a specific form of actin monomers, plausibly nucleotide free actin, in a complex with one or more actin-binding proteins of unknown identity. 
This hypothesis is consistent with the current knowledge about SETD methyltransferases. Many of these enzymes form complexes with different proteins, and those interactions are important for their catalytic activity and substrate specificity [42]. Structural and biochemical studies using actin peptides have provided valuable data on the substrate binding and catalytic mechanism of SETD3. It was reported that actinderived peptides bind in a long groove at the surface of the SET domain of the enzyme, with the H73 residue located within the active site pocket [31,32] ( Figure 2B). The affinity of binding was found to increase with increasing peptide length (K M = 8.7 mM and 21 µM for 9-residue and 15-residue peptide, respectively) [17,32]. However, those peptides containing H73M or H73K mutation were still methylated at position 73 [27,48], which suggests that peptide recognition is mainly sequence specific, rather than targeted residue (histidine)specific, and, thus, SETD3 can target proteins other than actin, at residues other than histidine. Moreover, the substrate specificity of SETD3 can be altered by engineering critical amino acids in its active site. Only recently, a mutated variant of SETD3 harboring N256F and W274A substitutions was shown to exhibit a 13-fold higher affinity for lysine over histidine [48]. Other Substrates Studies on SETD3 employing peptide substrates allowed insight into the structural basis of H73 methylation and the catalytic reaction. However, it should be noted that this peptide based approach is a simplification. In fact, such a research model explores only local interactions occurring within the catalytic domain of SETD3, and ignores the entire spectrum of interactions occurring between the enzyme, particularly its RuBisCO LSMT substrate-binding domain, and the protein substrate. Thus, it is not unwise to speculate that RuBisCO LSMT is mainly responsible for controlling the substrate specificity of SETD3, and the enzyme may accept more substrates than only actin. Previous reports based on radiochemical assays have also shown that mammalian core histones, particularly histone H3, were the substrates for SETD3 [25,26,44]. However, such an activity of the enzyme was not detected in other works [18]. This apparent discrepancy might be explained by different sources of nucleosomes used in enzymatic assays. It seems that SETD3 may act on the isolated native nucleosomes [26,44], but not on recombinant ones [18] or free histone octamers [44]. If true, the targeted amino acid residue(s) must be verified, as data supporting H3 methylation at K4 and K36 sites [25,26] are unconvincing [18]. Finally, Cohn and coworkers [49] have shown that human SETD3 interacts with about 170 different intracellular proteins, including actin, which suggests that there may be many other substrates for this enzyme in mammalian cells. Inhibitors Although the M73-containing peptide is a poor substrate for SETD3, it has been found to exhibit strong affinity to the enzyme and inhibit the methylation of the H73 peptide. Based on this observation, actin based peptidomimetics that act as effective substrate competitive inhibitors of human SETD3 were developed [50]. These are 16-residue-long analogs of the actin peptide (66)(67)(68)(69)(70)(71)(72)(73)(74)(75)(76)(77)(78)(79)(80)(81), in which the H73 residue is substituted by a simple natural or non-natural amino acid. 
Among an array of tested peptide analogs, selenomethioninecontaining actin peptide was identified as the most potent inhibitor of the human enzyme, with an IC 50 value of 0.16 µM. Reaction Mechanism The imidazole ring of the histidine residue contains two nitrogen atoms at different positions: 1 (π) and 3 (τ) (Figure 1). These nitrogen atoms can be protonated, resulting in the formation of an imidazolium cation, and each of them can subsequently release a proton to produce a different imidazole tautomer (Figure 1). Both fully protonated and tautomeric forms of the imidazole side chain are believed to be present at physiological pH ≈ 7 in proteins [51]. Similar to other AdoMet dependent methyltransferases, SETD3 appears to catalyze a conventional S N 2 methylation reaction, in which the methyl group of AdoMet is transferred to the deprotonated Nτ nitrogen [32] (Figure 6). To facilitate this reaction, the side chain of N256 of the enzyme stabilizes the Nπ nitrogen of the substrate H73 residue in the protonated form, whereas the lone electron pair present at the deprotonated Nτ attacks the methyl group of AdoMet. This model of SETD3 catalysis is consistent with the findings that (i) the enzyme has an optimum pH of 7 and above for H73 methylation (pKa of 6.5 for histidine imidazole) [31], whereas a K73-containing actin peptide is readily methylated only at a pH above 9.5 (pKa of 10.5 for lysine side chain) [52], and (ii) the substitution of N256 by amino acids that cannot form a hydrogen bond with the protonated Nπ nitrogen results in a reduction or complete loss of SETD3 activity toward H73 residue [48]. Tissue Distribution and Intracellular Localization The SETD3 protein or its orthologs are present in most of the eukaryotic organisms, including vertebrates (Homo sapiens, Mus musculus), plants (Vitis vinifera), insects (Onthophagus taurus, D. melanogaster), and fungi (but not in S. cerevisiae) [17]. The profile of SETD3 expression in humans shows relatively low tissue specificity (Figure 7). The SETD3 mRNA is ubiquitously expressed at a similar basal level in most examined tissues, with the noticeable exception of the skeletal muscle, kidneys, and testes. The widespread expression of the enzyme is consistent with its function as an actin histidine methyltransferase because actin proteins are found in virtually all cells. The expression of STED3 has been shown to be highest in muscles, which is not surprising given the fact that muscle fibers are abundant in actin filaments [54]. This finding is also in good agreement with the enzymatic data, indicating the skeletal muscle as a rich source of actin specific histidine methyltransferase [14]. On the other hand, the augmented expression of SETD3 in kidneys and testes is more puzzling. It could be hypothesized that increased SETD3 expression is related to actin, which is an important protein in these two organs. It is well known that the dynamic remodeling of the actin cytoskeleton is important for efficient mammalian spermatogenesis [55] and for maintaining the functional structure of renal podocytes [56]. However, it cannot be ruled out that higher SETD3 expression in kidneys and testes is due to the role of this enzyme in the methylation of substrates other than actin. The intracellular localization of SETD3 is not well defined yet. Initial studies proposed that the enzyme is localized in the nucleus [26,49]. However, the enzyme was clearly detected in the cytosol [57] and mitochondria of mammalian cells [53]. 
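Returning to the reaction mechanism described earlier in this section, the pKa argument can be made quantitative with a back-of-the-envelope Henderson-Hasselbalch calculation. The sketch below treats each side chain as a simple monoprotic group, which is a simplification, and uses the pKa values quoted in the text.

def fraction_deprotonated(ph, pka):
    """Fraction of a titratable group in its deprotonated form,
    from the Henderson-Hasselbalch relation: 1 / (1 + 10**(pKa - pH))."""
    return 1.0 / (1.0 + 10 ** (pka - ph))

# pKa values quoted above: ~6.5 for the H73 imidazole, ~10.5 for a lysine side chain.
for label, pka in [("His imidazole (pKa 6.5)", 6.5), ("Lys side chain (pKa 10.5)", 10.5)]:
    f7 = fraction_deprotonated(7.0, pka)
    print(f"{label}: {100 * f7:.2f}% deprotonated at pH 7.0")
# His: ~76% deprotonated at pH 7, so a lone pair is available for the SN2 attack on AdoMet.
# Lys: ~0.03% deprotonated at pH 7, consistent with lysine-peptide methylation only above pH 9.5.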
Biological Effect of Actin Methylation by SETD3 It is now clear that SETD3 is mainly actin histidine methyltransferase, and actin is its most important physiological substrate. However, the exact role of actin methylation is not clear. Polymerization of Actin The presence of actin filaments ensures the stable structure and internal movement of cells [58]. β-Actin is the main cytoskeleton protein [59]. Actin polymerization involves nucleation, elongation, and steady state phases [60], and closely correlates with the concentration of actin monomers. Monomers are stabilized by ATP or ADP binding, but neither dimer nor trimer is stable and are therefore present in an extremely low concentration in the intracellular environment. The oligomer is only partially protected by the addition of four subunits [58]. Actin polymerization is followed by the hydrolysis of ATP to ADP and phosphate [61], which results in the polarity of actin filaments. The pointed end (-) of the actin filament is disassembled more freely, ensuring the presence of subunits that are added at the opposite, barbed end (+). Thus, there exists a balance between filament shortening and elongation [45] (Figure 8). Furthermore, it is well established that the remodeling of filaments requires many different proteins, including myosin, cofilin, profilin, capping proteins, or the Arp2/3 complex. These proteins, for example, promote phosphate dissociation in F-actin or nucleotide exchange in its G form [58]. Methylation of the actin protein at H73 also seems to be implicated in its remodeling, indicating the biological importance of the SETD3 activity. During the steady state phase of polymerization, ADP-actin complexes dissociate from the pointed end (-) of the filamentous actin. This is followed by nucleotide exchange (from ADP to ATP) and, consequently, ATP-actin associates mainly at the barbed end (+). ATP hydrolysis allows the translocation of subunits between the ends of the filament [45]. SETD3 is found to promote actin polymerization through H73 methylation [18]. Effect of Actin H73 Methylation Studies performed in the last 50 years attempted to elucidate the importance of H73 methylation in actin. Initially, it was indicated that such methylation is neither obligatory nor necessary for the proper functioning of actin [12,62]. Furthermore, actin with H73 substitutions by arginine or tyrosine residues was shown to polymerize as effectively as the nonmutated protein [62]. By contrast, a recent study revealed that lack of actin methylation affected the stability of actin monomers in SETD3-KO cells. The instability of actin monomers might lead to the accelerated depolymerization of actin fibers, and a loss of cytoskeleton integrity [17]. However, Wilkinson [18] reported that the methylation of actin promotes its polymerization, but without any impact on depolymerization. Thus, further research is needed to better understand the effect of H73 methylation on the stability of actin filaments. The Cellular Roles of SETD3 and Association with Signaling Pathways SETD3 is located mainly in the cytosol, and β-actin is the only cytosolic substrate described for this enzyme so far. However, it seems likely that the enzyme also acts on other substrates. Based on a proteomic approach, it was identified that more than 150 proteins, including cytoskeleton and signal proteins, receptors, hydrolases, and transcription factors, interact with SETD3 [49]. 
Therefore, it has been postulated that the enzyme may play a role in various biological processes, including myocyte differentiation [26], maintaining cytoskeleton integrity [17], cell cycle regulation and apoptosis [25], response to hypoxic conditions [49], carcinogenesis [44], and enterovirus (EV) pathogenesis [63]. The Functions of Cytosolic SETD3 In addition to its contribution to maintaining cytoskeleton integrity, SETD3 was shown to be involved in the pathogenesis of some EVs [63]. Although several studies have been performed on EVs, the precise mechanisms promoting their replication in target cells are unknown. It was shown that the formation of viral particles was diminished in SETD3-KO cells compared to wild type cells, which indicates that the enzyme supports the replication of viral genomes [63]. More interestingly, the level of replication in cells expressing the catalytically inactive SETD3 mutant was found to be in the control range, suggesting that the methyltransferase activity is not pivotal to viral multiplication. On the other hand, SETD3 was identified to strongly interact with viral protease 2A, and this interaction depends on the presence of both SET and RuBisCO LSMT domains in the enzyme structure [63]. It is well known that viral protease 2A, in combination with protease 3C, is essential for the completion of the EV life cycle. Neither the cleavage of the polyprotein into structural proteins during the replication cycle of EVs, nor the cleavage of the host protein, can occur without the activity of these proteases [64]. Moreover, they are implicated as possibly involved in suppressing stress and antiviral IFN-α/β responses [65]. These findings shed new light on the biological significance of the SETD3 protein, and highlight it as crucial for the successful reproduction of some EVs. Other Postulated Functions of the SETD3 Protein Attempts have been made to explore the potential role of SETD3 in carcinogenesis [44,[66][67][68]. The available information collectively suggests the importance of SETD3 in the development and progression of cancer [44,49], as discussed in the next section. The other assumed functions of SETD3, including myocyte differentiation, response to hypoxia, and cell cycle regulation, are attributed to the implied histone methylation by this enzyme or its nuclear localization. As the first proposed activity of SETD3 was H3 methylation, its role in the epigenetic regulation of chromatin was also considered [25,26]. The abundant presence of SETD3 in muscles has been indicated to induce myocyte differentiation. In C2C12 or H9c2 cells, the overexpression of SETD3 activated the transcription of MCK, Myf6, and myogenin genes, which code for proteins involved in myocyte differentiation, whereas SETD3 knockdown was found to inhibit the differentiation of muscle cells. Nevertheless, the transcriptional activation of muscle-related genes by SETD3 needs to be confirmed by further research [26]. It has also been reported that the transcription factor FoxM1 is bound and methylated by SETD3 in vitro [49]. FoxM1 is crucial for the self renewal and proliferation of cells [69]. This is in line with the observation that SETD3 strongly interacted with FoxM1 at chromatin in normoxia, but its association with FoxM1 was weaker under hypoxic conditions. Fur-thermore, SETD3, along with FoxM1, regulated the expression of VEGF. 
The dissociation of both SETD3 and FoxM1 from the VEGF promoter was suggested to increase VEGF expression and promote angiogenesis in hypoxic conditions [49]. The functions of SETD3 reported by various studies are summarized in Table 1. Although literature data point out that SETD3 is associated with several signaling pathways, this protein has only relatively recently been recognized to act mainly as an actin histidine methyltransferase. This implies that its significance in biological processes is largely unexplored and warrants more studies in the future. The Role of SETD3 in Diseases The knowledge about the role of SETD3 in the pathogenesis of various diseases remains limited. However, since the discovery and molecular characterization of SETD3 as a histone H3 methyltransferase [25,26,44] and further studies redefining its biological role as an actin H73 methyltransferase [17,18], a growing body of evidence has suggested that the protein may play an ambiguous role in diseases, especially cancer and other abnormalities. Therefore, the following part of the paper summarizes the most current knowledge regarding the potential involvement of the SETD3 protein in pathogenesis, as well as its role as a biomarker in various diseases. Cancer Although the precise role of SETD3 in carcinogenesis is still unclear, available data confirm that the protein might act either as a cancer suppressor or as an oncogenesis-promoting factor. Interestingly, the role of SETD3 varies in different abnormalities and is therefore difficult to comprehend. It was previously shown that a SET-domain-lacking fragment of the SETD3 gene translocated to the immunoglobulin lambda light chain locus in B-cell lymphomas [44], which resulted in the disruption of the SETD3 gene and the appearance of a shorter form of the SETD3 protein lacking the SET domain. Unexpectedly, this form of the protein accumulated in cancer cells, whereas the wild type could not. The truncated SETD3 was proposed to act as a dominant negative mutant promoting oncogenesis [44]. Nevertheless, the exact mechanism underlying the oncogenic effect resulting from the overexpression of the short form of SETD3 in lymphoma remains unknown. The level of the SETD3 protein was observed to fluctuate during the cell cycle [57]. Specifically, it was highest in the S phase, but declined during the progression to the M phase. Such dynamic cell cycle dependent regulation of expression implicates a potential role for SETD3 in carcinogenesis. Indeed, the level of SETD3 was shown to be elevated in hepatocellular carcinoma (HCC) [57]. Two hypothetical mechanisms have been proposed for the decreased degradation of SETD3. The first one involves the mutational burden on the β-isoform of the FBXW7 tumor suppressor protein (FBXW7β), which is required for the ubiquitination and proteolysis of SETD3 [57]. On the other hand, a couple of Cdc4 phosphodegrons (CPDs) were identified in the SETD3 sequence, and one of them, CPD1, was shown to be phosphorylated specifically by GSK3β. Not surprisingly, either a decrease in the activity of FBXW7β or GSK3β, or mutations within the CPD1 region, reduced the extent of degradation of SETD3 [57]. Moreover, it was recently reported that SETD3 is a poor prognostic biomarker in HCC patients [67], and patients with a high level of the protein had lower rates of recurrence-free survival and overall survival after surgery.
In addition, in vitro and in vivo studies revealed that SETD3 promoted the progression of HCC [57]. The use of SETD3-targeted shRNA resulted in the depletion of the protein and significantly inhibited the viability and colony formation of HCC cells [57]. Similar results were observed with the use of a xenograft tumor model, where the application of shSETD3 resulted in a decreased volume and weight of the abnormal tissues [57]. Surprisingly, the SETD3 protein inhibited metastasis in HCC cells. In vitro studies performed with Hep3B and SK-Hep-1 cell lines showed that SETD3 knockdown led to increased migration and invasion [67]. Furthermore, the SETD3-deficient SK-Hep-1 cells exhibited higher metastatic activity in a mouse model than cells containing the functional gene [67]. In addition to suppressing metastasis, the SETD3 protein was shown to regulate the expression of the serine/threonine-protein kinase DCLK1 by DNA methylation. However, the exact role of SETD3 in DNA methylation remains to be investigated [67], while its DNA-methylating activity has never been described before. It was recently reported that a circRNA transcribed from exons 2-6 of the SETD3 gene was downregulated in HCC, and the level of the circSETD3 transcript correlated with tumor size and the malignant differentiation of HCC [70]. CircSETD3 is postulated to act as an miRNA sponge that downregulates the level of miR-421, an essential promoter of HCC. Intriguingly, the latest report on the role of circSETD3 in nasopharyngeal carcinoma revealed the opposite function of circSETD3, and indicated that the transcript seems to promote the migration and invasiveness of nasopharyngeal carcinoma [71] by attenuating miR-615-5p and miR-1538. This, in turn, results in the upregulation of MAPRE1 expression and inhibition of α-tubulin acetylation [71]. Thus, the actual role of circSETD3 in carcinogenesis is unclear. The role of SETD3 in breast cancer is largely determined by the expression of hormone receptors and the mutational status of the p53 protein. In triple negative breast cancer patients with a mutational burden within the p53 protein, a higher level of SETD3 protein was found to correlate with poor prognosis [68]. By contrast, in patients with estrogen receptor positive breast cancer, a higher level of SETD3 correlated with better clinical outcomes [68]. The SETD3 protein has been shown to regulate the expression of various genes associated with cancer progression, including FOXM1, ACTB, ASMA, ACTG, FSCN, and FBXW7. However, the regulation by SETD3 seems to be cell specific [68], and thus, it is difficult to decipher the role and mechanism of this protein. The SETD3 protein was also implicated in the resistance of cervical cancer (CC) to radiotherapy [72]. With the use of the radioresistant SiHa cell line and a parental cell line lacking radioresistance, it was demonstrated that the level of the SETD3 protein negatively correlated with radioresistance, and its expression was downregulated in radiotherapy-resistant SiHa cells. Analysis of clinical samples from radiotherapy-sensitive and radiotherapy-resistant patients revealed comparable results [72]. The finding that SETD3 knockdown decreased the rate of cell death, DNA damage, and apoptosis raised a question regarding the mechanism involved in the protective effect of the SETD3 protein. The elevated level of this protein in CC was associated with decreased expression of KLC4, which was previously shown to participate in cell death by regulating the DNA damage response in lung cancer cell lines [73].
However, additional studies are required for further clarification of the function of SETD3 in CC. The SETD3 protein has been recently proven to act as a regulator of cell apoptosis [74] in colon cancer. Its higher expression was positively correlated with the rate of programmed cell death following doxorubicin treatment. A total of 215 proteins have been identified to interact with the overexpressed SETD3 protein, among which some are linked to RNA metabolism. However, the role of SETD3 in RNA metabolism remains to be investigated [74]. Interestingly, it was also shown that apoptosis was maintained only by the wild type SETD3 protein, while the substitution of tyrosine 313 to alanine (Y313A) attenuated the effect of the protein on the process. This suggests that the methylating activity of SETD3 might be crucial in the regulation of apoptosis [74]. SETD3 was also found to act as a positive regulator of the p53 protein, although it did not directly interact with or methylate the p53 protein [74]. The SETD3 protein may act as a prognostic biomarker in cancer. It was proposed that SETD3, along with the N-lysine methyltransferase SMYD2 and bifunctional lysine specific demethylase and histidyl-hydroxylase NO66, can be helpful in the diagnosis and prognosis of renal cell tumors [66]. Furthermore, clinical data proved that the downregulation of those proteins correlated with shorter disease specific and disease free survival [66]. Similarly, among different methyltransferases, the SETD3 protein was identified to be a key player in the progression of bladder cancer [66]. Nevertheless, the significance of the protein in this particular cancer has not been investigated so far and needs to be studied in the future. The SETD3 protein also seems to have a prognostic value in clear cell ovarian carcinoma [75]. The role of the SETD3 protein in oncogenesis is ambiguous because it may act as an oncoprotein and increase the effectiveness of anticancer therapies (i.e., radiotherapy or doxorubicin treatment). SETD3 might also be helpful to stratify patients according to clinical prognosis. However, additional studies should be performed to obtain more detailed data on the role(s) of SETD3 in the development of various malignancies, their progression, and invasiveness. Several studies published so far have focused on the role of the SETD3 protein in cancer, while only a few have addressed the potential involvement of this protein in other pathologies. Other Diseases As mentioned in Section 4.3, the SETD3 protein has been shown to be involved in the transcriptional regulation of VEGF expression under normoxia and hypoxia [49]. Under hypoxic conditions, the attenuated interaction of the SETD3-FoxM1 complex and promotion of the VEGF expression may result in the onset of hypoxic pulmonary hypertension [76]. On the other hand, overexpression of the SETD3 protein limits VEGF expression and HIF-1 activation and, thus, protects against hypoxic pulmonary hypertension [76]. It was recently shown that the SETD3 protein might be involved in the progression of autoimmune diseases, including systemic lupus erythematosus (SLE) [77]. The disease is associated with an elevated level of CXCR5 in CD4 + follicular helper T cells [77]. CXCR5 promotes the migration and interaction of T cells with B cells which, in turn, results in the formation of plasma cells through the interaction of PD-1 with its ligands (PD-1L and PD-2L) and production of autoantibodies. 
The SETD3 protein was elevated in the SLE CD4 + cells, and its level correlated with a higher expression of CXCR5 [77]. The SETD3 protein also has a protective effect on ischemia-reperfusion (I/R)-induced brain injury [78]. The level of SETD3 was found to be positively correlated with neuronal survival. The neuroprotective role of the protein was proposed to be related to the actin histidine-methylating activity and regulation of F-actin polymerization [78]. Physiologically, SETD3 expression was downregulated by the activity of PTEN phosphatase as a result of I/R-induced injury. In addition, the downregulation of SETD3 expression results in an increased level of reactive oxygen species, decreased mitochondrial membrane potential, and ATP production [78]. However, further studies are required to understand the mechanism underlying the complex crosstalk between the activity of PTEN phosphatase and the SETD3 protein in neurons. Recently, it was reported that the actin histidine-methylating activity of the SETD3 protein plays a significant role in dystocia (delayed parturition) [18]. It was reported that the litter sizes of double mutated (Setd3 −/− ) mice were smaller than those of the wild type mice or mice with one functional allele. Nevertheless, this observation was inconsistent with the lack of anatomical abnormalities within the pelvis, and so the association of SETD3 with secondary dystocia was excluded [18]. A relationship between H73 methylation and uterine smooth muscle contraction was also proposed and verified experimentally. It was noted that the depletion of the SETD3 protein and actin H73 methylation resulted in a decreased signal induced contraction of primary human myometrial cells, while the intrinsic contractions were not affected [18]. Moreover, contractions induced by oxytocin and endothelin-1 were restored only by the catalytically active SETD3 protein but not by its mutated inactive form. All these data support the hypothesis that actin H73 methylation influences the signal induced contraction of smooth muscles [18]. The SETD3 protein was also shown to be involved in enteroviral infections [63]. Employing two human EVs-rhinovirus C15 (RV-C15) and EV-D68-SETD3 was selected as a hypothetical host factor essential for the infectiousness of EVs. The potential contribution of SETD3 in the pathogenesis of EVs is described in Section 4. An in vivo study indicated that SETD3 deficient (Setd3 −/− ) mice were viable and showed no symptoms of viral infection [63]. In the context of viral infections, the region encoding the SETD3 protein was recently shown to be an integration site in the precancerous human papillomavirus infections [79]. While only two reports are currently available regarding the importance of the SETD3 protein in viral contagiousness, it is extremely important, taking into account the current pandemic status, to investigate the role of host proteins in the progression of viral infections. Outlook Although studies have established that SETD3 is the long sought, actin specific histidine N-methyltransferase, the biochemical properties of this protein as well as the cellular processes it regulates are yet to be understood in detail. For instance, the crystal structure of the SETD3-actin complex has not been deciphered and attempts made so far to crystalize the complex were unsuccessful [31]. 
A possible explanation for this failure could be that the actual physiological form of actin bound and subsequently methylated by SETD3 is not known, and whether the substrate is F-actin, G-actin, or, perhaps, G-actin in a complex with unidentified protein(s) should be verified. However, data collected from experiments involving the purification of native SETD3 showed that the enzyme is tightly bound to myofibrils, suggesting that it forms a relatively stable complex with myofibrillar proteins [14,17]. Further work is needed to explain the functions of SETD3 methyltransferase in the cell nucleus. One may hypothesize that nuclear SETD3 exhibits different substrate specificity and targets histone H3, as has been previously shown for isolated human nucleosomes [44]. Intriguingly, avian histones were reported to undergo Nτ-methylation at histidine residues [80], and so it would be interesting to verify whether SETD3 might be responsible for such modification. If true, SETD3 would be recognized as another dual specificity protein methyltransferase whose target activity depends on its interaction with a specific (non)substrate protein(s) [81,82]. Alternatively, the enzyme might work as a scaffold protein, facilitating the formation of a yet unknown protein complex, similar to that observed in the case of enteroviral protease 2A [63]. The regulation of SETD3 activity is another topic that remains to be investigated. All studies to date have focused only on mammalian SETD3. However, the enzyme is prevalent in multicellular eukaryotes. Thus, it would be of considerable interest to analyze the orthologs from more evolutionarily distant species, particularly in the plant kingdom. It is still unclear whether SETD3 catalyzes the methylation of histidine residues in plant proteins, and if so, what would be the physiological importance of SETD3 in plant species. In conclusion, at the current research stage, our knowledge of the SETD3 protein seems to be in its infancy. Although a lot is known about the structure of SETD3 and the mechanism of actin H73 methylation, the understanding of the physiological importance of the enzyme is still very limited. Future research will need to address the above questions in more detail in order to gain in depth knowledge about SETD3.
11,515
2021-10-01T00:00:00.000
[ "Biology", "Chemistry" ]
A sewer overflow mitigation during festival and rainfall periods: case study of Karbala The objective of the present study is to assess the performance of a suggested sewer line (SSL) installed by a pipe jacking system (PJS) in order to enhance the sewage capacity and mitigate sewer flooding of the historic pilgrimage city of Karbala, Iraq. The storm water management model (SWMM5) was used for this purpose. The simulation of the existing sewer system reveals that the sewer discharge during the peak pilgrimage period is more than 200% of the capacity of the existing sewer line. Installation of the SSL, having a diameter of 2.5 m at a depth ranging between 12 and 22 m, by PJS can reduce the water depth in the sewer pipe by 78%. The reduction of water depth in the sewer pipe can reduce sewer overflow by up to 70%, if the system is installed and managed properly. The methodology proposed in the paper can be applied in any location having a similar problem, with necessary modifications. Introduction Sewage flooding is a major problem in festival or pilgrimage cities, where the sudden influx of visitors and tourists during festival or pilgrimage periods puts tremendous pressure on the sewerage system and causes sewer overflows (Sharpley and Sundaram 2005; Shinde 2012; VandeWalle et al. 2012; Vijayanand 2012). Inundation of land and roads with sewage due to sewer overflow causes sanitary and health problems as well as distress and hardship to urban populations. In many cases, the excess sewage is drained into storm networks or to natural water outlets, which eventually causes urban pollution and ecological imbalance in the long run. Various technical measures have been proposed to adapt to or mitigate sewer overflow during extreme events such as a large floating population and extreme rainfall events (Aziz et al. 2011; Abdellatif et al. 2014). The most commonly used measures include the use of tanker trucks, direct links between sewage and stormwater networks via small pumps, and sewage discharge directly into nearby water bodies to alleviate the sewer overflow. Leandro et al. (2009) and Sun et al. (2011) mentioned that the technical measures usually prescribed to mitigate sewer overflow and manage sewage quality are not suitable for many cities, particularly for festival cities which experience a huge influx of floating population (Stein and Partner 2015; Vazquez-Prokopec et al. 2010). This is especially true for old and historic cities, where the sewer network often cannot be rebuilt due to a number of reasons including the heritage of the area, security matters, narrow streets, frequent visits throughout the year, the old sewer networks, etc. A pipe jacking system (PJS) is often proposed in such situations. In PJS, works are done underground and only the manholes appear on the surface. Therefore, it could be a good solution to the problems usually faced in old historic cities and in poor soil. Zhen et al. (2014) used a steel pipe jacking method to mitigate the sewer overflow in Shanghai, China. Their study provided a reference for effective design and construction of steel pipe jacking. The pipe jacking method has also been used in Japan to install pipelines more than 20 m below the earth's surface (Senda et al. 2013; Llopart-Mascaró et al. 2014). It has been used in Warsaw, Poland, to install sewer networks. Karbala, Iraq, is the most important center of the world for Shia Muslims. Millions of people visit Karbala city during pilgrimage periods. The sewer system of Karbala was built in 1970, with a capacity to handle the sewage of a population of 436,500.
Population of the city has increased over the last four decades. The number of pilgrims is also increased with the increase in economic ability and mobility of people. Consequently, sewer system of Karbala often fails to handle the amount of sewer produced by large population. Sewer overflow occurs when the peak flow exceeds 10 times of the carrying capacity of sewer pipes of the city (Obaid et al. 2014a, b;Ying and Sansalone 2010;Zeferino et al. 2012). In such situation, the sewer water spills through gullies or manholes, causing flooding and environmental pollution. Changes in rainfall patterns, particularly those are related to global warming induced climate change such as, increase intensity of rainfall, have aggravated the situation. However, the sewer system of Karbala could not be upgraded with time to handle the pressure of increased population due to its historic nature and structure as well as the public sentiment to preserve the old historic religious structure of the city. The objective of the present study is to assess the performance of a proposed sewer network installed by PJS in enhancing the sewage capacity of the historic pilgrimage city of Karbala. Various models have been proposed and successfully applied to assess the performance of existing or proposed sewer network. Among those storm water management model (SWMM5) (Rossman 2010;Maalel and Huber 1984) is the most popular one. SWMM is developed by coupling runoff model with a two-dimensional surface flow model (Leandro et al. 2009;Sun et al. 2011). It allows different control options to simulate sewerage discharge under different scenarios. It also allows easy operation for efficient simulate of complex sewer system. Huber (2003) studied the patterns of generating wastewater in London using SWMM, and reported that SWMM has the ability to simulate both the total volume of sewer runoff and peak discharge rate efficiently. Yoo (2005) reported that the potential information produced by SWMM can be used to support decision making in order to develop better wastewater drainage system. In the present study, SWMM is used to simulate the sewer network in order to assess the efficiency of sewer network installed by PJS. Methods and materials PJS only needs to dig few manholes in the surface. Even the distance of manholes can be very far (up to 1000 m) compared to normal sewer network. This system can be used to construct pipe tunnels up to 3000 mm in diameter and construct both long (maximum length: 1000 m) and short lines. The minimum diameter of pipe in PJS is limited to person entry pipes as it requires people working inside the jacking pipe. Therefore, a minimum diameter of 1075 mm is recommended for the pipe installed by PJS. On the other hand, there is no upper limitation of jacked pipe diameter. The largest pipe can be nearly 3.7 m in diameter (Roe 1995). Cohesive soils are considered most suitable for PJS. However, pipe jacking is also possible in non-cohesive soil, if necessary precaution measures are taken, which includes using earth pressure balance machines to counterbalance the ground pressure, and using closed-face machines (Roe 1995). Due to above advantages, PJS is considered as most suitable for installing sewer network for Karbala city. A model was developed using SWMM5 to simulated sewer discharge with varying population and rainfall. The additional discharges caused by floating population were considered as a direct flow. Manning equation (Eq. 
1) was used to express the relationship between flow rate (Q), cross-sectional area (A), hydraulic radius (R), and slope (S) in all conduits (Gauckler 1867; Steel et al. 1985; Blansett 2011), where Q = flow rate (m³/s); V = velocity (m/s); A = flow area (m²); n = Manning's roughness coefficient; R = hydraulic radius (m); S = pipe slope (m/m). Description of the study area Karbala, with an area of 5034 km², is a city in Iraq located about 100 km (62 miles) southwest of Baghdad (Latitude: 32° 36′ 51″ N, Longitude: 044° 01′ 29″ E). It is made up of two districts, "Old Karbala"-the religious center, and "New Karbala"-the residential district containing Islamic schools and government buildings. The city center of Karbala has an old sewer system and narrow streets, and therefore, redesign of the sewer is very difficult. It has an estimated population of 436,500 in the city center (City Population 2013). During pilgrimage days, the population increases to more than 4 million (Jafria 2013). The location of Karbala on the map of Iraq is shown in Fig. 1a. The toposheets, collected from the Directorate of Planning of Karbala city, were used to prepare the base map. Several features such as settlements, roads, water bodies, vegetation and industrial areas were digitized and corresponding maps were generated. A total of 64 urban sub-catchments were found in the toposheets. The boundaries of those sub-catchments were delineated using GIS for preparing the base map of sub-catchments of the city, as shown in Fig. 1b. A sewer system requires a variety of appurtenances to ensure proper operation. These include manholes, inlets, inverted siphons, pumping stations, etc. Figure 2a shows the map of the sewer networks and wastewater treatment plants in the city center of Karbala. Six main sewer lines are used in the city center of Karbala to carry the sewage to the wastewater treatment plant, as shown in Fig. 2b. Karbala is located in a semi-arid region of Iraq. The climate of the city is characterized by a cold winter and a prolonged dry season. It experiences a hot desert climate with an extremely hot, dry summer and a cool winter. Most of the rainfall is received between November and April; however, rainfall is not high in any month. The monthly distribution of rainfall in Karbala city is shown in Fig. 3. The distribution of precipitation shows that the maximum monthly rainfall is close to 22.5 mm (World Weather Information Service-Karbala 2014; Hussein et al. 2015). A detailed description of the SSL, which has a length of 13,594.6 m, is given in Table 1. The descriptions of manholes are given in Table 2. Results and discussions The city center of Karbala is well covered by the sewer network. Only some outlying areas of the center as well as nearby agricultural areas are still outside the sewer network. Both storm and sewer networks of the city are more than half a century old. During pilgrimage, stormwater networks are intentionally connected to the sewer network randomly in some places to carry 20% of the sewage. During sewer overflow, a huge amount of sewage enters the stormwater network, which causes overflow in the stormwater network. The situation deteriorates during rainfall. The capacity of the stormwater network is insufficient in some places. The population distribution in the area during normal days is shown in Fig. 4a. It shows that the population in the area varies between less than 305 people and close to forty thousand people during normal days. The total cumulative population in the days from 10 to 21 of the month of Safar is approximately 20 million due to the influx of pilgrims (Arbaeen visit).
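For orientation, the Manning relation of Eq. 1 can be evaluated directly for a circular pipe flowing full. The short sketch below does this in Python; the pipe slope and roughness coefficient are illustrative assumptions, not values reported for the SSL.

```python
import math

def manning_full_pipe(diameter_m, slope, n=0.013):
    """Full-flow capacity of a circular pipe from Manning's equation (SI units).
    Q = (1/n) * A * R^(2/3) * S^(1/2), with R = D/4 for a full circular section.
    n = 0.013 is a typical value for concrete pipe (assumed, not from the paper)."""
    area = math.pi * diameter_m ** 2 / 4.0      # flow area A (m^2)
    hydraulic_radius = diameter_m / 4.0         # R = A / wetted perimeter
    velocity = (1.0 / n) * hydraulic_radius ** (2.0 / 3.0) * slope ** 0.5
    return velocity * area, velocity            # Q (m^3/s), V (m/s)

# Example: a 2.5 m pipe (the SSL diameter quoted in the abstract) at an assumed slope.
q_full, v_full = manning_full_pipe(diameter_m=2.5, slope=0.0005)
print(f"Q_full = {q_full:.2f} m^3/s, V_full = {v_full:.2f} m/s")
```

With these assumed values the full-flow capacity comes out in the same range as the maximum SSL discharge reported in the results (4.6 m³/s), but the actual slope and roughness of the SSL are not stated here.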
The distribution of these 20 million people in Karbala city is shown in Fig. 4b. The figure illustrates that the population for each sub-catchment of the city center of Karbala ranges from 616 people to about 12 million people during the twelve pilgrimage days of the Safar month. Following the directives of the directory of water supply of Karbala city, the per capita per day water consumption for permanent residents is considered to be between 200 and 400 L. The water consumption at each sub-catchment of the Karbala city center for the normal days is shown in Fig. 5a. On the other hand, the water consumption for each sub-catchment of the Karbala city during the 12 pilgrimage days is shown in Fig. 5b. Following the directives of the directory of water supply of Karbala city, the per capita per day water consumption for the floating population was considered to be between 100 and 200 L. The figure shows that water consumption in the study area by the floating population varies from sub-catchment to sub-catchment, between less than 31 m³/day and about 1,508,625 m³/day. High water consumption is estimated around the center of pilgrimage. The sewer discharge from each sub-catchment of the Karbala city center (m³/d/sub-area) was estimated considering that the per capita per day sewer delivery is from 160 to 320 L. The sewer discharge for the normal days and the pilgrimage days is shown in Fig. 6a and b, respectively. The figures show that the spatial distribution of sewer discharge follows the same pattern as water consumption. The proposed sewer line for Karbala city to be installed using PJS is shown in Fig. 7. The SWMM model was used to simulate the sewer overflow changes with the installed sewer line. The amount of sewer overflow, as well as the times and locations of overflow before and after installation of the sewer line, is used for comparison. The results of average and maximum sewer discharge for the suggested sewer line (SSL) are shown in Table 3a and b. Table 3a and b shows that the maximum discharge of the SSL is 4.6011 m³/s, and the velocity is 1.3284 m/s. The standard ratio of actual to maximum discharge (q/Q) is 0.5. The point of intersection on the left-hand scale is found to be 0.56. Using this value, the depth of sewer flow in the SSL is obtained as 1176 mm (0.56 × 2011 mm). Similarly, the ratio of actual to maximum velocity (v/V) is obtained as 0.84. The minimum velocity in the pipe when it is carrying a flow of 2.3006 m³/s is thus 1.12608 m/s (0.85 × 1.3284 m/s). The wastewater level in the SSL obtained using SWMM5 during the normal days is shown in Fig. 8a. The wastewater level in the SSL during the pilgrimage days (3rd January) is shown in Fig. 8b. It should be noted that the partial-flow diagram gives only approximate results, particularly for high velocities. The discrepancies between computed and actual flow conditions may be caused by wave formation, surface resistance, and other factors. It can be found from the analysis that velocities decreased below 78% of the total depth. There are three main pump stations in the pilgrimage zones (Bab Baghdad, Alsadia and P2 stations) to lift the sewage from the city center into the old wastewater treatment plant. By applying the SSL, all of these pump stations have to be removed. The inlets of these pump stations have to be connected directly into the SSL. Downstream of the SSL, a pump station should be constructed to lift the sewage from LINE-1, LINE-2 and LINE-3, so that the sewage is conveyed to a new wastewater treatment plant.
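The hydraulic-elements lookup described above (reading d/D and v/V off a chart at a given q/Q) can also be reproduced numerically. The sketch below uses the common assumption of a depth-independent Manning's n, so it returns d/D = 0.50 and v/V = 1.0 at q/Q = 0.5, whereas the chart the authors appear to use lets n vary with depth, which is why they read d/D ≈ 0.56 and v/V ≈ 0.84 instead.

```python
import math

def partial_flow_ratios(depth_ratio):
    """Hydraulic-element ratios for a circular pipe at relative depth y/D,
    assuming Manning's n constant with depth (a common simplification)."""
    theta = 2.0 * math.acos(1.0 - 2.0 * depth_ratio)           # wetted angle (rad)
    area_ratio = (theta - math.sin(theta)) / (2.0 * math.pi)   # A / A_full
    radius_ratio = 1.0 - math.sin(theta) / theta               # R / R_full
    v_ratio = radius_ratio ** (2.0 / 3.0)                      # v / V_full
    q_ratio = area_ratio * v_ratio                             # q / Q_full
    return q_ratio, v_ratio

def depth_for_discharge_ratio(target_q_ratio, tol=1e-6):
    """Invert q/Q -> y/D by bisection on the rising branch (y/D below ~0.94)."""
    lo, hi = 1e-6, 0.94
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if partial_flow_ratios(mid)[0] < target_q_ratio:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

y_over_d = depth_for_discharge_ratio(0.5)
q_r, v_r = partial_flow_ratios(y_over_d)
print(f"q/Q = 0.50 -> y/D = {y_over_d:.2f}, v/V = {v_r:.2f}")
print(f"flow depth in a 2.5 m pipe: {y_over_d * 2500:.0f} mm")
```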
Consequently, the maintenance process will be easier after removing the pump stations in the pilgrimage zones. Sewer flooding before and after installation of SSL is shown in Fig. 9. The figures show that SSL will contribute up to 55% reduction of the rash in the average sewer discharges values while mitigates the peak sewage flows by 74% during the pilgrimage periods. The percentage of mitigation of flood has been estimated and checked by running of SWMM5 after installed SSL and removed all pump stations in the city center of Karbala. It was estimated by the model that sewer water depth is less than 70% in SSL during the critical period. Therefore, no sewer overflow in downstream areas during pilgrimage period. However, the results show that sewer overflow might happen in sub-pipes. The internal networks are required rehabilitate for complete mitigation sewer overflow in the city. Conclusions Performance of suggested sewer line by PJS has been assessed in Karbala city of Iraq, one of the most important holy cities of the world in order to mitigate the problem of sewer overflow. The study suggests that PJS can be a suitable approach for installation of sewer line in old heritage and densely populated areas for mitigation sewer overflow. It has been found that installation of new sewer line using PJS can reduce the sewer flooding up to 78% in Karbala city, if properly installed and managed. The rest amount of excess sewerage can be managed through rehabilitation or slight modifications of the existing sewer system. It is expected that the case study presented in this paper can be replicated in other cities experience huge floating population for modeling of sewer discharge and mitigation of sewer related problems.
3,640
2020-11-17T00:00:00.000
[ "Engineering" ]
Long-term evolution of a supernova remnant hosting a double neutron star binary An ultra-stripped supernova (USSN) is a type of core-collapse SN explosion proposed to be a candidate formation site of a double neutron star (DNS) binary. We investigate the dynamical evolution of an ultra-stripped supernova remnant (USSNR), which should host a DNS at its center. By accounting for the mass-loss history of the progenitor binary using a model developed by a previous study, we construct the large-scale structure of the {circumstellar medium (CSM)} up to a radius $\sim 100\,{\rm pc}$, and simulate the explosion and subsequent evolution of a USSN surrounded by such a CSM environment. We find that the CSM encompasses an extended region characterized by a hot plasma with a temperature $\sim 10^8\,$K located around the termination shock of the wind from the progenitor binary ($\sim 10\,$pc), and the USSNR blastwave is drastically weakened while penetrating through this hot plasma. Radio continuum emission from a young USSNR is sufficiently bright to be detectable if it inhabits our Galaxy but faint compared to the observed Galactic SNRs, and thereafter declines in luminosity through adiabatic cooling. Within our parameter space, USSNRs typically exhibit a low radio luminosity and surface brightness compared to the known Galactic SNRs. Due to the small event rate of USSNe and their relatively short observable lifespan, we calculate that USSNRs account for only $\sim 0.1$-$1$ % of the total SNR population. This is consistent with the fact that no SNR hosting a DNS binary has been discovered in the Milky Way so far. INTRODUCTION A double neutron star (DNS) binary is believed to be the fossil object from a binary system of two massive stars which have both exploded as core-collapse supernovae (SNe) in the past (e.g., Podsiadlowski et al. 2005). Observations of Galactic radio pulsars have revealed that some DNS binaries are in an orbit tight enough to merge within the cosmic age (Burgay et al. 2003). Indeed, previous observations for the short gamma-ray burst GRB 130603B have implied the association between the gamma-ray emission and kilonova in the DNS merger (Tanvir et al. 2013;Hotokezaka et al. 2013). Furthermore, recent gravitational wave detectors and rapid follow-up electromagnetic observations have succeeded in probing the coalescence of a DNS, confirming the link of these objects to the origin of short gamma-ray bursts and the nucleosynthesis of r-process elements (e.g., Abbott et al. 2017a,b;Tanaka et al. 2017). The formation of a DNS requires that the binary system is not disrupted by the evolution history of the massive stars all the way through their core-collapses. One of the plausible scenarios of DNS formation invokes an ultra-stripped supernova (USSN, Tauris et al. 2017;Yoshida et al. 2017). In a close binary consisting of two massive stars, the primary star first explodes as a SN. After a phase as a high-mass X-ray binary, the outer layer of the secondary star is stripped away in two steps: (1) the ejection of its hydrogen-rich envelope through a phase of common envelope (CE) interaction, and (2) the stripping of the helium layer through Roche lobe overflow (RLO). These binary interactions lead to the formation of an helium star ( 2M ), which eventually explodes as a USSN. Indeed, some of the rapidly evolving transients such as SN 2005ek (Drout et al. 2013), iPTF14gqr (De et al. 2018), and SN 2019dge (Yao et al. 2020) are suggested to be possible candidates for USSNe (Moriya et al. 2017). 
In addition, it has been proposed that during the operation period of the Zwicky Transient Facility (ZTF, Graham et al. 2019;Bellm et al. 2019), roughly 10 USSNe within 300 Mpc will be detected per a year (Hijikawa et al. 2019). Hence, it is expected that future surveys and follow-up observations of transients will enable us to examine in detail the validity of the USSN scenario as the formation mechanism of DNS binaries. Another way to experimentally test the USSN scenario is to search for supernova remnants (SNRs) hosting a DNS binary. After the explosion, the ejecta of the USSN sweeps up the surrounding CSM while expanding into the interstellar space. Intriguingly, this kind of system can be potentially detected as a SNR hosting a DNS binary, which we will refer to as an ultra-stripped supernova remnant (USSNR) hereafter. While the current SNR surveys have not identified any of these remnants so far, we note that the observable characteristics of a USSNR have not been discussed and quantified in the literature. It is hence essential to investigate the dynamical evolution and emission properties of USSNRs using a dedicated simulation model to shed light on how they can be identified. Tauris et al. (2013) developed a progenitor evolution model for the USSN, and showed that the masstransfer rate through RLO can be enhanced up toṀ ∼ 10 −5 M yr −1 in the last 0.1 Myr prior to the core collapse. Because the mass-transfer rate is orders of magnitude larger than the Eddington accretion rate onto the neutron star, a large fraction of the stripped gas escapes the binary system and distributes around the progenitor as CSM. Assuming a wind velocity v w ∼ 1000 km s −1 , the gas which has been expelled from the binary system in the RLO phase can reach a distance of ∼ 100 pc from the progenitor, implying that the evolution of the USSNR is heavily influenced by the CSM created by the RLO mass loading process. However, detailed models for the mass-loss history driven by binary interaction are in most cases not incorporated in the simulations of SNR dynamics, which is particularly critical for understanding the properties of USSNRs. In this study, we investigate the characteristics of a USSNR using a grid of one-dimensional hydrodynamic simulations. By employing the binary evolution model presented in Tauris et al. (2013), we first construct the large-scale structure of the CSM surrounding the USSN progenitor. We next calculate the hydrodynamics of the USSN ejecta interacting with the composed CSM and the resulted synchrotron radiation. Our simulations reveal that the blastwave of USSNRs has a difficulty in penetrating the hot plasma, which had been shaped by the preceding mass loss from the progenitor binary. Radio emission from a young USSNR is predicted to be bright enough to be detected if it inhabits our Galaxy, while its luminosity starts to decrease at t 10 3 years, making the USSNR observable for a relatively short time period. Besides, the low surface brightness of a USSNR predicted by our models at its typical diameters (D ∼ O(10 pc)) can serve as a key to the identification of these remnants in the future. This paper is organized as follows. In Section 2, we review the USSN scenario as a formation theory of DNS, and describe the progenitor models used in our simulations. In Section 3, we discuss the formation sequence of the CSM, followed by a description of the procedures for constructing our CSM models. 
In Section 4, we examine the hydrodynamic evolution of a USSNR and show the properties of the expected radio signals, including the light curve and surface brightness. Their implications are discussed in Section 5, and our results are summarized in Section 6. 2. PROGENITOR MODEL Tauris et al. (2013) investigated the binary stellar evolution of a 2.9M He star with a neutron star companion, having an initial orbital period of 0.1 day. They found that the He star reduces its own mass down to 1.5M through RLO, and suggested that the He star explodes as a USSN which can be a candidate for some rapidly evolving transients. Here we overview the stellar evolution of the progenitor of a USSN, which is crucial for understanding the formation of the CSM adopted in this study. Figure 1 shows the time evolution of the mass (M ), radius of the Roche lobe (R), escape velocity (V esc ), and mass-transfer rate (Ṁ ) of the USSN progenitor presented in Tauris et al. (2013). Here, the escape velocity is defined as V esc = 2GM/R, where G is the gravitational constant. When the progenitor is in the state illustrated by the blue line, its outer layer is stripped away by the companion neutron star through RLO. Until the core collapse, the He star experiences RLO three times; the first phase is at 1.78 Myr t 1.84 Myr during which the core has exhausted its He-burning fuel (A). The second is at t ∼ 1.851 Myr when the core C-burning has ended (B), and the third is at t 1.854 Myr in which the off-center O-burning is about to onset (C and D). The CSM around the USSN progenitor is hence expected to be shaped by these three phases of mass loss activities. We note that the progenitor spends most of its lifetime in the state shown by the orange line prior to A, and that the increase of the mass-loss rate is realized in the last 0.1 Myr before core collapse. The progenitor does not experience RLO in the detached phases, during which we conservatively assume a mass loss rate of 10 −7 M yr −1 . This mimics the stellar wind from the progenitor, but the mass and kinetic energy released by this wind are smaller than those carried by the gas stripped away through the RLO. Thus, we can assume that the stellar wind from the progenitor has an insignificant influence on the overall wind hydrodynamics, and that the consistency with the stellar evolution model is maintained. The model developed by Tauris et al. (2013) covers the lifetime of the He star only until 10 years prior to its core collapse. To trace the evolution up to the moment of the explosion, we use the final values of M, R, V esc , andṀ from the model for the last 10 years. The gas transferred from the progenitor first flows toward the neutron star with an accretion rate orders of magnitude larger than the Eddington accretion rate (Tauris et al. 2013). The neutron star cannot feed up anymore and thus drives the accreted gas outward by mechanisms such as propeller effect (Tauris et al. 2017). However, resolving the detail of this outflow dynamics is beyond the scope of this work. For simplicity, we assume that the material which has been stripped away from the He star launches outward spherically at the radius of the Roche lobe R with a velocity V esc and massloss rateṀ . Then, the mass density at the Roche lobe radius (Ṁ /4πR 2 V esc ) can be estimated. 
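A back-of-the-envelope version of this boundary-condition estimate is sketched below, taking V_esc = sqrt(2GM/R). The mass, Roche-lobe radius, and mass-transfer rate are illustrative placeholders of the order quoted in the text, not values read off the Tauris et al. (2013) track.

```python
import math

G = 6.674e-8            # cm^3 g^-1 s^-2
MSUN = 1.989e33         # g
RSUN = 6.957e10         # cm
YEAR = 3.156e7          # s

def wind_boundary_density(mdot_msun_yr, m_msun, roche_radius_rsun):
    """Inner boundary condition for the wind: density of gas launched at the
    Roche-lobe radius R with speed V_esc = sqrt(2GM/R), as described in the text.
    The numbers passed in below are illustrative placeholders only."""
    mdot = mdot_msun_yr * MSUN / YEAR
    mass = m_msun * MSUN
    radius = roche_radius_rsun * RSUN
    v_esc = math.sqrt(2.0 * G * mass / radius)           # cm/s
    rho = mdot / (4.0 * math.pi * radius ** 2 * v_esc)   # g/cm^3
    return rho, v_esc

rho0, vesc = wind_boundary_density(mdot_msun_yr=1e-5, m_msun=1.5, roche_radius_rsun=0.5)
print(f"V_esc ~ {vesc / 1e5:.0f} km/s, rho(R_RL) ~ {rho0:.2e} g/cm^3")
```

With these placeholder values the launch speed comes out near the ~1000 km/s wind velocity assumed earlier, which is the main point of the estimate.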
Given the density and velocity of the gas at the Roche lobe radius as an inner boundary condition, we can solve the hydrodynamics of the gas launched from the progenitor binary to model the CSM formation around the progenitor. Combined with a parametric survey described in the following sections, this strategy allows us to demonstrate the long-term evolution properties of a USSNR with the mass loss history of the progenitor taken into account. CSM FORMATION In this section, we describe our procedure for modeling the formation of the CSM surrounding the USSN progenitor. First, we construct the initial profile of the interstellar medium (ISM) in Section 3.1. We then explain our methodology for simulating the hydrodynamics of the mass-loss material in Section 3.2, and the properties of the composed CSM in Section 3.3. Initial setup The progenitor experiences a hydrogen-rich envelope ejection driven by the CE interaction before the stripping of the helium gas through RLO. The distribution of this expelled hydrogen-rich gas is important because it interacts with the helium gas released through RLO later on. Although some recent multi-dimensional simulations have succeeded in completely ejecting the hydrogen-rich envelope of a red supergiant through the CE interaction under some assumptions and realizations (Law-Smith et al. 2020;Lau et al. 2021, but see also Vigna-Gómez et al. 2021), the distribution of the material ejected by the CE interaction is still not completely understood. Figure 2 shows three models we adopt for the initial density profile of the CE material. We consider a situation where the ejected gas with a mass M CE = 10M is distributed within a radius R CE which smoothly connects with the ISM. Given that the characteristic timescale of the CE interaction is around thousands of years (Ivanova et al. 2013), the gas ejected with a speed ∼ 100 km s −1 can reach a radius R CE ∼ 10 18 cm. Since there is a variety in the ISM properties such as density and temperature (e.g., Berkhuijsen & Fletcher 2008;Draine 2011), we consider two ISM phases; a warm phase (ρ ism = 10 −24 g cm −3 , T ism = 10 4 K) and a hot phase (ρ ism = 10 −26 g cm −3 , T ism = 10 6 K). We remark that the thermal pressure in these two initial profiles are equal to each other. In addition, we prepare a reference model 'UNIFORM', in which a static and uniform ISM resides throughout the simulation domain with a density ρ ism = 10 −24 g cm −3 , to evaluate the effect of the CE ejection activity. The specific profiles of the initial density for each model are described in Table 1. The derivation of the exact value of ρ CE is explicated in Appendix A. We consider a static ISM profile (v = 0). The initial velocity profile of the CE component does not have an important role in the hydrodynamics of the CSM formation because the expected V CE is negligibly lower than the velocity of the wind from the progenitor binary. To verify this we conducted simulations in which the initial velocity of the CE component is assumed to be 100 km s −1 and confirmed that the outcome is not changed. We assume the temperature T = T ism and a solar metallicity throughout the entire profiles at this stage. A comparison of the results among these models enables us to evaluate how much the properties of the CE ejection affect the CSM formation and the subsequent SNR evolution. Wind hydrodynamics We solve the one-dimensional equations of ideal gas hydrodynamics where the internal energy is taken away by radiative cooling in spherical coordinates. 
The governing equations are
\[
\frac{\partial \rho}{\partial t} + \frac{1}{r^2}\frac{\partial}{\partial r}\left(r^2 \rho v\right) = 0,
\qquad
\frac{\partial (\rho v)}{\partial t} + \frac{1}{r^2}\frac{\partial}{\partial r}\left(r^2 \rho v^2\right) + \frac{\partial p}{\partial r} = 0,
\]
\[
\frac{\partial}{\partial t}\left[\rho\left(e + \frac{v^2}{2}\right)\right] + \frac{1}{r^2}\frac{\partial}{\partial r}\left[r^2 \rho v\left(h + \frac{v^2}{2}\right)\right] = -\,n_i n_e \Lambda(T),
\]
where ρ is the mass density, v is the velocity, p is the pressure, e is the specific internal energy, h = e + p/ρ is the specific enthalpy, and n_i and n_e are the number densities of ions and electrons. Λ(T) represents the radiative cooling function, for which we employ the power-law formalism introduced by Chevalier & Fransson (1994). The energy loss by radiative cooling is calculated only in the optically thin region where τ ≤ 1, which is sufficient for tracing the evolution of the blastwave (see also Section 5.4). These governing equations are closed with the equation of state, p = (γ − 1)ρe, where γ = 5/3 is the adiabatic index. The equations are solved by a Roe Riemann solver with the second entropy fix by Harten and Hyman to treat the contact discontinuity and the shock wave (Harten & Hyman 1983). The numerical accuracy of the code used in this study is verified in Appendix B. We divide the simulation domain from 10^16 cm to 3 × 10^21 cm into 2047 zones on a logarithmic scale. Inside 10^16 cm, as an inner boundary condition, we inject the outflowing He-rich gas whose time evolution is described in Section 2. We trace the distribution of the chemical abundances by advection, assuming that no mixing of the chemical composition occurs. The abundance distribution is required in order to accurately estimate the number densities of ions and electrons in the radiative cooling term. Figure 3 shows a snapshot of the density structure of the CSM at the moment of core collapse of the progenitor. (In Figure 3, the dashed black line shows the distribution realized for a steady wind with a mass-loss rate Ṁ = 10^−7 M⊙ yr^−1, and the labeled cursive letters mark gas originating from the mass-loss episodes referred to in Figure 1.) The models 'WARM' and 'UNIFORM' share an identical CSM structure in the entire simulation domain. This is also the case for the model 'HOT' within ∼ 3 pc, but its outer configuration deviates from the other two models. The distribution of the density within ∼ 3 pc reflects the mass-loss history. Namely, the dense CSM distributed around r ∼ 0.01 pc and 0.1 pc originates from the mass loss at points D and C in Figure 1. Yet, a segment resides around 10 pc in which the density is roughly constant with some fluctuations. This non-smooth segment is created by the collision between the wind launched at point B and the reverse shock generated by the gas ejected earlier at point A. The ISM wall is located at a radius of 20 pc in the models 'WARM' and 'UNIFORM' and 30 pc in the model 'HOT', respectively. Composed CSM We will briefly elaborate on the importance of the CE component on the ISM profile. The reference model 'UNIFORM' without the CE component allows us to investigate the contribution of the CE component to the hydrodynamics of the wind. The results obtained from this reference model are found to be almost identical to the outcome from 'WARM', being nearly indistinguishable in Figure 3. This can be interpreted as follows. The radius of the ISM wall is roughly determined by the balance between the ram pressure of the wind and the thermal pressure of the swept-up material (Weaver et al. 1977), which is computed as ∼ 20 pc in our simulations. The enclosed mass of the initial ISM profile at r ∼ 20 pc is ∼ 400 M⊙, indicating that the mass of the CE component can be regarded as negligibly small. Hence, the composed CSM has similar characteristics between 'WARM' and 'UNIFORM'.
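The ∼20 pc (warm ISM) versus ∼30 pc (hot ISM) wall radii quoted above can be checked at the order-of-magnitude level with the classic energy-driven wind-bubble scaling of Weaver et al. (1977), R ≈ 0.76 (L_w t³/ρ_0)^{1/5} with L_w = Ṁ v_w²/2. The sketch below uses rough, constant wind parameters in place of the actual time-dependent RLO mass-loss history, so only the ordering of the two ISM phases and the rough magnitude should be compared.

```python
MSUN, YEAR, PC = 1.989e33, 3.156e7, 3.086e18   # cgs

def wind_bubble_radius(mdot_msun_yr, v_wind_kms, age_myr, rho_ism):
    """Order-of-magnitude radius of an energy-driven wind bubble,
    R ~ 0.76 * (L_w t^3 / rho_0)^(1/5) (Weaver et al. 1977), L_w = 0.5 * Mdot * v_w^2.
    The wind parameters passed in are rough stand-ins for the RLO-driven outflow,
    not the mass-loss history actually used in the simulations."""
    lw = 0.5 * (mdot_msun_yr * MSUN / YEAR) * (v_wind_kms * 1e5) ** 2
    t = age_myr * 1e6 * YEAR
    return 0.76 * (lw * t ** 3 / rho_ism) ** 0.2 / PC

r_warm = wind_bubble_radius(1e-5, 1000.0, 0.1, rho_ism=1e-24)
r_hot = wind_bubble_radius(1e-5, 1000.0, 0.1, rho_ism=1e-26)
print(f"R_bubble ~ {r_warm:.0f} pc (warm ISM), ~ {r_hot:.0f} pc (hot ISM)")
```

The lower-density (hot) ISM yields the larger bubble, reproducing the ordering of the wall radii found in the simulations.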
We confirmed that even when considering a uniformly distributed hot ISM (ρ ism = 10 −26 g cm −3 , T ism = 10 6 K), the consequent CSM structure does not differ from the model 'HOT' significantly other than slight quantitative modifications. This implies that as long as the CE ejection before the USSN is considered within a range of typical time and energy scales, it does not play an important role in the formation of the CSM around the USSN progenitor. Figure 4 shows the temperature structure of the CSM at the moment of core collapse. Similar to the density structure, the models 'WARM' and 'UNIFORM' have the same temperature structure over the entire region. The model 'HOT' also possesses the identical distribution with the other models within 3 pc, but the quantitatively different structure is formed outside 3 pc. A hot plasma with ∼ 10 8 K is located in the vicinity of the ISM wall in all models. The location of the inner edge of this hot plasma coincides with the radius of the termination shock of the wind driven by RLO. The geometrical thickness of the plasma is ∼ 10 pc. The existence of this hot plasma region plays a critical role in weakening the SNR blastwave as it propagates through the region as discussed later. SNR EVOLUTION In this section, we investigate the evolution of a USSNR interacting with the CSM constructed in the previous section. In Section 4.1, we show the method to simulate the dynamics of the ejecta and the expected synchrotron emission, and the results are presented in Section 4.2. As was confirmed in the previous section, the solution derived from the model without a CE component ('UNIFORM') converges to that of 'WARM'. We will therefore examine results from the models 'WARM' and 'HOT' hereafter. Ejecta dynamics The initial profile of the USSN progenitor is taken from Moriya et al. (2017), who evolved the model of the He star previously presented by Tauris et al. (2013) further until core collapse. Then we attach the CSM composed in Section 3 to the progenitor while retaining the distribution of the density, velocity, temperature, and chemical abundance. We next examine the hydrodynamics of the SN explosion to obtain the SN ejecta structure. We excise the remnant mass M rem = 1.35M from the inner region of the progenitor, and inject an explosion energy E exp = 10 50 erg to the rest of the material in the progenitor (M ej ∼ 0.15M ) as a thermal energy following the method developed by Morozova et al. (2015). The explosion energy is chosen based on light curve models (Moriya et al. 2017), which is also consistent with that proposed by state-of-the-art simulations (Suwa et al. 2015;Müller et al. 2018). The profile is resolved into more than 4000 meshes with a logarithmic spacing, and the hydrodynamics of the ejecta is calculated by the same method as described in Section 3.2, except that a reflective condition is employed at the inner boundary. As a result, we obtain the time evolution of the blastwave velocity and the trajectory of Lagrangian particles, which are used to compute the energy distribution of relativistic electrons and the amplified magnetic field (see the next section). As the SNR evolves into the Sedov phase, its reverse shock begins to propagate towards the inner region and heats up the ejecta (e.g., Truelove & McKee 1999). Since the simulation domain is resolved under a logarithmic mesh, the high temperature in the inner region can cause small timesteps, making it difficult for the simulation to progress. 
To solve this numerical difficulty, we excise the Eulerian meshes in the innermost region within 10 18 cm when the blastwave radius has reached 10 19 cm. This does not affect the consistency of the simulations since the total gas mass within 10 18 cm at the moment of the excision is negligibly small and hence dynamically unimportant. This allows us to trace the long-term evolution of the USSNR within a reasonable simulation time. The computations are terminated at 10 5 years since the explosion. Particle acceleration and magnetic field amplification Once the gas is heated by the forward shock, the diffusive shock acceleration (DSA) imparts relativistic energies to the injected charge particles and induces amplification of the turbulent magnetic field (e.g., Fermi 1949;Drury 1983). The region shocked by the blastwave serves as a site of synchrotron emission from SNRs (Reynolds 2008;Dubner & Giacani 2015). In this study, we define the blastwave as the discontinuity which satisfies the following two conditions: (1) the pressure jump is the largest in the simulation domain, and (2) the Mach number is greater than 3. The latter is justified because strong shocks have a potential to drive DSA, whilst weak shocks are less capable of efficient particle acceleration, confirmed by the observations for radio relics in galaxy clusters (e.g., Botteon et al. 2020, and references therein). We first consider a Lagrangian mesh a s through which the blastwave passes at time t s . As the shock sweeps through the mesh, the charged particles are accelerated to relativistic energies, coupled with an amplification of the turbulent magnetic field. We model the energy densities of the accelerated relativistic electrons (u e ) and the magnetic field (u B ) in the Lagrangian mesh a s as follows: where e and B are the acceleration and amplification efficiencies, ρ sh is the mass density in the Lagrangian mesh a s , V b is the velocity of the blastwave, and v u is the velocity of the unshocked gas upstream of the shock, respectively. These parametrizations are conventionally used in the modeling of radio SNe (e.g., Chevalier et al. 2006;Chevalier & Fransson 2006;Matsuoka et al. 2019). These equations apply to the mesh only when The energy distribution of the accelerated electrons, N (a s , E), is described by a power-law distribution as follows: where E and p are the energy and the spectral index of the electrons, respectively. The coefficient C is determined by performing a normalization of the energy density: where E min = 2m e c 2 is used (see Section 5.6 for a discussion on the uncertainty related to E min ). As the system evolves, the ejecta expands and the blastwave propagates to the next Lagrangian mesh. Meanwhile, the relativistic electrons lose their energies by both synchrotron and adiabatic cooling, and the magnetic field also decays with the adiabatic expansion. We consider a Lagrangian mesh (a) which had been heated by the shock at mass coordinate a s and time t s , and assume that the relativistic particles are confined within the mesh and the magnetic field is frozen in the plasma. We calculate the cooling processes of the accelerated particles and the time evolution of the energy distribution following previous studies (e.g., Reynolds 1998;Orlando et al. 2011;Ferrand et al. 2014). An electron's energy E declines to E through synchrotron and adiabatic cooling, which can be written as follows: where c is the speed of light, q is the elementary charge, and m e is the mass of electron, respectively. 
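The acceleration and amplification parametrization introduced in this subsection can be summarized compactly. The sketch below uses the standard radio-supernova forms, u_e = ε_e ρ_sh (V_b − v_u)² and u_B = ε_B ρ_sh (V_b − v_u)², with N(E) = C E^{−p} normalized so that the integral of E N(E) from E_min upward equals u_e (finite for p > 2); the exact expressions of the paper are not reproduced here, and the input numbers are illustrative only.

```python
import numpy as np

M_E_C2 = 8.187e-7    # electron rest energy m_e c^2, erg

def shock_nonthermal(rho_sh, v_blast, v_upstream, eps_e=1e-2, eps_b=1e-1,
                     p=2.1, e_min=2.0 * M_E_C2):
    """Post-shock non-thermal quantities in the standard radio-SN parametrization:
    u_e = eps_e * rho_sh * (V_b - v_u)^2,  u_B = eps_B * rho_sh * (V_b - v_u)^2,
    N(E) = C * E**(-p) with int_{E_min}^{inf} E N(E) dE = u_e (valid for p > 2)."""
    du2 = (v_blast - v_upstream) ** 2
    u_e = eps_e * rho_sh * du2                     # erg/cm^3 in relativistic electrons
    u_b = eps_b * rho_sh * du2                     # erg/cm^3 in turbulent magnetic field
    b_field = np.sqrt(8.0 * np.pi * u_b)           # Gauss, from u_B = B^2 / (8 pi)
    c_norm = u_e * (p - 2.0) * e_min ** (p - 2.0)  # power-law normalization C
    return u_e, b_field, c_norm

# Illustrative young-remnant numbers (not taken from the simulations):
u_e, b, c = shock_nonthermal(rho_sh=1e-24, v_blast=1e9, v_upstream=1e7)
print(f"u_e = {u_e:.2e} erg/cm^3, B = {b * 1e6:.0f} microgauss, C = {c:.2e}")
```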
The energy distribution of the electrons evolves following number conservation, i.e. As for the strength of the magnetic field, we consider a magnetic flux conservation in each Lagrangian mesh. Synchrotron emission Given the energy distribution of electrons and the strength of the magnetic field, the intensity of the synchrotron emission I ν can be calculated by integrating the radiative transfer equation written as follows: where j ν,syn , α ν,syn , and α ν,ff are the synchrotron emissivity, synchrotron self-absorption and free-free absorption coefficient, respectively (Rybicki & Lightman 1979), and R b is the blastwave radius. We also calculate the surface brightness Σ(θ) which is often used as a diagnostic observable for SNRs. Σ-D diagrams which show the relation between the surface brightness and the diameter of SNRs are commonly used for determining the distance to the objects (see e.g., Poveda & Woltjer 1968;Pavlović et al. 2013, and references therein). Since the surface brightness Σ(θ) is independent of the distance to the SNR, it can be a useful quantity for investigating the intrinsic nature of the USSNR compared to the rest of the SNR population. Σ(θ)δθ, the power per unit surface area and unit frequency emitted from a ring with sky projection angles θ to θ + δθ, can be evaluated by integrating the total power of the synchrotron emission per unit volume along the line of sight as follows: where d, ν = 4πj ν,syn , δA(θ) = δ(πd 2 θ 2 ), and ∆Ω(θ) are the distance to the SNR, the total power of the synchrotron emission per unit volume, the area of the ring with projection angle θ, and the total solid angle of the SNR. The angle-averaged surface brightness can then be estimated, which allows us to examine the position of USSNRs on the Σ-D diagram. Characteristics of a USSNR Firstly, we discuss the hydrodynamics of the interaction between the USSN ejecta and the CSM. In Figure 5, the time evolutions of the density and velocity profile are shown. Here we mention on the dependence of the density profile on the ISM state. We can see that the model 'HOT' has a larger radius of the ISM wall than the model 'WARM', even though these two models have initially the same pressure. This suggests that the ISM density is important for dictating the location of the ISM wall; the lower ISM density (hot ISM) allows the exploding SNR gas to further expand. This feature is critical for quantifying the surface brightness of the USSNRs (see Figure 9 and Figure 10). From the density distributions, we can see that the ejecta keeps expanding until t ∼ 10000 years but starts decelerating around the ISM wall. The system can expand further for another ∼ 3 and 10 pc at most from the location of the ISM wall in the model 'WARM' and 'HOT', respectively. This can be observed in the panel of the velocity profile; the system experiences fast expansion at t 3000 years, while after the collision with the ISM wall it only possesses several hundreds km s −1 of the outward velocity. This implies that the diameter of the USSNR is highly constrained by the location of the ISM cavity wall, which in turn depends on the pre-SN mass loss activity of the progenitor. This picture can be applied to all core-collapse SNRs in general, for which the diameters of SNRs are associated with the pre-SN mass loss activity of their progenitors (e.g., Yasuda et al. 2021a,b). Figure 6 shows the time evolution of the Mach number and the blastwave velocity. 
Within the first 300 years, these two quantities both in the model 'WARM' and 'HOT' behave similarly each other since in this phase the identical CSM structure is traced. We can see two epochs at which the blastwave accelerates at r ∼ 0.01 pc and r ∼ 0.1 pc respectively, where the CSM density drops by orders of magnitude. Correspondingly, the Mach number also increases by more than an order of magnitudes at the same time. Overall, the velocity stays at about 10 9 cm s −1 , leaving the USSNR active for the first 300 years. Furthermore, at 5 years t 50 years when the swept CSM mass begins to exceed the ejecta mass, the velocity of the blastwave decays proportional roughly to t −1/3 . This agrees with the expected time dependence of the velocity in the Sedov phase for a CSM density profile proportional to r −2 (Book 1994). The gradual increase of the Mach number during that phase can be also observed, due to the decrease of the upstream temperature (see Figure 4). After t ∼ 300 years, the blastwave decelerates down to ∼ 10 8 cm s −1 , and then simply disappears out, as well as the Mach number decreases rapidly down to O(1). This phenomenon can be observed both in 'WARM' and in 'HOT' though there are some quantitative differences between these two models. This is caused by the hot plasma at r ∼ 5 pc shown in Figure 4; as the blastwave plunges into the plasma where the sound speed is high, the Mach number of the blastwave quickly decreases down to unity. It is implied that such a weak shock cannot support an efficient DSA. Additionally, the density jump at r ∼ 3 pc can also give rise to the deceleration of the blastwave. In conclusion, this result indicates that the blastwave in a USSNR dies out by propagating into a region of hot plasma at 10 3 years. Figure 7 shows the long-term 1 GHz radio light curves from the models shown in Table 2. The observed flux density F ν shown in the right y-axis is normalized by a distance d = 10 kpc. The peak luminosity of the light curve is determined by synchrotron self-absorption with their shapes slightly modified by free-free absorption (see also Matsuoka et al. 2019). Note that for a USSN candidate iPTF14gqr non-detections of radio signals at the frequency 6 GHz and 22 GHz within 10 days have been reported, placing upper limits (De et al. 2018). In such a very early phase, free-free absorption completely damps the centimeter radio emissions, much more for 1 GHz (Matsuoka & Maeda 2020). Since more electrons are accelerated and magnetic field is more intensively amplified in the models which assume larger efficiencies for DSA, brighter radio emission from USSNRs can be expected in the model with ( e , B ) = (10 −2 , 10 −1 ) than those with ( e , B ) = (10 −3 , 10 −2 ). Besides, a harder spectral index increases the number of more energetic electrons in the shocked region, which also results in the luminous radio signals. This behavior can be confirmed by comparing the luminosity between the models with p = 2.1 and those with p = 2.5, 3.0. We note that there are no qualitative difference in the light curve behaviors between the two CSM models over the entire timespan up to 10 5 years. Actually as 'HOT' has a more extended structure than 'WARM' as seen in Figure 5, a difference between these two models is expected in their surface brightness as we will discuss later. 
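The t^{-1/3} scaling quoted above follows from a simple energy-conservation argument. A minimal derivation for a wind-like CSM (not reproduced from the paper) is:

```latex
\rho_{\rm CSM}(r) = q\,r^{-2}
\;\Rightarrow\;
M(R) = \int_0^{R} 4\pi r^{2}\rho_{\rm CSM}\,dr = 4\pi q R ,
\qquad
E \sim M(R)\,V_b^{2} \sim q\,R\,\dot{R}^{2} = \mathrm{const}
\;\Rightarrow\;
R \propto t^{2/3}, \quad V_b = \dot{R} \propto t^{-1/3}.
```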
We first look at the behaviors of the young USSNR at ages less than 1000 years, and compare them with SNe well-observed at the frequency ∼ 1 GHz even 1 year after their explosions such as SN 1993J (Martí-Vidal et al. 2011), SN 1995N (Chandra et al. 2009), and SN 2006jd (Chandra et al. 2012, and one of the youngest Galactic SNR Cas A (DeLaney & Rudnick 2003, the point plotted at t ∼ 300 years). As seen in Figure 7, our models show that young USSNRs at an age t ∼ 10 years and t ∼ 300 years produce fainter radio signal than those from the bright SNe and Cas A, respectively. The relatively weak emissions can be partially attributed to the shock velocity which is by a factor of a few lower than what is inferred for these objects (see, e.g., Fransson & Björnsson 1998). Another possible reason is that at t ∼ 100 years the blastwave is propagating at r ∼ 1 pc where the dense CSM formed by the mass loss driven by the RLO is absent. Then the density of the CSM swept by the blastwave is considerably small there, making the DSA less efficient. However, we note that the expected flux density of the radio emission from the USSNR at d = 10 pc keeps greater than 0.1 mJy within an age t 1000 years, which is bright enough to be detected by the present radio surveys such as Very Large Array Sky Survey (VLASS, Lacy et al. 2020), if it inhabits inside our galaxy. Next we discuss the properties of the light curves of USSNR at larger ages (1000 years t 10 5 years). At t ∼ 1000 years, the radio emission brightens by a factor to an order of magnitude compared to t ∼ 300 years, even though the synchrotron emission in this phase is optically thin to self-absorption. This enhancement stems from the interaction between the SN ejecta and the relatively dense CSM located at ∼ 3-10 pc; a larger amount of the gas injection into the shocked region leads to a larger number of the synchrotron emitting electrons, resulting in a higher radio luminosity. In addition, the compression of the gas around the blastwave by the collision with the dense CSM brings about the further amplification of the magnetic field through the conservation of the magnetic flux (see Figure 8). This can also be a cause of the brightening of the radio luminosity. We note that this brightening is one of the characteristics of a USSNR associated with the time dependent mass loss driven by RLO, since a CSM with a simple powerlaw distribution cannot reproduce such a rise in radio luminosity in the optically-thin regime. Yet, the subsequent radio signals are fainter than those observed from the Galactic SNRs enumerated in Table 3. The stalled blastwave at t ∼ 300 years can no longer execute efficient DSA any further. Even so, it is worth mentioning that SNRs discovered so far are biased towards bright Table 3 (black points with error bars), estimated by the distances to each objects. The right y-axis stands for the observed flux densities with which the source with the luminosity shown in the left y-axis is observed at a distance d = 10 kpc. The red dotted line indicates the detection limit of VLASS (Lacy et al. 2020). objects. Deep surveys such as VLASS will have potential to uncover the population of the SNRs as faint as the aged USSNRs. After the death of the blastwave, DSA will no longer be triggered, and the non-thermal emissions are forced to decline through adiabatic cooling. The timing of dominance by adiabatic cooling is roughly 1000 years, and is more-or-less determined by the location of the hot plasma (Figure 4). 
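For reference, the conversion between the specific luminosities plotted in Figure 7 and the observed flux densities on the right axis is the usual inverse-square relation. The short snippet below also evaluates the luminosity corresponding to the ~0.1 mJy survey level mentioned above at the adopted normalization distance d = 10 kpc; this is a back-of-the-envelope check, not a value taken from the paper.

```python
import numpy as np

PC  = 3.086e18               # cm
MJY = 1.0e-26                # erg s^-1 cm^-2 Hz^-1 per mJy

def flux_density_mJy(L_nu, d_kpc=10.0):
    """Observed flux density [mJy] of a source with specific luminosity
    L_nu [erg s^-1 Hz^-1] at distance d_kpc, neglecting absorption."""
    d = d_kpc * 1.0e3 * PC
    return L_nu / (4.0 * np.pi * d ** 2) / MJY

# Specific luminosity corresponding to a ~0.1 mJy detection level at 10 kpc:
L_lim = 0.1 * MJY * 4.0 * np.pi * (10.0e3 * PC) ** 2
print(f"{L_lim:.2e} erg/s/Hz")   # ~1e19 erg s^-1 Hz^-1
```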
The hot plasma is formed by the interaction between the He-rich wind blown from the progenitor binary and the H-rich gas originated from the CE ejection or the uniform ISM. Our result implies that the location of the hot plasma in the CSM is key to determining the lifetime of the blastwave and hence the observable lifespan of the USSNR. We also observe oscillations of the light curves at t 10 4 years. This is an one-dimensional artifact due to the reflective condition at the inner boundary of the simulation domain. As the reverse shock of the USSNR brings along an inward gas flow back to the explosion center, it rebounds back to the outer interacting region. Then the material around the shocked region is compressed, inducing an amplification of the magnetic field through flux conservation. A repeating occurrence of this inward and outward motion results in the oscillation of the radio luminosity in our models for the aged USSNR. In practice, multi-dimensional dynamics should suppress the motion of the gas described above due to a broken spherical symmetry. Even so, it can be noted that the global evolution of the radio luminosity of the aged USSNR roughly follows an adiabatic evolution when averaged over a longer timescale. Figure 9 shows the time evolution of the surface brightness as a function of the sky projection angle. The model 'HOT' has fainter surface brightness and larger projection angles at which the surface brightness becomes maximum (θ max ) than those in the model 'WARM', because the model 'HOT' has a more extended CSM density structure than the model 'WARM' (see Figure 3). Yet the qualitative behavior of the surface brightness as a function of the sky projection angle is similar between these two models. θ max is mainly dic-tated by the location of the ISM wall, which prevents the gas in the shocked region from expanding any further outward (see Figure 5). As mentioned before, the hot plasma and the ISM cavity wall are shaped by the wind colliding with the CE and/or the ISM, which ultimately determines the detectability of the USSNR. The evolution of the relation between the surface brightness and diameter of the USSNR can be assessed by the Σ-D diagram shown in Figure 10. For the same reason as the relation between Σ and θ max (Figure 9), the model 'HOT' has a fainter surface brightness and larger diameter than the model 'WARM'. This results in the lower right position of the evolutionary path of the model 'HOT' in the Σ − D diagram. The magnitude of the surface brightness strongly depends on the parameters relevant to the DSA (i.e., p, e , and B ). The surface brightness of the model appears to be relatively faint compared to those of the Galactic SNRs in the models such as 'H SNR', 'I SNR', 'S SN', and 'S SNR', in which the expected flux density of the radio emission from the aged USSNRs are approximately 0.1 mJy. This poses a challenge to detection and is consistent with the current non-detection of the SNR hosting a DNS binary in our Galaxy. On the other hand, in all of our models the USSNR diameter is in the order of 10 pc, which is also typical of the observed Galactic SNRs (Pavlović et al. 2013). We suggest that a faint surface brightness combined with a diameter D ∼ 10 pc can be a characteristics of a USSNR, and might be useful diagnostics for searching SNRs hosting a DNS binary. At last we comment on the role of the ISM state on the radiative characteristics of the USSNRs. 
Comparisons of the solid and dashed lines in Figure 9 and Figure 10 demonstrate that the surface brightness of 'HOT' is fainter than that of 'WARM' at the same age. This is attributed to the fact that 'HOT' has a larger diameter and sky-projected angular size than 'WARM' and that the luminosities of these two models are similar to each other. Our simulations of CSM formation (Section 3) assume that the models 'WARM' and 'HOT' have the same thermal pressure but different densities in the initial profiles; we have shown that the model with a lower initial density leads to a larger diameter of the USSNR. Thus we conclude that the ISM density plays a role in determining the physical scale of the USSNR, which also affects the surface brightness. the SNR population. One is the observable lifespan of the SNR, t snr , defined here as the timescale in which the radio emission from the SNR can be detected. The other one is t sn , the time interval between subsequent SNe or the inverse of the SN rate in a galaxy. The number of active SNRs can then be estimated as t snr /t sn . As for USSNRs, Hijikawa et al. (2019) predicted the event rate of USSNe as 510.88 gal −1 Myr −1 in their feasible population synthesis model, leading to t sn ∼ 2 × 10 3 years 1 . For the SNR lifetime, t snr ∼ 100-10 5 years can be implied from our models depending on the DSA efficiencies and the spectral index of accelerated electrons. Hence, the expected number of active USSNRs can be derived as 0.002 − 20. These estimations involve uncertainties from observational conditions (e.g., sensitivity) as well as the DSA parameters. Models with high DSA efficiencies or hard power-law index for the accelerated electrons (e.g., 'H SN' and 'H SNR') probably over-estimate the observable lifespan; typical shock acceleration efficiency constrained by SNR observations are usually found to be lower than those inferred from the observations of radio SNe (Lee et al. 2012). Moreover, it has been suggested that the spectral index of the accelerated particles in young SNRs can be modified and steepened by non-linear effects associated with magnetic field amplification in an efficient DSA and Alfvénic drift effect (Vink et al. 2006;Zirakashvili & Ptuskin 2008;, whereas in mature SNRs it tends to follow the prediction by the standard DSA (Reynoso & Walsh 2015). The former is more appropriate for the situation considered in the present work, since our simulations indicate that the blastwave dies out at a young age in our CSM model. From these arguments, we can refer 'I SNR' as our fiducial models for the evolution of a USSNR, which predicts an observable lifespan t snr ∼ 10 4 years. Then we can further constrain the expected number of the observable USSNRs to be ∼ 2. Since the detected number of the Galactic SNRs reaches ∼ 400 (Green 2019), the most probable fraction of USS-NRs is then at most ∼ 0.5 % of all active SNRs. We note however that the quantification of the observable lifespan of the USSNRs involves uncertainties and depends on the sensitivity of the detectors as well. The expected number of active USSNRs in a galaxy, ∼ 2, poses a severe challenge on the search of USS-NRs. Radio observation facilities capable of deep surveys such as the Square Kilometre Arrays (SKA) are requisite to solving this difficulty. A Galactic SNR survey with a sensitivity ∼ 0.1 mJy is one of the solutions to search for USSNRs, as well as for eliminating the possible bias against faint SNRs. 
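The population bookkeeping above is, at its core, the ratio of two timescales. The sketch below implements only the bare ratio N = t_snr / t_sn, with t_sn taken from the quoted USSN rate; the values quoted in the text differ somewhat from this raw estimate, presumably because they fold in additional considerations, so the output should be read as order-of-magnitude only.

```python
def n_active(t_snr_yr, rate_per_gal_per_Myr=510.88):
    """Expected number of remnants active at any one time, N = t_snr / t_sn,
    where t_sn = 1 / (event rate) is the mean interval between events
    (~2e3 yr for the quoted USSN rate)."""
    return t_snr_yr * rate_per_gal_per_Myr / 1.0e6

for t_snr in (1.0e2, 1.0e4, 1.0e5):   # range of observable lifespans implied by the models
    print(f"t_snr = {t_snr:.0e} yr -> N ~ {n_active(t_snr):.2g}")
```

Either way, recovering such a small population hinges on surveys that reach the ~0.1 mJy level discussed above.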
Another possible solution is to extend the search to other galaxies in the local group. SNRs producing radio emission brighter than ∼ 10 21 (d/Mpc) 2 (F lim /µJy) erg s −1 Hz −1 in the local group can be detected by making use of the deepest observation projects, where F lim is the maximum sensitivity of SKA (Braun et al. 2019). This sensitivity enables us to detect the radio emission from USSNRs (proved by the model 'H SN' and 'H SNR' in Figure 7). Assuming that the galaxies in the local group have the same proportion of USSNRs to all kinds of SNRs (∼ 0.5%), this attempt might offer an opportunity to discover USSNRs. General implications for stripped-envelope SNRs We have shown that the blastwave of a USSNR suddenly loses its punch by being blunted by the hot plasma. The lifetime of the blastwave is limited to 10 3 years, and the diameter is roughly a few 10 pc. The evolution of USSNRs is different from that elucidated classically. Generally, after the Sedov phase at ∼ 10 4 years, radiative cooling from the swept ISM drains the internal energy away from the system, leading to a fast deceleration of the blastwave. Through the pressure-driven snowplow phase and momentum-driven snowplow phase, the SNR merge with the surrounding ISM at t ∼ 5×10 5 years (Cioffi et al. 1988). On the other hand, the evolution of USSNRs is heavily influenced by the non-uniform CSM density distribution and the presence of a hot plasma in the vicinity of the ISM wall, both of which are attributed to the wind driven by the binary interaction. The binary interaction is a key physical process in the evolutionary behaviors of USSNRs that deviate from the classical picture of SNR evolution. Besides USSNe, it is widely believed that strippedenvelope SNe (Type IIb, Ib, and Ic SNe) are explosions of a massive star involved in binary interaction (e.g., Yoon et al. 2010;Ouchi & Maeda 2017;Fang et al. 2019). It can be speculated that the evolution of SNRs originated from stripped-envelope SNe also deviates from the classical theory. Considering that some fraction of the observed SNe are classified as stripped-envelope SNe (Type IIb, Ib, and Ic SNe, Eldridge et al. 2013), it is natural that some of the confirmed SNRs in our Galaxy also come from a stripped-envelope SN origin. Previously, in terms of hydrodynamics, the effect of the wind bubble and its multi-dimensional behaviors on the subsequent SNR evolutions have been investigated by making use of simple models for stellar mass loading (Tenorio-Tagle et al. 1990, 1991Dwarkadas 2005Dwarkadas , 2007, but models for mass loss history based on detailed binary evolution calculations have not been incorporated. We thus suggest that such stripped-envelope SNRs should be modeled with the mass loss history of the progenitor binary taken into account for their surrounding CSM environments (e.g., Yasuda et al. 2021a,b). Radio emission from the hot plasma region The velocity of the RLO wind is high, reaching ∼ 1000 km s −1 . It is therefore possible that in the formation process of the hot plasma driven by the RLO wind, electron acceleration and magnetic field amplification can happen through the DSA mechanism. Such effects can contribute to the radio luminosity and surface brightness of the subsequent USSNRs, and thus an evaluation of this process is required. Our simulations show that the velocity of the RLO wind shock is V sh,RLO ∼ 200 km s −1 . If we consider a hot ISM state (T ism ∼ 10 6 K), the Mach number of the shockwave launched by the RLO wind is in the order of unity. 
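A quick estimate makes the last point explicit. Assuming an adiabatic sound speed with a mean molecular weight μ ≈ 0.6 for ionised gas (an assumption, not a value from the paper), the Mach number of the ~200 km s⁻¹ RLO wind shock in the two ISM phases is:

```python
import numpy as np

K_B, M_H = 1.381e-16, 1.673e-24     # erg K^-1, g

def sound_speed(T, mu=0.6, gamma=5.0 / 3.0):
    """Adiabatic sound speed [cm s^-1] at temperature T [K]; mu ~ 0.6 assumes
    fully ionised gas (a neutral warm phase would give mu ~ 1.3, which does not
    change the order of magnitude)."""
    return np.sqrt(gamma * K_B * T / (mu * M_H))

V_sh_RLO = 200.0e5                  # RLO wind shock velocity from the simulations [cm s^-1]
for T_ism in (1.0e6, 1.0e4):        # hot and warm ISM
    cs = sound_speed(T_ism)
    print(f"T_ism = {T_ism:.0e} K: c_s = {cs / 1e5:.0f} km/s, Mach = {V_sh_RLO / cs:.1f}")
```

With these assumptions the shock is only marginally supersonic in the hot ISM, but comfortably supersonic in the warm ISM.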
This indicates that the contribution from the hot plasma to the total flux of the radio emission from USSNRs is negligible in a hot ISM. On the other hand, in a warm ISM (T ism ∼ 10 4 K), the Mach number is large enough to sustain DSA. The hot plasma can then be a potential emitter of synchrotron radiation. Assuming that the region is optically thin for synchrotron radiation, the radio luminosity is written as L ν ∼ 4π 2 R 3 j ν,syn , where R is the position of the RLO wind shock. Based on the formulae introduced in this study, the luminosity can be roughly estimated as L ν ∼ 10 21 R 3 . Comparing this magnitude to the models, we can see that this contribution from the hot plasma in a warm ISM is negligibly small with respect to the predicted luminosities of young USSNRs (t 1000 years), but can be comparable to or even brighter than those of older USSNRs (t 1000 years), especially for the 'I SNR', 'S SN', and 'S SNR' models. In addition, we note that at later ages, the hot plasma can experience a compression from the expanding remnant, and the radio emission contribution from the plasma can be boosted further by this compression. Although the primary purpose of our study is on the modeling of USSNRs, the above discussion further advocates the importance of taking into account the CSM environment formed by the pre-SN mass loss activity of the progenitor in the USSNR emission model. Treatment of radiative cooling Apart from the models presented so far, we have also performed extra simulations in which radiative cooling occurs in regions with a broader range of optical depths with τ < c/v to approximate the contribution of photon diffusion to the energy loss. While this approach overestimates the energy loss from radiative cooling, it is helpful nonetheless for assessing the robustness of our results. In these models with an enhanced energy loss, we found that the blastwave velocity is decreased by a few percent. This confirms that the impact of radiative cooling on the overall dynamics is small enough that it plays an insignificant role in the modeling of USSNRs. Effects of non-linear diffusive shock acceleration We have employed the simplified treatment of particle acceleration and magnetic field amplification. In our study, non-linear effects in DSA are not considered, and the contribution of the pressure from cosmic-rays and its feedback to the hydrodynamics are not included. These effects can soften the energy distribution of accelerated electrons and could decrease the luminosity of non-thermal emission, including X-rays and gamma-rays (e.g., Vink et al. 2006;. Our estimate of the USSNR population can thus be altered by including such effects (see Section 5.1). On the other hand, however, the dynamics of the USSNR blastwave is mainly determined by the distribution of the CSM. The lifetime of the blastwave is mainly limited by its interaction with the hot plasma in the vicinity of the ISM wall formed by the pre-SN mass loss. Thus, improving the treatment of the microphysics in shock acceleration plays a secondary role in the observable lifespan of a USSNR. Parametrizations of e and E min There are two major simplifications in the parametrization for particle acceleration adopted in our study. First, some particle-in-cell simulations imply that the decrease of the Mach number (or the blastwave velocity) leads to a drop of the acceleration efficiency of protons (Caprioli & Spitkovsky 2014;Ha et al. 2018). 
This suggests the possibility that the acceleration efficiency of electrons also declines with a decreasing Mach number, while our study fixes e at a constant value with time. Second, it is believed that electrons with momentum greater than ∼ √ m e m p V b follow a power-law distribution even below the relativistic regime. However, we have fixed the minimum energy of the power-law distri-bution at E min in Equation (7) (see also Sironi & Giannios 2013). Hence, a decrease of the blastwave velocity leads to an increase of the number of electrons with a momentum p mom within √ m e m p V b p mom E min /c. This effect is not included in our models. In summary, our study is over-estimating the radio luminosities and the actual brightness of the USSNRs could be fainter if the above two factors are accounted for. However, the blastwave velocity in our calculations is in the order of ∼ 10 9 cm s −1 and the Mach number is sufficiently high in the young phase before the collision with the hot plasma. In the late phase (t 1000 years), the blastwave dies away rapidly. Therefore, the system considered in our study is not prone to the situation described above. Moreover, even if we include the two effects mentioned above in our modeling, the resulted radio luminosities should be fainter than those reported in Section 4.2, so that our conclusions on the characteristics and populations of USSNRs would not be affected qualitatively. Furthermore, we have examined two values for e shown in Table 2, and believe that the effect of the microphysics noted here can be investigated within this parameter space. Asphericity An aspherical configuration of the CE component and its effect on the wind hydrodynamics can be important as well. For instance it has been suggested that the material released by the CE ejection tends to distribute along the equatorial plane (Iaconi et al. 2019). Thus if the CE component resides in the vicinity of the SN progenitor it could affect the subsequent wind hydrodynamics. The gas ejected through the CE interaction should concentrate on the equatorial plane of the binary, while in the polar direction a static ISM should dominate. Then, the propagation of the wind driven by the RLO in the direction of the equatorial plane and the polar axis are regulated by the interaction between the ISM with and without the CE component, respectively. Our simulations in Section 3 show that the effect of the presence of the CE component is not significant regardless of the state of the ISM. From this point of view, by assuming a spherically blown wind from the progenitor binary, we can qualitatively speculate that the effect of possible non-spherical CE distributions would not be important. Besides, an anisotropy of the conformation of the wind can be expected to shape the non-spherical geometry of the CSM as proposed in the literature of Type IIn SNe (Patat et al. 2011;Katsuda et al. 2016;Kumar et al. 2019). It is worth investigating the multi-dimensional structures of the composed CSM taking into account the anisotropy of the circumstellar environment and the wind outflows. These aspherical configurations of the CSM can alter the properties of the radiation from the SNe or SNRs, which will be examined in detail in a future work (see also e.g., Kurfürst & Krtička 2019;Suzuki et al. 2019). SUMMARY In this paper, we have investigated the characteristics of a SNR hosting a DNS binary, which we have termed a USSNR, using a grid of numerical models. 
A USSN has been proposed to be a transient event preceding the formation of a DNS binary. Before the USSN, the He star envelope is stripped away by the companion neutron star and escapes the binary system. By employing the mass-transfer history presented by Tauris et al. (2013), we simulated the hydrodynamics of the wind expelled from the progenitor binary, and constructed the large-scale CSM structure around the USSN progenitor out to ∼ 100 pc. A hot plasma is formed in the vicinity of the ISM wall, and it is found to play a critical role in governing the lifetime of the blastwave of the USSNR. We also examined the dynamical and radiative evolution of a USSNR by considering a progenitor surrounded by the CSM constructed in our simulation. We found that within the first ∼ 1000 years the blastwave traces the inner part of the CSM, producing radio emission bright enough to be detected if the USSNR resides inside our Galaxy, although it is still fainter than that from typical SNRs. Once the blastwave collides with the hot plasma, it stalls rapidly and the radio luminosity starts to decrease steadily. This dynamical behavior does not depend much on the strength of the CE ejection before the release of the helium gas from the progenitor binary. The surface brightness of the USSNR tends to be fainter than those of typical SNRs, while the diameter settles at D ∼ O(10 pc), similar to the Galactic SNRs. Therefore, USSNRs populate the lower portion of the Σ-D diagram compared to the observed Galactic SNRs, and this can serve as a useful diagnostic for the search for a USSNR. We also confirmed that an initial ISM profile with a lower density allows the USSNR to expand further, leading to a lower surface brightness and a larger diameter. Furthermore, we evaluated the observable lifespan of a USSNR to be ∼ 10^4 years, defined as the time interval from the explosion to the point when the radio luminosity has declined beyond the detection limit of the present radio surveys. Combining the short observable lifespan of USSNRs with the small event rate of USSNe, we conclude that the expected number of active USSNRs is less than one among the observed 10^{2-3} SNRs, which is consistent with the current non-detection of an SNR hosting a DNS.

ACKNOWLEDGMENTS

The authors thank the anonymous referee for his or her fruitful suggestions, and Haruo Yasuda and Norita Kawanaka for their comments which have deepened the discussion in our study. T.M. acknowledges the support from the Iwadare Scholarship Foundation in the fiscal year 2020 and from the Japanese Society for the Promotion of Science (

In the simulation of the CSM formation, the value of ρ_CE must be specified to determine the initial density profile. We consider a CE component with a total mass M_CE = 10 M_⊙ ejected into a static uniform ISM. The required condition is

M_CE = ∫_0^{R_∞} 4π r² [ρ(r) − ρ_ism] dr, (A1)

where R_∞ = 3 × 10^21 cm is the outermost radius of the simulation domain. For the case ρ(r) = ρ_CE exp(−r/R_CE) + ρ_ism, this can be analytically integrated, so that

ρ_CE ≃ M_CE / (8π R_CE³) = 7.96 × 10^−22 g cm^−3 (A2)

can be derived.

B. TESTS FOR THE NUMERICAL CODE

The numerical simulation code for the hydrodynamics employed in this study is verified in this section. Figure 11 shows the result of the shock tube problem with an adiabatic index γ = 5/3.
At t = 0, a static (v = 0) gas is placed in the simulation box, with a step-function profile for its density and pressure centered at x = 0 as follows: ρ_L = 1.0, v_L = 0.0, p_L = 1.0, ρ_R = 0.125, v_R = 0.0, p_R = 0.1 (the subscripts L and R denote x < 0 and x ≥ 0, respectively). The numerical solution successfully reproduces the profiles given by the exact solution. Furthermore, Figure 12 displays the Sedov solution at t = 1.0 second, in which an explosion energy E_sedov = 1 erg is deposited into a uniform medium with ρ_sedov = 1.0 × 10^−24 g cm^−3 (Sedov 1959). The results are again in good agreement with the analytical solutions for the density, velocity, and pressure profiles, and the shock radius matches the analytical value R = 1.15 (E_sedov t² / ρ_sedov)^{0.2}. These two tests assure us of the accuracy of our numerical code.
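As a quick arithmetic cross-check of the Sedov test (not part of the paper's verification suite), the analytical shock radius at the quoted time and parameters can be evaluated directly:

```python
E_sedov, rho_sedov, t = 1.0, 1.0e-24, 1.0        # erg, g cm^-3, s
R_shock = 1.15 * (E_sedov * t ** 2 / rho_sedov) ** 0.2
print(f"R_shock = {R_shock:.3e} cm")             # ~7.3e4 cm at t = 1 s
```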
Drawing tubular fibres : experiments versus mathematical modelling A series of six experiments drawing tubular fibres are compared to some recent mathematical modelling of this fabrication process. The importance of fibre tension in determining the internal geometry of the fibre is demonstrated, confirming a key prediction of the models. There is evidence of self-pressurisation of the internal channel, where an additional pressure is induced in the internal channel as the fibre is drawn, and the dependence of the magnitude of this pressure on fibre tension is discussed. Additionally, there is evidence that the difference between the glass and furnace temperatures is proportional to the furnace temperature and dependent on the preform geometry. © 2015 Optical Society of America OCIS codes: (000.3870) Mathematics; (060.2280) Fiber design and fabrication; (060.2310) Fiber optics; (060.4005) Microstructured fibers. References and links 1. Y. M. Stokes, P. Buchak, D. G. Crowdy, and H. Ebendorff-Heidepriem, “Drawing of micro-structured fibres: circular and non-circular tubes,” J. Fluid Mech. 755, 176–203 (2014). 2. P. Buchak, D. G. Crowdy, Y. M. Stokes, and H. Ebendorff-Heidepriem, “Elliptical pore regularization of the inverse problem for microstructure optical fibre fabrication,” J. Fluid Mech. 778, 5–38 (2015). 3. M. J. Chen, Y. M. Stokes, P. Buchak, D. G. Crowdy, and H. Ebendorff-Heidepriem, “Microstructured optical fibre drawing with active channel pressurisation,” J. Fluid Mech. 783, 137–165 (2015). 4. H. Tronnolone, Y. M. Stokes, H. T. C. Foo, and H. Ebendorff-Heidepriem, “Gravitational extension of a fluid cylinder with internal structure,” J. Fluid Mech. submitted (2015). 5. T. M. Monro and H. Ebendorff-Heidepriem, “Progress in microstructured optical fibers,” Ann. Rev. Materials Res. 36, 467–495 (2006). 6. S. Xue, R. Tanner, G. Barton, R. Lwin, M. Large, and L. Poladian, “Fabrication of Microstructured Optical Fibres Part I: Problem Formulation and Numerical Modelling of Transient Draw Process,” J. Lightwave Technol. 23, 2245–2254 (2005). 7. G. Luzi, P. Epple, M. Scharrer, K. Fujimoto, C. Rauh, and A. Delgado, “Numerical Solution and Experimental Validation of the Drawing Process of Six-Hole Optical Fibers Including the Effects of Inner Pressure and Surface Tension,” J. Lightwave Technol. 30, 1306–1311 (2012). 8. G. T. Jasion, J. S. Shrimpton, Y. Chen, T. Bradley, D. J. Richardson, and F. Poletti, “MicroStructure Element Method (MSEM): viscous flow model for the virtual draw of microstructured optical fibers,” Opt. Express 23, 312–329 (2015). #247142 Received 5 Aug 2015; revised 26 Nov 2015; accepted 27 Nov 2015; published 15 Dec 2015 © 2016 OSA 1 Jan 2016 | Vol. 6, No. 1 | DOI:10.1364/OME.6.000166 | OPTICAL MATERIALS EXPRESS 166 9. A. D. Fitt, K. Furusawa, T. M. Monro, and C. P. Please, “Modelling the Fabrication of Hollow Fibers: Capillary Drawing,” J. Lightwave Technol. 19, 1924–1931 (2001). 10. A. D. Fitt, K. Furusawa, T. M. Monro, C. P. Please, and D. J. Richardson, “The mathematical modelling of capillary drawing for holey fibre manufacture,” Journal of Engineering Mathematics 43, 201–227 (2002). 11. Y. Chen and T. Birks, “Predicting hole sizes after fibre drawing without knowing the viscosity,” Optical Materials Express 3, 346–356 (2013). 12. A. L. Yarin, P. Gospodinov, and V. I. Roussinov, “Stability loss and sensitivity in hollow fiber drawing,” Phys Fluids 6, 1454–1463 (1994). 13. P. Gospodinov and A. L. 
Yarin, “Draw resonance of optical microcapillaries in non-isothermal drawing,” Intl J. Multiphase Flow 23, 967–976 (1997). 14. C. J. Voyce, A. D. Fitt, and T. M. Monro, “Mathematical Modeling as an Accurate Predictive Tool in Capillary and Microstructured Fiber Manufacture: The Effects of Preform Rotation,” J. Lightwave Technol. 26, 791–798 (2008). 15. C. J. Voyce, A. D. Fitt, J. R. Hayes, and T. M. Monro, “Mathematical Modeling of the Self-Pressurizing Mechanism for Microstructured Fiber Drawing,” J. Lightwave Technol. 27, 871–878 (2009). 16. R. Kostecki, H. Ebendorff-Heidepriem, S. C. Warren-Smith, and T. M. Monro, “Predicting the drawing conditions for microstructured optical fiber fabrication,” Optical Materials Express 4, 29–40 (2014). 17. L. Cummings and P. Howell, “On the evolution of non-axisymmetric viscous fibres with surface tension, inertia and gravity,” Journal of Fluid Mechanics 389, 361–389 (1999). 18. Schott Glass Company, Optical Glass (2014). 19. M. Trabelssi, H. Ebendorff-Heidepriem, K. C. Richardson, T. M. Monro, and P. F. Joseph, “Computational modeling of die swell of extruded glass preforms at high viscosity,” J. Am. Ceram. Soc. 97, 1572—1581 (2014). Introduction We report on a series of experiments drawing tubular glass fibres which have been performed to test the validity of a recent model of this fabrication process [1].The model of tubular fibres is part of a larger project to investigate the fabrication of microstructured optical fibres (or MOFs) with the techniques of mathematical modelling [1][2][3][4].MOFs are distinguished from solid optical fibres by the cross-sectional structure running along their length.The design of this cross-sectional structure, which acts to change the refractive index from that of the pure glass, gives the fibre certain optical and physical properties which are desirous in a range of applications (see, for instance, [5]).The fabrication process involves slowly feeding a preform of suitable geometry (typically 1-3 cm in diameter) into a heated region within a furnace and then stretching the softened glass to the dimensions of a fibre (typically external diameters of 120-250 µm and internal channel diameters in the order of the wavelength of light).Modelling is required to determine how the preform geometry deforms as it is drawn to fibre. 
Recent modelling by some of the current authors [1][2][3] has considerably advanced understanding of how the internal structures in MOFs deform during fabrication.In particular, it has been established that the competition between the stretching of the fibre and surface tension on the internal channels ultimately determines the shape and pattern of channels in the fibre that results from a given preform design and a choice of operating parameters [1].This is true for any preform design and sophisticated mathematical techniques have been developed to describe the geometry deformation for a class of optically important MOFs [2].An extra degree of control on the deformation of the geometry may be achieved by applying an overpressure to the internal channels as the fibre is drawn and this potentially allows a greater variety of internal fibre geometries to be achieved [3].Three-dimensional finite element simulations of fibre drawing have been performed by others for MOFs with up to 6 channels [6,7].Our approach of [1][2][3] is much more computationally efficient especially for MOF designs with many channels, which may not be feasible with 3D finite element simulations.MOFs of a specific class have recently been modelled with a focus on computational efficiency [8] by assuming that the channels are separated by thin walls, whereas our approach makes no such assumption and can therefore MOF treat designs more generally.The outer radius of the preform is denoted r 0 and the radius of the fibre is r L .The ratio between the inner and outer radii of the preform is ρ 0 and the ratio between the inner and outer radii of the fibre is ρ L . The annular tubes described in the current paper are the simplest fibre design containing internal structure and gaining understanding of the fabrication process for this simple case is a useful test of the modelling approach.This will serve as a guide to later experiments which will focus on more complex fibre designs with multiple channels to validate the more sophisticated modelling required for those cases [1,2]. Previous studies on tubular fibres have compared a model with experiments [9,10], but did not consider the role of fibre tension since the importance of this quantity has only recently been understood [1,11].Stability considerations in fibre-drawing theoretically restrict the choice of operating conditions [12,13], although the destabilising phenomenon of 'draw resonance' reported in the literature is not present in the current experiments.Further studies have investigated the role of additional effects via theory and experiments.Rapid rotation of the preform during the draw may be used to exert control over the geometry [14], for instance, and modelling has been used to investigate the utility of the practice of sealing the ends of the preform to prevent channel closure [15]. The models for drawing tubular fibres with and without active channel pressurisation (from [1] and [3], respectively) are given in Section 2. Details of the experimental materials, apparatus and procedures are given in Section 3. Section 4 summarises the results of the six experiments and compares this data with the model output, including discussions on the role of self-pressurisation and furnace temperature.Our conclusions are presented in Section 5. Mathematical model for drawing annular MOFs As shown in the schematic diagram in Fig. 
1, the ratio of the radii of the inner and outer boundaries in the preform is denoted ρ 0 and the ratio of these radii in the fibre is ρ L .The preform is fed into the heated neck-down region with feed speed U 0 and is drawn off at the end of this region with draw speed U L .The ratio of these two speeds D is the draw ratio and, by conservation of mass, this is also the ratio of the preform and fibre areas, so that quantity in predicting the change in the geometric structure between the preform and the fibre.Validating this relationship with experiments is the principle aim of this paper.The fibre tension σ is a measurable quantity on current fibre drawing equipment and we introduce a dimensionless fibre tension parameter T , for convenience of mathematical modelling; these are related via where γ is the surface tension.Similarly µ 0 is the harmonic mean of the viscosity over the length L of the neck-down region over which the change in geometry occurs, and is related to a dimensionless parameter M where The neck-down zone corresponds to a region near the peak of Kostecki's temperature profiles [16] where the temperature of the glass is such that its viscosity is low enough to be malleable.Note that the temperature profile of the glass differs from the furnace profile because of the imperfect transfer of radiative heat to a semi-transparent material like glass.The modelling work of [1] and [3] introduced additional parameters for mathematical convenience to describe the annular geometry of the preform and the fibre.These are derived from the ratio of the inner and outer boundaries and are given by In practice, we apply our model by starting with a ρ 0 value (known or measured from the preform) and convert this to α 0 via Eq.( 4) for the purposes of the model.The model output for a set of drawing parameters (namely D and T ) is then obtained in terms of α L , see sections 2.1 and 2.2 below, which may then be converted to the more easily interpretable quantity ρ L via a rearrangement of Eq. ( 4), namely An additional quantity of interest is the outer radius of the fibre r L , which may be written in terms of the other parameters as where r 0 is the outer radius of the preform, and S 0 = πr 2 0 1 − ρ 2 0 .The inner radius of the fibre is then ρ L r L . The model also determines M and so gives µ 0 the harmonic mean of viscosity in the glass from Eq. (3) for a given neck-down length L. This may then be used to calculate the temperature in the glass T glass ( • C) corresponding to µ 0 via a temperature-viscosity relation, provided this is known for a particular glass. Summary of model for drawing tubular fibres without active pressurisation We now briefly summarise some key results from [1] which, among general results, presents a complete analytic solution for the drawing of annular fibres (see section 4 of that work for full details).The modelling approach developed in that paper is extremely general and can, in fact, deal with the complex cross-sectional design of a MOF (which features many holes) not just the simplest possible holey structure of an annular fibre. 
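A minimal numerical sketch of the mass-conservation relations just described may be useful. It assumes only the area relations stated above (S = π r² (1 − ρ²) for an annulus and D = S_0/S_L); the first function is the forward relation for the fibre outer radius (the content of Eq. (6)), and the second is its inverse, anticipating the preform-taper correction used later in Section 4.1. All numerical values are invented for illustration and are not measurements from the experiments.

```python
import numpy as np

def fibre_outer_radius(r0, rho0, rhoL, D):
    """Fibre outer radius from conservation of glass cross-sectional area,
    S_L = S_0 / D with S = pi r^2 (1 - rho^2) for an annular geometry."""
    S0 = np.pi * r0 ** 2 * (1.0 - rho0 ** 2)
    return np.sqrt(S0 / (D * np.pi * (1.0 - rhoL ** 2)))

def preform_diameter(d_fibre, rho0, rhoL, D):
    """Inverse relation: preform outer diameter 2*r0 recovered from the measured
    fibre outer diameter and the measured diameter ratios."""
    return d_fibre * np.sqrt(D * (1.0 - rhoL ** 2) / (1.0 - rho0 ** 2))

# Illustrative numbers only.
r0, rho0, rhoL = 5.0e-3, 0.16, 0.12      # 10 mm tubular preform, some hole closure
D = 6000.0                               # draw ratio U_L / U_0
rL = fibre_outer_radius(r0, rho0, rhoL, D)
print(f"fibre outer diameter ~ {2e6 * rL:.0f} microns")
print(f"recovered preform diameter ~ {1e3 * preform_diameter(2 * rL, rho0, rhoL, D):.1f} mm")
```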
As stated above, for the purposes of this paper it is sufficient to consider the output of the model as simply the fibre geometry parameter α L for a given draw ratio D and dimensionless tension parameter T .It is straightforward to apply the model from [1] to evaluate α L in this way, as well as determine the additional parameter M for the viscosity of the glass (and the corresponding temperature in the glass).The cross-sectional area S and geometry parameter α at any position along the neck-down are related by Then at the fibre end of the neck-down, where α = α L and S = S L = S 0 /D, we obtain after a little rearrangement which is a cubic polynomial in (α L /α 0 ) 1/3 .It is straightforward to find the relevant root of Eq. ( 8).In practice, we find it more convenient to evaluate the roots of the cubic numerically (in MATLAB, for instance), but if desired α L may be expressed as a closed form solution, namely with An alternate expression for α L may be derived via a series expansion assuming, as in fibre drawing, that √ D >> 1, and then performing a perturbation expansion on Eq. (8).The first two terms of the resulting series are an excellent approximation to the relevant exact root of that equation and the resulting expression for α L is Once α L has been found it is possible to calculate the remaining parameter M , as given in [1] (note that M ≡ 1/(Mγ * ) in the notation of that paper, where M is the inverse harmonic mean of dimensionless glass viscosity over the neck down region).The relevant expression for that parameter is and this value for M is substituted into Eq.( 3) to obtain the harmonic mean of the dimensional glass viscosity µ 0 . Summary of model for drawing tubular fibres with active pressurisation As detailed in [3], including active pressurisation of the internal channels changes the mathematical structure of the fibre drawing model, as compared to the unpressurised model of [1].This means that for tubular fibres, there is no longer an exact solution of the type presented above, rather it is necessary to solve a system of differential equations to determine α L numerically. The applied pressure in the internal channel is denoted as p H .A dimensionless pressure parameter P is introduced and this is related to the dimensional pressure by Full details of the derivation of the following equations are given in [3], along with a complete discussion of various more mathematical aspects of the pressurised model.The fibre geometry parameter α L is found by the simultaneous solution of the differential equations where these equations are integrated forward in the independent variable τ from χ = 1 to χ = 1/ √ D, where χ = S/S 0 is the square root of the scaled cross-sectional area as it varies from preform to the fibre, such that at the top of the neck-down region χ = 1 (preform) and at the bottom of the neck-down region χ = 1/ √ D. The independent variable τ in the above equations is the 'reduced time' first introduced for fibre drawing by [17], which measures the time since the start of deformation of the cross-section (accounting for the scaling of the problem and a varying viscosity) from τ = 0 to τ = τ L , and is, therefore, also a measure of the distance travelled along the neck-down length from x = 0 to x = L.To determine the fibre geometry parameter α L we solve Eqs. 
( 14)-( 15) from χ = 1 to χ = 1/ √ D with the initial condition α(χ = 1) = α 0 , representing the preform geometry at the top of the neck down region.The final geometry is thus α(χ = 1/ √ D) = α L .When performing pressurised fibre drawing care must be taken not to apply too much pressure or the fibre may catastrophically explode.An approximate criterion on the pressurisation parameter P was proposed in [3] for fibre explosion (so this may be avoided), that explosion occurs if Similarly, there is a possibility that the central channel will deform in the drawing process to the point that it closes completely (typically this happens for low fibre tension).As described in [3], this hole closure occurs if As discussed in detail in [3], the explosion criterion in Eq. ( 16) is preferable to a similar criterion given in [10].Equation ( 16) predicts explosion for a higher value of pressure than that predicted by the criterion in [10], which in turn permits fibres with a larger ρ L (that is, with thinner walls) to be drawn with confidence.Similarly, the closure criterion in Eq. ( 17) is preferable to the version in [10].The modelling in [3] indicated that the Table 1.Summary of the dimensions of the preform and the operational parameter values used in the six experiments.Additionally, the surface tension parameter for F2 glass, which was used in all the above experiments, is γ = 0.23Nm −1 and the neck down length was approximately L = 0.03m.The numbers in brackets which follow the ranges for T furnace , U draw and p H indicate the number of incremental steps taken to vary these operational parameters between the stated values during the course of a given experiment. Preform dimensions Operational parameters critical value of pressure given by Eq. ( 17) corresponds to a fibre with a very small diameter internal channel; crucially the channel is not fully closed at this predicted pressure value.The criterion in [10], however, predicts a smaller value of pressure and when used in the model of [3] this pressure was insufficient to prevent hole closure. 
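Numerically, the pressurised case amounts to a two-variable initial value problem integrated from the preform end (χ = 1) to the fibre end (χ = 1/√D). Since Eqs. (14)-(15) themselves are not reproduced above, the right-hand side in the sketch below is a stand-in supplied by the caller; only the integration scaffolding reflects the procedure described in the text.

```python
import numpy as np
from scipy.integrate import solve_ivp

def draw_with_pressure(alpha0, D, rhs, rtol=1e-8):
    """Integrate the pressurised drawing model from the preform (chi = 1) to the
    fibre (chi = 1/sqrt(D)) and return (alpha_L, tau_L).
    `rhs(chi, y)` must return [d(alpha)/d(chi), d(tau)/d(chi)] as given by
    Eqs. (14)-(15) of the paper; a placeholder is used here."""
    sol = solve_ivp(rhs, (1.0, 1.0 / np.sqrt(D)), [alpha0, 0.0], rtol=rtol)
    return sol.y[0, -1], sol.y[1, -1]

# Placeholder right-hand side purely for demonstration; it is NOT the model of [3].
demo_rhs = lambda chi, y: [0.05 * y[0] / chi, -1.0 / chi]
print(draw_with_pressure(alpha0=0.3, D=2500.0, rhs=demo_rhs))
```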
Experimental materials and procedure The experiments were performed with F2 glass, a commercially available lead-silicate soft glass by the Schott Glass Company [18].This glass is ideal for prototyping MOF designs and validating models of fabrication since it is an excellent, less expensive analogue for pure silica glass, which is more commonly used in MOF fabrication.The two glasses have near identical surface tension and display a similar rate of change in viscosity during heating and cooling.The surface tension for F2 glass is γ = 0.23Nm −1 [1].A Vogel-Fulcher-Tammann temperatureviscosity relation for F2 glass is given in [1], and is Using a similar setup to that described in [16], we have measured the temperature profile inside the furnace used for these F2 glass drawing experiments.These measurements demonstrated that T glass is approximately 200-300 • C lower than the furnace temperature.Six tubular preforms were manufactured by extruding F2 glass through a suitably designed die.Two die designs were used, one where the ratio between the inner and outer die boundaries was 0.2 (used for experiments 1, 2, 5 and 6) and another where this ratio was 0.6 (used for experiments 3 and 4).The resulting tubular preforms have a ratio of inner and outer diameters ρ 0 slightly less than that of the die, due to the effects of die swell [19], gravity stretching and surface tension as the softened glass is extruded and cools; this is similar to the deformation seen in fibre drawing, and modelling of the stretching that occurs during the extrusion process is ongoing, see for instance [4].Additionally, the gravity stretching and other non-uniformities during the extrusion process result in preforms which vary slightly in outer diameter along their length, and we describe how to account for this variation in Section 4.1.Once cooled, a preform is cut into sections approximately 18 cm long and it is these shorter preforms which are drawn to fibre. Fibre drawing was performed on the 4 m soft glass drawing tower at the Institute of Photonics and Advanced Sensing (IPAS) at the University of Adelaide.This tower has the ability to measure quantities such as fibre diameter and fibre tension in real-time; the latter is crucial in validating the modelling approach.The preforms were held in a chuck connected to a hollow feed tube allowing the preform to be slowly lowered into the furnace.The top of the feed tube, and therefore the top of the preform, was open to the atmosphere.Each of the six preforms were drawn to 200-300 m of fibre and the operational parameters used in each experiment, namely the feed speed U feed , the draw speed U draw , the furnace temperature T furnace and the active pressurisation p H , are given in Table 1.In each experiment one of these operational parameters was systematically varied to determine its effect on the properties of the resulting fibre, while the other parameters were fixed.In 1 the minimum and maximum of the parameter that varied in each experiment is given.The number of incremental steps taken to vary a parameter between these minimum and maximum values is given in brackets after the relevant range in Table 1.For instance, in experiment 1 the furnace temperature was varied from 940 • C to 880 • C in six increments.After each change of an operational parameter the fibre was drawn for at least five minutes so it had reached a steady state. 
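The glass temperature inferred later from the model is obtained by inverting the F2 temperature-viscosity relation: the drawing model returns the harmonic-mean viscosity μ_0, and the VFT form is then solved for T_glass. The coefficients below are placeholders with a plausible shape, not the F2 values of Eq. (18), which are given in [1] and not reproduced above; the printed number is therefore only illustrative.

```python
import numpy as np

# Generic Vogel-Fulcher-Tammann form: log10(mu / Pa s) = A + B / (T - T0).
A, B, T0 = -2.3, 4000.0, 150.0          # hypothetical coefficients, T in degrees C

def viscosity(T_glass):
    """Viscosity [Pa s] at glass temperature T_glass [deg C] from the VFT form."""
    return 10.0 ** (A + B / (T_glass - T0))

def glass_temperature(mu0):
    """Invert the VFT relation to recover T_glass [deg C] from the harmonic-mean
    viscosity mu0 [Pa s] returned by the drawing model."""
    return T0 + B / (np.log10(mu0) - A)

mu0 = 1.0e4                              # e.g. a model-determined harmonic-mean viscosity [Pa s]
print(f"T_glass ~ {glass_temperature(mu0):.0f} deg C")
```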
For each experiment, as the draw progressed the lengths of the fibre corresponding to a given choice of parameters within each experiment were divided into separate bands as they were wound round the drum.Five to nine samples were taken from within each band of fibre and the inner and outer diameters of each sample were measured using an optical microscope, from which the fibre diameter ratio ρ L was calculated.The values given in Section 4.2 below are an average of these measurements.The measurement error of ρ L is approximately 5%, as indicated by the error bars on each data point in the plots in Section 4.2.Additionally, the measurement error on the inner and outer fibre diameters is ±2 microns, the error on the tension is ±2 g, the error on U draw is ±0.1 m/min and the error on the applied pressure is ±10 Pa. Experimental results and model validation The properties of the six preforms, as well as the parameters used for each experiment are summarised in Table 1.Experiments 1 and 2 involved drawing preforms with a geometry of ρ 0 = 0.16 over a range of furnace temperatures.Experiments 3 and 4 were also drawn over a range of furnace temperatures, but used preforms with a larger internal channel; their geometries were ρ 0 = 0.515 and ρ 0 = 0.514, respectively.In these four experiments, as the furnace temperature (and therefore glass temperature) is decreased the tension in the fibre increases.The effect of varying fibre tension σ on the fibre diameter ratio is the key point of interest in this study, since an important implication of the modelling work [1,3] is that fibre tension determines the geometry deformation between preform and fibre.Knowledge of fibre tension, which is measurable during the draw on current equipment, obviates the need to know about glass temperature and neck-down length which are impossible to measure during a fibre draw. Experiment 5 used a preform of similar internal geometry to the first two experiments, with ρ 0 = 0.168, but varied the draw speed U d while keeping the furnace temperature fixed.Experiment 6 involved drawing a preform with ρ 0 = 0.17 over a range of active pressurisations.This last experiment aims to validate the model for annular fibres drawn under active pressurisation [3], as summarised in Section 2.2. Accounting for preform taper As described in section 3, each preform was produced by extruding softened glass through a purpose-designed die.Extruded preforms typically exhibit a taper in their outer diameter since the softened glass emerging from the die exit is stretched by the weight of the already extruded glass due to gravity.This variation in each of the six preforms, as measured prior to Fig. 2. The preform outer diameter 2r 0 shows taper and was calculated for the six experiments using Eq. ( 19). Figure 2(a) shows the calculated preform outer diameter versus fibre tension for experiments 1 (black), 2 (red), 3 (blue) and 4 (green).Figure 2(b) shows the calculated preform outer diameter versus draw speed for experiment 5. Figure 2(c) shows the calculated preform outer diameter versus applied pressure for experiment 6. the experiments, is indicated by the range of values given for the preform diameter 2r 0 in Table 1.The first number in the stated range corresponds to the part of the preform lowered into the furnace first.For all but experiment 3 the thickest part of the preform was drawn to fibre first. In the model the diameter of the fibre is very sensitive to the diameter of the preform, see Eq. 
( 6).Thus, to compare the model with an experimental result it is necessary to use in the model the diameter of that part of the preform that gave rise to the portion of the fibre from which the experimental result was obtained.In practice, it is not possible to know the exact preform diameter that corresponds with a given point along the length of the fibre, meaning that it is necessary to somehow establish the appropriate values of 2r 0 and S 0 that are associated with each experimental result.As described below, for each fibre cross-section of interest we use measurements of the fibre together with the draw ratio to calculate the diameter 2r 0 of the corresponding preform cross-section. The ratio of inner and outer boundaries of the tube ρ 0 is constant within the measurement error along the length of the preform, even as the preform outer boundary varies in radius; note that the pieces cut from the same extruded preform have identical ρ 0 (experiments 1 and 2) or vary by a small amount (less than 2% for experiments 3 and 4).Since S 0 = πr 2 0 1 − ρ 2 0 , S L = πr 2 L 1 − ρ 2 L and D = S 0 /S L , the preform diameter is where (as stated) the fibre radius r L and diameter ratio ρ L have been measured and the draw ratio D is known, as is the preform diameter ratio ρ 0 .The computed preform diameters 2r 0 for each experimental cross-section are displayed in Fig. 2. Figure 2(a) is for the four experiments in which the fibre tension was varied by changing the temperature, while all other draw parameters were fixed, so that the preform diameter is plotted against the tension used to draw that part of the preform into fibre.Similarly Fig. 2(b) is for the experiment in which the draw speed was varied and Fig. 2(c) is for that in which the pressure was varied.As can be seen in Figs.2(a)-(c), the variation in radius of the preforms is, at most, less than 0.3mm.Although this is a relatively small variation in the preform it is important to take this into account when comparing the model output to the experiments, since fibre radius, as modelled by Eq. ( 6), is highly sensitive to small variations in the preform.Note that in Eqs. ( 2), ( 3), ( 6) and ( 13) the computed preform area S 0 = DS L = Dπr 2 L 1 − ρ 2 L should be used. Comparison of fibre diameter and geometry with model predictions The properties of the measured fibres for the six experiments are shown against the model predictions in Figs 3-6, where each set of figures for the various experiments compare the measured outer diameter and fibre diameter ratio with the modelling.Experiments 1-5 are modelled as outlined in Section 2.1 and experiment 6, where a range of active pressurisations are applied to the preform, is modelled by solving the differential equations given in Section 2.2.Recall that a large value of the fibre diameter ratio ρ L corresponds to a fibre with thin walls and conversely a small value of ρ L corresponds to a fibre with thick walls.The fibre measurements for experiments 1 and 2 are shown in Fig. 
3.There is an excellent match between the predicted and observed outer diameter for both these experiments; within the measurement errors, all the measured outer diameters agree with the outer diameters predicted by our model.For experiment 1 the fibre diameter ratio ρ L is consistently underestimated by the model, although the trend of the model as tension varies is in line with the experimental observations.The model predictions for ρ L are in excellent agreement with the measured values for experiment 2, where the predicted values all lie within the measurement error of the observations. Note that for the four largest fibre tensions shown in Fig. 3(c) the fibre diameter ratio ρ L obtained in the experiment is within measurement error of the preform diameter ratio ρ 0 .This strongly implies that it is not surface tension alone acting to deform the geometry, since surface tension may only shrink the diameter ratio between preform and fibre.This suggests that another physical effect is present in the experiment but not accounted for in the model.Pressurisation is a likely candidate to explain the observed expansion of the geometry, and this will be discussed in more detail in Section 4.3. The fibre measurements for experiments 3 and 4, which used preforms with a larger ρ 0 , are shown in Fig. 4. The model underestimates the outer diameter and ρ L for both these experiments, with the model predictions falling well outside the measurement error.As in experiment 1 there is a systematic discrepancy between the observations and the model, with the outer diameter approximately 10µm smaller than the observations and ρ L exhibiting the correct (upward) trend as tension is increased.Again, this apparently systematic discrepancy suggests that an effect is missing from the model.There is a larger difference here than in experiment 1 and this would be consistent with an induced pressurisation effect, since such an effect would have a relatively more severe impact on a larger diameter internal channel where the radius of curvature is larger and therefore the effect of surface tension is weaker. The preform for experiment 5, which varied draw speed, was of similar dimensions to those used in experiments 1 and 2. The fibre measurements for this experiment are compared to the model in Fig. 5, where these results are shown against draw speed.There is excellent agreement between the model and the outer diameter measurements, but ρ L is consistently underestimated by the model.This discrepancy in ρ L is extremely similar to the results of experiment 1 (Fig. 3c).Note that the differences between the model predictions and measurements of the fibre outer diameter in Fig. 5(a) are actually of a similar size to those in Fig. 3(a), but are obscured here due to the much larger range of diameters on the vertical axis.Fibre tension was measured throughout the draw and increased with draw speed, as shown in Fig. 5(c). The fibre measurements for the pressurised experiment 6 are shown in Fig. 
6. The draw was initially run without any active pressurisation, before the three successively larger pressures were applied. There is generally good agreement between all the measurements and the predicted values, especially given that ρ_L varies over a much larger range here than in the previous experiments. A few points fall outside the measurement error, again with some apparently systematic underestimation of ρ_L, but the trend as the pressure is increased is in clear agreement with the predictions of the model. The measured fibre tension increases slightly as the pressurisation is increased, as shown in Fig. 6(c).

Evidence of self-pressurisation

In several of the above model comparisons, in particular for experiments 1, 3, 4 and 5, the model underestimates the measured fibre diameter ratio ρ_L; see Figs. 3(c), 4(c), 4(d) and 5(b), respectively. The consistent discrepancy between the modelling and the observations suggests that there is a physical effect present in the experiments that is not currently accounted for in the model. As noted above, a few of the observed ρ_L values are larger than the preform geometry ρ_0, which is not possible if the deformation is due to surface tension alone, since that effect acts to close the holes. A likely candidate is an induced pressurisation due to the change in geometry as the fibre is drawn, similar to the 'self-pressurisation' effect described by Voyce et al. [15]. In that work, the preform was sealed to the atmosphere and a pressure was induced due to the change in the total volume of air enclosed in the internal channel as the fibre is drawn. Note that for our experiments the tubular preforms were open to the atmosphere. The magnitude of pressure required to exactly match the data is determined by applying the pressurised annular fibre drawing model of [3], as given in Section 2.2. This model is used in conjunction with an iterative bisection scheme to determine the exact value of pressurisation p_H required to match the observations at each choice of operational parameters. This calculated pressure is found for each observation in the six experiments and the results are shown in Fig. 7. As shown in Figs. 7(a)-(d), in experiments 1-4, where furnace temperature (and therefore fibre tension) was varied, the magnitude of the calculated pressure required to match the observations increases with fibre tension. Similarly, in experiment 5 the calculated pressure increases with draw speed; as shown in Fig. 5(c), fibre tension increases with draw speed, so this trend is consistent with the experiments which explicitly varied fibre tension. These calculated pressures are strong evidence that the accuracy of the model would be much improved by including a self-pressurisation effect. Establishing how this effect varies with the operational parameters requires further investigation and modelling. One hypothesis, suggested by Fig. 7, is that the magnitude of the self-pressurisation is proportional to the fibre tension.

Fig. 8. A microscope image showing the cross-section of a fibre from a recent experiment (left) and a schematic of the preform from which it was drawn (right). Note that the relative sizes of the holes in the fibre are larger than those in the preform. Since this fibre was produced from an unpressurised draw, this is further evidence of self-pressurisation.
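The two calculations just described (recovering the preform diameter associated with each fibre cross-section, and the bisection search for the pressurisation p_H that makes the model reproduce an observation) lend themselves to a short numerical sketch. The Python below is only an illustration: the names preform_diameter, fit_pressure and toy_model are hypothetical, the numerical values are placeholders rather than data from these experiments, and model_rho is a stand-in for the pressurised annular drawing model of [3].

```python
import math

def preform_diameter(r_L, rho_L, rho_0, D):
    """Preform outer diameter 2*r_0 for a fibre cross-section, using
    S_0 = pi*r_0^2*(1 - rho_0^2), S_L = pi*r_L^2*(1 - rho_L^2), D = S_0/S_L."""
    return 2.0 * r_L * math.sqrt(D * (1.0 - rho_L**2) / (1.0 - rho_0**2))

def fit_pressure(rho_obs, model_rho, p_lo=0.0, p_hi=5000.0, tol=1e-3, max_iter=100):
    """Bisection search for the channel pressurisation p_H (Pa) at which the drawing
    model reproduces an observed fibre diameter ratio rho_obs.

    `model_rho(p)` stands in for the pressurised drawing model: it should return the
    predicted rho_L at pressurisation p with all other draw parameters fixed, and is
    assumed to increase monotonically with p.
    """
    f_lo = model_rho(p_lo) - rho_obs
    f_hi = model_rho(p_hi) - rho_obs
    if f_lo * f_hi > 0:
        raise ValueError("rho_obs is not bracketed by [p_lo, p_hi]")
    for _ in range(max_iter):
        p_mid = 0.5 * (p_lo + p_hi)
        f_mid = model_rho(p_mid) - rho_obs
        if abs(f_mid) < tol:
            return p_mid
        if f_lo * f_mid < 0:
            p_hi = p_mid
        else:
            p_lo, f_lo = p_mid, f_mid
    return 0.5 * (p_lo + p_hi)

# Illustrative values only (not measurements from these experiments):
print(preform_diameter(r_L=80e-6, rho_L=0.40, rho_0=0.45, D=3600))  # ~9.8e-3 m

# Toy stand-in for the drawing model: here rho_L simply grows linearly with pressure.
toy_model = lambda p: 0.30 + 1e-4 * p
print(fit_pressure(rho_obs=0.45, model_rho=toy_model))              # ~1500 Pa
```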
The calculated pressures for Experiment 6, which was actively pressurised, are shown in Fig. 7(f) as the difference between the applied and calculated pressures for each observation.Rather than being evidence of self-pressurisation, this may indicate that there is a discrepancy between the pressure as measured by the drawing tower and the pressure actually applied to the preform.This is not unexpected since the draw tower measures pressure in a hose some distance away from the preform.For p H = 0-3000 Pa there is approximately a 300 Pa difference, which is similar to the pressure offset suggested by other authors [16]. Self-pressurisation is also observed experimentally in drawing MOFs with more complicated internal geometries.One example from a recent experiment is shown in Fig. 8. Relationship between glass and furnace temperature The glass temperature T glass for each experiment is calculated via Eq.(18).The difference between the glass and furnace temperature is expressed as a percentage of furnace temperature for the six experiments in Fig. 9.For each experiment this percentage difference is almost constant as the operational parameter is varied.For experiments 1, 2, 5 and 6 this percentage is 25-27%, and for experiments 3 and 4, where the tubular preforms with larger holes were used, the percentage is 30-31%.This is strong evidence that the difference between the glass and furnace temperatures is best accounted for with a multiplicative factor based on the preform geometry, rather than a subtractive offset determined by operational parameters as has been assumed previously [16].The small amount of variation in the percentage offset throughout the course of each experiment is at least partly due to preform taper, which alters the thermal mass of the neck-down as the draw progresses.Fig. 9. Difference between measured furnace temperature and calculated glass temperature expressed as a percentage of furnace temperature for each experimental point.Note that for experiment 5 the relationship between draw speed and tension is given in Fig. 5(c) and for experiment 6 the relationship between pressurisation and tension is given in Fig. 6(c). Conclusions Six tubular preforms were drawn to fibre under a range of operational conditions.The comparison between the resulting fibres and the modelling predictions is extremely promising, despite the modelling consistently underestimating the observed geometry in several of the experiments.This discrepancy is revealing since the model predictions display the correct trend as fibre tension, for instance, is varied, thus suggesting that an important effect has been neglected from the modelling.A likely candidate for this effect is self-pressurisation which would, if present, account for the larger than predicted inner channel.Additionally, the experiment where the preform was actively pressurised showed good agreement with the modelling, although it appears that the apparatus which measures pressure in the draw tower may not be consistent with the pressure that is actually applied to the preform.The magnitude of self-pressurisation for each observation was calculated and it was shown that self-pressurisation increases with tension or draw speed throughout each of the experiments.We hypothesise that the mechanism by which this occurs involves the magnitude of the pressure increasing with fibre tension. 
There is evidence that the percentage difference between the glass temperature and the furnace temperature is constant, whereas previously the absolute difference between furnace and glass temperature was assumed to be constant. For the preforms with a larger diameter ratio this percentage was 30-31% and for the preforms with a smaller diameter ratio it was 25-27%.

Future work will include further modelling and experiments to investigate self-pressurisation. In particular this will focus on determining how this effect manifests in more complicated cross-sectional geometries where many channels are present.

Fig. 1. A schematic diagram of an annular preform (left) and a microscope image of a drawn tubular fibre (right). The outer radius of the preform is denoted r_0 and the radius of the fibre is r_L. The ratio between the inner and outer radii of the preform is ρ_0 and the ratio between the inner and outer radii of the fibre is ρ_L.

Fig. 3. Comparison between experimental measurements (crosses) and model output (circles) for experiments 1 and 2. Figures 3(a) and (c) show the outer diameter and the fibre diameter ratio, respectively, versus tension for experiment 1. Figures 3(b) and (d) show the outer diameter and fibre diameter ratio for experiment 2. The dashed lines in Figs. 3(c) and (d) represent the preform diameter ratio.

Fig. 4. Comparison between experimental measurements (crosses) and model output (circles) for experiments 3 and 4. Figures 4(a) and (c) show the outer diameter and the fibre diameter ratio, respectively, versus fibre tension for experiment 3. Figures 4(b) and (d) show the outer diameter and fibre diameter ratio for experiment 4. The dashed lines in Figs. 4(c) and (d) represent the preform diameter ratio.

Fig. 5. Comparison between experimental measurements (crosses) and model output (circles) for experiment 5. Figures 5(a)-(b) show the outer diameter and the fibre diameter ratio, respectively, versus draw speed, with the preform diameter ratio shown as a dashed line in Fig. 5(b). Figure 5(c) shows the relationship between draw speed and the measured fibre tension. Note that the measurement error (not shown) on U_draw is ±0.1 m/min.

Fig. 6. Comparison between experimental measurements (crosses) and model output (circles) for experiment 6. Figures 6(a)-(b) show the outer diameter and the fibre diameter ratio, respectively, versus the applied channel pressurisation, with the preform diameter ratio shown as a dashed line in Fig. 6(b). Figure 6(c) shows the relationship between pressurisation and the measured fibre tension. Note that the measurement error (not shown) on the pressurisation is ±10 Pa.

Fig. 7. Calculated pressurisation required for the model to exactly match the experimental data. Figures 7(a)-(d) are for experiments 1-4 and show the calculated pressure versus fibre tension. Figure 7(e) shows the calculated pressure for experiment 5 versus draw speed. Figure 7(f) shows the difference between the calculated pressure and the applied pressure for experiment 6 versus the applied channel pressurisation.
9,311.4
2016-01-01T00:00:00.000
[ "Physics" ]
A preference analysis and justification of Arabic written corrective feedback among instructors and undergraduates There has been extensive discussion on the need to use corrective feedback in writing within foreign language learning. Essentially, corrective feedback is one of the important tools in improving students’ skills in learning a language. This study aims to find out the preference and justification of written corrective feedback (WCF) through the use of Google Docs among instructors and students in a higher learning institute. The effects of the direct and indirect feedback with metalinguistic comments were also studied to determine their suitability in teaching and learning the Arabic language. Quantitative and qualitative data were collected to (1) identify the preferred type of feedback among instructors and students, (2) identify justification of the preferred feedback type, and (3) examine post-test score differences between types of written correction feedback. Two questionnaires were adapted and distributed to 93 first-year students and four instructors of Arabic language for Academic Writing. Two instructors and five students were interviewed to find out their justification of the preferred types of WCF. A total of 50 respondents were divided into two groups according to the type of WCF provided, and post-test scores between the types of feedback were compared to determine if there was any significant difference between the types of feedback. The findings show that instructors prefer indirect WCF with metalinguistic comments while students prefer direct corrective feedback with metalinguistic comments. Post-test scores indicate that higher scores were achieved by students who received indirect feedback with metalinguistic comments. This indicates that students are able to process indirect feedback that is supplemented with metalinguistic comments. Moreover, an online learning environment provides more opportunities for instructors to highlight the students’ errors more clearly. INTRODUCTION Written correction feedback refers to the teacher's reaction to the students' errors by informing them the error so that it can be corrected and not repeated in subsequent writing (Van Beuningen, 2010). Most teachers and students agree that producing a good medium for delivering corrective feedback is still relatively new in Arabic language teaching and learning. The idea of using online technology in the delivery of corrective feedback is an active initiative to draw students' attention to their mistakes in writing and that the role of feedback is to overcome these weaknesses. In general, WCF plays an important role in the formation of students' metalinguistic awareness through their attentiveness to restricted information (Bitchener & Ferris, 2012;Sato & Loewen, 2018). According to Heift and Hegelheimer (2017) and AbuSeileek and Abualsha'r (2014), students who receive corrective feedback through the use of computers while writing receive better results than those who do not receive it, as well as to learn from any mistakes they make. However, Bodnar, et al. (2017) found that not all types of computergenerated corrective feedback had positive effect on students' writing development. Hence, pedagogical use of technology in the delivery of corrective feedback is one of the issues that educators and teaching designers need to address in order to build meaningful student communicative interactions (Heift & Hegelheimer, 2017). 
The use of technology alone does not guarantee that every learning outcome planned would be achieved, but thorough planning is a must to ensure that students gain benefit from the feedback provided. Therefore, there is a need to study the technique of delivering computermediated corrective feedback that can help to improve students' writing skills. Although many studies have examined the effectiveness of computer-assisted corrective feedback (Tafazoli et al., 2014), not many studies have looked at the role of online learning as a platform for delivering corrective feedback in foreign language classrooms. Written corrective feedback in foreign language learning There is a long discussion about the need to use corrective feedback in learning foreign language writing. Truscott (1999), for example, criticized the ability of feedback in improving students' writing in which he described it as wasting teachers' and students' time. In addition, it has been claimed that corrective feedback does not help but hindering the development of students' writing skills (Daneshvar & Rahimi, 2014;Laurel & Mostafa, 2017). These statements have received negative criticism. Many studies have shown positive effects of corrective feedback in foreign language learning (Afitska, 2015;Van Beuningen et al., 2012). Ferris (1999), in a study, found that WCF is related to students' motivation that they have become independent to correct their own errors in their writing. Ferris and Roberts (2001) also claimed that corrective feedback has a positive effect on second language learners' writing and this is supported by Biber et al. (2011) where they found that students' accuracy, content and form of writing are improved as the results of corrective feedback given by teachers. Therefore, WCF is essential for second and foreign language learners to become proficient in the language they are learning. These conflicting findings has led to several other studies that seek more certainty on the role of WCF in second or foreign language learning. Among them is a study conducted by Amrhein and Nassaji (2010) which found that students prefer direct feedback while teachers prefer indirect feedback. One of the reasons students prefer direct feedback is because they have no knowledge of the principle of error correction used by teachers (Norouzian & Farahani, 2012). Based on these studies, there are several factors that lead to differences in findings regarding the effectiveness of WCF. Among others, students do not fully understand the feedback given (Razali, 2014), students only pay attention to the type of feedback they like (Schulz, 2001), student limited language skills and the scope of feedback given (Kang & Han, 2015). Related studies on technology-assisted corrective feedback Studies have been conducted on technology-assisted corrective feedback. The findings of these studies provide a positive indication of the effectiveness of corrective feedback that is delivered online. These include helping to develop students' writing skills (Duff & Li, 2009), improving communication skills through writing (Lee, 2005), reducing the psychological stress of students who do not like to receive face-to-face feedback (Vinagre & Munoz, 2011). There are also studies done in comparing technology-assisted corrective feedback to the traditional corrective feedback practices. 
It has been found that technology-assisted corrective feedback is more effective in helping students to identify mistakes in writing, and it encourages the habit of reviewing writing and improving their writing skills collaboratively (Fuente, 2016;Hosseini, 2012). The use of Google Doc is also seen as a potential platform to provide collaborative WCF. Various functions available in Google Doc, such as chat and word editing, can systematically aid the development of student writing skills (Diez-Bedmar & Perez-Paredes, 2012). A study was conducted by Hosseini (2012) through experiments on the use of written feedback using online annotators among English as a Foreign Language learners. The purpose of this experiment was to find out the effectiveness of technology-assisted correction feedback and feedback provided on paper. The results of this experiment showed that groups using the online system could identify more writing errors than groups that did not use the system. AbuSeileek and Abualsha'r (2014) conducted a study that focused on the use of functions found in Microsoft Word 2010 to give feedback on EFL learners, and found that the use of computer-assisted corrective feedback has a positive effect on students' achievement in the written test. However, the types of feedback preferred by instructors and students, and the impact of its use on foreign language teaching and learning through Google Doc require further study. The studies mentioned above examined the effect of technology-assisted corrective feedback on ESL and EFL learners, while there have been very few studies done on technology-assisted WCF among Arabic as Foreign Language learners. Among them is a study conducted by Abd Hamid et al. (2014) that studied the extent to which peer feedback through LMS can be used to support the pedagogical approach used by instructors. They found that there was an increase in the quality of students' writing, as well as a correlation between the number of words in the feedback given to the quality of subsequent writing. However, this study only examined feedback provided by peers, and not the teachers' corrective feedback on students' writing. While the above studies have highlighted the importance of corrective feedback, either traditionally or technology-assisted, in developing students' writing skills when learning a second or foreign language, very few studies have been conducted to identify the types of online corrective feedback among teachers and students, as well as their justification for such choices when teaching and learning a foreign language such as Arabic. Knowledge of their choices of feedback and its justifications is essential to designing the best approach to address students' weaknesses in Arabic writing. In addition, the knowledge of the effect of online corrective feedback on writing test scores is also important to determine the best pedagogy in learning Arabic writing. Thus, the objectives of this study are as follows: 1. To identify the types of written corrective feedback that instructors prefer in teaching Arabic writing using Google Doc. 2. To find out the instructors' justification for the preferred type of written corrective feedback used in teaching Arabic writing through Google Doc. 3. To identify the preferred types of written corrective feedback among students who learn Arabic writing using Google Doc. 4. To understand students' justifications for the preferred type of corrective feedback using Google Doc. 5. 
To examine post-test score differences between types of written correction feedback (direct corrective feedback with metalinguistic comments and indirect corrective feedback with metalinguistic comments). METHODS Participants Questionnaires were distributed to all instructors who teach the Arabic language through a blended learning approach (face-to-face and online) at an international Islamic university in Malaysia. It was also distributed to students who specialized in Arabic language to learn their views on the use of online WCF. The questionnaires were distributed to 93 registered first year students and four (4) instructors who taught the students in that particular semester. All instructors filled out the questionnaire with a response rate of 100%. Meanwhile, the students' response rate was 94.6% where 88 out of 93 students completed the questionnaire. All students are between the ages of 21 to 24 years old of which 70% are female and 30% are male. Of the four instructors who participated in this study, two are native speakers of Arabic while two are non-native speakers of Arabic. Two instructors and five students were selected to be interviewed in regard to justifications of the preferred types of WCF. The instructors selected taught Arabic language for Academic Writing during the study period, while the students were the first-year students, with three female students, and two male students. In order to determine whether there were differences in posttest scores based on the type of written correction feedback that students and instructors prefer, two groups were randomly selected and divided as shown in Table 1. Both types of feedback were selected based on the type of correction feedback that instructors and students preferred the most. Google Doc is used to provide feedback to the students. The procedure using post-test is depicted in Figure 1 and Figure 2. Research design This study employed mixed-method design which allows qualitative and quantitative data to be collected in order to answer the research questions set for this study. There were two sets of questionnaires developed for the purpose of data collection for Objective 1 and Objective 3. The first set of questionnaire was developed for instructors and the other set was developed for the students. The items in both questionnaires are the same, with the exceptions that the items were written to suit the respondents' role (instructors vs. students). Likert scale was used in the questionnaires. The questionnaires were adapted from studies done by Amrhein and Nassaji (2010), and Sayyar and Zamanian (2015) with several additional questions to answer the objectives set. Prior to being distributed to the respondents of the study, the questionnaires were distributed to three Arabic language experts and design instructors to validate the items contained therein. Some corrections were made based on the feedback from the expert, and a pilot study was conducted where 25 students who took Arabic language (excluding the actual respondents) answered the questionnaire. The pilot study was to obtain Cronbach's Alpha Reliability coefficient, where it came back as satisfactory with a value of 0.79. To answer Objective 2 and Objective 4, oneon-one interviews were conducted with two instructors and five students. The purpose of the interviews is to find out their justifications for type of WCF preferred and the role of Google Doc in connecting communication between instructors and students. 
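As a small illustration of the reliability check mentioned above, the sketch below shows one common way to compute a Cronbach's alpha coefficient from a respondents-by-items matrix of Likert responses. The data here are randomly generated placeholders, not the pilot study's responses, and the function is a generic implementation rather than the instrument used in this study.

```python
import numpy as np

def cronbach_alpha(responses):
    """Cronbach's alpha for a (respondents x items) matrix of Likert responses.

    alpha = k/(k-1) * (1 - sum of item variances / variance of total scores).
    """
    responses = np.asarray(responses, dtype=float)
    k = responses.shape[1]
    item_vars = responses.var(axis=0, ddof=1)
    total_var = responses.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Placeholder pilot data: 25 respondents answering 12 five-point Likert items.
rng = np.random.default_rng(42)
base = rng.integers(1, 6, size=(25, 1))
pilot = np.clip(base + rng.integers(-1, 2, size=(25, 12)), 1, 5)
print(round(cronbach_alpha(pilot), 2))
```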
The knowledge gained could help to design appropriate and effective pedagogical approaches in the delivery of WCF. As for Objective 5, a post-test was developed on Arabic language writing skills to assess the students' achievement. This writing test is divided into four sections: 1) content and sequence, 2) vocabulary, 3) grammar, 4) spelling. The test was reviewed and validated by three Arabic language experts. A scoring rubric was adapted from Jacobs et al. (1981) to determine the writing test score. 25 Arabic language learners involved with the pilot study, where 10 of them were interviewed to find out the clarity of the test instructions, and they said that the test was easy to understand and not confusing. At the end of the study period, the test was taken by the participants of this study. The correlation coefficients of post-test reliability for the elements of originality, consistency, flexibility and reliability of the instrument were 0.82, 0.79, 0.80 and 0.77 respectively at p < .005 levels. Based on the results of the above study, the Arabic language writing skill test is deemed appropriate and reliable to obtain a stable score from the respondents of the study. Data analysis In answering Objective 1 and Objective 3, the data obtained through Likert scale questions in which respondents were required to rate the type of corrective feedback on a scale of 1 to 5 depending on the benefit of its use. In this context, '1' means that corrective feedback is least beneficial to the students' writing skills, whereas '5' means that respondents think corrective feedback is very helpful for their writing. The mean and standard deviation were calculated to determine the value of each item. The answers to Objective 2 and Objective 4 were obtained through qualitative thematic analysis using justification provided by instructors and students. For the first coding stage, the independent responses from the participants were used as the initial code. The purpose of this is to develop the researcher's understanding of the justification given by the participants. As for research question 5, the data was analyzed using one-way ANOVA to see the difference in post-test scores based on the type of WCF provided to the students. RQ1 -The type of written corrective feedback (WCF) that instructors prefer in teaching Arabic language writing using Google Doc The findings show that instructors prefer indirect WCF over direct WCF. Table 2 explains that instructors are more likely to choose indirect WCF with metalinguistic comments when providing feedback to students. RQ2 -Justification for the preferred WCF type among the instructors who teach Arabic language using Google Doc One of the justifications given in selecting type of WCF is the students' metacognition. The instructors interviewed mentioned that WCF "help students think and understand" and "help students to talk about mistakes with friends" hence WCF has been one of the approaches they practise to help students to improve their writing skills in Arabic language. Instructors in this study assert that student-centred learning is their primary focus, and their experience makes them believe that "indirect feedback is more effective" in giving feedback. The workloads and assignments are also the reasons why instructors prefer indirect feedback. This is because direct feedback requires a lot of time and high focus on students' writing. 
Comments such as "time constraints", "busy with administrative tasks" and "large numbers of students" are among the reasons why direct feedback is not preferred by the instructors. Nevertheless, the instructors' preference of feedback type is also influenced by the level of Arabic language proficiency among the students. Comments from the instructors such as "choosing how to give feedback depends on students' proficiency" and "not all students can understand indirect feedback" give the impression that the choice of feedback type depends on the students' level of Arabic language proficiency. It also shows that the instructors' level of confidence in the students' proficiency affects the type of feedback used. The instructors' justifications for choosing the type of WCF are as shown in Table 3.

Table 4 shows that students prefer direct WCF combined with metalinguistic comments. The findings suggest that students perceive direct feedback with metalinguistic comments as helping them to improve their writing skills in the Arabic language.

Students' justification in selecting WCF via Google Doc

The students' preferred feedback type differs from the type of feedback that the instructors prefer. Some of the justifications given by the students for preferring direct feedback with metalinguistic comments are "I am informed of all my mistakes in writing", "I want to know my mistakes and the type of mistakes I made", "We need the instructor's guidance to correct all mistakes" and "I am weak in Arabic writing". These statements show that students lack self-confidence when it comes to learning the Arabic language. One of the student-participants mentioned that "I don't know what I did wrong in writing", which supports the view that the students' limited ability in the Arabic language makes them prefer teacher-centred lessons, as they value the instructors' feedback to improve their assignments. Moreover, indirect feedback with metalinguistic comments encourages them to reflect on the errors that they have committed, but it requires a higher level of language ability and increases the amount of work they have to complete. The students' justifications for the preferred WCF type are as shown in Table 5.

Differences in post-test scores between direct WCF with metalinguistic comments and indirect WCF with metalinguistic comments

Table 6 shows the difference in mean scores between Group 1 (indirect feedback with metalinguistic comments) and Group 2 (direct feedback with metalinguistic comments). The mean score for Group 1 is 81.92, while Group 2's mean score is 71.84. Meanwhile, the one-way ANOVA results in Table 7 show that the difference in writing scores between the two types of feedback was statistically significant [F = 46.353, p < .05]. The post-test scores indicate that students who received indirect feedback with metalinguistic comments achieved higher scores than students who received direct feedback with metalinguistic comments.

DISCUSSION

Among the objectives of this study were to identify the preferred type of WCF among instructors and the preferred type of WCF among students in the context of the Arabic language writing classroom. Although this study was conducted on non-native speakers of Arabic who study writing in the Arabic language, the findings are consistent with those of previous studies conducted on students learning the English language.
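For readers who want to reproduce this kind of group comparison, the sketch below shows how a one-way ANOVA between two feedback groups could be run with SciPy. The score arrays are placeholders standing in for the post-test results, not the study's data, and with only two groups the F test is equivalent to an independent-samples t test (F = t²).

```python
import numpy as np
from scipy import stats

# Placeholder post-test scores (NOT the study's data):
# Group 1 received indirect WCF with metalinguistic comments,
# Group 2 received direct WCF with metalinguistic comments.
group1 = np.array([84, 80, 79, 85, 82, 81, 83, 78, 84, 83])
group2 = np.array([73, 70, 74, 71, 69, 72, 75, 70, 73, 71])

print("Group 1 mean:", group1.mean(), "Group 2 mean:", group2.mean())

# One-way ANOVA on the two groups.
f_stat, p_value = stats.f_oneway(group1, group2)
print(f"F = {f_stat:.3f}, p = {p_value:.4f}")
```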
Many previous studies have found that teachers give high value to indirect feedback that includes metalinguistic feedback (Eslami, 2014;Simard et al., 2015). However, students have different perceptions of indirect feedback. They felt that indirect feedback does not help them in improving the quality of their writing in Arabic language. This part will discuss the findings of this study in line with past studies on WCF. Direct feedback vs indirect feedback The findings confirm that instructors' and students' perceptions and justifications for direct and indirect feedback are different. Instructors prefer indirect feedback with metalinguistic comments while students prefer direct feedback with metalinguistic comments. Based on the justification of instructors and students, it is believed that they have different reasons for choosing different types of feedback. Students' preference Although students prefer feedback being provided electronically rather than face-to-face (Chen, 2016), their level of Arabic language proficiency makes them prefer direct feedback with metalinguistic comments. The use of technology enhances students' motivation to learn independently and actively (Helen, 2013), however, their limited language ability causes them to think that indirect WCF does not help them in completing the tasks given by their instructors. Online learning is also considered as fun learning compensation; however, the abundance of tasks will increase their workloads (Nur Agung et al., 2020). As a result, they value the accuracy and speed of the feedback given. Razali (2014) found that students are very concerned with the grammatical accuracy of their assignments, and that their work should be error-free. The studentparticipants of this present study are very concerned that their assignments are full of errors, and this would cause them receive low grades for the assignments. This is the reason why they would want feedback that is fast and easy to understand so that they can rectify the errors easily. This is in line with Hyland's (1998) claim that students prefer the easier option of relying on their teachers' feedback in achieving better grades. On the other hand, indirect feedback that gives clues without the correction does not help them to improve their writing. Moreover, indirect feedback with metalinguistic comments also requires students to be more active in their learning and encourages reflection on the mistakes made (Hamel et al., 2016). This results in increased workload and demands for higher level of Arabic Language ability for such reflection. This situation causes students to prefer direct feedback over indirect feedback. Instructors' preference Compared to the students, instructors have different perceptions about the types of feedback that they need to provide. Most instructors find that direct feedback with metalinguistic comments takes up longer time. This indicates that the strategy for selecting the type of written feedback depends on the instructors' workload. Therefore, the findings show that instructors value students' autonomy and expect them to play an active role in correcting their own mistakes. Instructors also view indirect WCF leads to self-correction that can benefit and help students to remember mistakes made (Amrhein & Nassaji, 2010). Likewise, metalinguistic approaches contribute to long-term metacognitive development and language acquisition (Ebadi, 2014). 
As a result, students' preference of direct WCF contradicts instructors' preference of indirect WCF, which requires students to work harder, and also promotes students' learning autonomy. Moreover, all of the instructors mentioned that the best form of feedback depends on the context in which the feedback is given. Not only do they strive for student-centred learning, they would also need to consider students' motivation and the students' level of Arabic language proficiency, which would determine how far the feedback given could benefit the students in their learning. Because of these reasons, some instructors give feedback based on what they think the students would want, although this is not always the case. Furthermore, the instructors need to ensure that the type of errors made by students be stated clearly although they do not prefer direct WCF. WCF through Google Doc platform This study also aimed to examine whether there were differences in post-Arabic writing test scores between direct WCF with metalinguistic comments and indirect WCF with metalinguistic comments. Both types of feedback were provided using the Google Doc application as a learning platform. Students who received indirect written feedback with metalinguistic comments through Google Doc achieved higher scores than students who received direct written feedback with metalinguistic through Google Doc. The results of this study confirm previous studies which identify the effects of WCF through the use of technology (Seyyeedrezaie et al., 2017;Tabasi et al., 2013). Razali (2014) claims that students who received direct feedback may be able to correct the errors in the revised writing, but they may not be able to do self-correction in the new, subsequent writing due to the fact that direct feedback does not help students to think critically of the errors they commit. Razali further assert that students who received direct feedback may not understand the nature of the errors, hence have the tendency to repeat the same errors. This may be the case for the student participants of this present study where the students who received direct feedback were not able to critically analyse the errors they commit earlier, hence preventing them from producing writings that are error free. The results of this study are due to two factors, namely 1) the ability of indirect corrective feedback with metalinguistic comments to improve the quality of students' writing; and 2) an online learning environment that provides an opportunity for instructors to highlight more clearly the students' errors in their writing as well as giving comments to the students' writing. The combination of effective type of feedback, and the utilization of Google Doc application contribute in improving the quality of students' writing (Seyyeedrezaie et al., 2017). Indirect WCF with metalinguistic comments could help students to understand the errors they made (Ferris et al., 2000), while features that are available in writing collaboration applications, such as those in Google Doc, can play a role in facilitating and speeding up feedback, and this could lead to building knowledge on different dimensions (Salomon et al., 2003). It also promotes interactive language learning activities (Al-Olimat & AbuSeileek, 2015) and supports to improve students' achievement (AbuSeileek & Abu Sa'aleek, 2012). 
Although this study found that the achievement scores of the group receiving indirect feedback were better than those of the group receiving direct feedback, this is not in line with the findings of Varnosfadrani and Basturkmen (2009), who found that the achievement scores of the group which received direct feedback were better than those of the group which received indirect feedback. The difference may lie in the teaching methods used. The study by Varnosfadrani and Basturkmen (2009) used a limited face-to-face teaching mode to provide feedback, while the current study used the Google Doc application, which is accessible anytime and anywhere. The easy-to-use features of Google Doc give instructors an opportunity to interact more with their students outside of the classroom. In addition, the Arabic language differs from English in terms of language structure and grammar, which may result in different types of feedback being given.

This study is not without its limitations. First of all, it must be acknowledged that the number of participants is small; hence the findings of this study do not reflect all contexts of learning the Arabic language via Google Doc. Moreover, there are other factors that could not be controlled by the researcher, such as the social interaction that the students may have had during the study. During the study period, the students may have communicated with students from the control group, or with other people who were not part of this study. They may have learnt from each other, and this may have affected the results of the post-test. Therefore, future research looking into the use of Google Doc as a means of delivering WCF within the context of teaching and learning Arabic as a second and/or foreign language needs to address these issues so that more reliable results and findings can be obtained. Hence, this study proposes that blended-mode learning should be used to ensure that feedback can be given more effectively. It is also suggested that feedback, be it direct or indirect, should be accompanied by oral feedback or a student-teacher conference so that the students understand the nature of the errors they have committed (Razali, 2014), hence helping them to learn the language better.

CONCLUSION

This study has highlighted the importance of WCF in improving the quality of students' writing in the Arabic language. The knowledge gained from this study, i.e. the difference between the types of WCF preferred by instructors and students, provides more ideas for formulating appropriate approaches to teaching the Arabic language to non-native speakers. The findings show that technology also plays a role in facilitating feedback. The use of Google Doc is seen as a means of enhancing interaction between instructors and students in improving Arabic writing. The findings also show that there are significant differences in post-test scores between the group receiving direct feedback with metalinguistic comments and the group receiving indirect feedback with metalinguistic comments. The findings of this study may have pedagogical implications for Arabic language writing instructors as they choose the types of written corrective feedback (WCF) to be used in their teaching.
It is advisable that instructors avoid a one-size-fits-all approach, as the choice of WCF determines whether they act as a provider of the correct form or as an initiator who offers help through feedback without giving the correct form directly to the students.
6,650
2021-01-31T00:00:00.000
[ "Education", "Linguistics" ]
NGCN: Drug‐target interaction prediction by integrating information and feature learning from heterogeneous network Abstract Drug‐target interaction (DTI) prediction is essential for new drug design and development. Constructing heterogeneous network based on diverse information about drugs, proteins and diseases provides new opportunities for DTI prediction. However, the inherent complexity, high dimensionality and noise of such a network prevent us from taking full advantage of these network characteristics. This article proposes a novel method, NGCN, to predict drug‐target interactions from an integrated heterogeneous network, from which to extract relevant biological properties and association information while maintaining the topology information. It focuses on learning the topology representation of drugs and targets to improve the performance of DTI prediction. Unlike traditional methods, it focuses on learning the low‐dimensional topology representation of drugs and targets via graph‐based convolutional neural network. NGCN achieves substantial performance improvements over other state‐of‐the‐art methods, such as a nearly 1.0% increase in AUPR value. Moreover, we verify the robustness of NGCN through benchmark tests, and the experimental results demonstrate it is an extensible framework capable of combining heterogeneous information for DTI prediction. • The approach using molecular docking requires a known 3D structure of proteins, whereas the complex structures of known protein ligands are scarce and generally unavailable. • The approach by ligand similarity employs the knowledge of known ligand interactions to make predictions.Nevertheless, if the target has insufficient ligands, the results may be poor. • Machine learning is the most popular and effective approach at present, which can fully explore the relevant characteristics of drugs and the potential drug-target interactions. In recent years, many machine learning-based methods have been proposed to predict potential DTIs.They mainly consist of the kernel method, matrix decomposition and multi-source information integration. According to chemical and genomic information, Yamanishi et al. 6 used nuclear regression for DTI prediction and constructed a BLM model using bipartite graphs.Van Laarhoven et al. 7 defined a gaussian interactive section core depending on the topological characteristics of the adjacency matrix and then used the kernel least squares (KRLS) algorithm to predict DTIs.Pahikkala et al. 8 also employed the Kronecker regularized least squares (KRLS) algorithm, but they utilised the drug characterization based on 2D compound similarity and the Smith-Waterman similarity characterization of the target.The kernel-based methods only employ simple linear combinations, relying on several individual kernels to generate the final kernel matrix.This may be inappropriate if the linearity between the kernels is not obvious. Matrix factorization is also widely used for DTI prediction.The dual-nucleated Bayesian matrix decomposition (KBMF2K) proposed by Gonen et al. 9 maps target proteins and drug compounds into the subspace of Bayesian by estimating the interaction network and using similarity in the subspace.Hao et al. 
10 established a drug-target prediction model called DNILMF based on logistic matrix factorization. This model constructs two new kernel matrices, performs nonlinear diffusion between these two matrices and the two original similarity matrices, and predicts drug-target interactions by gathering neighbour information. Ding et al. 11 proposed a multiple kernel-based triple collaborative matrix factorization (MK-TCMF) method to predict DTIs. The multi-kernel learning (MKL) algorithm can adjust the weight of each kernel matrix according to the prediction error. The aforementioned methods utilise direct drug-target associations. This is challenging because the known information about the interactions is often incomplete.

With the rapid development of bioinformatics, various drug, protein, gene and other types of data have also been adopted for DTI prediction. Wan et al. 12 constructed a large integrated network by combining data from multiple heterogeneous networks, captured the topological characteristics of the integrated network by using neighbourhood aggregation technology 13 and reconstructed the topological representations of all relational matrices. Yu et al. 14 developed an ensemble model (KenDTI) based on both the biochemical characteristics of drugs via network integration and molecular sequences via word embedding to predict DTIs. Shao et al. 15 regarded DTI prediction as a link prediction problem and proposed an end-to-end model based on heterogeneous graphs with attention mechanisms (DTI-HETA). Fu et al. 16 proposed a multi-view graph convolutional network (MVGCN) framework for link prediction in biological networks, combining similarity networks to build a multi-view heterogeneous network and obtain node attributes. In addition, a Neighbourhood Information Aggregation (NIA) layer was designed for inter- and intra-domain information updating. Ren et al. 17 utilised a self-supervised learning framework (MGPDR) to obtain the embedded representations of the drugs and targets. The performance of network prediction tasks on large-scale graph data has been significantly improved 18 owing to the application of graph neural networks. 19 In multi-source data processing, it is usually easy to concatenate the features of different data sources. Therefore, how to make full use of the contributions of data from varied sources and fuse them efficiently is the key to improving DTI prediction accuracy.

Motivated by the recent success of deep learning techniques in learning powerful representations from complex data, [20][21][22][23] Zhang et al. 24 introduced related datasets for DTI prediction. In addition to the previously mentioned self-supervised learning framework MGPDR introduced by Ren et al., 17 Chu et al. 25 proposed the model HGRL-DTA, a novel approach for drug-target binding affinity prediction through hierarchical graph representation learning. By incorporating both global affinity relationships and local chemical structures of drug/target molecules, and utilising message broadcasting strategies, the model can synergistically integrate hierarchical information. The heterogeneous graph automatic meta-path learning-based DTI prediction method (HampDTI), proposed by Wang et al., 26 employed a node-type-specific graph convolutional network (NSGCN) to learn the embeddings of drugs and targets using meta-paths learned from a heterogeneous graph. The embeddings from multiple meta-path graphs are combined to predict new DTIs.
The advantage of deep learning methods is their ability to identify hidden interactions between drugs and targets. However, they still have room for improvement in the following two aspects: (1) the aim of DTI prediction is to discover new DTIs, and how to select truly interaction-free drug-target pairs is a thorny issue; (2) heterogeneous networks are complex, high-dimensional and noisy, which makes it difficult to take full advantage of them. To address these issues, the proposed NGCN integrates multiple drug- and target-related networks and reduces the feature information of each drug or target to a low-dimensional feature representation. Based on these low-dimensional feature vectors, a spectral graph convolutional network (GCN) is further applied to learn the drug or target features and avoid the inaccuracy caused by the noise and incompleteness of large-scale biological data. We compare NGCN with other methods to demonstrate its effectiveness and gradually increase the number of networks to prove the integration capability of NGCN. The results demonstrate that NGCN is promising for drug-target interaction prediction.

PRELIMINARIES

Network-fusion-based drug-target interaction prediction aims to conduct the prediction task by jointly utilising different views to exploit their complementarity. Recently, there have been significant efforts towards integrating heterogeneous information from multiple networks. They can be roughly divided into two types of processes:

• Gather multiple networks to build a large integrated network and extract information for prediction.
• Extract feature information from each network and then fuse them for similarity or correlation prediction.

It is difficult to distinguish the discrepancies between different networks while constructing large integrated networks. And if the number of integrated networks is too large, computations on such a network will become challenging due to the increasing network complexity.

Extracting information from each network and making fusion predictions is the primary approach for drug-target interaction prediction. The process is mainly composed of three steps: (1) extracting drug or protein information from each network; (2) feature fusion and dimensionality reduction; and (3) correlation prediction or drug repositioning prediction based on the extracted feature information.

Information extraction on a single network is the key step in network fusion. Common feature extraction consists of matrix decomposition and random walk with restart (RWR). The former usually decomposes the incidence matrix into two eigenvectors and minimises the loss of vector reconstruction. However, this strategy might lead to information loss and fail to capture the global characteristics of the incidence matrix.

As for RWR, a pre-defined restart probability is introduced into the random walk to identify the direct or indirect relationships between the nodes of a network. Suppose A and D are the adjacency matrix and the diagonal degree matrix, respectively, with $D_{i,i} = \sum_{j=1}^{n} A_{i,j}$; the one-step probability transition matrix $\hat{A} = D^{-1}A$ can then be obtained by normalising the adjacency matrix.
Next, we introduce a t-step RWR vector $r^t$, where $r_i^t$ denotes the probability of visiting node i after t transition steps. Let $r_i^0$ be the n-dimensional initial one-hot vector. An RWR process is defined as

$r_i^{t+1} = (1 - p)\,\hat{A}^{\top} r_i^{t} + p\, r_i^{0},$   (2)

where p represents the restart probability, and its value controls the balance between the global and local structural characteristics of the network. By iteratively executing the above process, we can get the diffusion state $r_i$ of the node, which is a high-level representation of the structural characteristics in the network. Given two nodes in a network, if they share similar diffusion states, these two nodes have similar neighbourhood characteristics in the network. 27

METHOD

The diffusion state is inaccurate, partially because the network data set in the experiment is noisy and incomplete. Luo et al. 27 improved the diffusion component analysis method (DCA) 28 and proposed clusDCA for dimension reduction in the form of an efficient matrix decomposition. It is combined in our proposed model, NGCN, herein.

NGCN first conducts the RWR process on each drug or protein within each similarity network to acquire the distribution of each drug or protein node, termed the diffusion state. The diffusion state captures its topological relationship with all other nodes in the heterogeneous network. Subsequently, the improved clusDCA algorithm is employed to compute the low-dimensional representation of the nodes. Leveraging the learned low-dimensional features of drugs and proteins (where each row in the low-dimensional drug features represents the feature vector of a drug and each column in the low-dimensional protein features is the feature vector of a protein), NGCN executes spectral graph convolution to further refine the features of drugs and proteins. Finally, the drug-target matrix is reconstructed to identify unknown drug-target interactions. Details of the NGCN model are depicted in Figure 1.

Diffusion state of nodes by RWR

Our network data consist of homogeneous interaction networks, such as the PPI network, and heterogeneous interaction networks, such as the protein-disease association network. For the input homogeneous interaction networks (e.g. drug-drug interaction networks), we compute the "diffusion state" of each drug or target by directly running the RWR algorithm on each of these networks. As for heterogeneous interaction networks, we need to build similarity networks (e.g. to build a protein-protein similarity network through the protein-disease association network) and then run the RWR process on these derived similarity networks to obtain the diffusion states of drugs or proteins. Overall, we construct similarity networks for drugs based on (i) drug-drug interactions, (ii) drug-disease associations and (iii) drug-side-effect associations. In a similar way, we construct similarity networks for proteins based on (i) protein-protein interactions and (ii) protein-disease associations.

Further, we can use the Jaccard similarity coefficient to calculate the similarity between drugs, which is based on the common neighbours and the union of the sets of all neighbours of the two drugs. Given two nodes i and j, their similarity within a heterogeneous network is defined as

$S(i,j) = \dfrac{|N(i) \cap N(j)|}{|N(i) \cup N(j)|},$

where N(i) denotes the set of neighbours of node i. Then the diffusion state of each network can be obtained by running the RWR process on each similarity network, as described in Eq. (2).
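The two ingredients described above, the Jaccard similarity network and the RWR diffusion states, can be sketched in a few lines of NumPy. This is only a minimal illustration under assumed settings: the function names, the restart probability of 0.5 and the toy association matrix are placeholders, not NGCN's actual configuration.

```python
import numpy as np

def jaccard_similarity(assoc):
    """Pairwise Jaccard similarity between rows of a binary association matrix
    (e.g. drugs x diseases): |N(i) & N(j)| / |N(i) | N(j)|."""
    assoc = np.asarray(assoc, dtype=float)
    inter = assoc @ assoc.T                         # sizes of pairwise intersections
    sizes = assoc.sum(axis=1)
    union = sizes[:, None] + sizes[None, :] - inter
    return np.divide(inter, union, out=np.zeros_like(inter), where=union > 0)

def rwr_diffusion_states(A, restart_p=0.5, tol=1e-8, max_iter=1000):
    """Diffusion state of every node via r^{t+1} = (1-p) * W^T r^t + p * e_i,
    run simultaneously for all one-hot restart vectors."""
    A = np.asarray(A, dtype=float)
    W = A / np.maximum(A.sum(axis=1, keepdims=True), 1e-12)   # row-normalised transitions
    n = A.shape[0]
    R = np.eye(n)                                   # column i holds the diffusion state of node i
    E = np.eye(n)
    for _ in range(max_iter):
        R_new = (1 - restart_p) * W.T @ R + restart_p * E
        if np.abs(R_new - R).max() < tol:
            return R_new
        R = R_new
    return R

# Toy usage: a drug similarity network built from a random drug-disease association matrix.
rng = np.random.default_rng(3)
S_drug = jaccard_similarity(rng.integers(0, 2, size=(20, 15)))
diff_states = rwr_diffusion_states(S_drug, restart_p=0.5)
print(diff_states.shape, diff_states.sum(axis=0)[:3])  # each column sums to ~1
```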
Performing feature reduction and feature extraction

Owing to data quality and dimensionality issues, the diffusion states of drugs and targets produced by RWR may be error-prone. In particular, in the case of the integration of multiple networks, it is often inconvenient to use the topological features directly because of the high dimensionality of the diffusion states. To address these problems and obtain the important topological feature information about nodes from the diffusion state, we adopt a diffusion component analysis method (clusDCA 29) to perform feature reduction on the diffusion-state features. Given node i, we model the probability assigned to node j in the diffusion state of node i as follows:

$\hat{s}_{ij} = \dfrac{\exp\left(w_j^{\top} x_i\right)}{\sum_{j'} \exp\left(w_{j'}^{\top} x_i\right)},$   (3)

where $x_i$ and $w_j$ are the low-dimensional node and context feature vectors, respectively.

Figure 1. NGCN uses the drug-protein association network, protein-protein association network, drug-drug interaction network, drug-disease association network, protein-disease association network and drug-side-effect network. We first obtain the diffusion state matrix of each network through the RWR algorithm (i.e. a distribution for each drug or protein node that captures its topological relations to all other nodes in the heterogeneous network). The improved clusDCA algorithm is then used to calculate the low-dimensional representation of the nodes. We add a spectral GCN to update the node features before reconstructing the drug-target matrix. NGCN effectively learns topology-preserving node features that are useful for predicting drug-target interactions by enforcing the reconstruction of the original individual networks. Finally, the updated node properties are used to reconstruct the drug-target matrix.

In order to reduce the feature dimension more quickly and conveniently, clusDCA achieves a rapid decomposition of the diffusion state via matrix decomposition. By modifying the formula, the objective can be rewritten in a form that is amenable to matrix factorisation. To optimise the objective function, we use singular value decomposition (SVD) in this process. Let L represent the logarithmic diffusion state matrix of the network. We define the SVD of the matrix L as

$L = U \Sigma V^{\top},$

where $U, \Sigma, V \in \mathbb{R}^{n \times n}$. Let the low-dimensional feature matrix be $X \in \mathbb{R}^{n \times d}$. In terms of the SVD, we calculate X as

$X = U_d \Sigma_d^{0.5},$

where $U_d$ represents the first d singular vectors and $\Sigma_d^{0.5}$ is the 0.5 power of the first d singular values.

To integrate heterogeneous network data, the DCA of the above single network needs to be extended to the multi-network case. More specifically, let $L = \{L_1, \ldots, L_K\}$ denote the set of logarithmic diffusion state matrices obtained from the diffusion states $R_c = \{S_1, \ldots, S_K\}$ of the K input networks. Then the following objective function needs to be optimised:

$\min_{x_i,\, w_j^r} \; \sum_{r=1}^{K} \sum_{i,j} \left( L^r_{ij} - x_i^{\top} w_j^r \right)^2,$

where $w_j^r$ represents the network-specific feature of each node j in network r, and the node feature $x_i$ is shared among all K networks. The above objective function can also be optimised by SVD.

Updating feature information

Although we have obtained the low-dimensional representations of the drug and target nodes, the node features need to be further updated owing to the noisy and uncertain biological information. Here, we use a spectral graph-based convolutional neural network to update the features.
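Before moving on to the GCN update, the SVD-based reduction step just described can be illustrated with a short NumPy sketch. This is a simplified stand-in rather than the exact clusDCA procedure: the log offset eps and the square-root scaling of the singular values are illustrative choices, and the toy diffusion-state matrix is random.

```python
import numpy as np

def reduce_diffusion_states(S, d, eps=1e-12):
    """Low-dimensional node features from a diffusion-state matrix S (n x n).

    Takes the log-transformed diffusion states and keeps the top-d singular
    directions, X = U_d * Sigma_d**0.5 (node features) and W = V_d * Sigma_d**0.5
    (context features).
    """
    L = np.log(S + eps)                    # logarithmic diffusion state matrix
    U, sigma, Vt = np.linalg.svd(L, full_matrices=False)
    X = U[:, :d] * np.sqrt(sigma[:d])      # n x d node features
    W = Vt[:d, :].T * np.sqrt(sigma[:d])   # n x d context features
    return X, W

# Toy usage: random row-stochastic "diffusion states" for 50 nodes, reduced to 10 dimensions.
rng = np.random.default_rng(0)
S = rng.random((50, 50))
S /= S.sum(axis=1, keepdims=True)
X, W = reduce_diffusion_states(S, d=10)
print(X.shape, W.shape)
```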
Given the node feature X (u) , u ∈ {drug, protein}, we update the features from each X (u) through spectral graph convolution to obtain a new representation of X (u) .For the similarity network of u ∈ {drug, protein}, we specify Ã(u) = A (u) + I N and diagonal matrix D(u) where We then apply spectral convolution to obtain a new representation of nodes feature H (u) : where , à = A + I N means the adjacency matrix combining self-connection, ( ⋅ ) represents a non-linear function like ReLU or sigmoid, and W (u) is a weight matrix.Therefore, the new representation H drug of the drugs can be obtained through the drug similarity matrix A (drug) and the drug feature X (drug) , and the new representation H protein of the protein can be obtained in the same way. | Reconstructing drug-target matrix According to the obtained drug and target characteristics, we need to reconstruct the drug-target matrix for the purpose of prediction. Topology-preserving learning of the node embedding 12 is a proved good way to reconstruct the drug-target prediction matrix.Given n drug nodes and m protein nodes, the reconstructed DTIs matrix can be expressed as: where D r ∈ R d×n , P r ∈ R d×m are specific mapping matrices of drug and protein, m and n represent the number of drugs and proteins, respectively, and r means a protein interaction. The above equation states that the values of the edge mapping of the drug features and the target features through the mapping functions D r and P r can be reconstructed by doing the inner product of the mapped vectors.Natarajan and Dhillon et al. 28 also used similar reconstruction strategies to solve the prediction problem.In the training process, the summation of the squared reconstruction errors of all edges is minimised by learning unknown parameters.So, given a drug-target edge weight vector Y, we define the reconstruction loss of the edge weight value as: (5) By minimising the final objective function, gradient descent training can be carried out. | Pseudocode of NGCN The pseudocode for NGCN is provided in Algorithm 1 below. | Dataset In the whole training process, the dataset of our experiment is the same as that used by Luo et al. 27 There are four types of nodes in the dataset including drug nodes, protein nodes, disease nodes and side effect nodes.There was no exception; those isolated nodes were excluded. The dataset includes two kinds of similarity network and six types of association networks.The latter consists of drug-protein association network, 30 protein-protein association network, 31 drug-drug interaction network, 30 drug-disease network 32 and protein-disease association network 32 and drug-side effect network. 33These networks can be used to construct corresponding similarity networks with respect to proteins and drugs.Among them, the former is generated by the similarity of the gene sequence of proteins, and the latter is constructed by the similarity of the medical chemical structure. 
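A compact sketch of the two components just described, the spectral graph-convolution update and the bilinear reconstruction of the drug-target matrix, is given below; the symmetric normalisation follows the standard spectral GCN form, while the weight and mapping matrix shapes are illustrative assumptions rather than the dimensions used in NGCN.

```python
import numpy as np

def gcn_layer(A, X, W, activation=np.tanh):
    """One spectral graph-convolution step: H = sigma(D^-1/2 (A+I) D^-1/2 X W)."""
    n = A.shape[0]
    A_tilde = A + np.eye(n)                       # add self-connections
    d = A_tilde.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return activation(D_inv_sqrt @ A_tilde @ D_inv_sqrt @ X @ W)

def reconstruct_dti(H_drug, H_protein, D_r, P_r):
    """Bilinear reconstruction: each entry of the predicted drug-target matrix
    is the inner product of the mapped drug and protein vectors."""
    return (H_drug @ D_r) @ (H_protein @ P_r).T   # shape: n_drugs x n_proteins

# Illustrative shapes only.
rng = np.random.default_rng(0)
n_drugs, n_prot, dim, k = 6, 9, 16, 8
A_drug = (rng.random((n_drugs, n_drugs)) > 0.6).astype(float)
A_drug = np.maximum(A_drug, A_drug.T)
A_prot = (rng.random((n_prot, n_prot)) > 0.6).astype(float)
A_prot = np.maximum(A_prot, A_prot.T)

H_drug = gcn_layer(A_drug, rng.standard_normal((n_drugs, dim)),
                   rng.standard_normal((dim, k)))
H_prot = gcn_layer(A_prot, rng.standard_normal((n_prot, dim)),
                   rng.standard_normal((dim, k)))
Y_rec = reconstruct_dti(H_drug, H_prot,
                        rng.standard_normal((k, k)), rng.standard_normal((k, k)))
```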
| Superiority in DTI prediction A drug-target pair with a interaction is considered a positive sample, and a drug-target pair with an unknown interaction is generally viewed as a negative sample.To measure the performance of NGCN in predicting DTIs, we first performed 10-fold crossvalidation on all positive pairs and a set of randomly sampled negative pairs, whose number was 10 times as many as that of positive samples.This scenario basically stimulated the practical situation in which the DTIs are sparsely labelled.For each fold, a randomly chosen subset of 90% positive and negative pairs was used as training data to construct the heterogeneous networks and then train the parameters of NGCN, and the remaining 10% positive and negative pairs were held out as the test set. We compared NGCN with six baseline methods, including NeoDTI, 12 DTINet, 27 BLMNII, 34 MOLIERE, 35 NetLapRLS 36 and HNM. 37Two evaluation indicators including AUPR (the area under the precision-recall curve) and AUROC (the area under the receiver operating characteristic curve) were used to measure performance. In Figure 2, we can observe that NGCN has better performance than other methods, which is higher than the best method.In addition to known DTI data, the chemical structure, protein sequence information and other properties of drugs and targets can also be determined through their various functional roles in biological systems, such as protein-protein interactions and drug-disease associations.By integrating disparate information from heterogeneous data sources, methods such as DTINet, NeoDTI and HNM can further improve the accuracy of DTI predictions.However, there are still some limitations to these approaches that need to be addressed. For example, HNM method only considers three different types of data to make relationship prediction, thus discarding a lot of valuable information.In addition, methods such as BLMNII and MOLIERE only take relatively simple forms (such as bilinear linear or log-linear functions), which may not be sufficient to capture complex hidden features behind heterogeneous data.The reason for NGCN's excellent performance lies in its initial utilization of RWR to compute the diffusion state of nodes for each network, followed by its integration with clusDCA for dimensionality reduction operations.In this manner, the noise in the data is substantially reduced.To verify the performance of NGCN under sparse positive samples, we changed the number of samples and specified the proportion 1:10 for positive and negative examples.It is observed that the performance of all other algorithms decreased.In contrast, NGCN still achieved the best prediction performance.This shows that even in the case of sparse labelling, the prediction performance of other methods is still inferior to the NGCN method.In addition, we performed statistical significance tests at the 95% confidence level on the results of the NGCN and NeoDTI (the best performance method in the comparison experiment) using 10-fold cross-validation.The results show that the observed differences between the two methods are statistically significant. Since the data may be redundant, for example, there are multiple homologous proteins for one protein or multiple highly similar drugs for one drug in the dataset, which may negatively affect the performance.Therefore, we applied the same strategy as Luo et al. 
to reduce the impact of data redundancy by removing drug-target associations of similar drugs or targets in the drug-target interaction matrix.We eliminated drug-target associations in which the Jaccard similarity in the association network was greater than 0.6, the structure similarity score in a medicinal chemical similarity network exceeds 0.6, and the identity score in the protein-protein sequence similarity network exceeds 0.4. In the experiment, we kept the ratio 1:1 for negative and positive samples.As expected, after the deletion of similarity, NGCN performance declined but was still superior to other baseline methods. | Effects of NGCN components In this paper, we propose a multi-network integration algorithm, termed as NGCN and apply it on drug-target interactions prediction using GCN model.We use GCN to aggregate neighbourhood features to further improve the availability of features.The spectralbased graph convolution network (GCN) method introduces filters from the perspective of graph signal processing to define graph convolution, where the graph convolution operation is interpreted as removing noise from the graph information.In order to evaluate the performance of GCN part, we implemented a multi-networks integration framework without updating features (i.e.use the spectralbased graph convolutional neural network for updating features), to evaluate the effects of the proposed NGCN.We compared our method, NGCN, with these various approaches to validate the effects of the feature updating operation, and the experimental results are reported in Table 1.The results show that the feature updating operation of our proposed NGCN algorithm demonstrates substantial superiority on the task of predicting drug-target interactions. | Robustness In the experiment, we mainly evaluated the influence of parameters and the robustness of NGCN.The robustness of NGCN was tested by changing the number of networks related to the drugs or target, the feature dimension and the hyperparameters of NGCN.All experimental results were obtained by adopting 10-fold cross-validation. We start from examining the effects from aggregating multiple heterogeneous networks on the predicted results.We only used drug-protein association matrices (i.e.drug similarity network, drugdrug association network, protein-protein association networks, protein similarity network and drug-protein association network) to conduct performance evaluation.Through training, we observed that the prediction performance was significantly reduced compared to the original model, NGCN, which obtained the features from all networks.We also increased the number of networks associated with disease and side-effects.Under expectation, it is observed that the prediction performance could be improved by adding drug-and protein-related networks.Experiments show that aggregating heterogeneous information in the networks generated by multiple data sources is able to improve the prediction accuracy.Furthermore, we applied NGCN to predict drug-target interactions under different feature dimension conditions and compared the AUPR values of the predicted results.According to the experiment of Wang et al., 29 the dimension of the feature vector in the diffusion state dimension of 10%-20% achieved the best results.We expanded the scope of the study to 10% to 30%, and we set the drug dimension to 80, 110, 140, 170, 200 and protein dimension to 200, 250, 300, 350 and 400. 
From these observations, the feature dimension had little impact on the predicted results (see Figure 3). We further investigated the impact of hyperparameters on experimental performance. Here, we mainly studied the influence of the restart probability p of the random walk on the experimental results. In the test, we considered restart probability values between 0.4 and 0.7 to observe the performance stability under different probabilities. In Figure 3, it can be seen that when the restart probability is varied from 0.4 to 0.7, NGCN achieves stable performance. Thus, these parameters have little impact on the experimental performance.

TABLE 1 Performance of drug-target interaction prediction under different settings (No. positive : No. negative = 1:1). The best performance results are highlighted in bold.

FIGURE 2 Comparison between NGCN and related methods. We apply 10-fold cross-validation in our experiments and compare NGCN with six other prediction methods (NeoDTI, DTINet, BLMNII, MOLIERE, NetLapRLS and HNM) in terms of prediction performance; the y-axis reports AUPRC. (A) Proportion 1:1 of positive and negative examples. (B) Proportion 1:10 of positive and negative examples. (C-F) Several strategies to remove data redundancy: (C) removing DTIs sharing similar drugs; (D) deleting DTIs sharing similar diseases; (E) deleting DTIs with drugs showing similar side effects; (F) pruning DTIs with similar drugs or proteins.

FIGURE 3 Robustness of NGCN. (A) Effects of aggregating multiple heterogeneous networks. (B) Effects of drug dimensions. (C) Effects of protein dimensions. (D) Effects of restart probability.

Prior work integrated a large amount of unlabelled drug molecular graph information and target information and designed a pre-training framework, MGP-DR (molecular graph pre-training for drug representation), for drug pair representation learning. The model used a self-supervised learning strategy to mine contextual information within and between drug molecules to predict drug-drug interactions and drug combinations. The graph convolutional neural network was

d ≪ n. In this case, w_i^T x_j is a low-dimensional approximation; x_i and w_i describe the topology of the network, where x_i represents the node feature and w_i can be regarded as the context characteristics of node i. clusDCA takes a set of observed diffusion states S = s_1, …, s_n as input and uses the sum of squared errors as the objective function.

ALGORITHM 1 Pseudocode of NGCN. Step 1: the diffusion state S_i for each drug or target is derived by performing the RWR algorithm (Eq. 1) on each network. Step 2: clusDCA takes the diffusion state set R_c1 = {S_1, …, S_4} of the drugs and the diffusion state set R_c2 = {S_5, …, S_7} of the proteins as input. Step 3: the spectral graph convolutional neural network is then employed to further update the node features. Step 4: the drug-target matrix Y_rec is reconstructed by Eq. 13, after obtaining the updated features H_drug and H_target.
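The evaluation protocol used in the experiments above (10-fold cross-validation over the known interactions plus randomly sampled negatives, scored with AUROC and AUPR) can be sketched as follows; the classifier is a simple placeholder and the data are synthetic, so this only illustrates the protocol, not NGCN itself.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, average_precision_score

def evaluate_pairs(features, labels, neg_ratio=10, folds=10, seed=0):
    """10-fold CV over positive pairs plus neg_ratio times as many sampled negatives."""
    rng = np.random.default_rng(seed)
    pos = np.flatnonzero(labels == 1)
    neg = np.flatnonzero(labels == 0)
    neg = rng.choice(neg, size=min(len(neg), neg_ratio * len(pos)), replace=False)
    idx = np.concatenate([pos, neg])
    X, y = features[idx], labels[idx]

    aurocs, auprs = [], []
    for train, test in StratifiedKFold(folds, shuffle=True, random_state=seed).split(X, y):
        clf = LogisticRegression(max_iter=1000).fit(X[train], y[train])  # placeholder model
        scores = clf.predict_proba(X[test])[:, 1]
        aurocs.append(roc_auc_score(y[test], scores))
        auprs.append(average_precision_score(y[test], scores))
    return np.mean(aurocs), np.mean(auprs)

# Synthetic drug-target pair features and sparse labels, for illustration only.
rng = np.random.default_rng(1)
X = rng.standard_normal((2000, 32))
y = (rng.random(2000) < 0.05).astype(int)
print(evaluate_pairs(X, y))
```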
6,049.4
2024-03-20T00:00:00.000
[ "Computer Science", "Medicine" ]
What Do Radio Waves Tell Us about the Universe? Radio astronomy began in 1933 when an engineer named Karl Jansky accidentally discovered that radio waves come not just from inventions we create but also from natural stuff in space. Since then, astronomers have built better and better telescopes to find these cosmic radio waves and learn more about where they come from and what they can tell us about the universe. While scientists can learn a lot from the visible light they detect with regular telescopes, they can detect different objects and events – such as black holes, forming stars, planets in the process of being born, dying stars, and more – using radio telescopes. Together, telescopes that can see different kinds of waves – from radio waves to visible light waves to gamma rays – give a more complete picture of the universe than any one type of telescope can on its own. think that about 100 billion more galaxies (each with their own 100 billion stars) exist. Almost all of these stars are invisible to your eyes, which cannot see the dim light from distant stars. Your eyes miss other things, too. The visible light that your eyes can see is only a tiny portion of what astronomers call the "electromagnetic spectrum, " the whole range of different light waves that exists. The electromagnetic spectrum also includes gamma rays, X-rays, ultraviolet radiation, infrared radiation, microwaves, and radio waves. Because human eyes can only see visible light, we have to build special telescopes to pick up the rest of that "spectrum" -and then turn them into pictures and graphs that we can see. WHAT IS A RADIO WAVE? Light is made up of tiny particles called "photons." Photons in visible light have a medium amount of energy. When photons have a little bit more energy, they become ultraviolet radiation, which you cannot see but which can give you a sunburn. With more energy than that, photons become X-rays, which travel right through you. If photons possess even more energy, they become gamma rays, which come out of stars when they explode. But when photons have a little less energy than visible-light photons, they are known as infrared radiation. You can feel them as heat. Finally, we call the photons with the least energy "radio waves. " Radio waves come from strange spots in space -the coldest and oldest places and the stars with the most material stuffed into a small space. Radio waves tell us about parts of the universe we would not even know existed if we only used our eyes or telescopes that see visible photons. WAVELENGTH AND FREQUENCY Radio astronomers use these radio photons to learn about the invisible universe. Photons travel in waves, like they are riding a roller coaster that just uses the same two pieces of track over and over [1]. The size of a photon's wave -its wavelength -tells you about its energy. Figure 1 shows waves with two different wavelengths. If the wave is long, it does not have much energy; if it is short, it has a lot of energy. Radio waves do not have much energy, and that means they travel in big waves with long wavelengths. Radio waves can be hundreds of feet across or just a few centimeters across. Astronomers also talk about how many of these waves pass a spot every second -the radio wave's "frequency." You can think of frequency by imagining a pond of water. If you throw a rock into the water, ripples travel across Electromagnetic spectrum The visible light that we can see is just a tiny part of the "electromagnetic spectrum." 
Visible light is made of photons with medium energy. Photons with more energy are ultraviolet radiation, X-rays, and gamma rays (gamma rays have the most energy). Photons with less energy are infrared and radio waves (radio waves have the least energy). Photon Light is made of particles called photons, which travel in waves. Wavelength The size of the wave a photon travels in. Frequency The number of light waves that pass by a spot in one second. the pond. If you stand in the water, the waves hit your ankles. The number of waves that smack into you in one second tells you the frequency of the waves. One wave per second is called 1 Hertz. A million waves per second is 1 MHz. If the waves are long, fewer of them hit you every second, so long waves have smaller frequencies. Radio waves have long wavelengths and small frequencies. RADIO PIONEERS The first radio astronomer did not mean to be the first radio astronomer. In 1933, a man named Karl Jansky was working on a project for Bell Laboratories, a lab in New Jersey named after Alexander Graham Bell, who invented the telephone. Engineers there were developing the first phone system that worked across the Atlantic Ocean. When people first tried making phone calls on that system, they heard a hissing sound in the background at certain times of the day. Bell Labs thought that noise was bad for business, so they sent Karl Jansky to find out what was causing it. He soon noticed that the hiss began when the middle of our galaxy rose in the sky and ended when it set (everything in the sky rises and sets just like the Sun and Moon do). He figured out that radio waves coming from the center of the galaxy were messing up the phone connection and causing the hiss. He -and the phone -had detected radio waves from space [1]. Jansky opened up a new, invisible universe. You can see a picture of the antenna used by Karl Jansky to detect radio waves from space in Figure 2. Inspired by Janksy's research, a man named Grote Reber built a radio telescope in his backyard in Illinois. He finished the telescope, which was 31 ft across, in 1937 and used it to look at the whole sky and see where radio waves came from. Then, from the data he collected from his radio telescope, he made the first map of the "radio sky" [2]. Hertz 1 Hz means that one wave passes by a spot in one second. One megahertz means one million waves pass by every second. Figure 1 Photons travel in waves. The length of each wave is called a wavelength. RADIO TELESCOPE TALK You can see visible light because the visible-light photons travel in small waves, and your eye is small. But because radio waves are big, your eye would need to be big to detect them. So while regular telescopes are a few inches or feet across, radio telescopes are much larger. The Green Bank Telescope in West Virginia is more than 300 ft wide and can be seen in Figure 3. The Arecibo Telescope in the jungle in Puerto Rico is almost 1,000 ft across. They look like gigantic versions of satellite TV dishes, but they work like regular telescopes. To use a regular telescope, you point it at an object in space. Light from that object then hits a mirror or lens, which bounces that light to another mirror or lens, which then bounces the light again and sends it to your eye or a camera. When an astronomer points a radio telescope at something in space, radio waves from space hit the telescope's surface. 
The surface -which may be The founder of radio astronomy, Karl Jansky, stands with the antenna he built that detected the first radio waves identified as coming from space. Source: NRAO. While instruments like the Green Bank Telescope, pictured here, may not look like traditional telescopes, they work much the same way but detect radio waves instead of visible light. They then turn those radio waves, which human eyes cannot see, into pictures and graphs that scientists can interpret. Source: NRAO. metal with holes in it, called mesh, or solid metal, like aluminum -acts like a mirror for radio waves. It bounces them up to a second "radio mirror," which then bounces them into what astronomers call a "receiver." The receiver does what a camera does: it turns the radio waves into a picture. This picture shows how strong the radio waves are and where they are coming from in the sky. RADIO VISION When astronomers look for radio waves, they see different objects and events than they see when they look for visible light. Places that seem dark to our eyes, or to regular telescopes, burn bright in radio waves. Places where stars form, for example, are full of dust. That dust blocks the light from getting to us, so the whole area looks like a black blob. But when astronomer turns radio telescopes to that spot, they can see straight through the dust: they can see a star being born. Stars are born in giant clouds of gas in space. First, that gas clumps together. Then, because of gravity, more and more gas is attracted to the clump. The clump grows bigger and bigger and hotter and hotter. When it is huge and hot enough, it starts smashing hydrogen atoms, the smallest atoms that exist, together. When hydrogen atoms crash into each other, they make helium, a slightly bigger atom. Then, this clump of gas becomes an official star. Radio telescopes take pictures of these baby stars [3]. Radio telescopes show the secrets of the nearest star, too. The light we see from the Sun comes from near the surface, which is about 9,000oF. But above the surface, the temperature reaches 100,000oF. Radio telescopes help us learn more about these hot parts, which send out radio waves. The planets in our solar system also have radio personalities. Radio telescopes show us the gases that swirl around Uranus and Neptune and how they move around. Jupiter's north and south poles light up in radio waves. If we send radio waves toward Mercury, and then catch the radio waves that bounce back using a radio telescope, we can make a map almost as good as Google Earth [4]. When they look much farther away, radio telescopes show us some of the weirdest objects in the universe. Most galaxies have supermassive black holes in their centers. Black holes are objects that have a lot of mass squished into a tiny space. This mass gives them so much gravity that nothing, not even light, can escape their pull. These black holes swallow stars, gas, and anything else that comes too close. When that unlucky stuff feels the black hole's gravity, it first spirals around the black hole. As it gets closer, it goes faster and faster. Huge jets, or columns, of electromagnetic radiation and matter that does not make it in to the black hole (sometimes taller than a whole galaxy is wide) Receiver The part of a radio telescope that takes the radio waves and turns them into a picture. form above and below the black hole. Radio telescopes show those jets in action ( Figure 4). Massive objects like these black holes warp the fabric of space, called spacetime. 
Imagine setting a bowling ball, which weighs a lot, on a trampoline. The trampoline sags down. Weighty stuff in space makes space-time sag just like the trampoline. When radio waves coming from distant galaxies travel over that sag to get to Earth, the shape acts just like the shape of a magnifying glass on Earth: telescopes then see a bigger, brighter picture of the distant galaxy. Radio telescopes also help solve one of the biggest mysteries in the universe: What is dark energy? The universe is getting larger every second. And it gets larger faster and faster every second because "dark energy" is the opposite of gravity: Instead of pulling everything together, it pushes everything farther apart. But how strong is dark energy? Radio telescopes can help scientists to answer this question by looking at "megamasers" that occur naturally in some parts of space, a megamaser is kind of like a laser on Earth, but it sends out radio waves instead of the red or green light that we can see. Scientists can use megamasers to pin down the details of dark energy [5]. If scientists can figure out how far away those megamasers are, they can tell how far away different galaxies are, and then they can figure out how fast those galaxies are speeding away from us. A FULL TOOLBOX If we only had telescopes that picked up visible light, we would be missing out on much of the action in the universe. Imagine if doctors had only a stethoscope as a tool. They could learn a lot about the patient's heartbeat. But they could learn so much more if they also had an X-ray machine, a sonogram, an Dark energy Dark energy acts like the opposite of gravity and pushes everything in the universe farther apart. Megamaser A natural laser in space that sends out radio waves, instead of red or green light like the kind that comes from a laser pointer. Figure 4 Galaxies that have supermassive black holes at their centers can shoot out jets of material and radiation, like those seen here, that are taller than the galaxy is wide. Source: NRAO. MRI instrument, and a CT scanner. With those tools, they could get a more complete picture of what was happening inside the patient's body. Astronomers use radio telescopes together with ultraviolet, infrared, optical, X-ray, and gamma-ray telescopes for the same reason: to get a complete picture of what is happening in the universe.
3,168.2
2016-02-03T00:00:00.000
[ "Physics" ]
Enhancing loco-regional adaptive governance for integrated chronic care through agent based modelling (ABM) Introduction : Moving from existing segmented care to integrated care is complex and disruptive. It is complex in the sense that the type of changes and the timeframe of these changes are not completely predictable. It is disruptive in the sense that the process of change modifies but also is influenced by the nature of interactions at the individual and organisational level. As a consequence, building competences to govern the necessary changes towards integrated care should include capacity to adapt to unexpected situations. Therefore, the tacit knowledge of the stakeholders (“knowledge-in-practice developed from direct experience; subconsciously understood and applied”1) should be at the centre. However, the usual research and training practices using such a knowledge (i.e. action research or case studies), are highly time-consuming. New approaches are therefore needed to elicit tacit knowledge. One of them is agent based modelling (ABM)2  through computer simulation.   The aim of this paper is to make a “showcase” of an agent-based model that uses the emergence of tacit knowledge and enhances loco-regional adaptive governance for improving integrated chronic care. Theory/Methods : We used a complex adaptive system’s lens to study the health systems integration process. We applied key components of ABM to assess how health systems adapts through the dynamics of heterogeneous and interconnected agents (agents are characterised by their level of autonomy, heterogeneity, and interactions with other agents). The agent-based model was developed through a process where concept maps, causal loop diagrams, object-oriented unified modelling language diagrams and computer simulation (using Netlogo©) were iteratively used. Results : The agent-based model was presented to health professionals with variable experience in healthcare to elicit their perceptions and tacit knowledge. It  consisted of agents with certain characteristics and transition rules. Agents included providers, patients, networks’ or health systems’ managers. Agents can adopt or influence the adoption of integrated care through learning and because of being aware, motivated and capable of decision making. The environment   includes institutional arrangements (e.g., financing, training, information systems and legislation) and leadership. Different scenarios were created and discussed. Key rules to strengthen adaptive governance were reflected on. Discussion and conclusion : This study is an initial step of an exercise to use ABM as a means to elicit of and enhance tacit knowledge to strengthen governance for integrated care. It is expected that the study will foster dialogue between actors of loco-regional projects to integrate health and social care for chronic diseases in Belgium (a new program initiated by federal authorities). Suggestions for future research : Future research is expected to continue developing methods that combine ABM with participative exploration approaches to make better use of tacit knowledge in strengthening loco-regional governance for the development of integrated care. References : 1- Kothari, A. et al. The use of tacit and explicit knowledge in public health: a qualitative study. Implement. Sci. 2012;7, 20. 2- Anderson, J., Chaturvedi, A. & Cibulskis, M. Simulation tools for developing policies for complex systems: modeling the health and safety of refugee communities. Health Care Manag. Sci. 
2007;10, 331–339.

CITE THIS VERSION: Macq, Jean; Deconinck, Hedwig; Van Durme, Thérèse; Lambert, Anne-Sophie; Karam, Marlène; et al. Enhancing loco-regional adaptive governance for integrated chronic care through agent based modelling (ABM). 17th International Conference on Integrated Care (Dublin, Ireland, 08/05/2017-10/05/2017). http://hdl.handle.net/2078.1/184684

Enhancing loco-regional adaptive governance for integrated chronic care through agent based modelling (ABM)

Many countries (including Belgium) are attempting to move from existing segmented care to integrated care. This is complex in the sense that the type and timeframe of changes are not always predictable. It is disruptive in the sense that the process of change modifies, but is also influenced by, the nature of interactions between agents. Building competences to govern the necessary changes towards integrated care should take this into account, particularly at the loco-regional level (for networks covering between 100 000 and 500 000 people). Acknowledging the tacit knowledge and cognitive heuristics of the stakeholders is key for that learning process. The aim of this paper is to make a "showcase" of an agent-based model (ABM) that builds on and makes explicit the tacit knowledge and cognitive heuristics shared between stakeholders, to enhance loco-regional adaptive governance for improving integrated chronic care.

Building competencies to govern health and social care at the loco-regional level by taking into account tacit knowledge and cognitive heuristics. Making a "showcase" of ABM that fosters sharing of tacit knowledge between stakeholders.

We used a complex adaptive system's lens to study the health systems integration process. Complex adaptive systems (CAS) are made of "agents" that interact, adapt, learn from experience, self-organise, and behave unpredictably. CAS are open systems; as a consequence, they are influenced by the environment and influence it in turn. Complex adaptive systems feature, among others, the following behaviours: path dependency, emergent "order", transition phases, causal loops and scale-free networks. Generally, CAS seek equilibrium.

Using the lens of complex adaptive systems to study the health systems integration process. We applied components of ABM to assess how health systems adapt and move towards integrated care. ABM allows simulating the different behaviours of CAS. The agent-based model was developed through a process in which storytelling, concept maps, a group voting process (with Wooclap©), object-oriented unified modelling language (UML) diagrams and computer simulation (using NetLogo©) were used iteratively, with different groups of MPH students.
Story telling and UML was initially done with students following a course on "systemic approach in public health". Based on that and on exchanges with the other authors, the main author developed progressively an ABM in Netlogo. This was shown to student following an optional module on coordination and networks organization to improve its calibration It was finally exchanged with 1 st year MPH students to identify likely scenarios of changes and discuss it. Simulating the behavior of a loco-regional system with Netlogo© and sharing it with MPH students to progressivelly improve it System evolution Chosing between best alternatives? Behavior of the system over time (centered on ratio « cost » over « health » simulated) Sharing tacit knowledge and elicit cognitive heuristics This is the initial step of an exercise to use ABM as a mean to take advantage and enhance tacit knowledge to strengthen governance for integrated care. It is expected that it will be used to foster dialogue between loco-regional projects to integrate health and social care for chronic diseases in Belgium (a new program initiated by federal authorities). Future research should continue the development of methodology combining ABM with participative approaches to make better use of tacit knowledge in strengthening loco-regional governance for the development of integrated care. Moving away from intervention evaluation towards system monitoring: promoting the development of methodology combining ABM with participative approaches to make better use of tacit knowledge
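The poster describes the NetLogo model only in general terms; purely as an illustration of the kind of adoption dynamics it mentions (agents who become aware, motivated and capable, and who influence one another's uptake of integrated care), the toy simulation below uses invented parameters and is not the authors' model.

```python
import random

class Agent:
    """A provider or patient who may adopt integrated-care practices."""
    def __init__(self):
        self.aware = random.random() < 0.3        # invented starting conditions
        self.motivated = random.random() < 0.5
        self.capable = random.random() < 0.6
        self.adopted = False

    def step(self, peer_adoption_rate, training_available):
        # Awareness spreads with peer adoption; training builds capability.
        if not self.aware and random.random() < peer_adoption_rate:
            self.aware = True
        if training_available and random.random() < 0.1:
            self.capable = True
        if self.aware and self.motivated and self.capable:
            self.adopted = True

def simulate(n_agents=200, steps=50, training_available=True):
    agents = [Agent() for _ in range(n_agents)]
    history = []
    for _ in range(steps):
        rate = sum(a.adopted for a in agents) / n_agents
        for a in agents:
            a.step(rate, training_available)
        history.append(rate)
    return history

print(simulate()[-1])   # final adoption rate under one toy scenario
```

Running such a scenario with and without the "training available" institutional arrangement is the sort of comparison the showcase model is intended to support.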
1,752.8
2017-10-17T00:00:00.000
[ "Medicine", "Computer Science" ]
Application of vibrational spectroscopy in the quality assessment of Buchu oil obtained from two commercially important Agathosma species (Rutaceae) Quality assessment of natural raw materials and derived consumer products is often done using conventional analytical techniques such as liquid and gas chromatography which are expensive and time consuming. This paper reports on the use of vibrational spectroscopy techniques as possible alternatives for the rapid and inexpensive assessment of the quality of ‘ buchu oil ’ obtained from two South African species; Agathosma betulina and Agathosma crenulata belonging to the Rutaceae family. Samples of A. betulina (55) and A. crenulata (16) were collected from different natural localities and cultivation sites in South Africa. The essential oil was obtained by hydrodistillation and scanned on Near infrared (NIR), mid infrared (MIR) and Raman spectrometers. The spectral data obtained was processed using chemometric techniques and orthogonal partial least squares discriminant analysis (OPLS-DA) was used to clearly differentiate between A. betulina and A. crenulata. The OPLS-DA technique also proved to be a useful tool to identify wave regions that contain biomarkers (peaks) that contributed to the separation of the two species. The three spectroscopy techniques were also evaluated for their ability to accurately predict the percentage composition of seven major compounds that occur in A. betulina ‘ buchu ’ oil. Using GC – MS reference data, calibration models were developed for the MIR, NIR and Raman spectral data to predict/profile the major compounds in ‘ buchu oil ’ . A comparison of the three spectroscopy techniques showed that MIR together with PLS algorithms produced the best model (R 2 X=0.96; R 2 Y=0.88 and Q 2 Ycum=0.85) for the quantification of six of the seven major oil constituents. The MIR model showed high predictive power for pseudo-diosphenol (R 2 =0.97), isomenthone (R 2 =0.97), menthone (R 2 =0.90), limonene (R 2 =0.91), pulegone (R 2 =0.96) and diosphenol (R 2 =0.85). These results illustrate the potential of MIR spectroscopy as a rapid and inexpensive alternative to predict the major compounds in buchu oil. Introduction Agathosma species 'buchu' are medicinal shrubs belonging to the Rutaceae family. The species are widely distributed throughout the Western Cape province of South Africa which harbours two commercially important species: A. betulina (Bergius) Pillans and A. crenulata (L.) Pillans (Spreeth, 1976). The two species are known as 'true buchu' however, due to their characteristic leaf shapes A. betulina is known as round leaf buchu and A. crenulata as long leaf buchu (Fig. 1). The buchus are integral to the traditional healing practices of inhabitants residing in the South Western Cape where they are used to treat renal disorders and chest complaint (Van Wyk and Wink, 2004). In addition, the essential oil obtained from A. betulina, commonly referred to as 'buchu oil' is used as a flavourant to enhance fruit flavours such as black current notes while in perfumery it is used as a fragrance material (Simpson, 1998;Turpie et al., 2003;Van Wyk and Wink, 2004). Due to widespread commercialization of buchu-containing products, both locally and abroad, there has been an increase in the demand (albeit with fluctuation) for buchu oil. The oil from A. betulina has however gained market favour due to its unique organoleptic properties compared to A. crenulata (Posthumus et Webber et al., 1999). The need to correctly identify A. 
betulina and A. crenulata plant material and oil thus became apparent. Currently, the identification is based on leaf shape however, due to the emergence of hybrids and the variable leaf shape observed between several populations it may not always be a reliable character (Blommaert and Bartel, 1976;Pillans, 1950). Gas chromatography coupled to mass spectrometry (GC-MS) is the analytical tool used for analysis of buchu oil. However, the method is time consuming, expensive and requires skilled personnel (Qiao and Van Kempen, 2004). The need to identify fast, efficient and cost-effective methods for analysing buchu oil is important so as to supply a product of consistent high quality and reduce the risks of financial losses due to the supply of low quality oil. Vibrational spectroscopy has been identified as an important alternative method in quality assessment of raw material and herbal products. The technique has already been used in the inspection and analysis of raw materials and to quantify constituents in a wide range of products (Lin et al., 2009). In the food and beverage industries, spectroscopy is used for in situ analysis of moisture content, fat, protein, sugar and acid levels (Osborne and Fearn, 1993;Pedersen et al., 2003). In the pharmaceutical industry spectroscopy is used in process monitoring and product quality control that includes: raw material identification, content and particle size uniformity and moisture (Reich, 2005). The advantages over current analytical techniques include that it is robust, efficient, non-destructive, non-evasive, inexpensive and require little or no sample preparation (Schulz et al., 2004). In this study, the use of three vibrational spectroscopy techniques (Near infrared, mid infrared and Raman) to characterise and classify buchu oil from A. betulina and A. crenulata was evaluated. In addition, the techniques were evaluated for the quantification of the major constituents in the commercially important A. betulina oil. Chemometric tools were used to develop calibration models that would assist in the rapid prediction of major buchu oil components. Selection and preparation of plant material Seventy one A. betulina and A. crenulata plants (wild and cultivated) from 19 different locations in the South Western Cape region of South Africa were obtained (Table 1). All samples were kindly supplied by Chicken Naturals. Several individual plants were harvested in both commercial plantations and also from the wild. The essential oil was obtained through hydrodistillation of the aerial parts using a Clevenger-type apparatus. The oils were stored at − 20°C prior to analyses. Gas chromatography-mass spectrometry (GC-MS) Analysis of the distilled oils was done using gas chromatography-mass spectrometry (GC/MS). An Agilent 6860 N chromatograph fitted with an HP-Innowax, 60 m × 250 μm polyethylene glycol column (film thickness 0.25 μm) was used. The following oven temperature was used: start at 60°C, rising to 220°C at 4°C/min, holding for 10 min, and then rising to 240°C at 1°C/min. Helium was used as the carrier gas at a constant flow of 1.2 ml/min, pressure of 24.79 psi (split 1:200). Chromatograms were obtained on electron impact at 70 eV, scanning from 35 to 550 m/z. Identification of the major compounds was done based on retention indices and library data bases that include Mass Finder ® and NIST ® . 
The percentage composition of the major compounds was obtained from the flame ionization detector (FID) peak areas according to the 100% method (Kamatou et al., in press). NIR spectroscopy measurements The near infrared spectra of the oils were recorded on a NIRFlex N500 liquid cell spectrometer (Büchi Labortechnik AG, Flawil, Switzerland). High precision cells (cuvettes) of 0.20 mm path length (Hellma GmbH & Co. KG, Müllheim, Germany) were used. The oil spectra were collected in transmittance mode in the wavenumber region of 4000-10,000 cm⁻¹ (2500-1000 nm). NIRWare 1.2 was used for operating the instrument and obtaining spectra. Approximately 50 μl of sample was aliquoted into the cuvette, which was placed on the spacer. A total of 32 scans were accumulated for each sample at a spectral resolution of 4 cm⁻¹ (1501 data points). The procedure was done in triplicate and the average spectra were obtained in MS Excel® for chemometric analysis. MIR spectroscopic measurements The mid-infrared spectra of the oils were recorded in the range of 550-4000 cm⁻¹ (~18,000-2500 nm) on an alpha-P Bruker spectrometer mounted with an ATR diamond crystal (Bruker OPTIK GmbH, Ettlingen, Germany). OPUS 6.5 was used for obtaining the spectra. The essential oil sample (10 μl) was placed directly on the surface of the ATR diamond crystal and spectral data were obtained in absorbance mode. A total of 32 scans were accumulated for each sample at a spectral resolution of 4 cm⁻¹ (2436 data points). The procedure was done in triplicate and the average spectra were obtained in MS Excel® for chemometric analysis (Baranska et al., 2005). Raman measurements FT-Raman spectra were recorded using a Nicolet NXR 9650 spectrometer equipped with a laser emitting at 1064 nm and a germanium detector cooled with liquid nitrogen. Approximately 5 μl of essential oil was aliquoted into the center of a steel disk and placed on an xy stage. Spectral data of individual oils were accumulated using OMNIC software from 64 scans at a spectral resolution of 4 cm⁻¹ in the range of 100-4000 cm⁻¹ (100,000-2500 nm) (8090 data points). Laser power of 100 mW was supplied by an unfocused beam (Baranska et al., 2006). Data analysis Chemometric analysis of the spectral data was performed using SIMCA-P+ 12.0 software (Umetrics AB, Malmö, Sweden). Orthogonal partial least squares discriminant analysis (OPLS-DA) was performed on the MIR, NIR and Raman spectral data for discrimination of the two Agathosma species. Spectral data were centered and the whole wavenumber region was used without spectral pretreatment for this analysis. Partial least squares (PLS) regression analysis was carried out on the NIR, MIR and Raman spectra to set up calibration models of the A. betulina essential oil constituents. Principal component analysis was initially done to identify any strong outliers (scores scatter plot) or moderate outliers (DModX) that could be removed from the dataset. Pretreatments that were used include multiplicative scatter correction (MSC), standard normal variate (SNV) and the second derivative. The whole wavenumber regions and cross-validation with the prediction error sum of squares (PRESS) method were used to estimate the predictive ability of the model. Response permutation was applied to determine the appropriate number of PLS components to include in the model and hence avoid overfitting.

Table 2 Oil composition of samples used in this study (A. betulina n = 55 and A. crenulata n = 16); components are listed with their relative retention indices (RRI).
The model was fitted on centered spectral data while univariate (UV) scaling was applied to the reference (GC) data. A training set and a test set were defined for external validation by randomly selecting 70% of observations to include in the training set and the remaining 30% (test set) were used to evaluate the predictive ability of the model. Statistical accuracy was described by the correlation coefficient (R 2 ) and root mean square error of prediction (RMSEP) for observations in the prediction set. GC-MS reference analysis The results show that there were no qualitative differences between the two species when comparing the major compounds (% area N 1) listed in Table 2. A. crenulata is characterized by high pulegone content ranging between 50 and 66%. Agathosma betulina is characterized by the presence of diosphenol (15-35%), pseudo-diosphenol (12-30%), isomenthone (4-26%) and limonene (5-24%). These compounds also occur in A. crenulata although some are found in very low quantities (b 2%). The commercially important sulphur containing compounds are characteristic of A. betulina (cis and trans-8mercapto-p-methan-3-one). Although these occur in small amounts, they are responsible for the characteristic organoleptic properties of the oil. Fig. 2 shows the total ion chromatograms of the two species highlighting the major compounds and the corresponding structures. These results are consistent with previous reports on the chemical profiles of A. betulina and A. crenulata essential oils. Fluck et al. (1961) observed that qualitatively, the two species had similar compounds however, A. betulina contained high diosphenol, while A. crenulata contained high levels of pulegone. Other reports also confirmed the occurrence of high pulegone levels as a marker compound for identifying A. crenulata oil and high diosphenol and low pulegone content for identifying A. betulina oil (Blommaert and Bartel, 1976;Collins and Graven, 1996). Classification and discrimination of Agathosma species A two component OPLS-DA model was successfully used to discriminate between A. betulina and A. crenulata oils using MIR spectral data. The model explained 96.5% of the total variation in X (R 2 X cum predictive + orthogonal) and the goodness of prediction of the model was 97.8% (Q 2 cum = 97.8). Most of the predictive information was found in the first component which showed that 89.7% variation in X was related to the separation of the two species (Fig. 3). 15.5% variation in X (orthogonal) is systematic variation that did not contribute to the separation. Separation of the two classes was very good as can be seen from the high R 2 Y (goodness of fit) and Q 2 Y values of 98% and 97.9%, respectively. Fig. 4a is the loadings plot that shows the correlation between the wavenumbers and the two species. The positive loadings are correlated to A. betulina while negative loadings are correlated to A. crenulata. The indicated peaks show the regions that are highly responsible to the separation of the two species. Fig. 4b confirms these regions of high magnitude and high reliability in the separation of the two species. The wavenumbers (1388-1394; 1635-1652) indicated on the far top right are correlated to chemical profiles associated with A. betulina while in the bottom left corner the wavenumbers (1281-1289; 1675-1694) are correlated to A. crenulata. 
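Before turning to the classification results, the spectral pretreatments mentioned in the Data analysis section (SNV, MSC and the second derivative) are standard chemometric transforms; a minimal NumPy/SciPy sketch is shown below, with window and polynomial settings chosen for illustration rather than taken from this study.

```python
import numpy as np
from scipy.signal import savgol_filter

def snv(spectra):
    """Standard normal variate: centre and scale each spectrum individually."""
    mean = spectra.mean(axis=1, keepdims=True)
    std = spectra.std(axis=1, keepdims=True)
    return (spectra - mean) / std

def msc(spectra, reference=None):
    """Multiplicative scatter correction against a reference (mean) spectrum."""
    ref = spectra.mean(axis=0) if reference is None else reference
    corrected = np.empty_like(spectra, dtype=float)
    for i, s in enumerate(spectra):
        slope, intercept = np.polyfit(ref, s, 1)   # fit s ≈ slope*ref + intercept
        corrected[i] = (s - intercept) / slope
    return corrected

def second_derivative(spectra, window=15, polyorder=2):
    """Savitzky-Golay second derivative; window and polyorder are illustrative."""
    return savgol_filter(spectra, window, polyorder, deriv=2, axis=1)

# Example with a synthetic matrix of 55 spectra x 2436 data points (MIR-sized).
spectra = np.random.rand(55, 2436)
pretreated = second_derivative(snv(spectra))
```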
The region in the center is high risk to scrupulous correlation and is therefore not used to distinguish the species as it can give false information (Eriksson et al., 2006). The results obtained show that there is considerable variation between A. betulina and A. crenulata which can be confirmed using GC data. In addition the OPLS-DA model developed using MIR data reliably separated the two Agathosma species. Overall, the results demonstrate that MIR with the aid of chemometrics can be used to rapidly determine the authenticity of buchu oil. Vibrational spectra of A. betulina essential oils Three vibrational spectroscopy methods (MIR, NIR and Raman) were compared for analysis of major compounds in the essential oils of A. betulina. The obtained exemplary spectra for each of the techniques are represented in Fig. 5. The MIR spectrum shows characteristic key bands/peaks that can be assigned to the major compounds in the oils (Fig. 5a). In contrast, NIR spectrum consists of broad overlapping bands that can be applied in the quality assessment but may not be useful for assessment of individual compounds (Fig. 5b) (Baranska et al., 2006). Raman spectrum like MIR shows characteristic key bands that can be assigned to specific components in the oils (Fig. 5c). The spectrum however contains a lot of noise compared to MIR and may require smoothing which has a disadvantage in that it may mask or remove certain characteristic signals resulting in the loss of useful information. Quantification of major compounds Fifty-five A. betulina essential oils were used to develop linear calibration models for the prediction of seven major compounds which occur in varying concentrations in the oils (Table 2). Calibration models were developed from MIR, NIR and Raman data to directly predict the major compounds from the spectral data. A comparison of the three models was made to identify the technique that produced the best model for prediction. The results obtained are presented in Table 3. MIR proved to provide the best predictive model with only 3 PLS components explaining 96% variation in X (R 2 X = 0.96) and 88% variation in Y (R 2 Y = 0.88). The predictive ability of the model was obtained as 85% (Q 2 = 0.85). The whole wavenumber region of the spectra was included for this calibration model. Restricting the model to only the finger print region of the spectra did not improve the predictive power of the model. No strong or moderate outliers were identified thus the model included all 55 observations. Although three pretreatment methods (SNV, MSC and second derivative) were applied to the dataset, they seemed to distort the model resulting in unsatisfactory prediction quality. The model displayed in Table 3 therefore shows data without pretreatment. According to Eriksson et al. (2006) a good model is one where both R 2 (cum) and Q 2 (cum) should be N0.5 and the difference between the two values should not be N0.2. In this study, the MIR model gave R 2 and Q 2 values of close to 1 and the difference between them is 0.11. In addition the number of components used together with the results from response permutation, shows that the model is not overfitted. The model is therefore good for the prediction of the major components. The NIR results gave a reasonable calibration model with eight PLS components explaining 99% variation in X (R 2 X = 0.99) and 89% variation in Y (R 2 Y = 0.89). The predictive ability of the model was 73% (Q 2 = 0.73) which is lower than what was observed for MIR data. 
The whole spectra were used for calibration and pre-processing with SNV improved the original model compared to the other two preprocessing methods (Table 3). No outliers were removed from the dataset. Although the model obtained was good, the difference between R 2 (cum) and Q 2 (cum) is 0.26 which is above 0.2 and may therefore give unsatisfactory prediction. The number of components used is also higher (8) than for the MIR (3) which indicates that the model might have been overfitted and hence the lower predictive ability. Overall, although the NIR technique can be used for predictions, the MIR technique presents a much better model with a better predictive ability. Raman spectroscopy did not yield satisfactory results. The number of PLS components used to build the model was higher (10) compared to the other two models. In addition, pre-processing of the spectral data did not significantly improve the model. The difference between the R 2 X (0.99) and Q 2 (0.55) was 0.44 which shows that the model was heavily overfitted. Manually reducing the number of components resulted in the Q 2 (cum) being reduced to less than 0.5. Once again no outliers were identified in the model and all the observations were included in the model. The unsatisfactory model obtained with the Raman data could be a result of the noisy spectra obtained with Raman measurements. Smoothing of the spectra did not improve the model which might imply that the method is not sensitive enough for quantification (Baranska et al., 2006). From the results presented in Table 3, MIR showed the best calibration statistics and the results for prediction of the seven major compounds using this model is presented in Table 4. The model developed using MIR data was evaluated for its ability to analyse all the seven Y-variables (compounds) in a single PLS model. This was done by creating a PCA of the Y matrix alone. The model had only 3 components which is small compared to the number of variables which implies that the Yvariables are correlated and therefore a single model could be used for all the Y-variables (Wold et al., 2001). A review of the PCA loadings plot also showed that there is no strong clustering within the Y-variables and thus a single PLS model was sufficient to predict all the components. The results presented in Table 4 show that most of the major compounds were well predicted using the MIR calibration model. Good predictive quality was observed for six out of the seven compounds (pseudo-diosphenol, isomenthone, menthone, limonene, diosphenol and pulegone) where R 2 values were N0.8. In addition, the RMSEP values (indicating the average difference between the measured and predicted response) for the compounds are low considering the range of concentrations used implying that reliable predictions are possible. In contrast cis-8-mercapto-p-methan-3-one was not well predicted using this model with R 2 = 0.45 and RMSEP = 1.9. The dispersion of points near the calibration line was significant showing that the errors were large (results not shown). The poor prediction could also have been a result of the narrow content range of the component in the samples used (1-11%). Overall, PLSR was a useful tool in modeling and analysing several compounds together which gives a simple overall picture than one separate model for each component (Wold et al., 2001). 
MIR gave the best prediction model which is consistent with the results from previous researchers' that obtained the best prediction using a model based on ATR-IR data compared to NIR and FT-Raman for determination of lycopene and ß-carotene in tomato fruits (Baranska et al., 2006). Conclusion In this study the feasibility of using vibrational spectroscopy as an alternative tool in the quality assessment of buchu oil has been shown. The technique was proven to be reliable in chemotaxonomic characterization of buchu oil where the use of leaf shape may not necessarily be reliable especially where hybridization is prominent. Although MIR, NIR and Raman spectroscopy can be used for quality assessment of buchu essential oils, MIR was shown to produce the best calibration model with the best predictive capacity for quantification of the major compounds. In this regard, spectroscopy was again proven to be a fast, reliable and inexpensive technique in the profiling of major compounds that occur in the commercially important buchu oil.
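As a rough illustration of the calibration workflow summarised above (PLS regression of GC-determined compound percentages on MIR spectra, a 70/30 training/test split, and R² and RMSEP as figures of merit), the following scikit-learn sketch uses synthetic data and an assumed number of latent variables rather than the actual spectra and settings of this study.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

# Synthetic stand-ins: 55 MIR spectra (2436 points) and 7 compound percentages.
rng = np.random.default_rng(0)
X = rng.random((55, 2436))
Y = rng.random((55, 7)) * 30.0

X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.3, random_state=0)

pls = PLSRegression(n_components=3, scale=True)   # 3 latent variables, as an assumption
pls.fit(X_train, Y_train)
Y_pred = pls.predict(X_test)

for k in range(Y.shape[1]):
    r2 = r2_score(Y_test[:, k], Y_pred[:, k])
    rmsep = np.sqrt(np.mean((Y_test[:, k] - Y_pred[:, k]) ** 2))
    print(f"compound {k}: R2 = {r2:.2f}, RMSEP = {rmsep:.2f}")
```

A single multi-response PLS model of this kind predicts all constituents together, which mirrors the approach taken in the study.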
4,866.4
2010-10-01T00:00:00.000
[ "Chemistry" ]
Micro Fabry-Pérot Interferometer at Rayleigh Range The Fabry-Pérot interferometer is used in a variety of high-precision optical interferometry applications, such as gravitational wave detection. It is also used in various types of laser resonators to act as a narrow band filter. In addition, ultra-compact Fabry-Pérot interferometers are used in the optical resonators of semiconductor lasers and fiber-optic systems. In this work, we developed a micro-scale Fabry-Pérot interferometer that was constructed within the Rayleigh range of the optical focusing system. The high precision that is conventionally required for the optical parallelism and the surface accuracy of the mirrors was not so critical for this type of Fabry-Pérot interferometer. The interferometer was constructed using a gold-coated silicon microcantilever with reflectivity of 92% and a dielectric multilayer flat mirror with reflectivity of 85%. The focal spot size of the laser beam is 20 μm and the cavity length is approximately 20 μm. The finesse was measured to be approximately 25. The interferometric characteristics of the device were consistent with the theoretically calculated performance. The developed micro Fabry-Pérot interferometer has the potential to make a marked contribution to advances in optical measurements in various micro sensing system. In this work, we have developed a micro Fabry-Pérot (FP) interferometer with high sensitivity to realize high-performance feedback damping of the thermal vibration of a silicon microcantilever that is intended for use in an atomic force microscope (AFM). FP interferometers have been used in various high-precision optical interferometry applications, such as gravitational wave detection 1 . To date, there have been many studies of normal-sized FP interferometers 2,3 , but only a few studies have addressed smaller types of FP interferometers 4,5 . There have been several studies of feedback cooling of the thermal vibration of micro cantilevers [6][7][8][9][10][11][12][13] . Recent studies found that the measured signal-to-noise ratio determines the limits of the feedback cooling performance 6,7,12 . We used an FP interferometer rather than a Michelson interferometer to improve the measurement sensitivity and thus increase the signal-to-noise ratio. In conventional FP interferometers, the polished end faces of optical fibers 10,11 and micro mirrors from the surface of a multilayer dielectric mirror 12,13 formed by focused ion beam microfabrication have been used as cavity mirrors. However, use of these methods for the mirror has led to issues such as low finesse due to optical diffraction from the fiber output aperture and problems with the parallelism of the optical alignment and the interferometer, along with difficulties in the microfabrication process. In addition, these methods do not use the merits of the Rayleigh range. In this work, we have developed a micro FP interferometer that uses the optical merits of the Rayleigh range of the focal system to simplify the optical system and improve the interferometric performance. The interferometric characteristics of this FP interferometer show good agreement with the theoretically calculated performance. Figure 1 shows the experimental system that was used to measure the interferometric characteristics of the micro FP interferometer. A He-Ne laser (wavelength of 632.8 nm; laser power of approximately 1 mW) was used as the light source for the interferometer. 
The micro FP interferometer is constructed using the gold-coated surface of a microcantilever and a dielectric multilayer flat mirror. We measured the vibration of a commercially available silicon microcantilever (OMCL-AC240TN, Olympus Corporation) that is intended for use in AFMs. Figure 2 shows a scanning electron microscope image of this microcantilever. The microcantilever's length, width, and thickness are 240 μm, 40 μm, and approximately 2.3 μm, respectively, and it is composed of single-crystal silicon. The natural oscillation frequency of the microcantilever is 77.6 kHz, and the catalog value of its spring constant is about 2 N/m. One side of the microcantilever was coated with gold to increase the laser reflectance, using an ion-beam sputtering device that is commonly used for preprocessing before scanning electron microscope observation. The coating thickness was chosen to be as thin as possible while ensuring that sufficient reflectivity (92%) was obtained, because reductions in both the natural oscillation frequency and the Q factor of the microcantilever were observed when a thick gold coating was used. The coating thickness was estimated to be approximately 25 nm based on the coating characteristic curve of the ion sputtering device. Ideally, the coating should be applied only to the area where the laser beam is irradiated; however, the whole front surface of the cantilever was coated because this was easier than partial coating. In the case of the silicon microcantilevers, we found that dielectric multilayer coating was difficult: we tried several times, but the microcantilevers broke in all cases, probably because of the surface stress induced by the coating. This was one of the reasons why we chose the gold coating.

Methods

The other side of the FP interferometer is formed by the dielectric multilayer flat mirror. The optical flatness and reflectance of this mirror were λ/10 and 85%, respectively. The diameter and thickness of the mirror were 30 mm and 1 mm, respectively. A laser beam with a diameter of 4 mm was focused using a spherical lens with a focal length of 80 mm and an F-number of 20. The focal spot size was estimated to be approximately 16 μm under the assumption of the diffraction limit. The Rayleigh range was estimated to be approximately 250 μm, and the cavity length was approximately 20 μm. The optical system was set in a vacuum chamber at a pressure of approximately 4 × 10 −3 Pa. The interferometric signal was separated using a beam splitter and measured using an avalanche photodiode. The microcantilever was driven using a lead zirconate titanate (PZT) piezoelectric actuator. The signal was measured using an oscilloscope and a fast Fourier transform (FFT) analyzer. The vacuum environment was not essential for this experiment; it was used only to obtain a clear thermal vibration signal of the microcantilever, and the same optical characteristics were also obtained at atmospheric pressure.

Results

The reflectance values of the microcantilever and the dielectric multilayer mirror were 92% and 85%, respectively; the reflectance of the microcantilever thus differs from that of the dielectric mirror. For an FP interferometer constructed using a pair of mirrors with different reflectances, the theoretical interferometric reflectance R was calculated using eq. (1), where R 1 and R 2 are the reflectances of the multilayer mirror and of the microcantilever, respectively 14 .
δ is the phase shift of each transmitted light wave due to the change in the cavity length L C and is given by δ = 4πL C /λ. Figure 3 shows the interferometric reflectance R as a function of the cavity length, as calculated using eq. (1) for various values of R 2 . We can see that the minimum interferometric reflectance cannot be 0% when R 1 and R 2 differ from each other. In the case where R 1 = R 2 , the minimum reflectance is 0%. In the case where R 1 > R 2 , the minimum reflectance increases as R 1 decreases. The optimum value for R 1 was located between R 2 and 1 (Fig. 4). The open circle in Fig. 4 corresponds to the experimental conditions (where R 2 = 0.85). Figure 5 shows the reflectance of the micro FP interferometer as a function of cavity length. R 1 and R 2 were 0.85 and 0.92, respectively. The FP interferometric characteristics were measured by varying the cavity length using the PZT actuator. The gray solid line indicates the theoretical calculation results obtained using eq. (1). The blue solid circles are the experimental results, which showed good agreement with the theoretically calculated curve. Scale fitting was performed only for the horizontal axis. The finesse of the interferometer was measured to be 25. Figure 6 shows the FFT signal of the thermal vibration of the microcantilever, which is used as one of the mirrors of the micro FP interferometer, at maximum sensitivity. The frequency resolution of the FFT analyzer is 0.5 Hz. The data are averaged over 1000 measurements. The gray solid line is fitted to the experimental results using a Lorentzian curve. The quality factor Q was measured to be approximately 2000. The thermal vibration amplitude was approximately 5 pm.

Discussion

In the vicinity of the focal point of the focusing optical system, the laser beam wave fronts are sufficiently flat to allow the FP interferometer to be constructed. The Rayleigh length l L is determined by the wavelength λ of the laser, the focal length f, and the laser beam diameter D on the lens. In this experiment, it was estimated to be about 250 μm, which is much longer than the cavity length (20 μm). This is why flat mirrors can be used as the reflectors of an FP interferometer placed in the focusing optical system. Figure 7 compares the retroreflectivity properties of the two types of optical reflecting systems when the mirrors of the FP interferometer are not parallel to each other. In case (b), the optical axis of the reflected beam is oriented parallel to the optical axis of the incident beam by the retroreflective effect, which makes it possible for the two beams to interfere. Consequently, in the micro FP interferometer, the requirement for parallel orientation of the pair of mirrors is greatly reduced when compared with the normal-type FP interferometer (case (a)). We could observe the interference fringes even when the reflected laser beam pattern from the interferometer did not completely overlap with that of the incident laser beam. The demand for optical flatness in the micro FP interferometer is also much weaker than that for the normal-type FP interferometer because of the reduced cross-sectional area of the laser beam. The optical flatness of the reflecting mirror was only λ/10, with which a finesse of 25 could not have been obtained in a normal-type FP interferometer.
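For reference, the reflectance behaviour and the measured finesse can be reproduced from the standard Airy formula for a two-mirror cavity; the sketch below assumes that eq. (1) takes this conventional form and uses the mirror reflectances and beam parameters quoted above, so it is an illustration rather than the authors' calculation. The Rayleigh-range estimate uses the usual Gaussian-beam expression, whose convention may differ slightly from the one used in the paper.

```python
# Sketch: standard Airy reflectance of a two-mirror Fabry-Pérot cavity with
# unequal mirror reflectances, plus the finesse and a Gaussian-beam Rayleigh
# range estimate. Assumes eq. (1) has this conventional form; numerical
# values are taken from the text (R1 = 0.85, R2 = 0.92, λ = 632.8 nm).
import numpy as np

wavelength = 632.8e-9        # He-Ne laser wavelength [m]
R1, R2 = 0.85, 0.92          # multilayer mirror and gold-coated cantilever reflectances

def airy_reflectance(cavity_length):
    """Reflected intensity fraction of a lossless two-mirror FP cavity."""
    delta = 4.0 * np.pi * cavity_length / wavelength       # round-trip phase
    r = np.sqrt(R1 * R2)
    num = (np.sqrt(R1) - np.sqrt(R2)) ** 2 + 4.0 * r * np.sin(delta / 2) ** 2
    den = (1.0 - r) ** 2 + 4.0 * r * np.sin(delta / 2) ** 2
    return num / den

L = np.linspace(20e-6, 20e-6 + wavelength, 2000)            # scan one free spectral range
print(f"min/max reflectance: {airy_reflectance(L).min():.3f} / {airy_reflectance(L).max():.3f}")

finesse = np.pi * (R1 * R2) ** 0.25 / (1.0 - np.sqrt(R1 * R2))
print(f"theoretical finesse ≈ {finesse:.1f}")               # ≈ 25, close to the measured value

# Gaussian-beam Rayleigh range for a 4 mm beam focused by an f = 80 mm lens
# (conventional definition; the paper's estimate of ~250 µm may use a
# slightly different convention for the focal spot size).
f, D = 80e-3, 4e-3
w0 = 2.0 * wavelength * f / (np.pi * D)                     # focal spot radius
rayleigh_range = np.pi * w0 ** 2 / wavelength
print(f"focal spot diameter ≈ {2 * w0 * 1e6:.0f} µm, Rayleigh range ≈ {rayleigh_range * 1e6:.0f} µm")
```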
Another characteristic of the micro FP interferometer is its large free spectral range, which results from its short cavity length. For practical fabrication of this type of FP interferometer, one possible method is to attach a small dielectric multilayer mirror to the base of the microcantilever with a thin spacer using optical contact bonding.

Conclusions

We have developed a micro Fabry-Pérot interferometer constructed within the Rayleigh range of the optical focusing system and demonstrated that its interferometric characteristics are consistent with the theoretically calculated ones. The high precision conventionally required for the optical parallelism and the surface flatness of the mirrors is not essential for the micro FP interferometer. We believe that the proposed micro FP interferometer has the potential to make a marked contribution to advances in optical measurements in various micro sensing systems.
2,553.6
2018-10-12T00:00:00.000
[ "Physics" ]
Non-linear fitting with joint spatial regularization in Arterial Spin Labeling

Multi-delay single-shot arterial spin labeling (ASL) imaging provides accurate cerebral blood flow (CBF) and, in addition, arterial transit time (ATT) maps, but the inherent low SNR can be challenging. In particular, standard fitting using non-linear least squares often fails in regions with poor SNR, resulting in noisy estimates of the quantitative maps. State-of-the-art fitting techniques improve the SNR by incorporating prior knowledge in the estimation process, which typically leads to spatial blurring. To this end, we propose a new estimation method with a joint spatial total generalized variation regularization on CBF and ATT. This joint regularization approach utilizes shared spatial features across maps to enhance sharpness and simultaneously improves noise suppression in the final estimates. The proposed method is evaluated at three levels, first on synthetic phantom data including pathologies, followed by in vivo acquisitions of healthy volunteers, and finally on patient data following an ischemic stroke. The quantitative estimates are compared to two reference methods, non-linear least squares fitting and a state-of-the-art ASL quantification algorithm based on Bayesian inference. The proposed joint regularization approach outperforms the reference implementations, substantially increasing the SNR in CBF and ATT while maintaining sharpness and quantitative accuracy in the estimates.

Introduction

Arterial spin labeling (ASL) is a non-invasive MRI technique for quantifying local tissue perfusion (Detre et al., 1992). The method utilizes magnetically labeled blood water, created by inverting the blood water spins upstream of the imaging region. After waiting a specific period of time, called the post labeling delay (PLD), which accounts for the time the magnetically labeled blood needs to flow into the region of interest, an image is acquired. This so-called label image is subtracted from a second image, the control image, acquired without magnetization alterations of the inflowing blood. From this difference image, also known as the perfusion weighted image (PWI), the cerebral blood flow (CBF) can be quantified using a general kinetic model (Buxton et al., 1998). The recommended clinical ASL protocol (Alsop et al., 2014; Telischak et al., 2014) consists of single-delay pseudo-continuous ASL (pCASL) (Dai et al., 2008) combined with segmented 3D data acquisitions such as gradient and spin echo (GRASE) (Feinberg et al., 2009; Günther et al., 2005) or turbo spin echo (TSE) stack of spirals (SoSP) (Ye et al., 2000; Dai et al., 2008) readouts, owing to the efficient background suppression (Ye et al., 2000) and SNR gains (Alsop et al., 2014) of these methods. For single-delay acquisitions the signal in the PWI depends on both the CBF and the arterial transit time (ATT). Hence, the accuracy of CBF estimation from single-PLD ASL data depends on both factors. Another limitation is that for cases with prolonged ATT (ATT > PLD) some of the labeled blood may remain in larger vessels. This leads to bright spots with an overestimation of CBF in larger vessels and an underestimation of CBF in the supplied brain tissue. The bright spots are known as vascular artifacts and can complicate clinical diagnosis in patients with stroke, steno-occlusions, or moyamoya disease (Zaharchuk, 2012).
A way to improve the clinical interpretation of single-delay ASL images is by applying additional coefficient of variation maps obtained from the multiple PWIs (Mutsaerts et al., 2017). Another way to reduce misquantification is by using a longer PLD, ensuring that the blood has sufficient time to reach the tissue. However, this leads to longer acquisitions and additionally to a lower SNR due to the T1-relaxation of the labeled blood. Alternatively, multi-PLDs can be used to sample the inflowing blood at several time points, from a short PLD to long PLD. By fitting the acquired signal to a kinetic model, the potential bias in CBF due to unknown ATT can be reduced (Buxton et al., 1998). However, the recommended segmented acquisitions have the drawback of a low temporal resolution with increased sensitivity to inter-segment motion. Therefore, only a limited number of PLDs can be acquired in a clinically acceptable time. Recently, accelerated single-shot 3D acquisition strategies (Ivanov et al., 2017;Boland et al., 2018;Spann et al., 2019) were implemented to overcome this drawback, at the cost of reduced SNR. This makes the estimation of reasonable quantitative ATT and CBF maps from this low SNR perfusion weighted time series challenging. The standard voxel-wise non-linear least squares (NLLS) fitting approach leads to outliers in low-SNR voxels. To this end, a weighted delay approach proposed to reduce outliers in the quantitative maps. Further improvements could be achieved by inclusion of spatial priors on the CBF map in a Bayesian inference model (BASIL (Chappell et al., 2010)). This stabilizes the fitting approach and reduces noise, ultimately leading to improved CBF estimates but introduces spatial blurring. Exploiting all available spatial information by means of joining the individual regularization of each unknown into a single, joint regularization functional can further improve reconstruction quality. Such an approach has been successfully applied in the context of relaxometry (Knoll et al., 2017;Wang et al., 2017;Maier et al., 2018). Joint regularization utilizes information present in each map, such as tissue boundaries, by means of advanced spatial regularization functionals to avoid the loss of small features and promotes overall sharper parameter maps. In this study, we propose a new non-linear fitting algorithm with joint spatial constraints on the CBF and ATT map to stabilize the estimation procedure and hence enhance the image quality. To improve the motion robustness of the 3D acquisition, we combine the proposed method with a single shot CAIPIRINHA accelerated 3D GRASE readout. The method is evaluated on synthetic phan-tom datasets including simulated pathologies, on six healthy subjects, as well as on seven stroke patients and compared to NLLS and BASIL without regularization on ATT (BASIL w/o) and with regularization on CBF and ATT (BASIL w/ ). Fixing notation Throughout the work we fix the following notations. The image dimensions in 3D are denoted as N i , N j , N k , defining the image space U = R N i ×N j ×N k with x = (i, j, k) defining a point at location (i, j, k) ∈ N 3 . u ∈ U N u expresses the space of unknown CBF-and ATT-maps with N u = 2 in this case. The measured data space is denoted as D = C N i ×N j ×N k and consists of N d perfusion weighted images derived from Control/Label (C/L)-pairs, measured at time t = (t 1 , t 2 , . . . , t N d ) ∈ R N d + . 
Parameter fitting From a statistical point of view the problem of identifying the unknown parameters u = (u 1 , u 2 , . . . , u N u ) ∈ U N u given a series of noisy measurements d = (d 1 , d 2 , . . . , d N d ) ∈ D N d can be solved via maximum likelihood estimation. Assuming the measurements at time t n are generated by some function A φ,t n : u → d n with fixed parameters φ the likelihood function of measuring d is given by p(d|u, A φ,t n ). The realization of p depends on the noise distribution in the measurements d. Under the assumption that additive independent and identically distributed zero-mean Gaussian noise with variance σ 2 (AWGN) corrupts the measurements d, the multivariate likelihood function turns into a product of single-variate functions. It is common to minimize the negative logarithm of the likelihood function, which is equivalent to maximizing the likelihood, as it turns the product into a sum and improves the numerical stability. Omitting constant terms with respect to u yields u * ∈ arg min u∈U Nu (1) which resembles the well known minimum least squares problem with · 2 being the standard L 2 -norm. Typically, several measurements with varying sequence parameters are necessary to quantify tissue parameters. Especially in cases with a non-linear relationship between acquired signal and parameters, fitting is performed in an iterative fashion. The ASL signal model The quantification of CBF and ATT is based on the standard model for pseudo-continuous ASL (pCASL) (Buxton et al., 1998) which reads as where u = ( f, ∆) and f amounts to CBF in ml/g/s, but is normally quoted in ml/100g/min, and ∆ to ATT in seconds. The a priori known parameters of equation 2 are combined into the variable φ = (M 0α , T 1 , T 1b , τ). It is assumed that T 1 , the apparent longitudinal relaxation decay constant of the tissue, amounts to 1.33 seconds at 3T. T 1b is the longitudinal relaxation decay constant of blood, assumed to amount to 1.65 seconds at 3T . τ corresponds to the labeling duration, α is the labeling efficiency and set to 0.7 (Dai et al., 2008) and t n is the acquisition time point, i.e. the sum of post labeling delay and labeling duration, for the n th measurement. Further, the bloodbrain partition coefficient λ is assumed to be 0.9 ml/g (Herscovitch and Raichle, 1985) thus 1/T 1 app ( f ) = 1/T 1 + f /λ, and M 0α = αM 0 /λ with M 0 being the acquired proton density weighted image. Regularization As the acquired PWI images suffer from poor SNR the problem of quantifying CBF and ATT typically suffers from numerically instabilities. A method to incorporate a priori knowledge of the parameters u into the maximum likelihood estimation problem 1 is known as maximum a posteriori estimation and leads to with γ > 0 balancing between the data fidelity term and the regularization R. R(u) includes known information about u such as its statistical distribution or spatial features, e.g. u should consist of piece-wise constant areas. As the variance σ 2 is in general unknown, we will not consider σ 2 fixed but as something that can be chosen in the reconstruction process. Thus, we combine it with the regularization parameter γ. The introduced prior can lead to a biased estimate of u with reduced uncertainties (Brinkmann et al., 2017). Thus a trade-off between faithfulness to acquired data and the prior needs to be determined according to the expected noise in the data. 
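For reference, the least-squares and maximum a posteriori objectives, together with the standard pCASL solution of the general kinetic model (Buxton et al., 1998), can be written as follows; the exact constants and parameterization used in the study may differ from this conventional form.

```latex
% Hedged sketch (requires amsmath): standard forms of the NLLS objective,
% the MAP objective, and the pCASL solution of the general kinetic model
% (Buxton et al., 1998); the authors' exact parameterization may differ.
\begin{align}
  u^{*} &\in \arg\min_{u \in U^{N_u}} \; \frac{1}{2}\sum_{n=1}^{N_d}
          \bigl\| A_{\phi,t_n}(u) - d_n \bigr\|_2^2
          && \text{(maximum likelihood / NLLS)}\\
  u^{*} &\in \arg\min_{u \in U^{N_u}} \; \frac{1}{2}\sum_{n=1}^{N_d}
          \bigl\| A_{\phi,t_n}(u) - d_n \bigr\|_2^2 + \gamma\, R(u)
          && \text{(maximum a posteriori)}\\
  A_{\phi,t_n}(f,\Delta) &=
    \begin{cases}
      0, & t_n < \Delta\\[2pt]
      2\, M_{0\alpha}\, f\, T_1^{\mathrm{app}}(f)\,
        e^{-\Delta/T_{1b}}
        \bigl(1 - e^{-(t_n-\Delta)/T_1^{\mathrm{app}}(f)}\bigr),
        & \Delta \le t_n < \Delta + \tau\\[2pt]
      2\, M_{0\alpha}\, f\, T_1^{\mathrm{app}}(f)\,
        e^{-\Delta/T_{1b}}\,
        e^{-(t_n-\Delta-\tau)/T_1^{\mathrm{app}}(f)}
        \bigl(1 - e^{-\tau/T_1^{\mathrm{app}}(f)}\bigr),
        & t_n \ge \Delta + \tau
    \end{cases}
\end{align}
```

Here, as stated in the text, 1/T 1 app ( f ) = 1/T 1 + f /λ and M 0α = αM 0 /λ.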
The most basic form consists of classical Tikhonov regularization which penalizes outliers in the parameter maps in an L 2 -norm sense (Tikhonov and Arsenin, 1977). An extension to this basic form consists of penalizing the gradient of the maps which is known as H 1 regularization (Tikhonov and Arsenin, 1977), leading to a smoother appearance but comes at the cost of blurred edges. To preserve edges and to obtain a better visual impression, a sparsity promoting functional is usually preferred which can be realized by posing an L 1 -norm based constraint on the sparse domain of the unknowns (Donoho, 2006;Lustig et al., 2007). As u is usually not sparse in its native domain, a sparsifying transform such as a finite differences operation or a wavelet transformation is used. The total variation (TV) functional of Rudin-Osher-Fatemi (ROF) (Rudin et al., 1992) is based on an L 1 -norm combined with a forward finite differences operator. This combination can be interpreted as a spatial piece-wise constant prior which is known to be prone to stair-casing artifacts in the final reconstruction results . In order to avoid these stair-casing artifacts but leverage the edge-preserving feature of TV a generalization termed total generalized variation (TGV) functional was proposed by Bredies et al. (2010). In the context of MRI, TGV 2 , which enforces piece-wise linear solutions by balancing between a first order and approximated second order derivative, was shown to yield excellent reconstruction results, preserving fine details and edges while maintaining the denoising properties of TV (Knoll et al., 2010). In the discretized form the TGV 2 regularization is realized via a minimization problem of the following form The favorable properties of TGV 2 can be further improved by sharing common feature information between the unknown parameter maps by joining the TGV 2 functionals utilizing a Frobenius norm in parametric dimension (Bredies, 2014). Recently, this combination was shown to yield improved reconstruction results compared to separate regularization on each map in the context of quantitative T 1 mapping (Maier et al., 2018) and multi modal image reconstruction (Knoll et al., 2017). The combination by means of a Frobenius norm is justified by the assumption that quantitative maps share the same features at the same spatial positions. To incorporate the Frobenius norm the following adaptations to the TGV 2 semi-norm with v = (v 1,l , v 2,l , v 3,l ) N u l=1 ∈ U 3×N u constituting the approximation of 3D spatial derivatives, and for the symmetrized gradient χ = (χ 1,l , χ 2,l , χ 3,l , χ 4,l , χ 5,l , χ 6,l ) N u l=1 ∈ U 6×N u 2.5. The non-linear, non-smooth optimization problem The combination of TGV 2 with equation 3 leads to which is a non-linear problem in the unknowns u and nonsmooth due to the L 1 -norms of the TGV 2 functional. Recall that for the ASL signal, the non-linear operator A φ,t n (u) is defined by equation 2 and u amounts to u = ( f, ∆). A similar problem arises in model-based quantification of T 1 and M 0 (Roeloffs et al., 2016;Wang et al., 2017;Maier et al., 2018). The problem is thus solved in analogy via a two-step procedure. First the data fidelity term is linearized in a Gauss-Newton (GN) fashion, second the linearized, non-smooth sub-problem is solved using a primal-dual splitting algorithm. 
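Before turning to the linearized sub-problem, the TGV 2 functional and its joint variant across the N u parameter maps can be sketched in a standard discretized form; the weights α 0 , α 1 and the exact discrete operators below are generic choices, not necessarily those used in the study.

```latex
% Hedged sketch of second-order TGV and its joint (Frobenius-norm) variant
% across the N_u parameter maps (here CBF and ATT); the weights and discrete
% operators are generic choices, not the authors' exact definitions.
\begin{align}
  \mathrm{TGV}^2_{\alpha}(u_l) &= \min_{v_l}\;
      \alpha_1 \bigl\| \nabla u_l - v_l \bigr\|_1
    + \alpha_0 \bigl\| \mathcal{E}(v_l) \bigr\|_1 ,\\
  \mathrm{TGV}^2_{\alpha}(u) &= \min_{v}\;
      \alpha_1 \Bigl\| \bigl(\nabla u_l - v_l\bigr)_{l=1}^{N_u} \Bigr\|_{1,F}
    + \alpha_0 \Bigl\| \bigl(\mathcal{E}(v_l)\bigr)_{l=1}^{N_u} \Bigr\|_{1,F},
\end{align}
```

where ∇ denotes a forward finite-difference gradient, E(v) = ½(∇v + ∇v ᵀ) the symmetrized gradient, and ‖·‖ 1,F sums, over all voxels, the Frobenius norm taken jointly across the N u maps.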
The linearized sub-problem for each linearization step k is given by Constant terms stemming from the linearization at position u k are fused with the data byd k n = d n − A φ,t n (u k ) + DA φ,t n u k and the matrix DA φ,t n | u=u k = ∂A φ,tn ∂u (u k ), i.e. the derivative of the signal with respect to each unknown, can be precomputed in each linearization step. The additional weighted L 2 -norm penalty 2 improves convexity of the function and resembles a Levenberg-Marquadt update if the weight matrix M is chosen as M k = diag(DA φ,t n | T u=u k DA φ,t n | u=u k ) or a simpler Levenberg update if M k is chosen as identity matrix. It was shown by Salzo and Villa (2012) that the GN approach converges with linear rate to a critical point for non-convex problems with non-differential penalty functions if the initialization is sufficiently close. By exploiting the Fenchel duality it is possible to transform the problem in equation 8 into a saddle-point which overcomes the non-differentiability issue of the L 1 -terms. Problems of the form in (9) can be efficiently solved using a first order primal-dual splitting algorithm (Chambolle and Pock, 2010) in combination with a line search (Malitsky and Pock, 2018) to improve the convergence speed. The detailed derivation is given in the Appendix A. Pseudo-code for the implementation can be found in Appendix B. Reference Methods For comparison of the proposed algorithm we used the nonlinear least squares (NLLS) as well as the Bayesian Inference for Arterial Spin Labeling MRI (BASIL) method Groves et al., 2009 (Smith et al., 2004;Woolrich et al., 2009;Jenkinson et al., 2012) and uses Bayesian inference to estimate the unknown parameter maps. It incorporates fixed non-spatial priors as well as adaptive non-local spatial smoothing priors for the parameters. The spatial smoothing prior is used for CBF and is directly based on evidence in the data. The smoothing strength is adjusted based on the local support in the specific area in the data. The arterial (macro-vascular) contribution flag was set to "OFF" in BASIL to facilitate comparability to the proposed method which currently implements the pCASL model omitting the local arterial contribution. In addition to this standard form of BASIL, termed BASIL w/o, a simple duplication of the line associated with spatial priors in the starting script enables priors on both, CBF and ATT, as described in Groves et al., 2009). This modification serves as second BASIL reference and is termed BASIL w/. Phantom generation To evaluate the proposed method, synthetic ASL data was generated from brain T 1 and PD maps supplied by MRiLab (Liu et al., 2017) Error propagation and stability To asses the error propagation and stability due to the nonlinear fitting procedure we performed a pseudo replica analysis for all three methods. To this end, 100 different noise realizations with a standard deviation of 0.65 were simulated for Case 3. Due to the non-linear fitting process a Gaussian noise assumption in the parameter maps could be violated, thus the median and inter-quartile range between the 25 th and 75 th quartile were used for evaluation. We assessed potential biases using the medians of differences to the ground truth of the 100 realisations and compared the differences in the IQRs between the methods. For the synthetic dataset GM and WM binary masks, generated on the ground truth phantom, are employed. 
Based on the down sampled GM and WM mask, low resolution mask were generated by thresholding the corresponding GM/WM masks with 0.7. Mask for the simulated lesions were generated in analogy. Additionally, we compared the estimated CBF and ATT maps of the proposed method with the results of BASIL and NLLS by means of a relative difference to the numerical ground truth parameter maps. To assess if differences in median or IQR are statistically Bonferroni (1936). Each method and tissue was considered as parallel test case. Median and IQR of CBF and ATT were considered as separate cases. Results were considered statistically significant for p-values less than 0.05 (p<0.05). In vivo measurements All measurements were performed on a 3T MAGNETOM Prisma (Siemens Healthcare, Erlangen, Germany) system using a 20-channel head coil. Written informed consent was obtained by all healthy volunteers as well as by all patients following the local ethics committee's regulations. In total, six healthy volunteers, consisting of five male and one female subject with an age of 29.5±2.6 years were analyzed.Additionally, seven patients with ischemic stroke due to middle cerebral artery occlusion who received successful re-canalization therapy (i.e. intravenous thrombolysis followed by mechanical thrombectomy), consisting of six male and one female subject with an age of 57.1±13 years, were considered. Patient data was acquired 24 hours after recanalization therapy. ASL images were acquired using a prototype 3D pCASL sequence with a 2D CAIPIRINHA accelerated single-shot 3D GRASE readout (Ivanov et al., 2017) and two background suppression pulses (Vidorreta et al., 2013). Labeling efficiency for this sequence was experimentally determined in Vidorreta et al. (2013) Additionally, for each healthy subject a T 1 weighted image was acquired using a 3D-MPRAGE sequence with the following imaging parameters: 1 mm 3 isotropic resolution, 176 slices, TR = 1900 ms, TE = 2.7 ms, TI = 900 ms, flip angle = 9 • , acquisition time = 5 min 58 s. ASL Data Processing The accelerated ASL images were reconstructed directly on the scanner console by means of a prototype reconstruction pipeline provided by the vendor. The reconstructed ASL images were motion corrected using Statistical Parameter Mapping (SPM)12 1 (Wellcome Trust Centre for Neuroimaging, University College London, UK) (Friston et al., 2007) and the ASL-Toolbox (Wang et al., 2008;Wang, 2012). This rigid-body based motion correction process involved three sub-steps as described in Wang (2012). After reconstruction and motion correction the perfusion weighted time series were calculated. From this perfusion weighted time-series the CBF and ATT maps were estimated using the proposed method as well as the two reference methods. The fixed parameters φ amount to the same values as in the synthetic data set except for T 1 =1330 ms, the approximate tissue T 1 relaxation constant. Anatomical Image Processing For each healthy subject brain masks and PV estimates for GM and WM were computed using FSL (FMRIB Software Library, Oxford, UK (Jenkinson et al., 2012)) and BASIL. In a first step, non-brain tissue was removed from the high resolution structural (T 1 weighted) image using the FSL tool BET (Smith, 2002). In a second step, PV estimates for GM and WM were obtained from the T1w image using the FSL tool FAST (Zhang et al., 2001). 
Third, the structural image and brain mask were registered to the mean ASL image using the FSL tool FLIRT with 6 degrees of freedom (Jenkinson and Smith, 2001;Jenkinson et al., 2002). The obtained transformation matrix served as initial guess for the next registration refinement step, implemented in BASIL. This step used the epi reg tool of FSL for boundary based registration of the perfusion image with the segmented white matter mask (Greve and Fischl, 2009). In the last step, the PV estimates for GM and WM were transformed to the ASL-space by a process that integrates over the volume of the low resolution voxels as described by Chappell et al. (2011) and implemented in the FSL tool applywrap. Finally GM and WM binary masks in ASL space were computed by thresholding the PV estimates at 70% in WM and GM respectively. For the patient data brain masks were generated from the M 0 image using the FSL tool BET due to the missing T 1 weighted image. Method Comparison Healthy subjects were compared based on visual inspection of ATT and CBF for 1 and 4 acquired averages. In addition, WM and GM masks were used to compute median and IQR which were visualized with box-plots. Statistically significant differences in median and IQR between methods were assessed using Mann-Whitney-U tests, similar to the ones in the numerical simulation. p-values were adjusted for multiple comparisons (Bonferroni, 1936). Results were considered statistically significant for p-values less than 0.05 (p<0.05). Stroke patients are compared based on visual inspection only. Parameter optimization To identify a good set of model and regularization parameters a grid search was performed on the synthetic dataset and in vivo healthy subjects. The resulting regularization parameters amounted to γ init = 10 −3 and δ init = 1 which were reduced by 0.5 and 0.1 respectively after each Gauss-Newton step down to γ f inal = 6.5 · 10 −6 and δ f inal = 10 −2 . A reduction of regularization parameters was observed to be beneficial for overall convergence in IRGN methods (Bakushinsky and Kokurin, 2004;Kaltenbacher et al., 2008;Kaltenbacher and Hofmann, 2010). Relative tolerance for convergence was set to 10 −8 between consecutive evaluations of function value. Regarding the inner iterations, 50 were used in the initial Gauss-Newton step and the number was increased by a factor of two until the maximum allowed number of 1000 iterations is reached , i.e. Implementation The proposed method is implemented in Python 3.7 Results We compare the fitting quality of our proposed joint spatial TGV 2 regularization strategy to established quantification on the CBF-and ATT-maps at three levels: (table 1). However, for the estimated ATT-map both methods show similar relative difference. Using spatial priors on CBF and ATT (BASIL w/ ) reduces this variations but also seems to introduce a slightly lower value in ATT ( . Case 1 has no simulated pathologies, Case 2 shows hyperperfusion in CBF only and Case 3 hyperperfusion in CBF and increased ATT in the corresponding area. In the first column the numerical ground truth is shown and in the following columns the estimated CBF-and ATT-maps from NLLS, BASIL without regularization on ATT (BASIL w/o), and with regularization on ATT (BASIL w/), and the proposed method without and with regularization on ATT, respectively. The proposed method with regularization on both unknown maps shows improved noise removal in CBF and ATT compared to the other methods due to joint spatial constraints. 
Median and 25%–75% IQR for selected ROIs are given in table 1.

Fig. 2. CBF and ATT maps of synthetic phantoms of three cases with pathologies are shown. Each case represents a large partial occlusion of the arteria media combined with a small partial occlusion in frontal gray matter. No variation in CBF is simulated but each case shows an increase in ATT, which gets more severe from Case 4 to Case 6. The order of reference and displayed reconstruction algorithms is the same as in figure 1. The proposed method shows the least influence of the highly increased ATT on the CBF estimates and is able to recover higher ATT values in the affected areas than the other methods, as can be seen in table 1.

Fig. 3. Pixel-wise relative absolute difference between the ground truth numerical reference and the quantitative maps, estimated with the algorithms of figure 1. The NLLS shows the greatest deviation in ATT and CBF. BASIL w/o reduces the relative difference in the CBF due to the spatial prior, and BASIL w/ is able to reduce deviations even further. The proposed w/o method shows similar results on CBF and ATT as BASIL w/o. The least relative difference is achieved with the proposed method due to joint spatial constraints on CBF and ATT simultaneously.

… with the exception of BASIL w/, which shows lower IQR than the proposed method. In GM CBF, the proposed method shows statistically significantly lower IQR than NLLS, but both BASIL approaches reduce the IQR further than the proposed method. The IQR of GM ATT shows the least deviations for the proposed method. In the WM lesion, the proposed method reduces the IQR compared to NLLS in CBF and ATT in a statistically significant manner, but no statistically significant difference to either BASIL method is observed. The CBF GM lesion shows a statistically significant reduction of IQR using the proposed method over BASIL but no difference to NLLS. In the corresponding ATT, no statistically significant differences in IQR are observable. All p-values and median IQRs are reported in table 2.

Discussion

In this study we present a novel joint spatial regularization technique for quantitative ASL imaging, combining non-linear fitting with a TGV 2 functional. The proposed method poses a joint spatial TGV 2 prior on both CBF and ATT to improve the robustness of the fitting procedure. Synthetic ASL datasets with different pathologies as well as in vivo data from healthy subjects and stroke patients with different SNR levels were considered. Imposing prior knowledge on the unknown parameter maps leads to the fundamental problem of the bias/variance trade-off, as the solution will depend on the used prior information. This is also true for the MAP-based approach used in this work. The amount of bias, however, can be controlled by the used prior and by the weight between data and prior information in the optimization (Brinkmann et al., 2017). To this end, the NLLS approach without any regularization can be considered … However, an extension to the complex model with the proposed method could be easily obtained by an adaptation of the signal equation used for fitting, which will be done in a future step. As the proposed approach has been implemented into a Python toolbox (Maier et al., 2020), addition of new models can be achieved in a straightforward manner. Extension to other ASL models, e.g. pulsed ASL (PASL), is simply done by replacing the forward model in equation 2 with the appropriate one. Simple models, not consisting of composed functions, can be included using a plain text file.
Complex models need to be im-plemented in Python by the user but templates exists to help in the implementation process. A detailed description of the employed software and how to include new models can be found in Maier et al. (2020). An exemplary PASL fit for phantom Case 0 is given in Supplementary Material Figure S8. In The contrary can be observed, the joint regularization produces the most stable CBF estimates of all methods. A totally wrong choice of the regularization weight compared to the supporting data could nevertheless introduce such errors. However, such a strong weight for regularization would also lead to a severely hampered visual impression and such fits would likely be discarded. ASL imaging is very sensitive to signal variations from motion or changes in blood velocity due to the cardiac cycle (Verbree and van Osch, 2017). Currently, the proposed method does not directly account for these variations. Motion related variations are corrected for in a preprocessing step but no correction for blood velocity changes is applied. As it is planned to extend the method to use raw k-space data, motion could be included in the forward model, e.g. based on determined motion fields prior to fitting, as it is done in MOCHA. As estimation of motion directly from highly undersampled k-space data can be challenging, a robust estimation and correction needs to be found. Another possibility would be to include a motion term into the fitting process but this poses a mathematically challenging problem, especially for forming forward and adjoint operation pairs. If ASL data is evaluated over large ROIs, NLLS seems to be the favourable method as it shows the least bias in our simulations (figure 7) for most investigated ROIs, especially in GM. In addition to the 3D acquisition used in this work, ASL is often performed in a 2D slice-by-slice fashion. Such data can also be fitted with the presented reconstruction framework, either, by applying regularization in 2D only or by adapting the third gradient direction to account for non-isotropic voxels. This adaptation amounts to a simple scaling of the gradient with respect to the ratio between in-plane and the acquired slice resolution, taking into account slice thickness and inter slice gap. Setting this scaling to zero equals 2D regularization. However, 3D regularization outperforms 2D, as has been shown in previous work (Huber et al., 2019;Maier et al., 2018). Reconstruction on the GPU required roughly 1 GB of memory which should be available on any recent GPU. Computation speed varies with hardware and further reduction could be expected with the recent increase in GPU performance. If memory requirements supersede the available GPU memory, a double buffering strategy is available, as introduced in Maier et al. (2020). New scanner consoles already often include GPUs, thus the proposed method could be directly integrated into the scanner reconstruction process.
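As a concrete illustration of the voxel-wise NLLS reference fit, the following sketch fits CBF and ATT to a synthetic multi-PLD signal using the standard pCASL solution of the general kinetic model; the labeling duration, M 0 , and noise level are placeholders, and this is not the implementation used in the study.

```python
# Sketch of voxel-wise non-linear least-squares (NLLS) fitting of CBF and ATT
# to a multi-PLD pCASL signal, using the standard solution of the general
# kinetic model (Buxton et al., 1998). Not the authors' implementation.
import numpy as np
from scipy.optimize import curve_fit

# Fixed parameters phi: T1, T1b, alpha, lambda as quoted in the text (SI units);
# the labeling duration TAU and M0 are illustrative placeholders.
T1, T1B, TAU, ALPHA, LAM, M0 = 1.33, 1.65, 1.8, 0.7, 0.9, 1.0
M0A = ALPHA * M0 / LAM

def pcasl_signal(t, f, att):
    """Perfusion-weighted signal at acquisition times t = PLD + tau (seconds).
    f is CBF in ml/g/s, att is the arterial transit time in seconds."""
    t = np.asarray(t, dtype=float)
    t1app = 1.0 / (1.0 / T1 + f / LAM)
    decay = 2.0 * M0A * f * t1app * np.exp(-att / T1B)
    during = decay * (1.0 - np.exp(-(t - att) / t1app))
    after = decay * np.exp(-(t - att - TAU) / t1app) * (1.0 - np.exp(-TAU / t1app))
    return np.where(t < att, 0.0, np.where(t < att + TAU, during, after))

# Synthetic voxel: CBF = 60 ml/100g/min = 0.01 ml/g/s, ATT = 1.2 s
plds = np.array([0.25, 0.5, 0.75, 1.0, 1.5, 2.0, 2.5])
t_acq = plds + TAU
rng = np.random.default_rng(1)
data = pcasl_signal(t_acq, 0.01, 1.2) + rng.normal(0, 2e-4, t_acq.size)

popt, _ = curve_fit(pcasl_signal, t_acq, data, p0=(0.008, 1.0),
                    bounds=([0.0, 0.1], [0.05, 3.0]))
print(f"fitted CBF ≈ {popt[0] * 6000.0:.1f} ml/100g/min, ATT ≈ {popt[1]:.2f} s")
```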
7,032.2
2020-09-10T00:00:00.000
[ "Computer Science" ]
Functionalization of a Fully Integrated Electrophotonic Silicon Circuit for Biotin Sensing Electrophotonic (EPh) circuits are novel systems where photons and electrons can be controlled simultaneously in the same integrated circuit, attaining the development of innovative sensors for different applications. In this work, we present a complementary metal-oxide-semiconductor (CMOS)-compatible EPh circuit for biotin sensing, in which a silicon-based light source is monolithically integrated. The device is composed of an integrated light source, a waveguide, and a p–n photodiode, which are all fabricated in the same chip. The functionalization of the waveguide’s surface was investigated to biotinylate the EPh system for potential biosensing applications. The modified surfaces were characterized by AFM, optical microscopy, and Raman spectroscopy, as well as by photoluminescence measurements. The changes on the waveguide’s surface due to functionalization and biotinylation translated into different photocurrent intensities detected in the photodiode, demonstrating the potential uses of the EPh circuit as a biosensor. Introduction The possibility to control and manipulate electrons and photons simultaneously in the same integrated silicon circuit (IC) introduces the novel concept of electrophotonics (EPh), where integrated photonics and electronics merge in a single IC. Given their combined nature, EPh systems present advantages by tackling the challenges of next-generation communication systems [1], biomedical [2], quantum, and sensing technologies [3]. Amongst its many uses, EPh presents promising applications in the sensing field, attaining the development of lab-on-a-chip systems where all the necessary components are monolithically integrated into the same platform. In addition, EPh has the added value of allowing us to exploit the already existing IC fabrication infrastructure for the miniaturization of the systems on a chip scale. Silicon-based photodetectors that rely on the absorption of visible to NIR light for optical-to-electrical conversion and integrated SiN waveguides are ideal candidates as basic components for EPh circuits [3,4]. These systems can be easily engineered to develop innovative sensors that use photonic mechanisms to detect different analytes [5,6]. Being able to determine if specific biomolecules are present in a sample is key for diverse healthcare applications, such as medical diagnosis [7], monitoring, and treatment. Over the last few years, many different silicon-based photonic biosensors that use diverse sensing techniques by means of the analyte's interaction with light have been reported [2,8]. However, a light source is always needed for these devices to operate, which, in most cases, represents complicated and expensive integration of external or hybrid light emitters that always come with complications regarding alignment and insertion [9]. Moreover, non-integrated light detectors require special setup to function, making the devices impractical [9,10]. Thus, monolithically integrated light sources and photodetectors would be a great asset for practical and cost-effective optical biosensors. In this work, we present for the first time, to our knowledge, the functionalization of an EPh circuit for biotin sensing where the light source, the waveguide (WG) with the Biosensors 2023, 13, 399 2 of 10 bioreceptive region, and the photodetector are all contained in a single chip (7 × 4 mm 2 ). 
The whole system is monolithically integrated into silicon and fabricated with CMOScompatible materials and processes [11], with it being the first of its kind to be applied as a biosensor. Biotinylation, also known as biotin labeling, serves as a protein-detection [12] or biomolecule-immobilization mechanism [13] since biotin covalently binds with very high affinity to specific proteins, such as streptavidin and avidin. Hence, biotin sensors are relevant for different tagging or detection applications. In the case of the device proposed here, the adhesion of biotin modifies the effective refractive index (n eff ) in the material on top of the WG core, changing its characteristics of light propagation. The presence of other molecules further modifies n eff , and such changes result in different photocurrent values detected by the integrated photodiode. These changes can be contrasted to a reference measure, resulting in a potentially excellent sensor, as the close integration of all the electronic and photonic elements allows for the detection, identification, and correlation of small changes in light propagation as compared to non-integrated optical sensing approaches. The additional possibility of fluorescence from the analyzed species adds a further degree of specificity regarding the characteristics of the light after interacting with a specific substance. To enable biotinylation on the EPh circuit, first, an analyte-specific functionalization of the WG surface is needed. This is achieved by coating the hydroxylated surface with 3-(aminopropyl) triethoxysilane (APTES), to which glutaraldehyde (GTA) is later covalently attached. Biotin is then deposited on the functionalized surface binding to the GTA molecules, finally reaching biotinylation. The surface is investigated on each step by microscopy, AFM, and Raman spectroscopy measurements, as well as photoluminescence (PL). Furthermore, the functionalized and biotinylated EPh system is evaluated through the changes in the detected photocurrent due to light-matter interactions with the molecules attached to the WG surface. Herein, we demonstrate the first fully integrated and functionalized EPh circuit for biotin sensing as a potential biomolecule sensor. Materials and Methods The EPh circuit consists of a Si-based light-emitting capacitor (LEC), a silicon nitride (Si 3 N 4 ) waveguide core on a silicon dioxide (SiO 2 ) cladding, and a p-n photodiode as the photodetector, all monolithically integrated into a Si substrate as schematized in Figure 1. This cutting-edge system was fabricated using all standard CMOS materials and procedures with no need for external light sources thanks to the LEC based on silicon-rich oxide (SRO) [4]. To operate the device, first, the LEC is biased by applying a voltage V LEC , producing light that is directly injected into the WG. The light emitted in the LEC is produced by electroluminescence in the SRO layer when inducing the current [14,15]. A portion of the propagating light in the WG (evanescent field) interacts with any substance placed on the surface of the WG, resulting in light-intensity changes. This modified transmitted light arriving to the photodetector depends on the nature of the deposited substance and its n eff and is finally converted into detectable electrical current I PN by the photodiode. Therefore, each different analyte has a different "transmission fingerprint" which could be related to specific values of the photocurrent. 
In addition, the possibility of fluorescent analytes (such as biotin) adds additional effects to the light interactions within the WG and the final intensity reaching the photodetector. So far, the viability of an EPh sensor based on a refractive scheme of the system, where fluorescence effects were not considered, has already been demonstrated through simulations in previous studies [6]. Further simulations considering other effects will be carried out in future work. A more detailed description of the EPh system and its fabrication process can be found in references [4,6]. For the study of the functionalized and biotinylated Si 3 N 4 surfaces and their optical properties, we used Si wafers with a Si 3 N 4 coating of the same characteristics as the WG core in the EPh system. Furthermore, the Si 3 N 4 of the actual EPh devices was also functionalized and biotinylated following the same methodology. The functionalization consisted of a sequence of steps starting with the activation of the hydroxyl (OH) groups on the Si 3 N 4 surface, which are necessary for APTES bonding on the surface.
The formation of the OH groups is commonly achieved by dipping the samples in a piranha solution [16], by applying heat treatment at 600 • C in an oxygen atmosphere [17], through oxygen plasma [18], or through an RCA cleaning process [19]. However, since the functionalization process was intended to be applied to the EPh circuit, it was necessary to follow a methodology that would not harm the device. For this reason, hydroxylation was achieved by submerging the samples into deionized water at 70 • C for 30 min, followed by heat treatment at 120 • C for 5 min in an N 2 atmosphere [20]. The samples were then immersed in a 4% APTES (≥98%, Sigma-Adrich, Munich, Germany) solution in ethanol for a duration of 180 min to ensure full coverage of the surface. A rinsing and ultrasonic (1 min) process in ethyl alcohol (99.6%, J. T. Baker, Phillipsburg, NJ, USA) was performed to minimize non-specific attachment of APTES to the surface, followed by heat treatment at 110 • C in a N 2 ambient temperature. Subsequently, the samples were immersed in a 4% GTA (25% in H 2 O, Sigma-Aldrich, Munich, Germany) solution in phosphate-buffered saline (PBS) (pH 7.4, Sigma-Adrich, Munich, Germany) for 120 min followed by a rinsing process in PBS and dried with N 2 . The APTES and GTA were deposited at room temperature in a N 2 atmosphere to avoid further unwanted reactions [16,21,22]. Considering that this study is the first approach to the biotinylation of an EPh circuit, different concentrations of biotin were not explored. However, this subject will be addressed in the future, as it is relevant to evaluate, for instance, the influence of PL intensity on the detected I photo and the detection limit of the system. Hence, following previous studies [23], biotinylation of the Si 3 N 4 functionalized surfaces was achieved by submerging the samples in a 4 mM biotin (≥98%, Sigma-Adrich, Munich, Germany)-PBS solution for a duration of 120 min at room temperature. Finally, the samples were rinsed with PBS [23,24]. A schematic representation of the biotinylated WG surface on the EPh circuit is presented in the top part of Figure 1. The Si 3 N 4 surface was characterized after each of the previously explained steps, i.e., after hydroxylation (we call this surface non-functionalized), after functionalization (with APTES and GTA), and after biotinylation (attachment of biotin). The changes on the Si 3 N 4 surface topography, due to the presence of the different molecules at each stage, were studied by optical microscopy (Leitz Dialux 20 microscope) and by atomic force microscopy (AFM) (Nanosurf easyScan in non-contact mode). Raman spectroscopy (WiTec alpha300R Confocal Raman Microscope) was used to confirm the bonding of the molecules. Furthermore, PL characterization (Horiba Jobin Yvon spectrometer model Fluoro-Max3) was performed to study the influence of the molecules on the Si 3 N 4 surface. The PL measurements were carried out under controlled illumination conditions at ambient temperature, exciting the samples with UV light at 330 nm. The evaluation of the capability of the EPh circuit to detect the presence of different molecules was performed by means of measurements and a comparison of photocurrent values produced by the photodiode. To measure the generated photocurrent I pn (Vpn, VLEC) , the photodetector was biased at a voltage V pn = −20 V, while a voltage V LEC was applied to the LEC (to produce light emission). 
However, it is necessary to eliminate any other current contribution (leakage current) and only consider the carriers generated by the injected light from the LEC. For this reason, instead of only using I pn , the photodiode is also biased with V pn under dark conditions, i.e., with the LEC turned off, and I pn (V pn , 0) is measured. Then, the reported photocurrent I photo (V pn , V LEC ) can be obtained from the following equation:

I photo (V pn , V LEC ) = I pn (V pn , V LEC ) − I pn (V pn , 0)    (1)

Si 3 N 4 Surface Characterization

The optical microscopy and AFM images of the Si 3 N 4 surface are presented in Figure 2, with them first showing the flat bare Si 3 N 4 surface (Figure 2a). After functionalization, through the deposition of APTES-GTA, it is possible to distinguish the presence of particles with spherical shapes (see Figure 2b), as reported in previous studies [13]. Figure 2c presents the surface of the samples after biotin attachment, where it is possible to observe bigger particles than before, which is associated with the presence of biotin on the functionalized surface. These results indicate a clear surface change as the different molecules were deposited.

Raman Spectroscopy

The attachment of the different molecules after functionalization and biotinylation was further verified by Raman spectroscopy (using an excitation source at 532 nm) of the samples at each step, as presented in Figure 3.
Here, the hydroxylated Si 3 N 4 surface was taken as a reference spectrum. After APTES-GTA deposition it is possible to observe the appearance of signals in a range between 1100 cm −1 and 1800 cm −1 , which could be related to the presence of NH 2 , CH 2 , and CHO groups in the APTES and GTA molecules [25][26][27]. After biotin is deposited on the functionalized samples, an enhancement of the same peaks can be observed, since these previously mentioned groups are also present in the biotin molecules [28]. These results suggest that the biotin successfully attached to the functionalized surface.

Figure 4 shows the PL emission spectra of the samples when excited with a 330 nm wavelength light source. The contribution to photoemission of the non-functionalized Si 3 N 4 surface presents a characteristic peak with a maximum intensity of around 500 nm, which agrees with the PL emission by Si 3 N 4 recorded in [29]. The origin of this light emission can be attributed to a combination of transition states between the Si 3 N 4 and oxide (produced during the hydroxylation process), nitride dangling bonds, and probably also surface defects induced by the OH groups [29,30]. A similar PL response is observed when the samples are functionalized, where only a slight rise in PL intensity is measured, as expected due to the presence of the APTES and GTA molecules [30,31]. The PL signal increases even further when biotin is deposited on the functionalized surface. This higher signal includes the contribution of luminescence by the biotin molecules themselves [32]. Here, as indicated in Figure 4, the characteristic wavelength at which biotin photoemits is 520 nm [31,33,34], which is closely around the maximum intensity peak (482 nm) of the PL signal of the biotinylated samples. These results, in addition to the previous findings, demonstrate the successful biotinylation of the Si 3 N 4 surfaces.
The PL signal increases even further when biotin is deposited on the functionalized surface. This higher signal includes the contribution of luminescence by the biotin molecules themselves [32]. Here, as indicated in Figure 4, the characteristic wavelength at which biotin photoemits is 520 nm [31,33,34], which lies close to the maximum intensity peak (482 nm) of the PL signal of the biotinylated samples. These results, in addition to the previous findings, demonstrate the successful biotinylation of the Si3N4 surfaces.

EPh Circuit Measurements

The functionalization and biotinylation processes were applied on the WG of the EPh circuit following the same procedure as for the previously characterized Si3N4 samples. Photocurrent measurements were performed at different stages of the process, and Iphoto was calculated using Equation (1) with Vpn = −20 V. In Figure 5, a representative measurement of the photocurrent Iphoto of an EPh device is presented. The shape of the curves is similar to those reported in [4]. It can be noted that, when the EPh circuit is non-functionalized, an Iphoto of almost −1 nA is detected at VLEC = 20 V, since the light passing through the WG has no interaction with any analyte on the surface other than air. However, when the WG is functionalized, Iphoto decreases to −500 pA due to the interaction of the evanescent field with the APTES and GTA molecules. Moreover, as can be seen from the AFM images in Figure 2b, the functionalized surface shows particles of a few tens of nm in size distributed on the Si3N4 film, probably inducing light losses due to scattering effects. This results in lower light intensities arriving at the photodetector and hence lower Iphoto. On the other hand, once the biotin is attached to the functionalized surface, a significant Iphoto increment is observed, which could be associated with a small contribution of biotin photoemission to the photocurrent [32].
More importantly, a possible increment of the refractive index of the biotinylated surface (APTES-GTA-biotin) compared to the non-functionalized surface (air), and hence better confinement of the propagated light, could be the main cause of the enhanced photocurrent values [6].

Functionalization of the EPh Circuit

In this study, the functionalization of the EPh circuit was validated to further analyze the attachment of biotin on the sensing region, i.e., the upper surface of the WG. The functionalization of the EPh system presented some challenges due to the nature of the circuit and its materials. For instance, a piranha solution or RCA cleaning is commonly used for the hydroxylation of such samples; here, however, to avoid etching and damaging the Al contacts (Figure 1), a different approach needed to be followed. Hence, immersion in deionized water and annealing of the samples was shown to be a successful methodology to activate the OH groups required for the attachment of APTES and, subsequently, GTA. The subsequent analysis of the Si3N4 surfaces by optical microscopy, AFM, and Raman spectroscopy revealed the presence of the molecules at each of the steps leading to successful biotinylation.

The EPh Sensor

The concept of the EPh circuit being used as a refractive index sensor, evaluating the changes of the generated Iphoto when different analytes are present on the WG surface, has been proven in the past [6]. In this work the same principle has been extended to the detection of molecules, such as biotin, that can be used for a diverse range of applications in the sensing area. The results show that, indeed, a change in the photocurrent was observed depending on the functionalization or biotinylation of the WG core, showing that the electrophotonic scheme can be applied to biosensing. The principle of operation of the biotin sensor presented here, in contrast to other detection schemes such as optical fiber or surface plasmon sensors [35], where an external light source (usually a laser) is always required, is based on the detection of Iphoto in a novel system where the light source (LEC) and the photodetector are fully integrated into the same circuit. However, the main drawback of the EPh system lies in the low light intensity emitted by the LEC, which translates into low Iphoto intensities in the pA-nA range. The experimental electroluminescence spectra of these SRO-based LECs have been measured to extend from blue to near-infrared, with two main peaks that vary with the VLEC applied across the structure [29]. Much effort is being put into enhancing the light-emission intensities of the LECs by texturizing the substrate's surface to promote larger electric field intensities around nanostructured sharp tips [36]. At the same time, the close, fully integrated nature of the system means that changes in photocurrent much smaller than in other systems can be detected, which also relaxes the requirement for high emission intensities [8]. As can be seen, work is still ongoing to improve the monolithically integrated light emitters while conserving CMOS compatibility.

Photocurrent

The intensity of the detected Iphoto is directly correlated with the intensity of the light arriving at the photodiode.
In a previous work, it was shown experimentally and through simulations that the interaction between an analyte on the WG and the traveling light depends strongly on the refractive index of the analyte [6], as expected. In this work, the different molecules attached during functionalization and biotinylation also contribute to changes in the detected Iphoto, as can be seen in Figure 5. Refractive index measurements of the molecules used here (GTA, APTES, and biotin) are not easy to obtain, since the commonly used spectroscopic ellipsometry method runs into difficulties due to the high roughness of the samples, as shown in Figure 2. Other techniques based on surface plasmon resonance are also commonly applied [37], but they require complicated instrumentation for the determination of the refractive index, and this lies outside the scope of the present study. However, it has been demonstrated that when the refractive index contrast between the analyte and the lower SiO2 cladding (see Figure 1) is low, light is better confined in the WG, whereas if the refractive index contrast is high, dispersion within the WG increases, causing higher losses [6]. Taking the latter into consideration for the highest-intensity Iphoto measurements obtained for the biotinylated device, it is reasonable to assume that the neff of the APTES-GTA-biotin layer is higher than that of the APTES-GTA film, since better confinement of the light resulted in higher Iphoto values in the biotinylated system. Nonetheless, the functionalized samples also presented a highly uneven surface filled with particles (agglomerates of APTES-GTA) with sizes in the range of a few tens of nm (Figure 2), which could lead to scattering effects and consequently large losses of the confined light, thus giving smaller Iphoto values. In addition, another conceivable contribution to the higher Iphoto measured on the biotinylated circuit might be the photoemission of the biotin molecules, as presented in Figure 4, adding to the intensity of the light traveling through the WG. Here, it is worth noticing that the power of the input light generated by the LEC is considerably lower (on the order of nW) than that of the light source used to excite the biotinylated samples during the PL measurements (30 mW). Hence, we only assume that there might be a small contribution of the biotin PL emission to the light arriving at the photodetector; a more thorough study needs to be carried out to fully understand this mechanism and the corresponding interactions within the WG.

Conclusions

In this work, it was proven that it is possible to functionalize a fully integrated EPh silicon circuit capable of sensing molecules such as biotin. This system has the potential to be scaled for biomolecule detection, for example for virus or antigen sensing. However, several challenges, such as molecule selectivity and the optimization of the EPh circuit itself, should be addressed first. Moreover, typical biosensor characteristics, including molecule affinity, sensitivity, and detection limits, are to be explored in the future. Here, a fully integrated EPh system that can be used to detect different molecules has been presented for the first time, proving its potential as a biosensor.
7,420
2023-03-01T00:00:00.000
[ "Physics" ]
Inverse Multiquadratic Functions as Basis for Rectangular Collocation Method to Solve the Vibrational Schrödinger Equation

We explore the use of inverse multiquadratic (IMQ) functions as basis functions when solving the vibrational Schrödinger equation with the rectangular collocation method. The quality of the vibrational spectrum of formaldehyde (in six dimensions) is compared to that obtained using Gaussian basis functions when using different numbers of width-optimized IMQ functions. The effects of the ratio of the number of collocation points to the number of basis functions and of the choice of the IMQ exponent are studied. We show that the IMQ basis can be used with parameters where the IMQ function is not integrable. We find that the quality of the spectrum with IMQ basis functions is somewhat lower than that with a Gaussian basis when the basis size is large and for a range of IMQ exponents. The IMQ functions are, however, advantageous when a small number of functions is used or with a small number of collocation points, e.g., when using square collocation.

Introduction

Computational vibrational spectroscopy is important in the studies of vibrational phenomena in space, in the atmosphere, and in materials and at interfaces. It is necessary for the assignment of molecular species, including reactants, products, and intermediates, and for the assignment of reaction pathways in many applications [1]. The Born-Oppenheimer approximation is a good approximation to compute the vibrational spectrum in most applications, in which case the vibrational dynamics is described by the Schrödinger equation (SE) for nuclei:

[T(r) + V(r)] ψ(r) = E ψ(r), (1)

where r denotes coordinates spanning the configuration space, V(r) is the potential energy surface (PES), T(r) is the kinetic energy operator (KEO), and ψ(r) is the wavefunction. The values of the PES can be computed ab initio or sampled from an analytic function (which itself could be fitted to ab initio data or to empirical data) [2][3][4][5][6][7][8]. The KEO has a simple form in space-fixed Cartesian coordinates r_SF:

T(r_SF) = −Σ_{i=1..N_atoms} (1/(2 M_i)) ∆_i, (2)

where N_atoms is the number of atoms, M_i are their masses, and ∆_i is the Laplacian operator in the Cartesian coordinates of the ith atom: ∆_i = ∂²/∂x_i² + ∂²/∂y_i² + ∂²/∂z_i² (we use atomic units). As space-fixed Cartesian coordinates are redundant for free molecules, typically, the KEO is expressed in internal coordinates, in which it is much more complex [9]. Approximations are often used to simplify the expression for the KEO [10][11][12][13], and these approximations are a source of error.
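As an illustration of Equation (2), the sketch below numerically applies the Cartesian-coordinate KEO to a trial function using central finite differences. The three-point stencil, step size, geometry, masses, and Gaussian trial function are illustrative choices only (the article itself uses a five-point stencil, as described in its Methods).

```python
import numpy as np

def keo_action(psi, coords, masses, h=1e-4):
    """Apply T = -sum_i (1/(2*M_i)) Laplacian_i to psi at one geometry.

    psi    : callable taking an (N_atoms, 3) array of Cartesian coordinates
    coords : (N_atoms, 3) array, geometry at which T psi is evaluated
    masses : (N_atoms,) array of atomic masses (atomic units)
    h      : finite-difference step (three-point central stencil; illustrative)
    """
    t_psi = 0.0
    f0 = psi(coords)
    for i in range(coords.shape[0]):        # atoms
        for k in range(3):                  # x, y, z
            plus = coords.copy();  plus[i, k] += h
            minus = coords.copy(); minus[i, k] -= h
            second_deriv = (psi(plus) - 2.0 * f0 + psi(minus)) / h**2
            t_psi += -0.5 / masses[i] * second_deriv
    return t_psi

# Minimal usage with a made-up two-atom geometry and a Gaussian trial function.
coords = np.array([[0.0, 0.0, 0.0], [1.8, 0.0, 0.0]])   # bohr, illustrative
masses = np.array([1837.0, 1837.0])                      # ~H-atom masses in a.u., illustrative
psi = lambda x: np.exp(-np.sum(x**2))
print(keo_action(psi, coords, masses))
```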
The eigenvalues E are vibrational levels, and differences between them simulate observed spectra. Intensities of transitions are important for comparison with experimental spectra; however, the spectrum (of eigenvalues) itself is already useful for species assignment, and its calculation for polyatomic molecules is nontrivial. Equation (1) can be relatively easily solved for a general PES for three- and four-atomic molecules, and the calculation becomes progressively more complex and CPU- and memory-costly for larger molecules. The commonly used method to solve Equation (1) for a general potential V(r) is the variational approach [14,15], in which one expands the wavefunction in a basis:

ψ(r) ≈ Σ_{i=1..N} c_i θ_i(r), (3)

where N is the basis size. Insertion of Equation (3) into the SE, Equation (1), multiplication on the left by basis functions θ_i(r), and integration over all configuration space leads to a matrix equation for the vector of coefficients (c):

Hc = EBc, (4)

where H is the Hamiltonian matrix and B the overlap matrix. Already with four atoms (a six-dimensional configuration space), it is not unusual to use hundreds of thousands of basis functions (N), and the necessity to compute the integrals often requires PES values at millions of locations [16]. This, practically, requires availability of the PES as a continuous function V(r). Such functions are not trivial to construct with high accuracy, even for a four-atomic system, and become difficult to construct for larger systems [3,[16][17][18]. For isolated molecules, it is relatively easy to compute a sufficient amount of high-quality ab initio data to which V(r) can be fitted. For vibrations at interfaces and in materials, the cost of ab initio calculations is much higher, and for the vast majority of molecule-surface or aggregate-state systems of practical importance, there are no, and there will never be, ab initio based PES functions, even though vibrational problems with a sufficiently small number of coupled degrees of freedom can be identified in such systems [12,19,20]. These difficulties, in particular, are responsible for the fact that harmonic spectra are still widely used in computational materials science. This is unfortunate, as in many applications anharmonicity is important. For example, the catalytic activity of surfaces is precisely due to their ability to weaken intramolecular bonds in adsorbed molecules, leading to more significant anharmonicity compared to free molecules. Popular methods which simplify the solution of Equation (1) while allowing for anharmonicity and coupling have also not found much use in materials modeling [21]. For example, the vibrational self-consistent field approach (VSCF) [10] still requires the availability of the PES, as it relies on integrals. The vibrational perturbation theory (VPT2) approach, implemented in many ab initio codes [22], assumes a simple polynomial form of the PES, which still requires many high-quality ab initio data points around the equilibrium, and fails for degenerate levels. Clearly, a direct solution of Equation (1) for a general V(r) is preferred.

A collocation approach is an alternative way to convert the SE (both the vibrational SE, Equation (1) [13,17,[23][24][25][26][27], and the electronic SE [28]) into a matrix form. In it, one uses the expansion of Equation (3), but only aims to satisfy the SE at a set of collocation points (r_j, j = 1, ..., M):

Σ_{i=1..N} c_i [T θ_i(r_j) + V(r_j) θ_i(r_j)] = E Σ_{i=1..N} c_i θ_i(r_j), j = 1, ..., M. (5)

Although square collocation (M = N) was the type of collocation originally proposed for vibrational problems [24][25][26], in general, M > N.
Equation (5) can be solved using approaches developed for rectangular matrices [29] or by squaring it and using common eigensolvers. Writing Equation (5) in matrix form as Hc = ESc, with the M × N matrices H_ji = Tθ_i(r_j) + V(r_j)θ_i(r_j) and S_ji = θ_i(r_j), the squared form reads

S^T H c = E S^T S c. (6)

There is no requirement that Equation (3) reproduce the wavefunction in all space (although with a sufficiently dense set r_j, covering all regions of space where the wavefunction is not negligible, this will be the case). The collocation points can be distributed in any desired way and do not have to form a quadrature grid (note that even though Equation (6) has the form of a quadrature, the sums need not be equal to any integrals) [30]. This potentially allows computing the spectrum from a smaller number of V(r) values, which could be directly computed ab initio. Rectangular collocation (which is solved in the least-squares sense) can also be used to tune the basis, so that good accuracy can be achieved with small N [13,27]. Note that M can be smaller if the basis is better. The rectangular collocation approach has been applied to computations of anharmonic spectra of polyatomic molecules, including those on surfaces, directly from ab initio samples of V(r) [11,12,19,20,31]. The KEO can, in principle, be applied to the basis functions analytically [11,12,32]; however, the complexity of KEO expressions in given internal coordinates (r_int) can be avoided by applying the KEO numerically (using finite differences with an appropriate stencil to achieve the desired accuracy) in space-fixed Cartesian coordinates (r_SF) [30,33,34]; see Reference [33] for details. In this way, it is easy to use the exact KEO with any coordinates in which the SE is solved and with any basis functions. A schematic implementation of the squared collocation problem of Equation (6) is sketched below.
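The following is a minimal numerical sketch of the squared collocation problem of Equation (6) for a one-dimensional toy oscillator. The potential, the Gaussian basis, the placement of centers and points, and the use of a general (non-symmetric) eigensolver are illustrative assumptions, not the settings of the article.

```python
import numpy as np
from scipy.linalg import eig

# 1D toy problem: V(x) = 0.5 x^2 (harmonic oscillator, m = 1, atomic units),
# Gaussian basis functions theta_i(x) = exp(-alpha (x - x_i)^2).
N, M = 25, 75                             # basis functions and collocation points (M = 3N)
centers = np.linspace(-6.0, 6.0, N)       # basis-function centers (illustrative placement)
points = np.linspace(-6.5, 6.5, M)        # collocation points r_j
alpha = 0.7 / (centers[1] - centers[0])**2

X = points[:, None] - centers[None, :]        # (M, N) displacements
S = np.exp(-alpha * X**2)                     # S_ji = theta_i(r_j)
d2S = (4 * alpha**2 * X**2 - 2 * alpha) * S   # analytic second derivative of each Gaussian
V = 0.5 * points**2
H = -0.5 * d2S + V[:, None] * S               # H_ji = T theta_i(r_j) + V(r_j) theta_i(r_j)

# "Squared" rectangular collocation, Equation (6): S^T H c = E S^T S c.
# The problem is non-symmetric, so a general eigensolver is used and the
# (numerically) real eigenvalues are kept and sorted.
E = eig(S.T @ H, S.T @ S, right=False)
E = np.sort(E.real)
print(E[:5])   # should be close to 0.5, 1.5, 2.5, 3.5, 4.5
```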
Problem Statement and the Aim of This Work

Rectangular collocation can be used with any basis functions. The functions need not be integrable or even continuous, except at the collocation points. For example, Slater-type functions (useful in full-potential electronic structure calculations) are easy to use with collocation, even though they are not with the variational approach (which is why Gaussian-like functions are usually preferred in that application) [28]. Localized or delocalized functions can be used. For localized basis functions, only Gaussian functions have been used with rectangular collocation for the vibrational SE [20,30,33,34]. Gaussian functions are general and form a sufficiently small basis set (especially if their parameters are fitted) for the calculation to be doable on a personal computer for low-dimensional problems, such as triatomic molecules, and for a small number of states [13,20,27]. For larger molecules and large numbers of states, either other types of basis functions need to be used, such as parameterized harmonic functions, which allow for a small basis set size, as was tested in up to 15 dimensions [11][12][13][19][31], or a much larger Gaussian basis would be needed. For example, a sub-cm−1 accuracy has been achieved for the vibrational spectrum of formaldehyde (a six-dimensional SE), with about 40,000 basis functions and with more than 100,000 collocation points [30,33,34]. While this kind of calculation is, in principle, doable on a modern workstation with a couple hundred GB of RAM, it is somewhat costly.

It is, therefore, useful to study other types of localized basis functions, which might be advantageous in terms of accuracy or of the required number of basis functions and collocation points needed to reach a given accuracy. In this work, we explore the generalized inverse multiquadratic (IMQ) function as the basis function in the rectangular collocation method when solving the vibrational Schrödinger equation. The parameter c controls the width of the function, and the exponent parameter β the degree of locality. It is usually assumed that β must satisfy β > d, the dimensionality of the space. This requirement is, however, made to ensure integrability (of θ_i(r)θ_j(r) and θ_i(r)Ĥθ_j(r)), and may not be necessary with collocation. The generalized IMQ function was previously used by Rabitz's group to solve bound-state Schrödinger equations for model one- and two-dimensional problems (Morse oscillator and 2D Henon-Heiles potential) with the square (N = M) collocation method [35]. Hu et al. [35] concluded, based on those model problems, that the IMQ basis functions are advantageous for highly anharmonic problems and highly excited states. They also noted a slower rate of growth of the associated condition numbers compared to Gaussians, which bodes well for building a more complete basis. The aim of this work is to explore the utility of the generalized IMQ basis function when solving a vibrational SE for a real molecule with the rectangular collocation method. We compare the performance of an IMQ basis to that of a Gaussian basis on the example of the spectrum of formaldehyde (i.e., solving a six-dimensional SE), for which results with Gaussian bases of different sizes are available, and for which comparisons with the variational approach using DVR (discrete variable representation) have been made in References [30,33,34]. Specifically, we explore the effect of different choices of β, including those which do not satisfy the integrability condition. We also explore the behavior of the spectrum quality with the number of basis functions and collocation points. An illustrative sketch of one common form of the generalized IMQ basis function is given below.
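The sketch below evaluates a generalized IMQ basis function and shows how the exponent β controls locality. The functional form written here, θ(r) = [1 + |r − r_i|²/c²]^(−β/2), is one common parameterization assumed for illustration; the article itself uses a modified, coordinate-adapted version (its Equation (11)), which is not reproduced here.

```python
import numpy as np

def imq(r, center, c, beta):
    """Generalized inverse multiquadratic basis function (assumed form for
    illustration): theta(r) = [1 + |r - center|^2 / c^2]^(-beta/2).
    The article's modified, per-coordinate-scaled version is not reproduced here."""
    d2 = np.sum((np.asarray(r) - np.asarray(center))**2, axis=-1)
    return (1.0 + d2 / c**2) ** (-beta / 2.0)

# Locality vs. the exponent beta: larger beta decays faster; beta <= d (here d = 6)
# makes the function non-integrable over R^d, yet it can still be used in collocation.
r = np.zeros(6); r[0] = 2.0          # a point displaced along one coordinate
center = np.zeros(6)
for beta in (4, 6, 7, 10):
    print(beta, imq(r, center, c=1.0, beta=beta))
```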
Methods

Equations (5) and (6) were solved using the IMQ basis functions. Calculations with Gaussian basis functions were performed for comparison, as in References [30,34]. The vibrational SE of formaldehyde, H2CO, was solved for the lowest 100 levels in six bond coordinates, including the CO bond length, the two CH bond lengths, the two HCO angles, and the dihedral angle between the two HCO planes. The six coordinates, in this order, form the vector r = r_int. The KEO was applied in space-fixed Cartesian coordinates (Equation (2)) using Equation (7), with a five-point finite difference stencil with all dx_k = 1 × 10−5. See Reference [33] for details. The values of V(r) at the collocation points were sampled from the analytic PES of Reference [36]. The collocation points r_j were chosen within specific ranges of the six coordinates, from a pseudo-random six-dimensional Sobol sequence [37], and accepted into the collocation point set according to an energy-based criterion in which rand is a (uniformly distributed) random number in [0, 1]. We used V_max = 17,000 cm−1 and ∆ = 500 cm−1. The coordinate ranges were r_min = (1.03, 0.84, 0.84, 83, 83, 105) and r_max = (1.50, 1.69, 1.69, 162, 162, 255), where bond lengths are in Å and angles in degrees. The point selection was, therefore, similar to that used in References [30,34] and thus allowed for comparison with results obtained with the Gaussian basis in those works; a schematic illustration of this kind of point selection is given below. The quality of the spectrum was evaluated as the MAE (mean absolute error), for the 50 and 100 lowest vibrational levels, versus a reference spectrum computed in Reference [30] on the same PES with a highly accurate variational scheme. For each combination of the numbers of basis functions (N), collocation points (M), and the IMQ exponent (β), the basis width parameter (c) was optimized. To account for the different ranges and types of internal coordinates, a modified version of the IMQ function (Equation (11)) was used. This allows for the use of a single parameter c for basis width optimization, in a similar way to that which was done for Gaussian width parameters in References [30,34]. The Gaussian widths of Equation (9) were optimized in the same way. The optimal width typically corresponded to condition numbers of the overlap matrix S^T S on the order of 10^10.
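As an illustration of the point-selection step described above, the sketch below draws candidate geometries from a Sobol sequence within the stated coordinate ranges and filters them by potential energy. The placeholder PES and the soft energy cutoff V_max + ∆·rand are assumptions standing in for the analytic PES of Reference [36] and for the article's actual acceptance criterion (its Equation (10)), neither of which is reproduced here.

```python
import numpy as np
from scipy.stats import qmc

rng = np.random.default_rng(0)

# Coordinate ranges from the Methods (bond lengths in angstrom, angles in degrees).
r_min = np.array([1.03, 0.84, 0.84, 83.0, 83.0, 105.0])
r_max = np.array([1.50, 1.69, 1.69, 162.0, 162.0, 255.0])
V_max, delta = 17000.0, 500.0            # cm^-1

def V_placeholder(r):
    """Stand-in for the analytic H2CO PES of Reference [36] (not reproduced here):
    a crude quadratic bowl in scaled coordinates, returning an energy in cm^-1."""
    x = (r - r_min) / (r_max - r_min) - 0.5
    return 4.0e4 * np.sum(x**2, axis=-1)

# Draw candidates from a six-dimensional Sobol sequence and scale them to the box.
candidates = qmc.scale(qmc.Sobol(d=6, scramble=True, seed=0).random(4096), r_min, r_max)

# Assumed soft energy-based acceptance rule (a stand-in for the article's criterion):
# keep a point if V(r) <= V_max + delta * rand, with rand uniform in [0, 1].
keep = V_placeholder(candidates) <= V_max + delta * rng.random(len(candidates))
points = candidates[keep]
print(points.shape)                      # accepted collocation points
```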
Effect of the IMQ Exponent

Figure 1 shows the MAE of the lowest 50 and 100 vibrational levels obtained with the width-optimized IMQ basis, using N = 20,000 basis functions and M = 80,000 collocation points, for different exponents (β). The horizontal lines are the MAE obtained with a width-optimized Gaussian basis with the same N and M. Two conclusions can be made from this figure: (i) The Gaussian basis generally outperforms the IMQ basis for the low-energy levels; however, for several β values, the spectrum quality is practically the same as with the Gaussian basis. The IMQ basis slightly outperforms the Gaussian basis for the higher-energy levels, corroborating the conclusion Hu et al. reached based on model systems. The specific choice of β seems to be non-critical, as long as the width (c) is optimized. The best-quality IMQ bases do satisfy the condition β > d = 6. (ii) The spectrum quality only marginally deteriorates for β = 4-6, which do not satisfy the integrability condition β > d. This highlights the fact that nowhere in the collocation Equations (5) and (6) is it required that integrals over all space be finite. The use of non-integrable functions might be beneficial in some applications and remains little explored. Collocation is a way to harness any advantages associated with such functions.

Effect of the Basis Size

Next, we tested the performance of the IMQ basis with the basis size at a fixed M:N ratio of 3. Figure 2 shows the MAE of the lowest 50 and 100 vibrational levels obtained with the width-optimized IMQ functions, with β = 7, and with width-optimized Gaussian basis functions. For the lowest 50 levels, the Gaussian basis outperforms, although only modestly, the IMQ basis, except at the lowest number of basis functions (N = 15,000), where the IMQ basis is better. N = 15,000 is, however, too small to achieve spectroscopically relevant accuracy on the order of 1 cm−1 or better. A similar result is observed with 100 levels, where IMQ outperforms for N = 15,000 and N = 20,000, but underperforms the Gaussian basis for larger basis sizes. For spectroscopically accurate calculations, the Gaussian basis is clearly preferred for both low-lying and highly excited states. The IMQ basis needs about a third more basis functions than the Gaussian basis to achieve a similar accuracy in this regime.

Effect of the Rectangularity of the Collocation Equation

The IMQ basis functions were previously tested with square collocation (M = N) [35]. Here, we show that it is advantageous to use rectangular collocation (i.e., M > N). Figure 3 shows the behavior of the spectrum errors over the lowest 50 and 100 vibrational levels, computed with a width-optimized inverse multiquadratic basis with β = 7 and a width-optimized Gaussian basis, for N = 30,000 basis functions and for different ratios of the number of collocation points to the number of basis functions, M:N. There is a clear advantage of using M > N, which allows for a several-fold increase in level accuracy versus square collocation. This is a practically important advantage from the point of view of computational cost, which is dominated by N; we have previously shown [33] that, when using Equation (6) to solve the rectangular collocation matrix equation, one only needs to store and manipulate matrices of size N × N for M > N as long as M:N is an integer. There are diminishing returns to the increase of the M:N ratio; here, M = 3N appears to be an optimal choice. We note that the effect of rectangularity is problem- and computational-setup-dependent (e.g., see Reference [30], where a more or less pronounced effect of increasing the M:N ratio can be observed depending on collocation point and basis function placement, when using Gaussian basis functions). Figure 3 here shows that the behavior with respect to the degree of rectangularity of the collocation equation is qualitatively similar for IMQ and Gaussian bases. Specifically, for the case M = N (i.e., that of square collocation), the IMQ basis slightly outperforms the Gaussian basis, corroborating the conclusion of Reference [35]. Square collocation is not, however, able to provide spectroscopic accuracy with N = 30,000, whereas rectangular collocation does allow achieving level errors on the order of 1 cm−1.

Conclusions

We have explored the use of inverse multiquadratic functions as basis functions in the rectangular collocation method to solve the vibrational Schrödinger equation. We computed the vibrational spectrum of formaldehyde (in d = 6 dimensions), which allowed us to compare the behavior of the solution with respect to various calculation parameters (the IMQ exponent parameter (β), basis size (N), and collocation point set size (M)) to that previously observed with the Gaussian basis. For the lowest basis set size we used (15,000 functions), the IMQ basis outperformed the Gaussian basis. However, to achieve the "spectroscopic accuracy" on the order of 1 cm−1 on the lowest 50 or 100 levels, more than N = 30,000 functions and M > N are needed; for these values of N, the accuracy obtained with the IMQ basis is somewhat lower than that with the Gaussian basis. The behavior with respect to the ratio of the numbers of collocation points (M) and basis functions (N) is qualitatively similar to that observed with a Gaussian basis, both showing a significant improvement of errors for M > N up to about M = 3N. This highlights the advantage of the rectangular over the square collocation, notably in the CPU cost, which is dominated by N rather than M. When N = M (i.e., when using square collocation), the IMQ basis outperformed the Gaussian basis, although in that regime the spectrum errors were much larger than 1 cm−1.
Perhaps the most important conclusion of this work is that, with collocation, one can use basis functions that are not even integrable. In this case, this was shown by using IMQ functions with the exponent β ≤ d. There is only a slight increase of error until β becomes much smaller than d, although the best accuracy is obtained when β > d. This is an important advantage over quadrature-based variational methods, as basis functions can be chosen to satisfy requirements that would make integrals more difficult to compute (an example being the cusp condition in electronic structure). It would be useful to study other non-integrable functions, as they might result in a more compact, yet sufficiently complete, basis at the collocation points. For example, one could use a neural network representation of the wavefunction with sigmoid neurons, which are non-integrable, as opposed to radial neurons, which are [38]. This is possible because, with collocation, one does not compute integrals (the sum in Equation (6) need not converge to any quadrature). This also should allow any singularities to be dealt with rather easily by not including them in the collocation point set; recent results [28] show promise in this direction and should be further explored.

Figure 1. The mean absolute error (MAE), in cm−1, over the lowest 50 and 100 vibrational levels of H2CO, computed with the inverse multiquadratic (IMQ50, IMQ100) basis with different β (beta) parameters. The horizontal lines at 2 and 3.7 cm−1 are the corresponding values obtained with a Gaussian basis (G50, G100, respectively), as in Reference [30]. N = 20,000 basis functions and M = 80,000 collocation points were used.

Figure 2. The mean absolute error (MAE), in cm−1, over the lowest 50 and 100 vibrational levels of H2CO, computed with a width-optimized inverse multiquadratic (IMQ50, IMQ100) basis with β = 7 and a width-optimized Gaussian (G50, G100) basis, for different numbers of basis functions N and M = 3N collocation points. The insert shows part of the graph at the logarithmic scale.

Figure 3. The mean absolute error (MAE), in cm−1, over the lowest 50 and 100 vibrational levels of H2CO, computed with a width-optimized inverse multiquadratic (IMQ50, IMQ100) basis with β = 7 and a width-optimized Gaussian (G50, G100) basis, for N = 30,000 basis functions and different ratios of the number of collocation points to the number of basis functions, M:N.
6,620.6
2018-09-28T00:00:00.000
[ "Chemistry" ]
Blocking c-MET/ERBB1 Axis Prevents Brain Metastasis in ERBB2+ Breast Cancer Simple Summary Targeted monotherapies are ineffective in the treatment of brain metastasis of ERBB2+ breast cancer (BC) underscoring the need for combination therapies. The lack of robust preclinical models has further hampered the assessment of treatment modalities. We report here a clinically relevant orthotopic mouse model of ERBB2+ BC that spontaneously metastasizes to brain and demonstrates that targeting the c-MET/ERBB1 axis with a combination of cabozantinib and neratinib decreases primary tumor growth and prevents brain metastasis in ERBB2+ BC. Abstract Brain metastasis (BrM) remains a significant cause of cancer-related mortality in epidermal growth factor receptor 2-positive (ERBB2+) breast cancer (BC) patients. We proposed here that a combination treatment of irreversible tyrosine kinase inhibitor neratinib (NER) and the c-MET inhibitor cabozantinib (CBZ) could prevent brain metastasis. To address this, we first tested the combination treatment of NER and CBZ in the brain-seeking ERBB2+ cell lines SKBrM3 and JIMT-1-BR3, and in ERBB2+ organoids that expressed the c-MET/ERBB1 axis. Next, we developed and characterized an orthotopic mouse model of spontaneous BrM and evaluated the therapeutic effect of CBZ and NER in vivo. The combination treatment of NER and CBZ significantly inhibited proliferation and migration in ERBB2+ cell lines and reduced the organoid growth in vitro. Mechanistically, the combination treatment of NER and CBZ substantially inhibited ERK activation downstream of the c-MET/ERBB1 axis. Orthotopically implanted SKBrM3+ cells formed primary tumor in the mammary fat pad and spontaneously metastasized to the brain and other distant organs. Combination treatment with NER and CBZ inhibited primary tumor growth and predominantly prevented BrM. In conclusion, the orthotopic model of spontaneous BrM is clinically relevant, and the combination therapy of NER and CBZ might be a useful approach to prevent BrM in BC. Introduction The improved five-year survival rate of~90% of breast cancer (BC) patients is mainly attributed to its successful clinical management that includes early screening and effective treatment modalities [1]. However, for BC patients with distant metastasis, the five-year survival rate is~27%, with a median survival of 18-24 months [2]. Irrespective of molecular subtypes, BC patients with brain metastasis (BrM) have the worst cancer-specific survival (CSS). Retrospective analysis of the Surveillance, Epidemiology, and End Results (SEER) database of BC patients indicates that the ERBB2 + subtype along with triple-negative BC account for more than 50% of cases of distant metastasis, which preferentially metastasizes to brain, bone, liver, and lungs [3]. Despite available targeted therapies, metastatic ERBB2 + BC patients have a median survival of~34 months [3]. The poor therapeutic response towards targeted therapies in metastatic ERBB2 + BC is attributed in part to clonal evolution in metastatic cells, their adaptation to the organ-specific microenvironment, and differential drug delivery to the metastatic niche [4][5][6]. This is exemplified by the presence of the blood-brain barrier (BBB) in the case of BrM, which has one of the poorest outcomes among all metastatic BCs [7][8][9]. The limited success of available therapeutic approaches against metastatic ERBB2 + BC underscores the need for novel targeted therapies. 
Recent clinical studies with small molecule inhibitors, including tyrosine kinase inhibitors, anti-ERBB2 agents, PI3K/Akt/mTOR inhibitors, and CDK4/6 inhibitors, have shown promise in inhibiting proliferation and metastasis in ERBB2 + BC [10][11][12]. For instance, inhibitors targeting ERBB1 and c-MET receptors that are upregulated during BC metastasis, are currently being investigated in clinical trials for patients with existing metastasis [13][14][15][16]. Neratinib (NER), an irreversible pan-ERBB family inhibitor, has been reported to be efficacious in metastatic BC patients in combination with capecitabine [17] and paclitaxel [18,19]. Further, the data from two different clinical trials showed that NER, in combination with either capecitabine or T-DM1, was effective against brain metastatic ERBB2 + BC, with grade 3 and 4 levels of toxicity, respectively [14,15]. A recent study in the ERBB2 + spontaneous metastasis model showed that NER monotherapy (60 mg/kg) inhibited proliferation and distant metastasis in BALB/c mice via inhibition of ferroptosis [20]. In addition, fluorescent imaging at experimental end-point post-neoadjuvant treatment with NER showed preventive effects on metastatic progression. Similarly, cabozantinib (CBZ), an inhibitor of the c-MET receptor, alone or in combination with standard therapies, is being investigated in ongoing clinical trials for the treatment of patients with metastatic breast and renal cell carcinomas [13,21,22]. As both ERBB1 and c-MET have been reported to be upregulated in metastatic ERBB2 + BC and positively correlated with poor survival in ERBB2 + BC patients, these pathways are considered as potential targets for combination therapies of metastatic BC. However, the effect of combination therapy, targeting both ERBB1 and c-MET has not been investigated in brain metastatic ERBB2 + BC. There are limited preclinical models available to understand underlying mechanism and evaluate targeted therapies against BrM of ERBB2 + BC. So far, mostly intracardiac and intracarotid injection-based models have been used to investigate therapeutic approaches against BC BrM [23][24][25]; however, both intracardiac and intracarotid models are considered insufficient to represent BC pathogenesis. In particular, the lack of primary tumors in these models limits the therapeutic evaluation of different treatment modalities to the metastatic site only, which is not the case in clinical management of metastatic BC (MBC) patients. In addition, the intracardiac and intracarotid models do not recapitulate the incidence of spontaneous metastasis and are often considered as models of "forced metastasis". Although an orthotopic ERBB2 + BC model for spontaneous metastasis has been reported recently in BALB/c mouse, it showed 80% incidences of lung and adrenal metastases, respectively, and 50% and 60% metastasis to bone and brain, respectively [20]. Thus far, there is a lack of clinically relevant ERBB2 + mouse model that can recapitulate the incidences of distant metastasis like MBC patients. Particularly, a mouse model demonstrating spontaneous metastasis to the brain is required to evaluate therapeutic modalities against lethal BrM of BC. In the present study, we aimed to evaluate the combined efficacy of ERBB1 and c-MET targeted therapies in vitro in brain metastatic ERBB2 + cell lines and in an organoid model, and in vivo in a novel orthotopic model of spontaneous BC metastasis. 
We observed that FDA approved anti-ErbB1/ErbB2 neratinib (NER) and anti-c-MET cabozantinib (CBZ) inhibited proliferation and metastasis of brain seeking ERBB2 + SKBrM3 and JIMT-1-BR3 BC cell lines and decreased growth of organoids derived from huERBB2-Tg mice. To evaluate combination therapy in vivo, we developed a unique ERBB2 + BC orthotopic nude mouse model of spontaneous metastasis that showed primary tumor growth and clinically relevant distant metastasis to brain, bone, liver, and lung. Combination treatment with NER and CBZ for 3 weeks was effective in inhibiting the tumor growth and incidence of BrM. Further, the combination treatment was more effective in preventing brain metastasis with a partial effect on other metastatic sites. Altogether, treatment with the NER inhibitor alone, and in combination with a CBZ, is an effective strategy for preventing BrM as observed in the orthotopic model of spontaneous BC metastasis. Targeting ERBB1 and c-MET Inhibits Proliferation in Brain-Seeking BC Cell Lines and ERBB2 + Organoids Previous reports suggest that ERBB1/2 and c-MET pathways play an important role in metastatic progression of different cancers, including BC [26][27][28][29]. We analyzed the expression of ERBB1, ERBB2, and c-MET receptors in the ERBB2 + brain metastatic BC cell lines SKBrM3 and JIMT-1-BR3. Interestingly, we observed an increased expression of ERBB1 in both SKBrM3 and trastuzumab-resistant JIMT-1-BR3 cell lines, but relatively reduced expression of ERBB2, as compared to their respective parental cell lines SKBR3 and JIMT-1 ( Figure 1A). In contrast, c-MET expression was upregulated only in SKBrM3 cells as compared to its parental cell line ( Figure 1A and Figure S1A). To investigate the efficacy of NER, we first estimated the inhibitory concentration in SKBrM3 and JIMT-1-BR3 cell lines. The IC 50 of NER for SKBrM3 and JIMT-1-BR3 were estimated as 7.2 µM and 3.3 µM, respectively ( Figure S1B). Based on the inhibitory effect in SKBrM3 cells, we treated all the cell lines with 1 µM NER (below IC 20 of SKBrM3 cell line) alone or in combination with different concentrations of CBZ (1-10 µM). The combination of NER and CBZ significantly inhibited cell proliferation (46.71% ± 4.6%) as compared to NER alone (10.98 ± 3.2%) and CBZ alone (32.46% ± 8.1%) in SKBrM3 cells ( Figure 1B), whereas in the SKBR3 cell line, the combination treatment inhibited proliferation to a greater extent (68.1% ± 1.3%) compared to NER alone (36.8 ± 2.8%) and CBZ alone (29.2 ± 5.4%), but the fold-difference in growth inhibition was lower compared to the SKBrM3 cell line ( Figure 1B). These studies suggested that the combination of NER and CBZ inhibited the growth of SKBrM3 cells in a dose-dependent and synergistic manner. In contrast, we did not observe a synergistic effect of combination treatment in the JIMT-1-BR3 cell line ( Figure 1C). As JIMT-1-BR3 showed lower expression of the c-MET receptor, we did not pursue it for evaluation of combination therapy targeting the c-MET/ERBB1 axis. Based on the results in the proliferation assay, we selected NER (1 µM) and CBZ (5 µM) for further treatments. Next, we investigated the effect of NER and CBZ on the organoids that were generated from huERBB2 + transgenic (Tg) mice. Here, we first analyzed the expression of targets pertinent to the combination treatment. We observed that ERBB1, ERBB2, and c-MET were highly expressed in these groups. 
Interestingly, compared to the 84.6% ± 22.2% change in the area of organoids in the control group (n = 10), the percent change in area for NER treatment was −16.72 ± 22.3% (** p < 0.01); for CBZ treatment 8.9 ± 24.3% (** p < 0.01); and for NER+CBZ treatment −43.06 ± 16.8% (** p < 0.01). Among the treatment groups, both NER and CBZ decreased proliferation as compared to the untreated control (Figure 1E,F). However, there was no significant difference in organoid growth between NER and CBZ treatment groups. Further, the combination treatment with NER and CBZ significantly reduced organoid growth as compared to the control group (~4-fold reduction; *** p < 0.001) and to single-agent treatments (Figure 1E,F). These data suggested that the combination of NER and CBZ was effective in the ERBB2+ organoid model and, therefore, required further investigation in an appropriate in vivo model of metastasis. (Figure 1F legend: the area of each organoid was calculated in µm², and the percent change in area is plotted per treatment group; n = 8 organoids for the NER and CBZ groups, n = 10 for the control group, and n = 11 for the combination group; statistical significance was calculated by one-way ANOVA, * p < 0.01, ** p < 0.001, *** p < 0.0001, NS = no significance.)

Effect of NER and CBZ Treatment on Migration of Brain Seeking Cells

We performed a Boyden chamber migration assay to evaluate the effect of combination therapy on cell migration. Interestingly, we observed that NER (1 µM) and CBZ (5 µM) each inhibited in vitro cell migration of the SKBrM3 as well as the SKBR3 cell lines (Figure 2A). In the SKBrM3 cell line, NER and CBZ alone inhibited migration by 32.3 ± 2.9% and 29.2 ± 4%, respectively, compared to the untreated control group (Figure 2B). The effect was even greater with a combination of NER and CBZ in the SKBrM3 cell line (63.25 ± 7.6%), suggesting that targeting the ERBB1 and c-MET receptors inhibits cell motility in the SKBrM3 cell line. In contrast, CBZ alone significantly reduced the migration of JIMT-1, but not JIMT-1-BR3 cells (Figure 2A), possibly due to reduced expression of c-MET in the latter cell line. These studies suggested that the c-MET receptor might not be a potential target in JIMT-1-BR3 cells. However, NER treatment reduced the migration of JIMT-1-BR3 cells by 76 ± 1.6% (Figure 2B) as compared to the untreated control group. The quantitative analysis showed that more SKBrM3 cells migrated through the 0.8 µm barrier as compared to the parental cell line, possibly due to their higher metastatic potential (Figure 2B).

Effect of NER and CBZ on Downstream Signaling

The synergistic regulation of signaling mediated by ERBB1 and c-MET receptors is important in the regulation of cancer progression, metastasis, and drug resistance [16,28,30]. The downstream protein kinase B (PKB/Akt) and extracellular signal-regulated kinase (ERK) are co-regulated by both ERBB1 and c-MET receptors [31].
Therefore, we analyzed the effect of NER (1 µM) and CBZ (5 µM) for 48 h on the expression of ERBB1 and c-MET receptors, and assessed the effect of combination treatment on activation of AKT and ERK molecules in SKBrM3, JIMT-1-BR3, and their respective parental cell lines. Interestingly, NER, as a single agent, as well as in combination with CBZ, modulated pERBB2 (Tyr1248), pERBB1 (Tyr1068), and its downstream pAKT (Ser473) and pERK (Thr982) signaling ( Figure 2C,D). Particularly in metastatic SKBrM3 and JIMT-1-BR3 cell lines, 1 µM NER treatment reduced downstream pERK and pAKT expression, suggesting that NER treatment alone is efficacious in inhibiting downstream signaling. In contrast, CBZ at 5 µM had no effect on ERK and AKT phosphorylation. Furthermore, a combination with NER (1 µM) and CBZ (5 µM) reduced pERK signaling with a partial effect on pAKT signaling ( Figure 2C,D), suggesting that downstream pERK signaling is critical in the metastatic SKBrM3 cell line. Further, we observed a similar response with NER alone or NER in combination with CBZ on the JIMT-1-BR3 cell line that expressed a low endogenous level of c-MET, suggesting specificity of CBZ with c-MET expression. Effect of NER and CBZ on In Vitro Trans-Endothelial Migration The BBB is selectively permeable under normal physiological conditions. However, the pathological cues, including brain metastasis, render the loss of BBB integrity, which transforms the intact BBB into the blood-tumor barrier (BTB) and alters the permeability for therapeutic agents [32][33][34]. As the SKBrM3 cell line expresses high c-MET and ERBB1 receptors, and the combination treatment of NER and CBZ predominantly downregulated the pERK pathway in this cell line, we examined the effect of treatment on the migration of SKBrM3 cell line in vitro that mimics the human BBB to some extent ( Figure 2E). We also observed a greater impact of combination treatment in the trans-endothelial migration (TEM) assay. Interestingly, as compared to the control group, NER and CBZ alone inhibited the TEM 66.16% ± 3.18 and 55.10% ± 2.17, respectively, whereas inhibition of migration was significantly greater in the combination treatment (92.79% ± 0.89; Figure 2F). Further, to visualize the effect of combination therapy on TEM of SKBrM3 cells, we presented the micrograph data, which showed that the combination treatment with NER and CBZ elicited a more profound effect as compared to control and single agent treatments ( Figure 2G). In addition, the efficacy of combination treatment of NER and CBZ was significantly higher than single treatment groups, suggesting that targeting the c-MET/ERBB1 axis could be a potential therapeutic strategy to prevent brain metastasis. Characterization of the Orthotopic Model of Spontaneous BC Metastasis Animal models developing distant metastasis either due to spontaneous progression of BC or by orthotopic implantation in mammary fat-pads are useful to understand the molecular mechanisms regulating various steps of metastatic progression and for evaluating the anti-tumor and anti-metastatic activity of targeted therapies. To develop a BC orthotopic model of spontaneous metastasis, we selected SKBrM3 cells, as these cells express high endogenous levels of c-MET and ERBB1 and showed an adequate response in combination in vitro compared to other cell lines. We first enriched SKBrM3 cells using the Boyden chamber as demonstrated in the schematic ( Figure 3A). 
The enriched SKBrM3 cells (SKBrM3 + hereafter) exhibiting high vimentin and ZEB1 expression ( Figure 3B) were used for orthotopic injection in the fat pad of the fourth mammary gland of female nude mice (n = 6). We observed a progressive tumor growth kinetics and after 6 weeks, we euthanized tumor-bearing mice to record tumor weights and incidences of metastasis. The average tumor weight was found to be 1.43 ± 0.32 g. Representative images from bioluminescence imaging (BLI) of orthotopic tumors and pictures of isolated tumors are shown with in Figure 3C. Subsequently, we assessed the composition of the tumor microenvironment in SKBrM3 + orthotopic tumors. The IF staining suggested that the tumors were highly positive for α-SMA (fibroblasts), CD31 (blood vessels), and F4/80 (macrophages), which are considered as major cellular constituents of the tumor microenvironment and play an instrumental role in metastatic progression [35][36][37][38] (Figure 3D). Next, we analyzed the expression of target molecules in the tumor sections derived from the mammary fat pads of implanted mice. Interestingly, we observed high expression of pERBB1, pERBB2, and pc-MET in SKBrM3 + tumor sections in Immunohistochemistry (IHC) analysis ( Figure 3E). Furthermore, we isolated the organs and performed BLI to analyze incidences of metastasis in different organs. Interestingly, mice bearing orthotopic tumors exhibited extensive distant metastasis to the bone, brain, liver, and lung, which are the major organs of metastasis in the ERBB2 + BC subtype ( Figure 3F). Representative images from BLI are shown for each organ along with the total incidences of metastasis ( Figure 3F). Further, we observed that SKBrM3 + cells exhibited high incidences of brain metastasis, and we found five mice out of six to be positive in BLI. The organ-specific metastatic behavior of SKBrM3 + cells might be a cumulative result of sequential in vitro enrichment, the injection site, and/or influence of the tumor microenvironment on disseminated cells. NER and CBZ Treatment Decreases Tumor Growth and Prevents Distant Metastasis As the upregulated ERBB1 and c-MET pathways contribute to BC progression and distant metastasis [26][27][28], we investigated the ability of NER and CBZ to prevent primary tumor growth and metastasis in the orthotopic model of spontaneous metastasis (n = 5). Based on previous studies, we considered lower doses of NER and CBZ (20 mg/kg body weight each) to investigate combined efficacy in vivo [39,40]. As mentioned in the experimental plan, we administered NER and CBZ orally five days a week for three weeks ( Figure 4A). Following three weeks of treatment, we analyzed the effect of combination therapy on tumor growth and metastasis. Interestingly, both NER (20 mg/kg/body weight) and CBZ (20 mg/kg/body weight) decreased the average tumor volume by 85.7% and 67.4%, respectively, as compared to the control group ( Figure 4B). The combination treatment further decreased the average tumor volume by 90.2% of the control group ( Figure 4B), which was also observed in BLI before euthanization of mice at experimental endpoint ( Figure 4C), and in the images of isolated tumors ( Figure 4D). 
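The tumor-volume effects reported above can be summarized as a percent reduction of each treated group's mean relative to the control mean. The following sketch assumes per-mouse volume measurements are available; all numbers are illustrative placeholders, not the measured data.

```python
import numpy as np

def percent_reduction(treated, control):
    """Percent reduction in mean tumor burden relative to the control group."""
    return (1.0 - np.mean(treated) / np.mean(control)) * 100.0

# Illustrative placeholder tumor volumes (mm^3), n = 5 mice per group
volumes = {
    "control": [820, 760, 905, 840, 780],
    "NER":     [115, 130, 98, 142, 120],
    "CBZ":     [260, 300, 275, 240, 310],
    "NER+CBZ": [70, 85, 64, 92, 78],
}

for name in ("NER", "CBZ", "NER+CBZ"):
    print(f"{name}: {percent_reduction(volumes[name], volumes['control']):.1f}% "
          "reduction in mean tumor volume vs control")
```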
These data were further substantiated by analysis of isolated tumors from each group, where we observed that the average tumor weights were significantly decreased in each treatment group, including NER (70.8% ± 10.45%; ** p < 0.01), CBZ (58.25% ± 14.2%; * p < 0.05), and combination (75.82% ± 18.74%; ** p < 0.01), as compared to the control group (Figure 4E). Further immunohistochemical analysis of the primary tumors demonstrated significantly lower Ki-67 + cells in the combination treatment group as compared to the control group and single treatment groups (Figure 4F,G). Overall, the combination treatment of NER and CBZ significantly reduced tumor growth, as compared to CBZ alone and the untreated control group. We further analyzed the effect of combination treatment with NER and CBZ on incidences of metastasis by using a small animal imaging system. The BLI analysis of isolated organs suggested that the combination of NER and CBZ predominantly showed a preventive effect on the incidence of BrM (Figure 5A,B). We found that 80% of mice did not show BrM after the combination treatment. In contrast, the combination of NER and CBZ showed a preventive effect to a lesser extent in the case of bone metastasis, but we observed reduced metastatic burden in bone post treatment, as compared to the control group (Figure 5A,B). As a single agent, NER prevented BrM in three out of the five mice, whereas it had a limited effect on bone metastasis, and only one mouse was observed with no metastasis (Figure 5A,B). In contrast, CBZ as a single agent was more effective in decreasing BrM than lung, liver, and bone metastasis (Figure 5A,B). Overall, the data suggested that the combination of NER and CBZ effectively prevented metastasis to the brain with a partial effect on lung, bone, and liver metastasis. The differential response to combination therapy might be due to differential metastatic burden or due to poor response towards the therapy at different metastatic sites. (Figure 5B legend: the x-axis shows the percent incidence of metastasis in each organ, including brain (blue), bone (orange), liver (gray), and lung (yellow), and the y-axis shows the treatment groups: control, NER, CBZ, and their combination.) Discussion Targeted therapies, particularly the anti-ERBB2 antibody trastuzumab (Herceptin) and its derivatives, alone or in combination, have greatly improved survival in ERBB2 + BC patients [41][42][43]. In contrast, there are limited targeted therapies available, with modest therapeutic responses, to target distant metastasis in ERBB2 + BC patients [3,44,45]. The optimization of targeted therapies against metastatic ERBB2 + BC is challenging for several reasons, including molecular and metabolic adaptations in metastasizing cells, refractory organ-specific microenvironments, and lack of preclinical models to evaluate therapeutic targets [9,[46][47][48]. In this study, we reported an orthotopic model of spontaneous ERBB2 + BC BrM that can be used to evaluate therapeutic approaches targeting both the primary tumor and distant metastases together, suggesting its high clinical relevance. Previously, orthotopic mouse models of BrM using the triple-negative cell lines MDA-MB-231 and 4T1 have been reported in NSG and BALB/c mice, respectively, which are suitable for therapeutic modalities against TNBC [49][50][51][52]. In the ERBB2 + BC subtype, both intracardiac and intracarotid models have been used for the evaluation of therapeutic modalities against BC BrM [24,25,53].
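A per-organ incidence table like the one summarized above (and in Figure 5A,B) can be tallied directly from binary ex vivo BLI calls. The sketch below uses hypothetical positive/negative calls for a five-mouse group; the organ list and values are illustrative assumptions only.

```python
# Per-organ BLI calls per mouse (True = detectable ex vivo signal); values illustrative
bli_positive = {
    "brain": [False, True, False, False, False],   # e.g. 1/5 positive after combination
    "bone":  [True, True, False, True, True],
    "liver": [True, False, True, False, False],
    "lung":  [False, True, False, True, False],
}

for organ, calls in bli_positive.items():
    n_pos, n_total = sum(calls), len(calls)
    print(f"{organ}: {n_pos}/{n_total} mice positive "
          f"({100.0 * n_pos / n_total:.0f}% incidence)")
```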
However, neither of these models are enough to understand the mechanisms of BC progression and metastasis, nor can they be used to test preventative therapies in BC. Recently, an ERBB2 + orthotopic model of spontaneous metastasis has been reported in BALB/c mouse [20]. However, the incidences of metastasis are more in lungs (~80%) as compared to bone (50%) and brain (60%). Conversely, the orthotopic nude mouse model described in this study is, physiologically, more relevant compared to intracardiac and intracarotid models and ideally suitable for the evaluation of therapeutic modalities against BC metastasis due to the presence of both matched primary and metastatic tumors. In addition, the SKBrM3 + orthotopic model exhibited consistently high incidences of metastasis to bone and brain, which recapitulates the clinical metastatic burden observed in metastatic BC patients [3]. For the assessment of metastasis in small animals like mouse models, BLI is the most commonly used method that is based on the luciferase activity of engineered metastatic cell lines [54,55]. Assessment of various organs derived from LUC+SKBrM3 + cell line implanted mice by BLI helped us to clearly identify the metastatic burden in different treatment groups. Therefore, BLI imaging of intact and freshly harvested tissues is an excellent approach to analyze incidences of metastasis using highly sensitive luciferin drugs and could be used to analyze metastases. However, further evaluation of molecular markers, radiological imaging, and histological analysis could be used for deeper insight of metastatic progression and associated pathways. In line with previous reports that showed co-amplification of c-MET and ERBB pathways and their role in metastatic BC [24,56,57], we observed the higher expression of c-MET and ERBB1 receptors in ERBB2 + brain-seeking SKBrM3 cells as compared to its parental cell line, and in organoids derived from ERBB2-Tg mouse. As both the brain metastatic cell lines SKBrM3 and JIMT-1-BR3 showed altered molecular expression as compared to their parental cell line, our results support the existing notion that molecular expression is influenced during the gain of metastatic traits and in response to the microenvironment at metastatic sites [5,7,58]. For example, a recent multicenter study analyzed the genomic profile and subtype switching in BC patients who were diagnosed with BrM. Interestingly, there was differential genomic profiles in case of BrM as compared to their respective primary tumors and more than 20% patients showed subtype switching, including ERBB2+ patients [59]. In the case of BC, the amplification of ERBB receptors correlates with disease aggressiveness and distant metastasis [56,57]. Particularly, ERBB1 overexpression or its co-amplification with other ERBB receptors has been reported to favor distant metastasis [57]. Similarly, c-MET overexpression has been reported to contribute to BC metastasis, particularly to the brain. Previously, gene set enrichment analysis (GSEA) of a large cohort of BC patients (n = 710), 47 with BrM, showed a highly enriched c-MET-pathway in BrM patients [24]. In addition, the upregulation of c-MET is associated with BrM, not to the bone metastasis, and the knockdown of c-MET in brain-seeking cells has been reported to significantly increase BrM-free survival in vivo [24]. Interestingly, activation of c-MET was shown to induce high IL-1β secretion leading to an IL-8 and CXCL1 dependent feed-forward loop, creating a favorable environment for BrM. 
Further, the co-amplification and cross-talk between c-MET and ERBB1 pathways have been reported in different malignancies, including BC that regulates tumor progression, distant metastasis, and therapeutic resistance [29,30,60,61]. Previously, it has been reported that targeting both ERBB1 and c-MET receptors sensitizes cancer cells to targeted therapy [30]. Moreover, c-MET amplification has been reported to mediate resistance to ERBB1 inhibitors [61]. In another study, tissue samples from 825 BC patients were analyzed to correlate the expression of upregulated proteins. The study highlighted that ERBB1 overexpression correlated with high p-c-MET expression [62], substantiating our hypothesis that the co-targeting of ERBB1 and c-MET could be an important therapeutic strategy in the treatment of BC progression and metastasis. Our results in cell lines and organoids derived from the huERBB2-Tg mouse model provide evidence that combined inhibition of the ERBB1/ERBB2, and c-MET pathways could synergistically inhibit proliferation and metastasis in BC. Due to their intact tumor architecture, organoids are considered as a robust model for therapeutic drug testing in solid tumors, including BC [63][64][65][66]. The therapeutic effect of NER and CBZ in ERBB2 + organoids was further substantiated by in vivo findings in the orthotopic model of spontaneous metastasis. Both NER and CBZ have been reported to cross the BBB and, therefore, could be used to target BrM in our orthotopic model of spontaneous metastasis. Our results showed that the combination of NER and CBZ not only effectively decreased tumor volume but also prevented the incidences of metastasis in this BC orthotopic model. Previously, NER monotherapy in neoadjuvant settings has been shown to inhibit tumor growth and metastasis in the ERBB2 + orthotopic model of spontaneous metastasis [20]. However, the spontaneous metastasis was observed more in adrenal and lungs rather than bone and brain, which are the most common sites of metastasis in ERBB2 + BC patients. These studies suggest that interpretation of the therapeutic effect of NER monotherapy on distant metastasis might be difficult in the BALB/c model. In contrast, like our findings, the preventive effect of neoadjuvant NER therapy has been reported predominantly in case of BrM in the BALB/c model. However, the differential therapeutic response in different metastatic sites might be due to the altered molecular profile of metastatic cells and due to organ-specific microenvironments at different metastatic sites. Previous reports have provided solid evidence that cells metastasizing to different organs have different genetic and molecular profiles, leading to subtype switching; therefore, metastatic cells differ in their organotropism as well as their response to targeted therapies [7,58,59,67]. In addition, neoadjuvant NER therapy was reported less efficacious in preventing lung and liver metastasis, as compared to brain metastasis in the BALB/c model [20]. Mechanistically, our findings show that combination therapy profoundly downregulated the MAPK/ERK pathway, which is downstream to c-MET and ERBB1 pathways. Previous studies have highlighted the MAPK/ERK pathway as a key mediator of proliferation and metastasis in metastatic BC that together with AKT/mTOR and STAT3 plays an essential role in progression of BrM [53,68,69]. A recent study showed that NER monotherapy inhibits proliferation of ERBB2 + BC via inhibition of ferroptosis [20]. 
In addition, other pathways, including PI3K, PARP, CDK4/6, FAK, etc., have been reported to play essential roles in BrM progression [70]. Therefore, combination therapies targeting multiple pathways might be an important strategy in the management of BrM in BC. We have summarized the treatment strategy in the graphical abstract ( Figure 6). Although a three-week low dose regimen of NER and CBZ combination therapy decreased tumor growth and incidences of metastasis in our orthotopic model of spontaneous BC metastasis, we assume that treatment with the maximum tolerated dose in combination treatment would provide more profound in vivo efficacy. In addition, the orthotopic model of spontaneous metastasis will be a useful reagent to investigate underlying mechanisms involved in metastatic progression of BC and for evaluation of novel therapeutic approaches against BC metastasis. Animal Use and Ethics All animal work was performed as per the protocols (#17-019-04 FC) approved by the Institutional Animal Care and Use Committee (IACUC) at the University of Nebraska Medical Center (UNMC). The animals were kept in a specific pathogen-free animal care facility at UNMC. Cell Lines and Reagents The SKBR3 cell line was procured from ATCC, and its metastatic derivative SKBrM3 was kindly gifted by Dr. Watabe from the Wake Forest School of Medicine, Winston Salem, North Carolina. Dr. Steeg from the National Cancer Institute kindly gifted us the JIMT-1-BR3 cell line, and the JIMT-1 cell line was obtained from Dr. Hamid Band at UNMC and maintained as described previously [71]. The drugs neratinib (Cat. No. S2150) and cabozantinib (Cat. No. S1119) used in the study were purchased from Selleckchem. Proliferation Assay Cell proliferation assays were performed, as described previously [72][73][74]. Briefly, five thousand cells per well were seeded in a 96-well plate in 10% FBS containing RPMI media for SKBR3 and SKBrM3 cell lines and in 10% FBS containing DMEM media for JIMT-1 and JIMT-1-BR3 cell lines. The next day, cells were starved in 2% FBS medium for 2 h prior to the drug treatments. After 2 h of serum starvation, cancer cells were treated with different concentrations of drugs, as mentioned in the Results Section. After incubation of cells at 37 • C for 48 h, we added 10 µL of 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyl-tetrazolium bromide (MTT, 5 mg/mL, Sigma, St. Louis, MO, USA) to each well. After adding MTT, plates were then incubated at 37 • C for 4 h. Absorbance was measured at 570 nm using a microplate reader (Spectramax 190 Microplate Reader, Molecular Devices, LLC, USA). Migration Assay We performed the cell migration assay as published previously [75]; 1 × 10 6 cells were seeded in 6-well plates in their respective media for 48 h in the presence of NER (1 µM) and CBZ (5 µM), and their combination. The drug-treated cells were trypsinized and seeded at a density of 5 × 10 5 cells/well in a 6-well Boyden chamber in serum free media. In the lower chamber of 6-well plates, 10% serum containing media was added to demonstrate the in vitro effect of the above treatments on migration of brain metastasis and their respective primary cell lines. After 16 h, migrated cells on the opposite side of the trans-well were fixed in 100% methanol, stained with 0.1% crystal violet, imaged, and counted using an EVOS ® FL auto imaging system microscope CA, USA). Trans-Endothelial Migration The trans-endothelial migration (TEM) was performed as described earlier [76]. 
Briefly, we seeded 5 × 10 5 brain endothelial cells on the upper outer surface of the gelatin-coated trans-well for 24 h. In the lower chamber, 5 × 10 5 human astrocytes were cultured for 24 h. The next day, drug-treated brain-tropic SKBrM3 cells were seeded onto the top of the endothelial cells in a trans-well insert. All three cell types were incubated for the next 16 h in co-culture conditions, and transmigrated cells were counted after fixing the cells in 100% methanol and staining with 0.1% crystal violet. Transmigrated cells were imaged and counted using an EVOS ® FL auto imaging system microscope (Life Technologies, CA, USA). Immunohistochemistry Immunohistochemistry (IHC) was performed on tumor tissues as described earlier [77]. Briefly, tissue sections were kept overnight at 58 °C, followed by xylene wash and alcohol gradient-based hydration. Citrate buffer (pH = 6) was used for antigen retrieval, followed by peroxidase quenching using 0.3% H 2 O 2 in methanol. Following 3 washes in water, tissues were blocked in 2.5% horse serum and incubated overnight in a primary antibody cocktail of ERBB1 (D38B1; 1:400), ERBB2 (Cat. No. 2242; 1:200), and c-MET (Cat. No. 4560; 1:200), purchased from Cell Signaling Technologies, Beverly, MA, USA. The next day, after washing off unbound primary antibodies, an HRP-conjugated antibody cocktail (ImmPress Universal antibody kit, Vector Laboratories, Burlingame, CA, USA) was used for 30 min at RT, and the signal was developed using 3-3' diaminobenzidine solution (DAB substrate kit (SK-400), Vector Laboratories, Burlingame, CA, USA). Following counterstaining with hematoxylin, the slides were dehydrated in an alcohol gradient from 25% to 100%, followed by 3 xylene washes. After that, air-dried tissue sections were mounted using paramount media (Fisher Scientific, PA, USA) and visualized under the microscope. Immunofluorescence For immunofluorescence in cell lines and tissues, we followed previously published protocols with slight modifications [78,79]. Briefly, the coverslip-adhered cells were fixed in 4% paraformaldehyde for 15 min at RT, washed in 0.1% glycine, and permeabilized in 0.1% Triton X-100 for 10 min at RT. After washing in PBS, cells were blocked in 10% normal goat serum for 2 h, followed by incubation in primary antibodies specific to ERBB1 (D38B1; 1: Organoid Culture and Treatment Assay Tumor tissues were enzymatically digested and processed as described previously [81]. Briefly, freshly harvested tissues were digested with 0.012% (w/v) collagenase XI (Sigma) and 0.012% (w/v) dispase (GIBCO, MD, USA) in DMEM media containing 1% FBS (GIBCO) and embedded in growth factor reduced Matrigel (BD Biosciences, San Jose, CA, USA). The organoids were cultured in AdDMEM/F12 (GIBCO) media supplemented with 0.1% insulin-transferrin-selenium (ITS-G) (100×) (Gibco™, MD, USA), FGF10, and FGF2 (PeproTech, Cranbury, NJ, USA), in a 5% CO 2 incubator at 37 °C. On the 5th day, the tumor organoids were treated with drugs, and changes in size and morphology were followed consecutively for 7 days. The bright-field images were acquired on an EVOS ® FL auto imaging system microscope (Life Technologies, CA, USA), followed by quantification of the change in area of selected organoids. Orthotopic Mouse Model SKBrM3 cells were harvested at 70-80% confluency using 0.025% trypsin and seeded at a density of 5 × 10 5 cells/well in the upper chamber of a 0.8-µm pore size trans-well inserted in 6-well plates containing 1.5 mL media.
The migrated cells were collected from the wells and then subjected to two more cycles of enrichment. After analyzing the luciferase activity, 1 × 10 6 enriched SKBrM3 + (or BrM3 + ) cells in 100 µL of PBS were orthotopically implanted in the fourth mammary fat pad of 6to 8-week-old female nude mice. Tumor growth was followed by caliper-based measurements, as well as by imaging the mice on a small-animal in vivo imaging system (IVIS) on a weekly basis. Drug Treatment and Animal Imaging We evaluated the effect of oral doses of NER (20 mg/kg/mouse) alone, CBZ (20 mg/kg/mouse) alone, or their combination in our orthotopic model of ERBB2 + spontaneous BC metastasis using 6-to 8-week-old female nude mice. The tumor volume and body weight of each mouse were measured every 4th or 5th day. Post-treatment, the mice were first injected intraperitoneally (IP) with 15 mg/kg bodyweight of D-luciferin (Cat No. 122799, Perkin Elmer, Akron, OH, USA), and imaging was performed using small animal IVIS to analyze the effect on tumor growth. Further, the mice were sacrificed as per the IACUC guidelines and organs associated with BC distant metastasis including bone (from hind limbs), brain, liver, and lungs were harvested for single organ imaging to investigate the micrometastases. Following imaging and weighing of the tumors, the organs were preserved in 10% buffered formalin for fixation and further analysis. Statistical Analysis The statistical analysis was performed using Student's t-test (* p < 0.05; ** p < 0.01; *** p < 0.001). For comparison among the independent treatment groups, we used a one-way analysis of variance (ANOVA) with a cutoff of p < 0.05. Tukey's multiple comparison test was performed to analyze the statistical significance for difference in tumor weights as well as tumor volumes among different treatment groups using two-way ANOVA with * p < 0.05; ** p < 0.01; *** p < 0.001; and **** p < 0.0001. Conclusions We report in this paper an orthotopic mouse model for spontaneous BC metastasis that shows high incidences of BrM along with other clinically relevant distant organ metastases, including bone, liver, and lungs. Next, in concordance with high expression of ERBB1 and c-MET receptors, the combination treatment of NER and CBZ showed significant anti-proliferative effects in brain seeking cell line SKBrM3 and ERBB2 + organoids and elicited a profound inhibitory effect on cell migration in vitro. Our in vitro data was further substantiated by an in vivo preventive treatment approach where the combination treatment with NER and CBZ significantly reduced the primary tumor burden and demonstrated preventive effects on metastatic progression to the brain in an orthotopic model of spontaneous BC metastasis ( Figure 6). Overall, the combination of NER and CBZ could effectively inhibit BrM and targeting the c-MET/ERBB1 axis could be a unique strategy for preventing ERBB2 + BC BrM.
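As a sketch of the statistical workflow described in the Statistical Analysis subsection (one-way ANOVA followed by Tukey's multiple comparison across treatment groups), the following Python code shows one possible setup using SciPy and statsmodels; the tumor weights and group labels are illustrative placeholders, not the study's measurements.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Illustrative placeholder tumor weights (g) per treatment group
weights = {
    "control": [1.55, 1.30, 1.62, 1.41, 1.25],
    "NER":     [0.40, 0.45, 0.38, 0.52, 0.36],
    "CBZ":     [0.58, 0.66, 0.61, 0.49, 0.70],
    "NER+CBZ": [0.31, 0.36, 0.28, 0.40, 0.35],
}

f_stat, p_value = stats.f_oneway(*weights.values())      # overall group effect
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_value:.3g}")

values = np.concatenate([np.asarray(v, dtype=float) for v in weights.values()])
labels = np.concatenate([[name] * len(v) for name, v in weights.items()])
print(pairwise_tukeyhsd(values, labels, alpha=0.05))      # pairwise group comparisons
```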
8,835.4
2020-10-01T00:00:00.000
[ "Medicine", "Biology" ]
Allergy to soft cannula of insulin pump in diabetic patient An insulin pump is a relatively good choice for diabetic patients who require multiple daily injections and have wide fluctuations of blood glucose. Some patients using insulin pump therapy still have uncontrolled blood glucose levels due to various factors: insulin pump malfunction, insulin instability, insulin autoantibodies, insulin allergy, etc. We describe a 46-year-old woman with type 2 diabetes and progressive hyperglycemia after switching from multiple daily insulin injections to an insulin pump, due to allergy to the soft cannula of the insulin pump. INTRODUCTION An insulin pump (continuous subcutaneous insulin infusion, CSII) is a battery-operated programmable device that delivers basal and bolus insulin infusions over 24 h a day to simulate physiological insulin delivery. CSII is superior in controlling blood glucose without an increase in hypoglycemic events, compared with multiple daily insulin injections. [1][2][3] However, a few patients treated with CSII still have poor glucose control for various reasons. Here, we report a case of a diabetic patient with uncontrolled blood glucose levels due to allergy to the soft cannula of an insulin pump. To the best of our knowledge, no such case has been reported before. CASE REPORT A 46-year-old woman with a seven-month history of diabetes was referred to our hospital because of wide fluctuations of blood glucose, from 2.1 mmol/L to 20 mmol/L. Prior to admission, lispro, glargine, metformin and acarbose had been prescribed successively to control the patient's blood glucose. She had no history of allergies. The next day, we increased the lispro infusion rate and the premeal boluses for better glycemic control. However, during the following four days, her blood glucose level continued to rise, up to 14.6 mmol/L (fasting blood glucose) and 23.8 mmol/L (2-hour postprandial blood glucose), in spite of increasing insulin dosage (Table-I). On the sixth day, while replacing the infusion tube and the insulin pump soft cannula, we found that the skin at the injection site was red, swollen and blistered (diameter about 1 cm), while the surrounding skin in contact with the adhesive was normal. Then, the injection site was changed and the previous insulin infusion dosage was kept. The blood glucose dropped dramatically, and the patient had a hypoglycemic reaction with a post-breakfast glucose of 3.6 mmol/L the next day. Nevertheless, on the third day after replacing the cannulation site, the patient's fasting blood glucose level was 11.8 mmol/L and 2-hour postprandial glucose 22.6 mmol/L (Table-I). Similarly, the skin at the new cannulation site also became red and swollen, while the skin reactions at the previous injection site gradually subsided. A hypersensitivity reaction at the cannulation sites was suspected, and the patient was switched to a multiple daily insulin injections regimen (glargine and lispro). Over the next three days, her blood glucose decreased dramatically with adjustment of the insulin dosage according to the level of glycemia, and no abnormal reaction at the injection sites was found. Allergy to the soft cannula of the insulin pump was then considered in this patient. Over the following days, the multiple daily insulin injections regimen was continued and glycemic control was good (fasting glucose from 5.5 to 6.3 mmol/L, 2-hour postprandial glucose from 6.4 to 9.6 mmol/L) without hypoglycaemia.
DISCUSSION To our knowledge, this case is the first report of a diabetic patient on an insulin pump with poor blood glucose control due to allergy to the soft cannula of the insulin pump. CSII is one of the options for patients with diabetes who require multiple daily insulin injections and have wide fluctuations of blood glucose. 4 However, some patients on insulin pump therapy still have poor glucose control, which may be caused by many factors: insulin pump not working properly, insulin instability, insulin secretion deficiency, insulin autoantibodies, insulin allergy, etc. 5 Among them, insulin infusion system malfunction accounts for the majority of problems; such malfunctions often occur in the syringe, infusion tube and connections, or subcutaneous infusion site, resulting in interruption of insulin flow. 6 Skin induration and inflammation at the injection site can influence insulin absorption and change the quality of the insulin to some extent. In general, the above situations can be avoided by double-checking the "mini device" or by replacing the infusion machinery and injection site. For this patient, negative insulin autoantibodies (IAA) and poor islet β-cell function could not explain the irregular changes in her blood glucose. In addition, she remained hyperglycemic after the infusion tube, cannula and injection site were replaced, which meant the hyperglycemia was not caused by failure of the insulin pump or improper injection position. A hypersensitivity reaction was suspected when the same skin reaction occurred even after switching the soft cannula site. Previous cases have reported that insulin allergy is an important cause of induration, swelling and redness at the injection site. 7,8 However, this patient had had no allergy symptoms since starting insulin therapy seven months earlier, and had no swelling or redness at the injection site when switched to multiple daily insulin injections in our hospital. All of this indicated that the patient's skin reaction at the injection site might be related to allergy to the soft cannula of the insulin pump, which in turn influenced insulin absorption. In summary, allergy to the soft cannula of the insulin pump should be considered when patients on CSII therapy have an unexplained progressive rise in blood glucose, after excluding other possible causes.
1,208.2
2017-01-01T00:00:00.000
[ "Medicine", "Biology" ]
Secure Multi-User k-Means Clustering Based on Encrypted IoT Data IoT technology collects information from a large number of clients, and this information may involve personal privacy. To protect their privacy, the clients would like to encrypt the raw data with their own keys before uploading. However, to make use of the information, data mining technology combined with cloud computing is used for knowledge discovery. Hence, how to effectively perform data mining algorithms on encrypted data is a pressing issue. In this paper, we present a multi-user k-means clustering scheme based on IoT data. Although there are many privacy-preserving k-means clustering protocols, they rarely address the situation where data are encrypted with different public keys. Besides, the existing works are inefficient and impractical. The scheme we propose in this paper not only solves the problem of evaluation on encrypted data under different public keys but also improves the efficiency of the algorithm. It is semantically secure under the semi-honest model according to our theoretical analysis. Finally, we evaluate the scheme on a real dataset; compared with previous works, the results show that our scheme is more efficient and practical. Introduction With the growth of Internet of Things technology, IoT applications will spread to all walks of life. IoT technology collects information through a variety of smart devices or sensors, transfers the information to the application platform for processing according to an agreed protocol, and thereby achieves intelligent control. For example, hospital wristbands can identify patients undergoing medical care, and sport trackers can log physical activities. All of these smart devices and sensors produce large amounts of data while running, and to reduce cost, users would like to benefit from outsourced services, which is one of the fundamental advantages of cloud computing. As an emerging business computing model, cloud computing attracts a large number of users with characteristics such as good scalability, low cost and pay-on-demand pricing. More and more enterprises and users store their services and data in the cloud. That is, users with resource-constrained devices can delegate heavy workloads to untrusted cloud servers and enjoy nearly unlimited computing resources. However, such a large quantity of data always involves sensitive information, such as medical records or location information, and it is risky to store such information directly in the cloud servers. The security challenge is a pressing problem in the development of cloud computing. Due to the security threats to the cloud, users have to take action to protect their sensitive information. The common method is to encrypt raw data before uploading them to the cloud server. Generally speaking, users do not trust others easily, and they encrypt their own data with their own keys. In other words, besides securely storing data in the cloud server, this allows users to retrieve cloud data without revealing confidential data to other users or to the service providers. Meanwhile, it brings new challenges for evaluation over ciphertexts under multiple keys. Multi-key fully homomorphic encryption (López-Alt et al., 2012) is, in principle, capable of operating on inputs encrypted under multiple keys. Unfortunately, it requires heavy communication cost during the decryption of the final results, and its efficiency is far from practical.
Then, Peter, Tews and Katzenbeisser (2013) represented a scheme that allows evaluating any dynamically chosen function on inputs encrypted under different independent public keys utilizing Bresson-Catano-Pointcheval (BCP) encryption (Bresson, Catalano, & Pointcheval, 2003), which is an additive homomorphic encryption with a double trapdoor decryption mechanism. The disadvantage of this solution is that it requires the complex interactions between two non-colluding cloud servers during the ciphertexts transformation phase. Therefore, it is not suitable for the system in the real word. In this paper, to avoid all these drawbacks and achieve a better balance between efficiency and security, we construct a more efficient secure k-means clustering scheme under multiple keys based on two non-colluding cloud servers by utilizing the ElGamal-based Proxy Re-encryption (PRE) (Ateniese, Fu, Green, & Hohenberger, 2006). Generally speaking, it is reasonable to assume the existence of two non-colluding servers to perform the secure computation. According to Van and Juels (2010), there is a clear indication that it is impossible to realize a completely non-interactive solution in a single server setting. Hence, if we aim to implement non-interaction between data owners, we need at least two servers, just like Peter et al. (2013). Moreover, as a way to enhance efficiency, we make extensive use of ElGamal-based PRE to transform ciphertexts encrypted under multiple keys into ciphertexts under the same key before computation. Compared to Peter et al. (2013), we reduce the interaction between two servers, and increase the computing efficiency. In brief, we summarize our main contributions as follows: 1) In order to protect privacy information, we construct a new efficiency privacy-preserving k-means clustering scheme. In our setting, the scheme can both preserve the privacy of the data owners' sensitive information and the calculation results. 2) Existing works require the inputs to be encrypted under the same public key, which is very limited in practice. To avoid these problems, we take advantage of proxy re-encryption to construct a scheme that is based on distributed data encrypted under multiple keys. 3) We utilize the two non-colluding servers model to complete large calculation in the learning phase. Except computing proxy keys, the data owners should do nothing during the learning process. In the end, the data owners only need to do decryption to obtain the result. 4) Finally, we evaluate our scheme based on a real dataset. Besides, we make comparison with other works, and the results show that our scheme is more efficient and more practical. Organization. The rest paper is organized as follows. In Section 2, we introduce the related work about our work. Next, we represent the setting of our scheme and analyze the threat model in Section 3. Section 4 describes the preliminary knowledge and privacy-preserving building blocks. We introduce our scheme in detail in Section 5, while analyzing security in Section 6. We summarize our experimental results in Section 7. Finally, we conclude in Section 8. Related Work Previous works have focused on the issues of privacy-preserving clustering algorithm. In the early years, researchers mainly focused on the security k-means clustering based on a single databased, and made some achievements. Recently, the focus has shifted to the multiple data sources setting to obtain more precise clustering result. 
Bunn and Ostrovsky (2007) proposed a secure two-party k-means clustering protocol, based on secure two-party computation, that guaranteed the privacy of each database without revealing intermediate values. This scheme extends the clustering algorithm to an algorithm that works in the two-database setting. However, secure multiparty computation increases the communication cost among the participating parties. Besides, it is a heavy burden for users to perform the data mining algorithm themselves, because of their limited computation resources. To address this issue, researchers have started to focus on the data mining task in an outsourced environment (Rao, Samanthula, Bertino, Yi, & Liu, 2015; Jiang et al., 2018; Rong, Wang, Liu, Hao, & Xian, 2017; Samanthula, Rao, Bertino, Yi, & Liu, 2014; Xing, Hu, Yu, Cheng, & Zhang, 2017). The works of Rao et al. (2015) and Samanthula et al. (2014) outsourced all computation to two non-colluding cloud servers. In their works, users encrypt their own raw data under a cloud server's public key and upload them to the other cloud server. The two cloud servers collaboratively perform the clustering task on the combined data in a privacy-preserving manner. Since all data are encrypted under a unified key, and only the cloud server that holds the secret key can decrypt the ciphertext, it is hard for data owners to retrieve data from the cloud servers. Jiang et al. (2018) proposed a secure k-means clustering protocol with the benefit of two non-colluding cloud servers to support storage and computation outsourcing. The raw data are encrypted under each data owner's public key, so it is convenient for owners to retrieve their data and decrypt them with their own secret keys. However, during the clustering procedure, the data owners must be online all the time and help work out the result. Similarly, data owners need to participate in the whole clustering process in the scheme of Xing et al. (2017). They allow data users to compute the nearest cluster locally and update the cluster centers with the help of the cloud server. Obviously, this increases the communication cost between the entities and leaves a large number of calculations to the data owners. Beyond that, there are potential data security problems in the cluster-center update phase. Rong et al. (2017) presented a privacy-preserving k-means clustering over a joint database encrypted under multiple keys in distributed cloud environments. They transformed ciphertexts under different keys into ones under a common key through a double decryption cryptosystem (Youn, Park, Kim, & Lim, 2005), which allows an authority to decrypt any ciphertext using the master key without the consent of the corresponding owner. The problem with this method is that decryption can occur without the data owners' prior consent, which may go against their wishes. In addition, at the stage of ciphertext transformation, the ciphertexts are converted by two non-colluding servers, which may significantly decrease computational efficiency. Obviously, achieving an efficient privacy-preserving k-means clustering algorithm under multiple keys remains a problem that urgently needs to be solved. Architecture and Entities In order to solve the existing problems, we present a secure and efficient privacy-preserving k-means clustering protocol. In our setting, as shown in Figure 1, we consider n data owners, each of them holding a d-dimensional object x_i (1 ≤ i ≤ n).
Due to the security concerns, data owners carefully store data in an encrypted form. Once they want to get some information from the data, they will send request to the cloud server. In other words, data mining will be executed based on the cloud model in a secure manner. We only discuss k-means clustering as the data mining method in this paper, and it obviously can be extended to other data mining algorithm. There are two type of entities in our system model: data owners and cloud servers. 1) Data Owners (DO): Data owner encrypts data under his own public key, and he is the only person who can decrypt the ciphertext by the secret key. In general, the ciphertexts would be centralized into storage service provider. Data owners have the right to decide whether their data will be involved in the k-means clustering algorithm. 2) Cloud Computing and Storage Server (S): The cloud computing and storage server provides storage service to all data owners, and it will perform computation on the data when receiving request from the clients. 3) Cloud Computing Server (C): Cloud computing server is a temporary server with only computing service. Its main work is to assist the cloud server S to perform data mining algorithm in a privacy-preserving manner. Threat Model In this paper, we prefer outsourcing the data and computation to a server provider, such as the cloud server. While, due to many reasons, the clouds are unreliable, and they may try to collude with others to obtain uncorrupted parties' private information as many as possible. During the evaluation process, the corrupted parties may also deviate from the protocol specification according to the adversary instruction. Under these circumstances, the adversaries are called malicious adversaries. In our study, we mainly focus on the semi-honest model. In other words, the entities in our setting are all semi-honest adversaries. That is to say, they will execute the protocol correctly, but they also attempt to obtain some information about the users' private information. In addition, we assume that the entities do not collude with each other. The design goal of our scheme is to ensure the data owners obtain the results of clustering, while the clouds do not learn any information from the clustering algorithm, even the intermediate value. It is worth noting that the ciphertexts stored in the cloud server are all encrypted under different public keys. Hence, we mainly aim to effectively solve the privacy-preserving k-means clustering under multi-keys. We make benefits from the proxy re-encryption to convert the ciphertexts into the ones under a unified key. During the transformation process, the entities learn nothing about the data owners' information. The clustering process is based on a two-cloud model. They execute the protocol correctly and do not learn any extra information. Thus, we say that our system is private. Informally, we also say that our protocol is correct. It is easy to verify. In addition, during the clustering process, we make sure that the communication cost and computation is minimized as possible. In the study, we consider that the data owners are not willing to help to run collaborative analysis and to spend too much resources to execute it. Consequently, we outsource the computation to computation service providers, and ensure that the communication cost is minimized between the two cloud servers. k-Means Clustering Algorithm k-means clustering algorithm as one of the main data mining methods is widely used in practice. 
It can be used to partition a set of data objects into k clusters. We assume that there are n participants and each participant holds a d-dimensional object x_i (1 ≤ i ≤ n). The k-means clustering algorithm aims to divide the objects into k clusters C_j (1 ≤ j ≤ k), ensuring a large degree of similarity within the same class but little similarity between different classes. The clustering process is comprised of two steps. The first step is to assign objects to clusters. The criterion for classification is the distance between a sample x_i and the related cluster center μ_j. There are many choices for this criterion, but in this paper we adopt the Euclidean distance. At each iteration of the first step, the k-means clustering algorithm assigns the object x_i to the nearest cluster, labeled by l_i, according to l_i = argmin_j ||x_i − μ_j||^2, where 1 ≤ i ≤ n, 1 ≤ j ≤ k. All objects are divided into k clusters during the first step, and the second step is to update the cluster centers. The new cluster center μ_j is defined as the center of each cluster; assuming that there are n_j objects in the j-th cluster, the update is given by μ_j = (1/n_j) Σ_{x_i ∈ C_j} x_i, where 1 ≤ i ≤ n, 1 ≤ j ≤ k. The clustering process terminates when the cluster centers are sufficiently close to the previous ones or the number of iterations reaches a preset limit. Additively Homomorphic Proxy Re-Encryption Proxy Re-Encryption (PRE) allows an honest-but-curious proxy to transform a ciphertext computed under Alice's public key into one that can be opened by Bob's secret key, without disclosing the plaintext. In 2006, Ateniese et al. (2006) proposed a unidirectional PRE, which is an improvement over Blaze, Bleumer and Strauss (1998), where the keys are bidirectional, and relies on pairing-based cryptography. In this paper, to enable the additive homomorphic property, we use the algebraic structure of elliptic curves over finite fields, similar to (Wang, M Li, Chow, & H Li, 2014; Shafagh, Hithnawi, Burkhalter, Fischli, & Duquennoy, 2017). The PRE scheme is also based on the bilinear map (Boneh & Franklin, 2001), which, given cyclic groups G and G_T of prime order q, has the following property for a, b ∈ Z_q and g, h ∈ G: e(g^a, h^b) = e(g, h)^{ab}. The additively homomorphic proxy re-encryption (AHPRE) scheme can be described as follows:
- Setup(1^κ) → (q, G, G_T, g, Z): Input 1^κ, where κ is a security parameter. Choose a random generator g ∈ G and set Z = e(g, g) ∈ G_T.
- KeyGen → (pk, sk): Choose a random number a ∈ Z_q, and set the public key as pk = g^a with secret key sk = a.
- ReKeyGen(sk_A, pk_B) → rk_{A→B}: A user A delegates to B with public key pk_B = g^b; the re-encryption key is computed as rk_{A→B} = pk_B^{1/a} = g^{b/a} ∈ G.
- Enc1(pk, m) → c: Present the message as M = Z^m ∈ G_T. To encrypt m under pk_A in such a way that it can only be decrypted by the holder of sk_A, output the first-level ciphertext c = (Z^{a·k}, M·Z^k) for a random k ∈ Z_q.
- Enc2(pk, m) → c: To encrypt M = Z^m ∈ G_T under pk_A in such a way that it can only be decrypted by A and her delegates, output the second-level ciphertext c = (g^{a·k}, M·Z^k).
- ReEnc(c, rk_{A→B}) → c': Given a second-level ciphertext c = (g^{a·k}, M·Z^k) under A's key and rk_{A→B} = g^{b/a}, compute e(g^{a·k}, g^{b/a}) = Z^{b·k} and output the first-level ciphertext c' = (Z^{b·k}, M·Z^k) under B's key.
Note that in the final decryption, we need to map M = Z^m back to m; since the message m is a finite and relatively small number, this can be obtained by solving a small discrete log problem. Basic Cryptographic Primitives In this section, we mainly introduce a group of cryptographic primitives that will be used in our privacy-preserving scheme. Secure Multiplication Protocol (SMP): This protocol aims to compute the multiplication of two ciphertexts. Assume that S has two ciphertexts Enc(x) and Enc(y); it will obtain the encrypted multiplication Enc(x·y) with the help of C, who has the corresponding secret key sk.
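To illustrate two ingredients used above, namely encoding the message in the exponent so that multiplying ciphertexts adds plaintexts, and finishing decryption with a small discrete-log search, the following toy Python sketch uses plain "lifted" ElGamal over a small prime-order subgroup. It is emphatically not the pairing-based AHPRE scheme itself (there is no re-encryption here), and the parameters are toy-sized for readability.

```python
import random

p = 2039                 # safe prime, p = 2q + 1
q = 1019                 # prime order of the subgroup
g = 4                    # generator of the order-q subgroup

def keygen():
    sk = random.randrange(1, q)
    return pow(g, sk, p), sk            # pk = g^sk

def enc(pk, m):
    k = random.randrange(1, q)
    return pow(g, k, p), (pow(g, m, p) * pow(pk, k, p)) % p   # (g^k, g^m * pk^k)

def dec(sk, c, max_m=10_000):
    c1, c2 = c
    gm = (c2 * pow(pow(c1, sk, p), -1, p)) % p                 # recover g^m
    for m in range(max_m):                                     # small discrete-log search
        if pow(g, m, p) == gm:
            return m
    raise ValueError("message out of supported range")

pk, sk = keygen()
ca, cb = enc(pk, 37), enc(pk, 5)
c_sum = ((ca[0] * cb[0]) % p, (ca[1] * cb[1]) % p)             # ciphertext product adds plaintexts
assert dec(sk, c_sum) == 42
print("Enc(37) * Enc(5) decrypts to", dec(sk, c_sum))
```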
The details of SMP are described in Protocol 1. Secure Minimum out of 2 Numbers Protocol (SMINP2): This protocol considers the server S with inputs Enc(a) and Enc(b) and the server C with the secret key sk_C. SMINP2 can be used to determine the relationship between two encrypted values. Protocol 3 shows the detailed process. The Procedures of Our Scheme In this section, we describe our privacy-preserving scheme for the k-means clustering algorithm in detail. We consider that there are n data owners, and each of them holds a d-dimensional object x_i (1 ≤ i ≤ n). They encrypt the raw data with their own public keys pk_i (1 ≤ i ≤ n) as second-level ciphertexts and upload them to the cloud server S. Under this circumstance, data owners are still able to retrieve their data and decrypt them under their own secret keys sk_i without leaking any information to other participants. Once receiving a request, the cloud server S will perform the clustering algorithm with the cloud server C in a privacy-preserving manner. We divide our scheme into three stages: (1) Ciphertexts transformation; (2) Assigning records to the nearest cluster; (3) Computing the new cluster centers. Ciphertexts Transformation Since the ciphertexts are encrypted under different public keys, it is hard to perform evaluation on these data. Although López-Alt et al. (2012) have presented a multi-key fully homomorphic encryption cryptosystem, it is really inefficient in practical applications. To realize data encryption and communication protection and to improve calculation efficiency, we design a ciphertext transformation method based on proxy re-encryption (Ateniese et al., 2006). This method aims to convert the ciphertexts into ones under a unified key. Note that the ciphertexts stored in the cloud server are all encrypted in the form of second-level ciphertexts. The data owners have the right to decide whether they will participate in the data mining. If they would like to participate in the k-means clustering for comparison with others, which may help them to better understand their own data, they will send the request to the cloud server S with a proxy re-encryption key rk_{i→C} = pk_C^{1/sk_i}, which is computed based on the data owner's secret key and the cloud server C's public key, where sk_i is the i-th participant's secret key and pk_C is C's public key, which is broadcast to all entities. Once receiving the clustering request and the proxy re-encryption key rk_{i→C} from the data owners, the cloud server S begins to execute the re-encryption function ReEnc(Enc2(pk_i, x_i), rk_{i→C}) to convert the ciphertext computed under DO_i's public key pk_i into an encryption under the cloud server C's public key pk_C. Subsequent work is executed on the ciphertexts under the cloud server C's public key. To simplify notation, we denote by Enc(·) the first-level ciphertexts under pk_C. Assigning Records to the Nearest Cluster The second step is to assign records to their nearest clusters by computing the minimum squared Euclidean distance. The cloud server S's first task is to initialize the cluster centers (μ_1, …, μ_k). There are many initialization methods. The general method is to initialize centers with randomly generated values. Alternatively, to reduce the number of iterations required in the clustering process, we can adopt the optimized manner proposed in (Ostrovsky, Rabani, Schulman, & Swamy, 2012). In this paper, we randomly choose k data records as the cluster centers. Let d_{ij} denote the squared Euclidean distance between the record x_i and the cluster center μ_j.
It is easy for S to compute d_{ij} (1 ≤ i ≤ n, 1 ≤ j ≤ k) by performing the protocol SSEDP defined in Section 3. To assign a record to the nearest cluster, it needs to compare the squared Euclidean distances between the record and the cluster centers μ_j (1 ≤ j ≤ k). We have presented the protocol SMINP2 for securely getting the minimum; we assign the record to the cluster with the minimum squared Euclidean distance and, at the same time, update the cluster label corresponding to the record to l_i = j. It is worth mentioning that the cluster label is encrypted with the cloud server C's public key under the second-level encryption in our scheme. The advantage of this is the convenience of returning the final results to the data owners: just as before, it only requires performing the proxy re-encryption to convert the ciphertexts into ones encrypted under the data owners' public keys. This process repeats until the cluster of each record is figured out, and the data set is divided into k subsets. Note that when this procedure terminates, the cloud servers learn nothing about the raw data or about which cluster each record belongs to. Computing the New Clustering Centers After assigning the records to the nearest clusters, the cloud server S needs to reevaluate the cluster center for each cluster. We assume that there are n_j records in the j-th cluster. The new cluster center is simply defined as the center of each cluster, and the updating process is given by μ_j = (1/n_j) Σ_{x_i ∈ C_j} x_i. Stage 2 and Stage 3 form an iterative process, repeated until a termination condition holds. In this paper, we set two termination conditions: 1) we set a reasonable threshold, and the algorithm verifies whether the sum of the squared Euclidean distances between the current and new cluster centers is upper-bounded by this threshold; 2) if the number of iterations reaches a preset value, the iteration is terminated as well. If a termination condition holds, the clustering halts and returns the final result. Otherwise, the algorithm continues to the next iteration with the new clusters as input. Security Analysis In this section, we analyze the security of our privacy-preserving k-means clustering scheme. Obviously, the data confidentiality of our scheme is achieved by the additively homomorphic proxy re-encryption. Beyond that, we assume that there is no collusion between the cloud server S and the cloud server C. Our scheme aims to achieve the privacy of the raw data, the intermediate values and the final results under the semi-honest model. Proof. Dodis and Yampolskiy (2005) have proved that the underlying assumption is hard in the generic group model. In (Dodis & Yampolskiy, 2005), they address a stronger version called q-Decisional Bilinear Diffie-Hellman Inversion (q-DBDHI). The q-DBDHI problem asks: given the tuple (g, g^x, …, g^{(x^q)}) as input, distinguish e(g, g)^{1/x} from random. The q-DBDHI assumption holds if no probabilistic polynomial-time algorithm has advantage at least ε in solving the q-DBDHI problem in G. Definition 1 (q-DBDHI Assumption). We say that the (q, t, ε)-DBDHI assumption holds in G if no t-time algorithm has advantage at least ε in solving the q-DBDHI problem in G. Privacy of Data. To protect the data privacy of the data owners, we let each data owner encrypt the sensitive information with its own pair of public and private keys under the additively homomorphic proxy re-encryption (AHPRE) cryptosystem. According to Theorem 1 and the assumption of non-collusion between the two cloud servers, we can easily achieve the protection of individual privacy.
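The two iterative stages above (nearest-center assignment by squared Euclidean distance and center update, with the threshold and iteration-count termination conditions) correspond to the plaintext reference logic sketched below in NumPy. In the actual protocol these quantities are evaluated over ciphertexts via SSEDP, SMINP2, and the homomorphic operations, so this sketch is only a functional illustration, and the parameter values are illustrative.

```python
import numpy as np

def kmeans(X, k, eps=1e-4, max_iters=50, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]      # k random records as centers
    for _ in range(max_iters):
        # Stage 2: assign each record to the nearest cluster center
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)
        # Stage 3: recompute each center as the mean of its assigned records
        new_centers = np.array([
            X[labels == j].mean(axis=0) if np.any(labels == j) else centers[j]
            for j in range(k)
        ])
        # Termination condition 1: total squared shift of the centers below the threshold
        if ((new_centers - centers) ** 2).sum() <= eps:
            centers = new_centers
            break
        centers = new_centers
    # Termination condition 2 (max_iters) is enforced by the loop bound above
    return labels, centers

X = np.random.default_rng(1).normal(size=(200, 29))    # e.g. n = 200 records, d = 29 attributes
labels, centers = kmeans(X, k=4)
print(np.bincount(labels, minlength=4))
```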
Lemma 1. Without any collusion, our scheme is privacy-preserving for the training model under the additively homomorphic proxy re-encryption. Proof. Recall that in our basic scheme, the cloud server C does not communicate with the cloud server S until the training phase. It is worth mentioning that data owners do not need to communicate with the cloud servers. During the training phase, the cloud server S always performs the evaluation on ciphertexts, and it cannot obtain any information about the intermediate values, because the additively homomorphic proxy re-encryption scheme is semantically secure. Although the cloud server C keeps the secret key, it receives only blinded ciphertexts and is therefore only able to obtain the blinded messages. Hence, the cloud server C also cannot obtain the learning result. Therefore, the privacy of the training model is preserved. Scheme Evaluation In this subsection, we analyze the communication and computation overhead incurred in each stage of the proposed scheme. The results regarding computational costs are given in Table 1. Here d denotes the number of attributes, n denotes the total number of data records across the participants, and k denotes the number of clusters. Besides, Map represents the bilinear map, Mul represents multiplication, and Exp represents exponentiation. It is important to note that Stage 1 is run only once, whereas Stage 2 and Stage 3 are run in an iterative manner until the termination condition holds. In addition, Table 1 clearly shows that the computational and communication costs of Stage 2 are significantly higher than the costs incurred in Stage 3 in each iteration. Table 1 (per-iteration costs): Stage 2 — n(kd(d + 4) + 25(k − 1)) Mul, n(8kd + 40(k − 1)) Exp, and n·6kd + 28(k − 1) communication in units of |q|; Stage 3 — n + 25k Mul, 40k Exp, and 28k communication in units of |q|. Implementation and Dataset Description We implemented the proposed scheme in Python using the GNU Multiple Precision Arithmetic (GMP) library. The evaluation uses the KEGG Metabolic Reaction Network dataset (Naeem & Asghar, 1999) from the UCI KDD archive. The dataset consists of 65554 data records and 29 attributes. As part of the pre-processing, we normalized the attribute values and scaled them into the integer domain. In this paper, we focus on performing evaluation of arbitrary functions on inputs that are encrypted under different independent public keys. We compare the performance of the ciphertext transformation with two previous works. The result is shown in Figure 1. It clearly shows that the scheme we propose in this paper is more efficient than the other two schemes. In addition, the communication cost in Table 1 is also lower than in the schemes of Rong et al. (2017) and Peter et al. (2013). To apply the privacy-preserving scheme to complex practical systems, our scheme is clearly the best choice for converting the ciphertexts into ones under a unified key. Furthermore, we implemented the secure k-means clustering algorithm on the KEGG Metabolic Reaction Network dataset. The performance of the scheme is shown in Figure 2. We tested our scheme with data of different dimensionality (i.e., d = 29, d = 20, d = 10). As we can see, the computational time depends on the size of the dataset. Most of the cost results from relying on the partially homomorphic property, since much time is needed to perform the SMP, which involves many decryption and encryption operations. Generally speaking, the proposed scheme is more efficient and more practical according to these results.
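The per-iteration costs in Table 1 can be evaluated numerically for a given problem size to see how Stage 2 dominates Stage 3. The sketch below assumes the grouping of the communication term as it reads in the flattened table above (which may differ from the original typesetting) and uses the dataset's n and d with an illustrative k.

```python
# Hypothetical helper: operation counts per iteration from Table 1
def stage2_costs(n, d, k):
    mul = n * (k * d * (d + 4) + 25 * (k - 1))
    exp = n * (8 * k * d + 40 * (k - 1))
    comm_q = n * 6 * k * d + 28 * (k - 1)      # communication, in units of |q| (grouping assumed)
    return mul, exp, comm_q

def stage3_costs(n, d, k):
    return n + 25 * k, 40 * k, 28 * k

n, d, k = 65554, 29, 4                         # dataset size and an illustrative cluster count
print("Stage 2 (Mul, Exp, comm/|q|):", stage2_costs(n, d, k))
print("Stage 3 (Mul, Exp, comm/|q|):", stage3_costs(n, d, k))
```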
Furthermore, we note that the data owners do not need to participate in the learning phase: all of the heavy computation is outsourced to the cloud servers, so the scheme is lightweight for the data owners. Conclusion and Future Work. In this paper, we aim to solve the problem of running the k-means clustering algorithm on inputs that are encrypted under different, independent public keys. Our scheme is based on two non-colluding cloud servers, and during the whole process there is no interaction between the cloud servers and the data owners. We have proved that our scheme is semantically secure in the semi-honest model. Moreover, we highlight its efficiency by giving experimental results and comparing them with previous works. To meet the needs of practical applications, we will continue to improve the efficiency of the learning algorithm. Furthermore, we plan to experiment with other machine learning algorithms in other application scenarios.
6,640.2
2019-03-25T00:00:00.000
[ "Computer Science", "Engineering" ]
Quantum speedups of some general-purpose numerical optimisation algorithms We give quantum speedups of several general-purpose numerical optimisation methods for minimising a function $f:\mathbb{R}^n \to \mathbb{R}$. First, we show that many techniques for global optimisation under a Lipschitz constraint can be accelerated near-quadratically. Second, we show that backtracking line search, an ingredient in quasi-Newton optimisation algorithms, can be accelerated up to quadratically. Third, we show that a component of the Nelder-Mead algorithm can be accelerated by up to a multiplicative factor of $O(\sqrt{n})$. Fourth, we show that a quantum gradient computation algorithm of Gily\'en et al. can be used to approximately compute gradients in the framework of stochastic gradient descent. In each case, our results are based on applying existing quantum algorithms to accelerate specific components of the classical algorithms, rather than developing new quantum techniques. Introduction Quantum computers are designed to use quantum mechanics to outperform their classical counterparts. As well as the remarkable exponential speedups that are known for specialised problems such as integer factorisation and simulation of quantum-mechanical systems, there are also quantum algorithms which speed up general-purpose classical algorithms in the domains of combinatorial search and optimisation. These algorithms may achieve relatively modest speedups, but make up for this by having very broad applications. The most famous example is Grover's algorithm [26], which achieves a quadratic speedup of classical unstructured search, and can be used to accelerate classical algorithms for solving hard constraint satisfaction problems such as Boolean satisfiability. Here our focus is on quantum algorithms that accelerate classical numerical optimisation algorithms: that is, algorithms that attempt to solve the problem of finding x ∈ R n such that f (x) is minimised, for some function f : R n → R. (We use boldface throughout for elements of R n .) A vast number of optimisation algorithms are known. Some algorithms seek to find (or approximate) a global minimum of f , given some constraints on f ; others only attempt to find a local minimum. Some algorithms have provable correctness and/or performance bounds, while the performance of others must be verified experimentally. Whether or not an algorithm has good theoretical properties, its performance on a given problem often can only be determined by running it. These factors have led to the development and use of many numerical optimisation algorithms based on varied techniques. Here we consider some prominent general-purpose numerical optimisation techniques, and investigate the extent to which they can be accelerated by quantum algorithms. We stress that our goal is not to develop new quantum optimisation techniques (that perhaps would not have rigorous performance bounds), but rather to find quantum algorithms that speed up existing classical techniques, while retaining the same performance guarantees. That is, if the classical algorithm performs well in terms of solution quality or execution time on a given problem instance, the quantum algorithm should also perform well. We assume throughout that the quantum algorithm has access to an oracle that computes f (x) exactly on particular inputs x, implemented as a quantum circuit 1 . That is, we assume we have access to the map |x |0 → |x |f (x) . 
This contrasts with a model sometimes used elsewhere in the literature, where x is assumed to be provided to the quantum algorithm as a quantum state of log 2 n qubits [34,47] stored in a quantum RAM, and the goal is to produce a quantum state corresponding to arg min x f (x). Our results can be summarised as follows, where we use the notation (as in the rest of the paper) T (f ) for an upper bound on the time required to evaluate the function f . See Table 1 for a summary of the speedups we obtain. • Section 2: We show that a number of techniques for global optimisation under a Lipschitz constraint can be accelerated near-quadratically, and also discuss some challenges associated with speeding up the related and well-known classical algorithm DIRECT [31]. In Lipschitzian optimisation, one assumes that |f (x) − f (y)| ≤ K x − y for some K that is known in advance (the Lipschitz constant of f ), where · is the Euclidean norm. Many techniques for Lipschitzian optimisation can be understood in the framework of branch-and-bound algorithms [28]. These algorithms are based on dividing f 's domain into subsets, and using a lower-bounding procedure to rule out certain subsets from consideration. This enables the use of a quantum algorithm for speeding up branch-and-bound algorithms [43]. The complexity of branch-and-bound algorithms is controlled by a parameter T min discussed below; the quantum algorithm achieves a quadratic reduction in complexity in terms of this parameter. A simple representative example of an algorithm fitting into this framework is Galperin's cubic algorithm [21]. In this case, the quantum algorithm's complexity is then O( √ T min d 3/2 2 n T (f )), where d is the depth of the branch-and-bound tree, whereas the classical complexity is O(T min 2 n T (f )). • Section 3: We show that backtracking line search [45, Algorithm 3.1], a subroutine used in many quasi-Newton optimisation algorithms such as the BFGS algorithm, can be accelerated using a quantum algorithm which is a variant of Grover search [39]. Backtracking line search is based on choosing a direction d and searching along that direction. If the overall algorithm makes k iterations, the complexity of choosing d is τ (d), and the number of search steps taken by the classical algorithm is m 0 , the complexity of one iteration of this classical routine is O(τ (d) + m 0 T (f )), while the complexity of the quantum algorithm is O(τ (d) + √ m 0 (log k)T (f )). • Section 4: We show that the Nelder-Mead algorithm [44], a widely-used derivative-free numerical optimisation algorithm, can be accelerated using quantum minimum-finding [17]. The algorithm is an iterative procedure based on maintaining a simplex. Assume that T (f ) = Ω(n 3/2 ), and that the algorithm performs k iterations, s of which are "shrink" steps (qv). Then the complexity of the quantum algorithm is O(((s + 1) √ n log k + k)T (f ))), as compared with the classical complexity, O(((s + 1)n + k)T (f )). So if the number of shrink steps is large with respect to k, or k is small, the quantum speedup can be relatively substantial (up to a O( √ n) factor). • Section 5: Approximate computation of a gradient is a key subroutine in many optimisation algorithms, including the very widely-used gradient descent algorithm [8]. We show that the gradient of § Gradients of averaged functions Quantum gradient computation [22] can be computed more efficiently using a quantum algorithm of Gilyén, Arunachalam and Wiebe [22]. 
Given that each individual function f i is bounded and can be computed in time T (f ) (and satisfies some technical constraints on its partial derivatives), the quantum algorithm outputs an approximation of the gradient that is accurate up to in the ∞ norm, in time O( √ nT (f ) −1 ), as compared with the classical complexity O(nT (f ) −2 ). (The O notation hides polylogarithmic factors in N , n and 1/ .) However, as we will discuss, it is not clear whether this notion of approximation is sufficient to accelerate classical stochastic gradient descent algorithms. In each case, the quantum speedups we find are based on the use of existing quantum algorithms, rather than the development of new algorithmic techniques. We believe that there are many more quantum speedups of numerical optimisation algorithms to be discovered. We remark that, in many of the cases we consider, the extent of the quantum speedup achieved depends on the interplay of various parameters governing the optimisation algorithm's runtime, so not every problem instance will yield a speedup. Prior work on quantum speedups of numerical optimisation algorithms (as opposed to the analysis of new quantum algorithms such as the adiabatic algorithm [20] or quantum approximate optimisation algorithm [30,19]) has been relatively limited. Dürr and Høyer [17] gave a quantum algorithm to find a global minimum of a function f on a discrete space of size N , which is based on the use of Grover's algorithm and uses O( √ N ) evaluations of f . Arunachalam [5] applied Dürr and Høyer's algorithm to improve the generalised pattern search and mesh-adaptive direct search optimisation algorithms. A sequence of papers has found quantum speedups of linear programming and semidefinite programming algorithms [10,3,2,35,9]; quantum speedups of more general convex optimisation algorithms are also known [51,14]. Quantum speedups are known for computing gradients [32,22,15], an important subroutine in many optimisation algorithms; larger (exponential) speedups could be available in gradient descent-type algorithms if the inputs to the optimisation algorithm are available in a quantum RAM (qRAM) [34,47]. Recently, it was shown that classical algorithms based on the general technique known as branch-and-bound can be accelerated near-quadratically [43]. Branch-and-bound algorithms for global optimisation with a Lipschitz constraint Finding a global minimum of an arbitrary function f : R n → R can be a very challenging (or indeed impossible) task. One way to make this problem more tractable is to assume that f satisfies a Lipschitz condition: |f (x) − f (y)| ≤ K x − y for some K that is known in advance, where · is the Euclidean norm. Finding a global minimum of f under this condition is known as Lipschitzian optimisation. Lipschitzian optimisation is very general and hence can be applied in many contexts. Hansen and Jaumard [28] describe a selection of applications of Lipschitzian optimisation, including solution of nonlinear equations and inequalities; parametrisation of statistical models; black box system optimisation; and location problems. It is natural to restrict the domain of f to [0, 1] n , and to assume that f is bounded such that f (x) ∈ [0, 1] for all x ∈ [0, 1] n . Finally, we can relax to solving the approximate optimisation problem of finding y such that f (y) − min x∈[0,1] n f (x) ≤ , for some accuracy parameter that is determined in advance. Even in the case n = 1 and with these restrictions, this problem is far from trivial. 
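As a concrete illustration of the branch-and-bound framework discussed next, the following is a minimal classical sketch for the one-dimensional case. The interval lower bound f(midpoint) − K·(width)/2 follows directly from the Lipschitz condition; the helper name, the best-first strategy and the example function are illustrative assumptions rather than a specific algorithm from the literature.

```python
import heapq

def lipschitz_minimise_1d(f, K, eps, a=0.0, b=1.0):
    """Best-first branch-and-bound sketch for Lipschitz f on [a, b].

    Lower bound on an interval [lo, hi]: f(mid) - K * (hi - lo) / 2,
    valid because |f(x) - f(mid)| <= K * |x - mid|.
    """
    mid = 0.5 * (a + b)
    best_x, best_f = mid, f(mid)
    heap = [(best_f - K * (b - a) / 2, a, b)]
    while heap:
        lower, lo, hi = heapq.heappop(heap)
        if lower >= best_f - eps:          # remaining subsets cannot improve by more than eps
            break
        for sub_lo, sub_hi in ((lo, 0.5 * (lo + hi)), (0.5 * (lo + hi), hi)):
            m = 0.5 * (sub_lo + sub_hi)
            fm = f(m)
            if fm < best_f:
                best_x, best_f = m, fm
            heapq.heappush(heap, (fm - K * (sub_hi - sub_lo) / 2, sub_lo, sub_hi))
    return best_x, best_f

# Example: f(x) = (x - 0.3)**2 has Lipschitz constant K <= 2 on [0, 1].
x_star, f_star = lipschitz_minimise_1d(lambda x: (x - 0.3) ** 2, K=2.0, eps=1e-3)
```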
One class of algorithms that can solve Lipschitzian optimisation problems are branch-and-bound algorithms. Generically, a branch-andbound algorithm solves a minimisation problem using the following procedures: • A branching procedure which, given a subset S of possible solutions, divides S into two or more smaller subsets, or returns that S should not be divided further. • A bounding procedure which, when given a subset S produced during the branching process, returns a lower bound L(S) such that L(S) ≤ min x∈S f (x). Branch-and-bound algorithms can be seen as exploring a tree, whose vertices correspond to subsets S. The children of a subset S correspond to the subsets which S was divided into, and leaves are subsets that should not be divided further. For a leaf, one should additionally have that L(S) = min x∈S f (x). Branch-and-bound algorithms use the additional information provided by the branch and bound procedures to explore the most promising sets S early on, and to avoid exploring subsets S such that L(S) is larger than the best solution found so far. One can show that the complexity of an optimal classical branch-and-bound algorithm based on these generic procedures is controlled by the size of the branch-and-bound tree, truncated by deleting all vertices whose corresponding lower bounds are less than the optimal cost min x f (x): if the size of this tree is T min , the optimal classical algorithm makes Θ(T min ) calls to the branch and bound procedures [33]. It is not required to know T min in order to apply this bound. A generic framework for branch-and-bound algorithms in the context of Lipschitzian optimisation was given by Hansen and Jaumard [28,Section 3.3], and we describe it as Algorithm 1. The algorithm splits [0, 1] n into hyperrectangles I, each of which is recursively split again. Each hyperrectangle has an associated upper bound (obtained by evaluating f at a discrete set of points in that hyperrectangle) and lower bound (obtained via a separate lower-bounding function), and the algorithm terminates when it finds a hyperrectangle whose upper bound is sufficiently close to its lower bound. Convergence is guaranteed if some simple criteria are satisfied, discussed in [28] (for example, the upper bound and lower bound should converge as the interval size tends to 0). Hansen and Jaumard show that many previously known algorithms for Lipschitzian optimisation can be understood as particular cases of Algorithm 1. These include Galperin's cubic algorithm [21], which proceeds by dividing the search space into hypercubes, and algorithms of Pijavskii [46], Shubert [48] and Mladineo [40]. The branching procedure of Algorithm 1 fits into the standard branch-and-bound framework. Given a subset I j , an upper bound is obtained by evaluating f (x) at a discrete set of positions x, and a lower bound is obtained using the bounding function F j . If the two are within , I j should not be expanded further. Otherwise, I j is split into subsets. Algorithm 1 has a notion of selecting the next subset in L using a selection rule, but it is shown in [33] that the best possible selection rule in branch-and-bound procedures (in a query complexity sense) is to expand the subset whose bounding function is smallest 2 . i. Partition I into hyperrectangles I 1 , . . . , I p according to a branching rule [Branch] ii. For j = 1, . . . 
, p: Algorithm 1: Generic branch-and-bound algorithm for Lipschitzian optimisation problems [28] There is a quantum algorithm that can achieve a near-quadratic speedup of classical branch-and-bound algorithms [43]. The algorithm is based on the use of quantum procedures for estimating the size of an unknown tree [1], and searching within such a tree [6,7,42]. The algorithm achieves a complexity of O( √ T min d 3/2 ) uses of the branch and bound procedures for finding the minimum of f up to accuracy . In this bound d is the maximal depth of the branch-and-bound tree and the O notation hides polylogarithmic factors in d, 1/ , and 1/δ, where δ is the probability of failure. (We remark that the algorithm as presented in [43] assumes knowledge of an upper bound on d in advance, but such a bound can be found efficiently by applying the quantum tree search algorithms of [42,6,7] to the branch-and-bound tree obtained by truncating at depth d , with exponentially increasing choices of d , until d is found where the corresponding tree does not contain any internal vertices that have not been expanded.) The quantum branch-and-bound algorithm can immediately be applied to Algorithm 1. If the time complexity of the branching and bounding rules is upper-bounded by C, the cost of the quantum algorithm is O( √ T min d 3/2 C), as compared with the classical complexity, which is O(T min C). If T min d, the speedup of the quantum algorithm over its classical counterpart in terms of the number of uses of the branching and bounding rules is near-quadratic. If these rules in turn are relatively simple to compute compared with T min (as is likely to be the case for challenging optimisation problems that occur in practice), this translates into a near-quadratic runtime speedup. To illustrate how this approach could be applied in practice, a simple example of an algorithm fitting into this framework is Galperin's cubic algorithm [21]. The branch and bound procedures are defined as The result of a few steps of splitting into subintervals is shown. The centres of intervals are labelled below with the step at which they are divided into subintervals (red), and the lower bound in that interval (blue). Endpoints are labelled above with the evaluated function values, shown to two decimal places. follows, recalling that K is the Lipschitz constant of f : • Branch: the subproblem I corresponding to a hypercube is divided into p = q n equal hypercubes, for some q ≥ 2, by dividing each side into q equal parts. • Lower bounding rule: Let x 0 be an extreme point of I. I has side length 1/q k for some integer k. n, maximised over extreme points of I. • Upper bounding rule: Evaluate f on the extreme points of I and return the minimum value found. Galperin's algorithm is illustrated in Figure 2 for the case n = 1. The complexity of the branch and bounding steps is dominated by the cost of evaluating f at the extreme points of each hypercube I, which is O(2 n T (f )). The quantum complexity is then O( √ T min d 3/2 2 n T (f )), whereas the classical complexity is O(T min 2 n T (f )); so we see that the speedup is largest for small n, e.g. n = O(1). The DIRECT algorithm A prominent algorithm proposed to handle Lipschitzian optimisation for n-variate functions where one does not know the Lipschitz constant in advance is known as DIRECT [31] (for "dividing rectangles"). 
The basic concept is to divide [0, 1] n into (hyper)rectangles, and at each step of the algorithm to produce a list of potentially optimal rectangles, which are those that should be expanded further; see Appendix A for more details. This is similar to the branch-and-bound algorithms of the previous section, but with the additional complication of generating the list of potentially optimal rectangles, which involves interaction across several nodes of the branch-and-bound tree. This creates a difficulty for the quantum branch-andbound algorithm, as it can only use branch and bound procedures based on only local information from the tree. Therefore it is unclear whether a similar quadratic speedup can be obtained. To identify the potentially optimal vertices, the DIRECT algorithm uses a 2d convex hull algorithm. It is a natural idea to speed this up via a quantum convex hull algorithm. Lanzagorta and Uhlmann [38] have described a quantum algorithm based on Grover's algorithm for computing a convex hull of m points in 2d with complexity O( √ mh), where h is the number of points in the convex hull; they also give an algorithm based on a heuristic whose runtime may be O( √ mh) for practically relevant problems. However, the special 1. Choose a starting point x 0 and constants γ ∈ (0, 1) and β ∈ (0, 1). Set x ← x 0 . Choose a direction d such that 3. Compute the step size: We apply this result to step 3 of the classical algorithm to achieve a square-root reduction in the dependence on m 0 . To achieve a final probability of failure bounded by a small constant, by a union bound over the k iterations, it is sufficient to repeat the algorithm of Theorem 1 O(log k) times to achieve O(1/k) failure probability at each iteration. This gives an overall complexity of the quantum algorithm which is O(τ (d) + √ m 0 (log k)T (f )) per iteration. If the overall algorithm makes k iterations, and m max is the largest value of m 0 for any iteration, we have an overall complexity of O(k(τ (d) + √ m max (log k)T (f ))). In cases where τ (d) = O(n) (such as the steepest descent method), T (f ) = Ω(n), and k is not exponentially large in n, the dominant term in this complexity bound is the second one, and we always achieve a quantum speedup. The assumption T (f ) = Ω(n) is natural if f depends on all n variables. . Therefore, the speedup achieved by the quantum algorithm (based on this worst-case bound) will be greatest when L is large (representing that ∇f could change rapidly), yet |D d f (x)| is small (representing that f does not change rapidly in direction d). Another way in which one might hope to speed up Algorithm 3 is computing D d f (x) more efficiently. For example, a quantum algorithm was presented by Gilyén, Arunachalam and Wiebe [23], based on a detailed analysis of and modifications to an earlier algorithm of Jordan [32], that approximately computes ∇f (x) for smooth functions quadratically more efficiently than classical methods (that are based e.g. on finite differences). However, it seems challenging to prove that such an approximation can be inserted in the backtracking line search framework without affecting the performance of the overall algorithm, in the worst case. This is because even a small change in the direction d can significantly change the behaviour of the algorithm, as the definition of Step 3 of Algorithm 3 is such that an arbitrarily small change to the values taken by f along the direction d can change m 0 substantially. 
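Since the step-by-step listing of Algorithm 3 is not fully reproduced above, the sketch below shows a standard Armijo-style backtracking line search with constants γ, β ∈ (0, 1). It is a generic textbook variant rather than the exact procedure of [45], but it makes concrete the quantity m_0 (the number of shrinking steps) whose dependence the quantum search reduces to a square root.

```python
import numpy as np

def backtracking_line_search(f, grad_f, x, d, gamma=0.5, beta=0.5, t0=1.0):
    """Armijo backtracking sketch: shrink the step by beta until the
    sufficient-decrease condition holds.  Returns the step size and m_0,
    the number of shrinking steps taken."""
    fx = f(x)
    slope = gamma * np.dot(grad_f(x), d)   # accepted decrease per unit step (negative for descent d)
    t, m0 = t0, 0
    while f(x + t * d) > fx + t * slope:
        t *= beta
        m0 += 1
    return t, m0

# Example: one steepest-descent step on a simple quadratic.
f = lambda x: np.dot(x, x)
grad_f = lambda x: 2 * x
x0 = np.array([1.0, -2.0])
step, m0 = backtracking_line_search(f, grad_f, x0, d=-grad_f(x0))
```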
See Section 5 below for a further discussion of this algorithm. Finally, we remark that one simple way to find a direction d such that D d (f ) is nonzero, as required for the line search procedure, is to choose i such that ∂f /∂x i is nonzero. Although a valid choice, in practice this could be less efficient than (for example) moving in the direction of steepest descent. The use of Grover's algorithm would reduce the complexity of this step to O( √ n(log k)T (f )), as compared with the classical O(nT (f )). Nelder-Mead algorithm The Nelder-Mead algorithm is a direct search optimisation algorithm; that is, one which does not require information about the gradient of the objective function. It is commonly-used and implemented within many computer algebra packages. However, little convergence theory exists and in practice it is ineffective in higher dimensions 4 [37,27]. The Nelder-Mead algorithm uses expansion, reflection, contraction and shrink steps to update a simplex in R n . A number of variants of the algorithm have been proposed. The variant we will use was analysed by Lagarias et al. [37], and is presented as Algorithm 4. Algorithm 4 does not specify a termination criterion. Termination criteria that could be used include the function values at the simplex points becoming sufficiently close; the simplex points themselves becoming sufficiently close; or an iteration limit being reached. 2. Sort. Order and relabel the vertices of the simplex such that f (x 0 ) ≥ f (x 1 ) ≥ · · · ≥ f (x n ) and let x 0 be the worst vertex, x 1 the next-worst vertex and x n the best vertex. Set c = 1 n n i=1 x i . 3. Reflection. Calculate the reflection point, accept reflection, replace x 0 with x r and return to step 2. f (x r ) accept the expansion point and replace x 0 with x e , otherwise accept the reflection point and replace x 0 with x r . Return to step 2. , accept the outside contraction point, replace x 0 with x c1 and return to step 2. Else go to step 7. accept inside contraction, replace x 0 with x c2 and return to step 2. Else go to step 7. 7. Shrink. For all points other than the best point, replace it with its shrink point, Algorithm 4: Nelder-Mead algorithm (see e.g. [37]) to write down the n + 1 points. To analyse step 2, observe that a complete ordering of the points is never required; the only information about the ordering needed is the worst vertex x 0 , the next-worst vertex x 1 , and the best vertex x n . Knowledge of the identities of these points is sufficient to compute the centroid c, and to carry out all the updates required, including the shrink step. So the first time that step 2 is executed, its complexity is O(n 2 + nT (f )), where the O(n 2 ) comes from computing the centroid. Each time step 2 is executed subsequently, except following a shrink step, the required updates can be made in time O(n). The complexity of step 2, when executed for the first time or following a shrink step, can be improved using quantum minimum-finding: Thus a quantum algorithm using Theorem 2 can find the worst, next-worst and best vertices with failure probability O(1/k) at each iteration in time O( √ nT (f ) log k) in total. This choice of failure probability is so that, by a union bound, the total probability of failure can be bounded by an arbitrarily small constant. Further, observe that the centroid can be updated in time O(n) following a shrink step, as if c denotes the updated centroid, then c = δc + (1 − δ)x n . 
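A minimal classical sketch of the bookkeeping in step 2 is given below. It shows the three vertices that quantum minimum-finding would identify with O(√n log k) function comparisons, and the centroid computation (over the n non-worst vertices, following the convention used above) whose arithmetic cost dominates the first execution of the step. The helper name and array layout are illustrative assumptions.

```python
import numpy as np

def sort_and_centroid(points, fvals):
    """Classical step 2: identify worst, next-worst and best vertices and the
    centroid of the n non-worst vertices.

    points : (n + 1, n) array of simplex vertices
    fvals  : length n + 1 array of function values (n + 1 evaluations of f)
    """
    order = np.argsort(fvals)               # ascending: order[0] is the best vertex
    best, next_worst, worst = order[0], order[-2], order[-1]
    mask = np.ones(len(points), dtype=bool)
    mask[worst] = False
    centroid = points[mask].mean(axis=0)     # O(n^2) arithmetic when done from scratch
    return worst, next_worst, best, centroid
```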
This does not give a quantum speedup of step 2 in all cases; the first time that step 2 is executed, if T (f ) = O(n 3/2 ), its complexity is dominated by the O(n 2 ) cost of computing the centroid. There also remains an O(n 2 ) cost for updating the points at each shrink step. (There may be a more efficient way of keeping track of these shrink steps; however, we do not pursue this further here.) Then the overall complexity of the quantum algorithm is O((s + 1)(n 2 + √ nT (f ) log k) + k(n + T (f ))), and using a union bound over the k steps, the algorithm's failure probability is bounded above by an arbitrarily small constant. If T (f ) = Ω(n 3/2 ), this simplifies to O(((s + 1) √ n log k + k)T (f )). Comparing with the classical complexity, we see that the quantum speedup is largest when s is large compared with k. However, in practice shrink steps appear to be rare; in one set of experiments, only 33 shrink steps were observed in 2.9M iterations [50], and shrink steps never occur when Nelder-Mead is applied to a strictly convex function [37]. If there are no shrink steps and T (f ) = Ω(n 3/2 ), the complexity of the quantum algorithm is O(( √ n log k + k)T (f )), while the complexity of the classical algorithm is O((n + k)T (f )). This is still a quantum speedup if k = o(n); on the other hand, if k = Ω(n), the complexity is dominated by evaluating f once at each iteration, and it is difficult to see how a quantum speedup could be achieved. To be able to use quantum minimum-finding, we have assumed the ability to construct superpositions of the form 1 √ n+1 n i=0 |i |x i , which enables us to evaluate f in superposition. This is a quantum RAM [24], and quantum RAMs are often assumed to be difficult to construct; however, our requirements are very weak, because we only need the addressing to be performed in time O(n), rather than O(log n), which can be achieved using an explicit quantum circuit. Finally, we consider the possibility of accelerating calculation of the centroid c using a quantum algorithm. If each component of each vector x i is suitably bounded (e.g. x i ∞ ≤ 1) we could use quantum mean estimation [29,11,41] to estimate each component of c up to accuracy in time O((n/ ) log(n/ )) with failure probability bounded by a small constant, where the log(n/ ) term comes from reducing the failure probability for each component to /n. Classical mean estimation could be used instead with an overhead of an additional O(1/ ) factor. This would give an overall time complexity similar to that derived above, but it is not obvious what the effect of replacing the centroid with an approximate centroid would be on the overall algorithm. For example, it is argued in [18] that random perturbations to the centroid throughout the algorithm can be beneficial. Stochastic gradient descent One of the most widely-used, effective and simple methods for finding a local minimum of a function is gradient descent. Given a function f : R n → R and an initial point x ∈ R n , the algorithm moves to the point x = x − η∇f (x), where η > 0. In application areas such as machine learning [8], one often encounters functions f of the form for some "simple" functions f i (x), where N is large. (For example, f i (x) could be the error of a neural network parametrised by x on the i'th item of training data, and we might seek to minimise the average error.) Rather than computing the exact gradient ∇f (x) by summing ∇f i (x) over all N choices for i, it is natural to approximate ∇f by sampling k random indices i 1 , . . 
. , i k ∈ [N ] with replacement and outputting 1 k (∇f i 1 (x) + · · · + ∇f i k (x)). (The case k = 1 is known as stochastic gradient descent; the sample i 1 , . . . , i k is sometimes known as a mini-batch.) If f satisfies the Lipschitz condition that ∇f i (x) ∞ ≤ 1, to approximate ∇f (x) up to additive error in the ∞ norm with failure probability δ it is sufficient to take k = O( −2 log(n/δ)) by a Chernoff bound argument. Let T (f ) denote an upper bound on the time required to compute f i (x) for all i. If we approximate ∇f i (x) using the finite difference method, then each approximation to ∇f i (x) can be computed in time O(nT (f )), giving a total complexity of O(nT (f ) −2 log(n/δ)). The use of quantum amplitude estimation [12] would improve the dependence on quadratically. Here we observe that the dependence on n can also be improved quadratically, using a result of Gilyén, Arunachalam and Wiebe [22]. We will impose the restriction (for technical reasons) that the range of each function f i is within [1/10, 9/10], where these numbers could be replaced with any constants between 0 and 1. Given the more typical constraint that f i : R n → [0, 1] (e.g. if the output of f i represents a probability), f i can easily be modified to satisfy this constraint by a simple linear transformation, which does not change arg min x f (x). The results of [22] use two somewhat nonstandard oracle models which we now define. First we will consider probability access, and define what a probability oracle is. Essentially, within this model our objective function corresponds precisely to the probability of a certain outcome being observed upon measurement (in particular, the probability of seeing |1 when measuring the final qubit). Indeed, given a classical description of the function g(z), an oracle of this form can be constructed without a significant overhead [13]. The next access model we consider is access via a phase oracle. The authors of [22] showed that a probability oracle is capable of simulating a phase oracle, and vice versa, with only logarithmic overhead: Theorem 3 (Converting between probability and phase oracles [22]). Suppose g : Z → [0, 1] is given by access to a probability oracle U g which makes use of a auxiliary qubits. Then we can simulate anapproximate phase oracle using O(log(1/ )) queries to U g ; the gate complexity is the same up to a factor of O(a). Similarly, suppose g : Z → [δ, 1 − δ] is given by access to a phase oracle O g . Then we can construct an -approximate probability oracle for g using O(log(1/ )/δ) queries to O g . The gate complexity is the same up to a factor of O(log(1/ )(log log(1/ ) + log(1/δ))). What this shows is that the two access models are more-or-less equivalent in power. Now we have defined probability oracles, we can show that access to probability oracles for the individual f i functions immediately gives such access for f itself. Proof. We start with the superposition 1 √ N N i=1 |i |x |0 , where |x denotes a description of the real vector x in terms of binary, up to some digits of precision, leading to an orthonormal basis. If N is a power of 2, this state can be constructed easily by applying Hadamard gates to each qubit in a register of log 2 N qubits. If not, the state can be constructed in circuit complexity O(log N ) as follows: attach a register of log 2 N qubits; apply Hadamard gates to produce |i ; compute the function "i ≤ N " into an ancilla qubit using an efficient comparison circuit (e.g. 
[16]); measure the ancilla qubit; and proceed only if the answer is 1. If not (which occurs with probability at most 1/2), repeat this step. We then apply the controlled operation |i |ψ → |i U f i |ψ . This produces for some sequences of normalised states |ψ Rearranging subsystems, we can write this as for some unnormalised states |ψ 0 , |ψ 1 where as required by the definition of a probability oracle for f . We will use this probability oracle within the framework of the fast quantum algorithm of [22] for computing gradients. This algorithm is applicable to functions that satisfy a certain smoothness condition. Given some analytic function h : The following result shows that if each function f i satisfies the required smoothness condition [22], we have that the overall function f also satisfies the same condition. Claim 5. Let c be a real constant, and fix some x ∈ R n . Suppose that for all i ∈ [N ] the function f i : R n → R is analytic, and that for every natural number k, and α ∈ [n] k , we have that then we have that f also satisfies the same condition. Proof. We apply the linearity of ∂ α . Observe that and we are done. In fact it's not too hard to see that this claim generalises to more-or-less any bound on the partial derivatives. We can now state the result we will need from [22]. Theorem 6 (Gilyén, Arunachalam and Wiebe [22,Theorem 25]). Suppose that g : R n → R is an analytic function such that, for all r ∈ N and α ∈ [n] r , |∂ α g(x)| ≤ c r r r/2 . Assume access to g is given by a phase oracle O g . Then there exists an algorithm that outputs a vector ∇f (x) ∈ R n such that ∇f (x) − ∇f (x) ∞ ≤ with 99% probability, using O( √ n/ ) queries to the oracle and additional time O(n 3/2 / ). Note that, if the time complexity of evaluating O g is Ω(n), this dominates the overall runtime bound. We can encapsulate the combination of these results in the following theorem. Theorem 7. Let f be defined as in (1), and assume that each function f i satisfies the conditions required for Theorem 6 and can be computed in time T (f ), for some bound T (f ) such that T (f ) = Ω(n). Then there is a quantum algorithm that outputs ∇f (x) such that ∇f (x) − ∇f (x) ∞ ≤ with 99% probability, in time Proof. Given the ability to compute each f i function in time T (f ), we can produce a phase oracle computing f i in time O(T (f )). By Theorem 3, and using that f i : R n → [1/10, 9/10], we can then obtain an operation approximating a probability oracle for f i up to error in time O(T (f )). By Lemma 4, this gives a probability oracle for f , at additional cost O(log N ). By Theorem 3, we then obtain a phase oracle for f at additional cost poly log (N, 1/ ). This finally allows us to apply Theorem 6 to achieve the stated complexity. Despite Theorem 7 giving a more efficient quantum algorithm for approximately computing ∇f , it is not clear whether this translates into a more efficient quantum algorithm for stochastic gradient descent, or a quantum speedup of other algorithms making use of ∇f . This is because the algorithm of [22] only outputs an approximate gradient, and one which may not be an unbiased estimate of ∇f . To prove approximate convergence of stochastic gradient descent, it is not essential for the gradient estimates to be unbiased [8], and it is plausible that an approximate estimate of the gradient should lead to an approximate minimiser for f being found. 
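For comparison, the classical baseline described earlier in this section (sampling a mini-batch of indices with replacement and approximating each ∇f_i by finite differences, at cost O(nT(f)) per sample) can be sketched as follows. The function names, the central-difference scheme and the step size h are illustrative assumptions, not details fixed by the text.

```python
import numpy as np

def minibatch_gradient(fs, x, batch_size, h=1e-5, seed=None):
    """Classical baseline: estimate grad f at x, where f = (1/N) * sum_i f_i,
    by sampling batch_size indices with replacement and applying central
    finite differences to each sampled f_i."""
    rng = np.random.default_rng(seed)
    n = x.size
    idx = rng.integers(len(fs), size=batch_size)
    grad = np.zeros(n)
    for i in idx:
        fi = fs[i]
        for j in range(n):
            e = np.zeros(n)
            e[j] = h
            grad[j] += (fi(x + e) - fi(x - e)) / (2 * h)
    return grad / batch_size
```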
However, the technique used in [8] to show approximate convergence in this scenario requires the 2-norm of the approximate gradient to be close to that of ∇f . The algorithm of [22] provides accuracy in the ∞-norm, which would only give accuracy √ n in the 2-norm. Further, it was shown by Cornelissen [15] that if f is picked from a certain class of smooth functions, approximating ∇f up to 2-norm accuracy requires Ω(n/ ) uses of a phase oracle for f in the worst case, so this is not merely a technical restriction. Nevertheless, it is possible that quantum gradient estimation may be more efficient than stochastic gradient descent in practice. 2. Let S be the set of potentially optimal hyperrectangles. 4. Evaluate hyperrectangle j and decide where to divide it using the following procedure: (a) Let I be the set of dimensions with maximal side length. Let δ be one-third of this maximal side length. Let c be the centre of hyperrectangle j. (b) Evaluate f at the points c ± δe i for all i ∈ I, where e i is the i'th vector in the standard basis. (c) Divide the hyperrectangle containing c into thirds along the dimensions i ∈ I, in ascending order of w i = min{f (c + δe i ), f (c + δe i )}. Let ∆m be the number of new points evaluated. Update m ← m + ∆m, f min ← new best min. 6. t ← t + 1. If t = T , where T is the iteration limit, then stop, if not go to step 2. We think of K in Definition 3 as a surrogate for the Lipschitz constant of f (which is not assumed to be known in advance). An example of the first couple of steps of dividing [0, 1] 2 into rectangles is shown in Figure 6a. The set of potentially optimal hyperrectangles can be determined in time O(m ), where m ≤ m is the number of distinct interval lengths, using a convex hull technique described in [31] and illustrated in Figure 6b. The conditions (2) and (3) are satisfied by the points that lie on the lower convex hull when f (c j ) is plotted against d j for each hyperrectangle, and we also include the point (0, f min − |f min |). In Figure 6b the red dots represent potentially optimal hyperrectangles whereas the black dots represent hyperrectangles that are not potentially optimal.
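A minimal sketch of the convex-hull step described above is given below, assuming each hyperrectangle is summarised by the pair (d_j, f(c_j)) together with an arbitrary label. It returns the labels of the rectangles on the lower convex hull, including the anchor point (0, f_min − |f_min|) mentioned in the text, and omits the ε-improvement refinement of the full potentially-optimal condition; the function name is illustrative.

```python
def potentially_optimal(rects, f_min):
    """Lower convex hull of the points (d_j, f(c_j)).

    rects : iterable of (d_j, f_cj, label) triples, one per hyperrectangle.
    Returns the labels of the rectangles lying on the lower hull.
    """
    anchor = (0.0, f_min - abs(f_min), None)
    pts = sorted([anchor] + list(rects), key=lambda t: (t[0], t[1]))

    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    hull = []
    for p in pts:                 # Andrew's monotone chain, lower hull only
        while len(hull) >= 2 and cross(hull[-2], hull[-1], p) <= 0:
            hull.pop()
        hull.append(p)
    return [label for _, _, label in hull if label is not None]
```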
9,178.4
2020-04-14T00:00:00.000
[ "Computer Science" ]
Effect of Mimic Vegetation with Different Stiffness on Regular Wave Propagation and Turbulence Flume experiments were performed to test four plant mimics with different stiffness to reveal the effect of plant stiffness on the wave dissipation and turbulence process. The mimics were built of silica gel rod groups, and their bending elastic modulus was measured as a proxy for stiffness. The regular wave velocity distribution, turbulence characteristics, and wave dissipation effect of different groups were studied in a flume experiment. Results show that, when a wave ran through the flexible rod groups, the velocity period changed gradually from unimodal to bimodal, and the secondary wave peak was more apparent in the more flexible mimics. The change in the turbulence intensity in the different rod groups showed that the higher the rod stiffness, the greater the turbulence intensity. With an increase in the bending elastic modulus of a rod group, the wave dissipation coefficient increased. The increase in the wave dissipation coefficient was not linearly correlated with the bending elastic modulus, but it was sensitive within a certain range of the elastic modulus. Introduction Waves are one of the most important hydrodynamic force in coastal environments [1][2][3].The reduction of coastal erosion induced by waves is an important topic for coastal protection and morphological changes [4][5][6].Plants, such as mangroves, play an important role in protecting coasts.The planting of forests for wave attenuation in front of seawalls can reduce the arrival of waves, reduce the impact force of waves, and enhance the security of dams.It is known that different plant properties (e.g., density, stiffness, flexibility, arrangement mode, degree of submergence, and other factors) produce different influences on momentum transfer and the turbulence structure in canopy flow [7][8][9][10][11][12].These related processes can lead to different sediment deposition patterns, which can influence the coastal morphology.The interesting point is that even the presence of a short, low-biomass seagrass meadow can lower the beach erosion rates compared to shallow unvegetated nearshore reef flats [13][14][15][16].However, wave interactions between plants with varied stiffness have not been fully understood. To investigate these interactions, Huang et al. [17] designed a physical model with a rigid main trunk and flexible branches and leaves.They then systematically analyzed wave propagation behaviors on a vegetated floodplain, as well as the effect of plant branches and leaves, tree trunks, the width of the beach, the depth of the water on the beach land, and wave elements on the propagation and deformation of the wave.Jiang et al. [18] used a physical model experiment of a wave flume to study the effect of changes in the wave height and wave form.Incident wave height, plant densities, reflection, transmission coefficient, and the wave energy dissipation were investigated.Moller and Spencer discovered that wave height decreased exponentially in a vegetated area [19].Quartel et al. [20] performed an experiment at the Red River Delta in Vietnam and found that the wave dissipation ability of mangrove areas is five to 7.5 times more than that produced due to bottom friction.Bradley and Houser [21] quantitatively analyzed the effect of the relative movement of flexible seaweed leaves on wave height reduction in a reversing current.Fonseca and Cahalan [22] and Augustin et al. 
[11] showed that, when the height of seaweed was greater than or close to the water depth, the wave dissipation effect was obvious, and when the plant was submerged, the wave dissipation effect decreased with increased water depth.Tschirky and Hall [23] and Lima et al. [24] performed experiments that indicated that an increase in plant density enhanced the wave dissipation effect.However, Mazda et al. [25] and Horstman et al. [26] found that when the water depth in the mangrove was more than the height of the aerial roots, an increase in water depth reduced the wave dissipation effect, and when the water depth increased to the height of the mangrove leaves, the wave dissipation effect increased.Cruise and Muslesh [27] used a rigid pole to simulate the emerged portion of rigid vegetation and studied the effect of plant diameter and arrangement on water depth and velocity.White and Nepf [28] also studied the plant drag force, flow turbulence, and diffusion with a rigid rod. Currently, there are many laboratory studies on vegetation under unidirectional currents and/or waves [29][30][31][32], field studies on wave dissipation through flexible vegetation [33][34][35][36], and turbulent flow through real mangrove roots [37].However, the wave propagation and turbulence in vegetation with different stiffness is currently less studied [8].Therefore, in this study, the main aim is to study the wave propagation and turbulence characteristics among vegetation with different stiffness.A type of mimics used in this study is completely rigid, and the other three types of mimics are flexible to mimic the stems of macro algaes like Fucus vesiculosus and Fucus serratus with comparable elastic modulus (0.121-0.585Gpa) [38].The bending elastic modulus were measured using plant mimics made of silica gel rods with different stiffness.The regular wave velocity distribution, turbulence characteristics, and wave dissipation effect of different groups were studied to better understand the wave dissipation process through a vegetated field.The knowledge obtained by this study may provide a scientific reference for the planning and design of coastal protection projects. 
Experimental Design Experiments were conducted in a laboratory wave flume.The dimensions of the flume were 66 m long, 1.0 m wide, and 1.6 m deep.A piston-type waves paddle installed at one end of the flume was used to generate regular and irregular waves.For simplicity, we only tested regular waves in the current study.An overview of the flume, with its coordinate system and the wave maker, is shown in Figure 1.All of the instruments were deployed in this flume.Details of the flume dimensions and sensor deployments are also shown in Figure 1.The water surface elevation was measured using capacitance-type wave probes with good long-term stability and linear calibration curves.The wave probes were calibrated just prior to conducting the experiments.A SonTek 16-MHz Micro ADV (Acoustic Doppler Velocimeter) (SonTek/Xylem Inc.: San Diego, CA, USA) was used to measure the three-dimensional water velocity.The sampling frequency of the ADV and wave gages is 50 Hz.The data were collected after the waveform stabilized in the front of the mimic vegetation area, and the data collection lasted for 60 s.In order to eliminate the error, each experiment was repeated for three times.Peak velocities were obtained by taking the maximum value of an entire wave period.The vegetation zone was composed of silica gel rods of different stiffness installed on a flat slope.The rods were fixed though the prefabricated holes on the at the slope bed.Wave gauges were installed before and after the vegetation zone to quantitatively measure the wave attenuation.The 3D flow field structure and turbulence characteristics were measured using the ADV.The current velocity was measured at the middle and bottom layers at 10 cm and 2 cm above the bed using the ADV.These two measuring points are regarded as representative heights of the vertical profile, while the information of the whole profile was not obtained. The wave and flow design included no wave breaking action nor the presence of emergent vegetation.Therefore, the designed water depth of the floodplain was 15 cm, and the corresponding water depth before the wave plate was 45 cm.In addition, the regular wave height was 5 cm, and the wave period was 1.34 s.The resulting wave length was 1.53 m, and the tested water depth (0.15 m) in the vegetation canopy was half water depth.The wave condition is similar to previous lab work with small waves (wave height H ≤7 cm) [36,39]. Experiment Materials The diameter of plant mimics (d) was 1 cm and mimic height (hv) was 20 cm, and they were arranged into a rectangle with a total width of 195 cm consisting of 40 rows that were 5 cm apart with columns 5 cm apart (See Figure 1).The height of the mimics was determined to be similar to F. vesiculosus.The projecting area was 289.85 cm 2 , which is also similar to the field conditions of F. vesiculosus [38].Thus, essentially, this experiment did not involve scaling, as the tested mimics were dynamically similar to F. vesiculosus in the field conditions and the tested wave condition was also similar to the real field condition (depth = 0.15 m, wave height = 0.05 m, and period = 1.34 s).In the current experiment, the tested Re (Re = u×d/ν, where u is the velocity of the middle and bottom layers) number range was generally between 1000 and 2000. 
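As a quick consistency check on the reported wave parameters, the linear dispersion relation can be solved numerically: with T = 1.34 s and the 0.15 m canopy depth it gives a wavelength of about 1.53 m, matching the value quoted above. This assumes linear wave theory and is intended only as a verification aid; the solver below is a generic bisection sketch, not part of the experimental procedure.

```python
import math

def wavelength(T, h, g=9.81):
    """Solve the linear dispersion relation (2*pi/T)**2 = g*k*tanh(k*h)
    for the wave number k by bisection, then return L = 2*pi/k."""
    omega2 = (2 * math.pi / T) ** 2
    lo, hi = 1e-6, 1000.0            # bracket for k (rad/m); g*k*tanh(k*h) is increasing in k
    for _ in range(200):
        k = 0.5 * (lo + hi)
        if g * k * math.tanh(k * h) < omega2:
            lo = k
        else:
            hi = k
    return 2 * math.pi / k

L = wavelength(T=1.34, h=0.15)       # approximately 1.53 m
```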
Eleven measuring points were set along the center-line of the wave flume to minimize the influence of the side walls. The obtained velocities and turbulence statistics are regarded as representative measurements of the flume cross-section. However, paired ADV measurements in the lateral direction were not conducted in our experiment. Thus, the current study mostly focuses on the velocity in the streamwise direction, i.e., u. The u velocity is positive when it is in the same direction as wave propagation, and negative when it is opposite to wave propagation.
We used a spring scale for the cantilever measurement. Ten rods of the same material were involved in each test, and the experimental results were averaged. The elasticity modulus was calculated using the cantilever beam formula, and the stiffnesses of the different materials were obtained as:

E = FL^3 / (3uI), (cantilever beam formula)

where E is the elasticity modulus (Pa); u is the offset (deflection) distance (m); F is the transverse tensile force (N); I is the moment of inertia, i.e., I = πd^4/64 for a circular cross-section; and L is the rod length (m). The elastic modulus of the rods is shown in Table 1. Materials 1, 2, 3, and 4 with different stiffnesses are denoted as M1, M2, M3, and M4, respectively (Figure 2). These rods were commercially available. The elastic modulus of M1 was significantly greater than that of the others, and M1 remained upright throughout the entire process; therefore, M1 can be regarded as a rigid rod.

Data Processing. The original data were phase-averaged according to Cox's theory [40]. On the basis of j = T/∆t points per wave period, the original 3D data were divided into three directions and N_j cycles. Using the phase average, the velocity of each point in one cycle, u_ia (i = x, y, z), can be obtained, and the fluctuating velocity is

u_i'(t) = u_i(t) − u_ia.

Turbulence intensity is the root mean square of the fluctuating velocity:

u_i,rms = [ (1/N) Σ u_i'^2 ]^(1/2).

The probability density function of random data gives the probability of an instantaneous value lying within a specified range. For the turbulent process, the probability of its value, u(t), being in (u_0, u_0 + ∆u) can be defined as

P = lim(T→∞) T_s / T,

where T is the measuring time and T_s is the sampling time within (u_0, u_0 + ∆u), T_s = Σ_{i=1}^{n} ∆t_i. A probability density function of the velocity measurements was computed for each material. If this random process follows a normal distribution, the probability density function is

f(u_i') = [1 / (σ_i √(2π))] exp(−u_i'^2 / (2σ_i^2)),

where f is the probability density and u_i' is the fluctuating velocity in the i direction. Reynolds stress is the shear force caused by the momentum exchange of a unit of fluid passing through a unit area:

τ_ij = −ρ u_i'u_j' (overbar denoting the time average),

where, when i = j, σ = −ρ u_i'u_j' is the normal stress, and when i ≠ j, τ_ij is the Reynolds shear stress.
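A minimal sketch of the data-processing workflow described above (phase averaging, fluctuating velocities, turbulence intensity, and Reynolds stress) is given below, assuming a single-point ADV record sampled at 50 Hz with the 1.34 s wave period. The function names and array layout are illustrative assumptions.

```python
import numpy as np

def phase_statistics(u, fs=50.0, T=1.34):
    """Phase-average a velocity record and return turbulence statistics.

    u  : 1-D array of instantaneous velocity (one component, e.g. streamwise u)
    fs : sampling frequency (Hz); T : wave period (s)
    """
    j = int(round(T * fs))                    # samples per wave period (j = T / dt)
    n_cycles = len(u) // j
    cycles = u[: n_cycles * j].reshape(n_cycles, j)
    u_phase = cycles.mean(axis=0)             # phase-averaged velocity u_a(phase)
    u_fluc = cycles - u_phase                 # fluctuating velocity u' = u - u_a
    intensity = np.sqrt((u_fluc ** 2).mean()) # root mean square of u'
    return u_phase, u_fluc.ravel(), intensity

def reynolds_shear_stress(u_fluc, v_fluc, rho=1000.0):
    """Reynolds shear stress -rho * mean(u' v') for two velocity components."""
    return -rho * np.mean(u_fluc * v_fluc)
```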
Peak Velocities The peak velocities changed significantly when waves crossed the different rod groups.The more flexible the plant, the smaller the peak the velocity.Table 1 shows the value of the velocity peaks.It was found that with an increase in rod flexibility (from M1 to M4), the velocity peak value in the middle and bottom layer both gradually diminished.Compared to M1, the peak value of the middle and bottom layer of M4 were reduced by 31% and 32%, respectively.The peak velocity value between two rods was increased.It is because rigid rods do not have any swing deformation, which squeezes the passing water and leads to higher velocity.For flexible rods, however, they sway as water passes, and hence do not lead to similar increased velocity.In fact, as the flexibility increases, the averaged velocity reduces (Figure 3). Flexible rods do not cause contraction of the flow passing the flume section because of the unsynchronized swing.Therefore, the peak wave velocity was small.This is similar to what occurs when a bridge makes a channel narrow and increases the flow velocity. Phase Averaged Velocity According to the instantaneous velocity measured using the ADV, phase velocities in the direction of u are shown for different rod groups in Figure 3. Data from measuring point #5 is shown as it is in the center of the vegetation patch and it is representative of the averaged flow condition.The velocity curve shows that with a phase shift, the more flexible the materials are, the lower peak flow is.With a low flow velocity, the differences among the flow velocities of different materials are not obvious. The Secondary Wave Peak in the Flexible Rod Groups The experimental results show that when waves went through the flexible rod groups, the velocity period changed gradually from unimodal to bimodal, owing to swing in the rod group.The more flexible the rod group, the more obvious the secondary wave peak.Figure 4 shows velocity of the M4 rod group at measuring point #5.The figure shows that both in the middle and the bottom layer, bimodal structures existed during each wave period.The ratio between the secondary wave peak and the main wave peak in the middle layer was 0.49:1, while in the bottom layer it was 0.31:1.This phenomenon indicates that the swing extent of a rod increases as the water surface approaches, and its impact on the secondary peak of the wave velocity also increases. Turbulence Intensity The middle layer turbulence intensity distributions in the u direction for the M1 and M4 rod groups are presented in Figure 5.The middle and bottom layer turbulence distributions of the different materials are shown in Figure 6.Spatial changes in the turbulence intensity indicates that the highest value occurs during the period of the wave entrance into the rod group and in the middle of the rod group.Possible explanations include the following: (1) The water's entrance into the rod group means that the wave propagates from one interface to another interface, which can result in intense turbulence; and (2) wave streaming causes intense turbulence when the wave propagates in the middle of the rod group. 
The turbulence intensity in the different directions shows that the largest values occurred in the u direction, followed by the v direction, while the intensity in the w direction was minimal. As for the vertical distribution, the turbulence intensity in the bottom layer was smaller than in the surface layer. This reflects that the turbulence was anisotropic when the vegetation patch was under wavy flow.

Probability Density of the Fluctuating Velocity
If the probability density follows a normal distribution, the wider the curve, the larger the velocity deviation; the y-axis intercept in Figure 7 corresponds to the turbulence intensity. Figure 7 shows the probability density distributions of the fluctuating velocity in the u, v, and w directions. Two peaks are found in the probability density function of the u direction, and the turbulence intensity decreases with increasing flexibility of the rod group. In the v and w directions, the probability density follows a normal distribution, and the differences between the materials are not obvious.
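The turbulence statistics discussed above can be reproduced from raw ADV records with only a few operations. The sketch below is a minimal illustration under stated assumptions, not the authors' actual processing chain: it assumes a NumPy array of instantaneous velocity samples at one measuring point, removes the organised wave motion with a simple phase (ensemble) average over wave periods, and reports the turbulence intensity as the root mean square of the fluctuating velocity together with its histogram-based probability density. The sampling rate and wave period used in the example are made-up values.

```python
import numpy as np

def phase_average(u, samples_per_period):
    """Ensemble-average the velocity over wave periods (phase average)."""
    n_periods = len(u) // samples_per_period
    u = u[:n_periods * samples_per_period]
    return u.reshape(n_periods, samples_per_period).mean(axis=0)

def turbulence_statistics(u, samples_per_period, bins=50):
    """Return turbulence intensity and the PDF of the fluctuating velocity u'."""
    n_periods = len(u) // samples_per_period
    u = u[:n_periods * samples_per_period]
    u_phase = phase_average(u, samples_per_period)                  # organised (wave) part
    u_prime = (u.reshape(n_periods, samples_per_period) - u_phase).ravel()  # fluctuating part
    intensity = np.sqrt(np.mean(u_prime ** 2))                      # RMS turbulence intensity
    pdf, edges = np.histogram(u_prime, bins=bins, density=True)
    centres = 0.5 * (edges[:-1] + edges[1:])
    return intensity, centres, pdf

# Example with synthetic data: a regular wave plus random noise,
# sampled at an assumed 25 Hz with an assumed 2 s wave period.
fs, T = 25, 2.0
t = np.arange(0, 120, 1 / fs)
u = 0.3 * np.sin(2 * np.pi * t / T) + 0.02 * np.random.randn(t.size)
I_u, centres, pdf = turbulence_statistics(u, samples_per_period=int(fs * T))
print(f"turbulence intensity in u: {I_u:.4f} m/s")
```

The same routine can be applied to the v and w components to compare the directional intensities and the shape of their probability density functions.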
Reynolds Stress
Reynolds stress is generally greater than the viscous shear force in vegetated flows [41]; therefore, the viscosity term is considered only near the sidewall and is otherwise ignored. Reynolds stress is the result of an uneven flow velocity distribution in the flow field: the more uneven the velocity distribution, the greater the Reynolds stress and the stronger the turbulence. Figure 8 shows the change in Reynolds stress for the different rod groups, where R_a = ⟨u′v′⟩, R_b = ⟨u′w′⟩, and R_c = ⟨v′w′⟩ (u′, v′, and w′ denote the fluctuating velocities in the u, v, and w directions, respectively). The results show that the Reynolds stress decreases with increasing flexibility; the Reynolds stress of the M4 middle layer is only 10% of that of M1. Moreover, the Reynolds stress of the middle layer is larger than that of the bottom layer, which is consistent with Ma's [42] research on wave turbulence. The middle-layer Reynolds stress is about 1.14 times that of the bottom layer in the rigid rod group (M1), and about 1.52 times in the flexible rod group (M4).
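As a companion to the definitions above, the following sketch shows one plausible way to estimate the three Reynolds stress components from the fluctuating velocities. It assumes the fluctuations u′, v′, and w′ have already been separated from the phase-averaged wave motion (for instance with the routine sketched earlier); the water density and the synthetic data are illustrative assumptions, not values from the experiment.

```python
import numpy as np

def reynolds_stresses(u_p, v_p, w_p, rho=1000.0):
    """Kinematic covariances <u'v'>, <u'w'>, <v'w'> and, by convention,
    the corresponding stresses scaled by the water density rho (kg/m^3)."""
    R_a = np.mean(u_p * v_p)   # <u'v'>
    R_b = np.mean(u_p * w_p)   # <u'w'>
    R_c = np.mean(v_p * w_p)   # <v'w'>
    return {"R_a": R_a, "R_b": R_b, "R_c": R_c,
            "tau_uv": -rho * R_a, "tau_uw": -rho * R_b, "tau_vw": -rho * R_c}

# Illustrative use with partially correlated synthetic fluctuations.
rng = np.random.default_rng(0)
u_p = 0.02 * rng.standard_normal(5000)
w_p = 0.5 * u_p + 0.01 * rng.standard_normal(5000)   # correlated with u'
v_p = 0.015 * rng.standard_normal(5000)
print(reynolds_stresses(u_p, v_p, w_p))
```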
Energy Spectrum Density
The turbulent process can be seen as a superposition of simple harmonic waves with different frequencies. Velocity spectra were computed and provide the parameters used to compute the power spectra E(n). The energy spectral density curve represents the distribution of turbulent kinetic energy within a frequency band (w, w + dw) over a steady time interval; time is the inverse of frequency. We set a 95% confidence interval on each of the spectra to eliminate the effect of noise. The high-frequency part of the energy spectrum represents quickly changing turbulence, that is, turbulence on a small time scale [43].

The energy spectral density distributions in the u direction for the different material rod groups at measuring point #5 are shown in Figure 9, and they reflect the wave energy transmission process. As can be seen from the figure, when a wave propagates through the rod groups, two energy spectral peaks exist, with the main peak larger than the secondary peak. With a reduction in material rigidity, the main peak value of the wave energy decreases, which means that the wave turbulence intensity is reduced. Furthermore, the secondary peak of M4 is the largest among all the cases. The secondary peaks are related to the swing of the rod group: the more flexible the rod group, the more obvious the secondary wave peak.
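Spectra of the kind shown in Figure 9 can be approximated from a velocity record with a standard Welch estimate. The sketch below is a generic illustration only: the sampling rate, segment length, and the chi-squared confidence bounds are assumed values, not the settings used in the experiment, and the degrees of freedom are a rough approximation that ignores segment overlap.

```python
import numpy as np
from scipy.signal import welch
from scipy.stats import chi2

def energy_spectrum(u, fs, nperseg=256, confidence=0.95):
    """Welch power spectral density of a velocity record with an
    approximate chi-squared confidence interval on the estimate."""
    f, E = welch(u, fs=fs, nperseg=nperseg, detrend="linear")
    # Roughly 2 degrees of freedom per averaged segment (overlap correlation ignored).
    n_segments = max(1, (len(u) - nperseg // 2) // (nperseg // 2))
    dof = 2 * n_segments
    alpha = 1.0 - confidence
    lower = E * dof / chi2.ppf(1 - alpha / 2, dof)
    upper = E * dof / chi2.ppf(alpha / 2, dof)
    return f, E, lower, upper

# Example: spectrum of a synthetic wave-plus-noise signal sampled at an assumed 25 Hz.
fs = 25
t = np.arange(0, 300, 1 / fs)
u = 0.3 * np.sin(2 * np.pi * 0.5 * t) + 0.02 * np.random.randn(t.size)
f, E, lo, hi = energy_spectrum(u, fs)
print(f"main spectral peak at {f[np.argmax(E)]:.2f} Hz")
```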
Wave Dissipation Effect in the Different Rod Groups
The change in wave height before and after the wave moves through the rod groups represents the attenuation of wave energy (see Table 2 and Figure 10). When the bending elastic modulus of the rod group increases from 0.11 GPa to 0.39 GPa (compare M4 with M3), the wave dissipation coefficient increases from 25.17% to 39.79%. When the flexural elastic modulus increases from 0.39 GPa to 16.56 GPa (compare M3 with M1), the wave dissipation coefficient increases from 39.79% to 40.45%, an increase of only 1.66%. M2 and M3 may happen to lie in a range that is insensitive to stiffness, which would cause the wave dissipation coefficients to fluctuate.

Overall, the wave dissipation coefficient increases with the bending modulus of elasticity of the rod group; more succinctly, the greater the stiffness of the rod group, the more obvious the energy dissipation effect. In addition, the growth of the wave dissipation coefficient is not linear in the bending elastic modulus but is sensitive within a certain range of the modulus, where the coefficient increases sharply. Once the bending elastic modulus exceeds 0.39 GPa, however, the growth of the wave dissipation coefficient becomes extremely small.

The above behavior can also be interpreted from a physical point of view. M3 and M4 obviously swing more under wave flow, M2 swings only slightly when the wave peak passes, and M1 is completely rigid and does not swing. This shows that the bending elastic modulus values of M3 and M4 happen to lie in the range most sensitive to a swing response under the wave conditions tested, that is, in the range most sensitive to a change in the wave dissipation coefficient.
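A minimal way to quantify the attenuation reported in Table 2 is sketched below. It assumes a common definition of the wave dissipation coefficient as the relative reduction in wave height across the rod group; the wave heights in the example are made-up values, not the measured data.

```python
def wave_dissipation_coefficient(h_in, h_out):
    """Relative wave-height reduction across the rod group, in percent."""
    if h_in <= 0:
        raise ValueError("incident wave height must be positive")
    return 100.0 * (h_in - h_out) / h_in

# Illustrative values only (not the measured data from Table 2):
for label, h_out in [("rigid-like", 0.059), ("flexible-like", 0.075)]:
    print(label, f"{wave_dissipation_coefficient(0.10, h_out):.1f} %")
```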
Our results showed that when waves ran through flexible vegetation mimics, the velocity period changed gradually from unimodal to bimodal. This phenomenon is likely due to the swaying of the flexible vegetation [8,31], as it is more apparent with flexible mimics. The change in turbulence intensity among the different rod groups showed that the higher the rod stiffness, the greater the turbulence intensity, a result similar to that in Reference [29]. With an increase in the bending elastic modulus of a rod group, the wave dissipation coefficient increased, which is consistent with previous studies [39,41]. However, the increase in the wave dissipation coefficient was not linearly correlated with the bending elastic modulus; it was more sensitive in a certain range of the elastic modulus than in others.

Conclusions
The bending elastic modulus was measured using a conceptual plant model built of silica gel rod groups of different stiffness. The regular-wave velocity distribution, turbulence characteristics, and wave dissipation effect of the different groups were studied. According to the results, the following conclusions can be drawn.

(1) When waves went through the different material rod groups, the peak velocity of the wave decayed. The more flexible the rod group, the smaller the peak flow velocity. At low flow velocities, the differences among the flow velocities of the different materials were not apparent.

(2) When waves went through the flexible rod groups, the velocity period gradually changed from unimodal to bimodal owing to rod group swing; the more flexible the rod group, the more obvious the secondary wave peak. With a reduction in material rigidity, the second peak value of the wave energy decreased, which was related to the flow shocks caused by the swing of the flexible rod group. With different wave periods, the swing behavior and the wave energy transmission are expected to differ, which should be studied further.

(3) High turbulence intensity existed at the front and in the middle of the rod group. This was because, when the wave entered the rod group, it propagated from one interface to another, resulting in intensified turbulence.
(4) The greater the material stiffness, the stronger the turbulence velocity and intensity. The Reynolds stress decreased with increased flexibility. Additionally, the middle-layer Reynolds stress was generally larger than that at the bottom layer.

The insights into the different patterns of wave-propagation turbulence intensity in different canopies may lead to a better understanding of coastal morphological changes under the influence of vegetation and may assist in selecting vegetation species with suitable stiffness for coastal protection purposes.

Figure 1. Model configuration and rod arrangement (from left to right; the green solid dots are the rods and the red ones are the ADV (Acoustic Doppler Velocimeter) measuring points).
Figure 2. The four materials of the flexible rods bending during the experiment.
Figure 3. Phase-averaged velocities in the u direction of the different rod groups (measuring point #5).
Figure 4. Velocity at measuring point #5 of the material 4 (M4) rod group. The dashed boxes show the secondary peaks of the wave velocity in the negative direction.
Figure 5. Turbulence intensity in the u direction in the material 1 and 4 (M1 and M4) rod groups.
Figure 6. Middle- and bottom-layer turbulence distributions of the different materials (measuring point #5).
Figure 7. Probability density distribution of the fluctuating velocity in the u, v, and w directions (at measuring point #5).
Figure 9. Energy spectral density distributions in the u direction for the different material rod groups at measuring point #5.
Figure 10. The relation of the bending elastic modulus E (GPa) and the wave dissipation coefficient.
Table 1. The bending elastic modulus of the different materials and statistics of the flow velocity peak values.
Table 2. Wave height and wave dissipation coefficient before and after the wave moves through the rod groups.
9,615.2
2019-01-10T00:00:00.000
[ "Engineering" ]
Developing sustainable models of rural health care: a community development approach
Globally, small rural communities are frequently demographically similar to their neighbours and are consistently found to have a number of problems linked to the international phenomenon of rural decline and urban drift. For example, it is widely noted that rural populations have poor health status and aging populations. In Australia, multiple state and national policies and programs have been instigated to redress this situation. Yet few rural residents would agree that their town is the same as an apparently similar-sized one nearby or across the country. This article reports a project that investigated the way government policies, health and community services, population characteristics and local peculiarities combined for residents in two small rural towns in New South Wales. Interviews and focus groups with policy makers, health and community service workers and community members identified the felt, expressed, normative and comparative needs of residents in the case-study towns. Key findings include substantial variation in service provision between towns because of historical funding allocations, workforce composition, natural disasters and distance from the nearest regional centre. Health and community services were more likely to be provided because of available funding, rather than identified community needs. While some services, such as mental illness intervention and GPs, are clearly in demand in rural areas, in these examples more health services were not needed. Rather, flexibility in the services provided and work practices, role diversity for health and community workers and community profiling would be more effective in targeting services. The impact of industry, employment and recreation on health status cannot be ignored in local development.
Introduction
Rural/remote dwellers have higher morbidity and mortality rates than urban dwellers, and restricted access to health services 1,2. Access is impeded by limited availability of services, higher costs, workforce shortages and transport problems, coupled with a disintegrating rural infrastructure 3,4. The complexity of healthcare provision is frequently acknowledged as a problem in addressing a population's health status. Issues identified include limited collaboration across sectors; vertical funding and organisation of health services; multiple program evaluation criteria; and short-term and inadequate funding 5,6. Complex policies and processes are differentially applied across the nation and there exists a lack of understanding of community context and culture 7,8. Investigations into solutions are also vertically focussed and do not incorporate an holistic approach to understanding health service delivery. Efforts to improve rural health status have largely been reactive, time limited, poorly coordinated and focussed on particular professional groups or types of disease. This has resulted in uneven levels of service provision poorly related to need 9.
The relationship between place of residence and socioeconomic status has been examined.Social geography began with Mayhew's 1861 account of the relationship between crime and other variables such as access to education 10 .This work remains current and is the basis of the World Health Organisation's report, The Social Determinants of Health 6 .It argues that life chances and health status are closely linked to an individual's environment.This knowledge should influence the provision of health services and the application of health policy in Australia.However, Vinson notes that a community's internal relations and peculiar characteristics can modify (for better or worse) the most well intentioned policy or program aimed at improving the health and welfare of a population 10 . To unravel these complex problems, a localised approach was adopted that valued community context and culture; and examined treatment, prevention, formal and informal support and education activities in two Australian rural locations. This project aimed to identify the peculiarities of two communities, noting the impact of health care on these, or vice versa.The project used a community development approach within a case study methodology.The objective was to determine the impact of policy processes and uneven service provision, and to identify innovation and collaboration that can inform new models of healthcare delivery. Needs analysis -a community development strategy In western countries, community development has a history in social practice rather than health practice, in spite of being acknowledged as a healthcare strategy 7,11 .In Australia community development approaches are more often the province of local government or community service agencies than state or federal health departments.Community development, however, offers a strategy to develop new models of health service provision that can take an holistic view of health, promote inter-sectorial collaboration, identify and evaluate innovation, and incorporate local context and culture, providing it is methodologically and ethically sound. The following section outlines the initial stage in the community development approach: needs analysis. Needs identification and needs analysis are two steps of a community development strategy, usually referred to collectively as needs analysis, which can identify problems faced by a target group 9 .The target group is linked by one or more defining characteristics, such as age, sex or location. Needs identification involves collecting information about the target group's circumstances, problems and resources.It involves making a value judgement about the relative importance of the identified needs and the way these might be met 9 .The findings of a needs analysis are vital for service planning because they can identify service gaps and barriers, service users, document ongoing disadvantage and provide leverage for advocacy activities 9 .This is a platform for community development activity. In community development work, the idealised approach is to facilitate the community's needs identification and analysis, subsequently allowing that community to develop suitable ways of meeting the needs 12 .While the ideal process is explicit, the value-laden prioritising of needs is less clear. 
The clues to whose needs are valued over others can sometimes be found in the definition of the target group or the definition of the issue. For example, Stevens, describing needs assessment processes in the British National Health Service, noted a difference between the need for health care and the need for health 13. The distinction is made because those who need health 'have problems with no realistic treatments', whereas those who need health care 'can benefit from treatment or prevention services' 13. In this example, the target group whose needs are prioritised are health-service users with identifiable conditions or diagnoses that fit within a medical model of action. Value judgements are also found in the way certain needs are prioritised over others. The word need has a moral or ethical association that implies a responsibility of others to act if the needy cannot do so. To manage the vast range of possible or potential needs of any target group, and limited resources, a system of assessing priority is required. Kretzman and McKnight 14 suggest that an asset-based needs assessment framework is less judgemental. However, that framework assesses strengths, and this project's aim was to identify contextual deficits in health and welfare services 13. The literature on needs identification commonly uses Bradshaw's typology of need, allocating different weight to each of the four categories in the analysis: (i) felt; (ii) expressed; (iii) normative; and (iv) comparative need 15. 'Felt need' is the wish list of the target group 9. There is a tendency in the literature to place less importance on this type of need because it is perceived to develop a list of wants that may not address the identified problem. For example, training needs assessments will ask about skill development required to complete tasks, rather than identifying the training workers feel they need, which may not directly support the work of the employer 16. Felt need may be called 'demand' because it is what people want or are willing to use if it were provided. 'Expressed need' is the measurement of the target group's need, via waiting lists for example 9. This can also be called
To manage the tensions between an ideal community development approach, types of need, value judgements in prioritising needs and multiple target groups, a triangulated approach to healthcare needs analysis was developed for the present project. The approach is described and the project findings relevant to the needs analysis are reported here. The project's findings on policy implementation and disadvantage will form subsequent articles.
Methods
The project received approval from the Charles Sturt University Ethics In Human Research Committee (approval number 2007/140). A case study approach entails an in-depth investigation of an area of interest; in this case, people and services bounded by a geographical location. The project was an instrumental case study of a bounded system 17. The aim was to identify what is similar about each case and also what is unique. Each case site was instrumental to understanding the issue of rural health service provision, not simply the intrinsic conditions of the site 18. Both qualitative and quantitative data were collected in a case study approach to build a picture of the case, applying the conceptual framework in analysis.
Needs identification involves a comprehensive investigation of a community's social, cultural and physical environments; available infrastructure, existing services and supports and the collaborative relationships that exist between service providers.Several processes and types of information built the community profiles including census and global information systems (GIS) data, meetings, interviews, focus groups and documents. Felt needs were defined as any statements by research participants who live in the community about anything they believed would improve their health or wellbeing, service gaps, access problems, networks and supports. Expressed needs were defined as reports by health and community service workers of waiting lists and requests for services that could not be met.Also noted were records of need identified by community and government planners, lobbyists and media.Needs reflected in policy directives or goals were also defined as expressed needs. Normative needs were defined as statements by research participants who were health and community service workers about the needs of their service users in the case study community, and about any difficulties encountered in service provision. Comparative need was defined as the difference in need between the two case study sites identified by census data, GIS mapping of facilities and infrastructure and participant reports of past development and funding of services, services available, service gaps and problems.Also noted were positive views of the community, services, networks and facilities. Sample The two research sites were in central west New South Wales and named 'Seventy' and 'Thirty'.The towns were chosen because they were superficially unremarkable small rural towns with health and welfare services including a hospital, community health centre and community agencies. Neither has a large Aboriginal or immigrant population, no major tourist attractions or industries and neither are remote from a regional centre. Seventy had a population of approximately 1750, and Thirty Data analysis Needs analysis involves identifying policy and funding impacts, service gaps, potential collaborations, evaluating innovative projects and prioritising the identified needs based on community characteristics.The analysis is informed by primary healthcare goals of improving all the population's health via health promotion and prevention activities and access to treatment; community development processes including identifying felt, expressed, normative and comparative needs; and by prioritising the needs stated by community members. The research team examined the data for statements about need, service deficits and problems from the perspective that health care is a basic human right and that ease of access is the key to upholding that right.The researchers were also keen to note any innovative and/or successful means of service delivery and positive aspects of health care in the case study towns.However, consistent with the methodology, examples from the case study sites were seen to be instrumental to health service delivery in rural areas, not just intrinsic to the site. Statements from the transcribed interview and focus group records were separated into the four needs categories in a deductive analysis and stored in NVIVO. 
Felt needs The Elderly people experienced similar problems, often requiring treatment in regional centres or urban areas for serious and chronic conditions related to aging.This group appears more likely to travel for treatment services that are never available locally, specialist medical advice and procedures and cancer treatment, for example.While aged care residential facilities were well regarded, elderly people outside these services stated they did not like to ask for services or know what to ask for. Many people with high needs for information, emotional and financial support and treatment for chronic conditions said they did not put pressure on service providers, agencies or government departments to address their needs.Reasons given for this were a lack of skills, energy and other resources to do it.However, it was also because of stoicism and the attitude well expressed by one woman (and echoed by others) of I'm not a wanter. Transport was identified as a need for some groups and at some times.Transport for work and social events was identified as a need for young people, and the risk of motor vehicle accidents was a concern for parents.Travelling to access work, social activities, school, shopping and so on is an accepted part of living in a rural area.However, schedules are frequently planned around expected trips to minimise costs and travel time. Illness was described as a drain on financial and emotional resources.Unexpected and/or frequent travel for treatment services increased costs and required significant time away from work or caring responsibilities.Some people reported limited personal resources for this.Some participants reported accessing urban services to be additionally stressful because of their unfamiliarity with and nervousness in the city.For example: Normative needs Normative needs were frequently expressed as a need of health and welfare workers rather than a need of the population.For example, it was assumed the community needed the services of the agency and the unmet need was described as more work hours or funding for positions or more people to fill vacant positions.Generally the health workforce perceive they are being asked to 'do more with less'.The need to be addressed was the 'less,' which referred to work time, professional support, additional staff and their deteriorating buildings.Health workers reported some problems in matching client needs with the service they provided.This was usually expressed as a need.For example, 'they need to parent better', 'they need to be able to read and write', 'they need financial counselling' or 'they need someone to go there everyday to feed the kids and get them to school'.Often the need was identified as the responsibility of some other profession or a service option that did not exist.Many of the challenging needs identified were linked to a number of problems, including long-term unemployment, disability, mental illness, substance abuse or dependency and limited or no literacy. 
Health worker research participants discussed the increasing complexity of cases, noting that fragmented funding and services made it difficult to support clients even when services existed.Workers reported collaborating locally by sharing information and trying to work together, but there was agreement that some problems were unable to be resolved.For example: It's not just the kid with the developmental delay who I see, it's mum with one too and Dad drinks and they've got no money but they could get an American Express card and a plasma screen TV.We all know that family, and the others like them.Where do you start? People who did actively advocate for their children or themselves and those who used multiple public services but remained socially disadvantaged were sometimes regarded as welfare dependant and overstating their needs.For example: They know how to work the system.They come here because there's no jobs and they can stay unemployed no questions asked. One exception was noted in the findings of normative needs. A community worker stated: Transport is not needed.Professionals always say transport is needed but there's heaps of transport. Comparative needs Many Seventy residents perceived Thirty to 'get more' than Seventy does.This perception arises because the local council sits in Thirty.Many Seventy research participants expressed anger about this, citing lack of an aged care hostel and limited representation on the council as evidence of being 'ignored'.For example: We send them off all enthusiastic [elected local councillors] and next minute there's nothing.It was the same in the last council before the boundaries changed.We're on the edge [of the LGA] and at the end of the line. We're the biggest town in the council and we get nothing.The town also has a hospital served by the two local GPs that participants described as meeting their needs for general treatment and assessment of illnesses and injuries.A number of research participants reported the GPs undertaking special training to meet gaps in service provision.For example: …he's a frustrated psychiatrist.Mental health is no good so he did a course and now he does all that stuff and the other doctor does the family stuff. The health centre provided comprehensive primary health services to the elderly population including social and physical activity groups.Seventy also has a 'men's shed'. This is described as a facility for unemployed and retired men to meet and engage in self-directed activities that usually involve making things from wood.The shed is funded and supported by the council, but research participants report the planning and establishment is the result of a community health worker's efforts. Thirty had a devastating flood in 2004 from which the community is still recovering.As well as an emotional cost described by research participants, the flood damaged health service sites and subsequently limited services.Community health centre staff described the premises at the hospital as inadequate in mid-2007, although community nursing and some allied health services continue to operate. 
Thirty research participants were mostly positive about the health services in the town, although many noted a longstanding and well-known dispute between the local GPs that is perceived to have a negative effect on services in the town. The effects are said to include nursing staff leaving the hospital and townspeople seeking care outside the town. A number of participants also reported being sent to a regional centre after presenting at the hospital for treatment. Childcare was identified as a need in Seventy but not in Thirty.
Health and community workers frequently described needs for more working hours or more workers. They could clearly identify ways they could improve their service delivery in the community, particularly by being proactive rather than reactive. These practices could be assumed to have a positive impact on the communities' health status. However, some more immediate needs could not be met. These needs were often related to the worker's reason for interaction with someone, but not within their role to meet. Identifying a need for support, for families, for example, is
The findings of comparative needs highlight differences in services unrelated to need. This is not surprising given the inflexibility of funding and service models and the historical development of services. What is highlighted, however, is the way distance affects service use and potential need. Research participants from Thirty were mostly willing to travel to access specialist services, for work opportunities and to get anything they couldn't get locally. While only 30 km more distant from the same regional centre as Thirty, Seventy had a more active health centre, a more active hospital, more specialist community services and more reports of problems related to transport and access to specialist and support services. It also had more socioeconomic disadvantage and reported crime, more limited employment opportunities, more reported need for services and concern about the future.
Conclusion
The overall picture developed from the needs analysis is one of poorly resourced, limited services patching up the health of their community as best they could. Active and effective services rely on the energy and experience of the workers, while the community is grateful for anything they can get and unlikely to demand more. There is no systematic way of profiling a community and identifying its needs; consequently, services and facilities vary. The part-time, fragmented services trying to do everything cannot be sustainable for the funding bodies, in spite of the limited resources allocated to them. Nor are they sustainable for the individuals in the positions, who may be isolated and frustrated at the limitations they experience. While some services, such as mental illness intervention and GPs, are clearly in demand, more health services are not needed. Rather, flexibility in services provided and work practices, role diversity for workers and community profiling to target services would be more effective. More importantly, as isolation from regional centres increases, the communities' need for industry, employment and recreational activities may do more for health than health services could.
There are two solutions suggested for developing and implementing models of health care.The first solution is to identify needs more effectively.This requires consistent and concerted efforts to collect community information in a systematic way.If all healthcare providers are involved in profiling the health and welfare needs of the population, the needs can be effectively assessed and inform planning processes. The second solution to healthcare provision is more significant and broader.The application of human rights standards to rural healthcare provision can remove political imperatives and lobbying from the funding process.Ideally this means developing long-term healthcare plans based on human rights principles that will be supported by all levels of government.This leadership model will provide structure and guidance.It will not require the most disadvantaged members of communities to lobby for care they are entitled to. had approximately 1550 residents according to local government statistics.The 2001 census statistics used in this project record Seventy and Thirty as having 1512 and 1563 residents, respectively.Both towns are in the same local government area (LGA) with the main council services located in Thirty.Seventy is on the edge of the LGA and the southern side of the town is in the adjacent LGA.Each town is surrounded by several smaller villages with populations ranging from 50 to 800 residents in an agricultural region.The nearest large regional centre to each town (population 40 000) is 38 km from Thirty and 72 km from Seventy.Another town of 8000 residents is approximately 35 km from Seventy, providing shopping, sporting and some health and welfare services to Seventy residents.Data collection GIS data mapped town facilities and infrastructure, population health data and socio-economic disadvantage trends to specific localities within a community.Services, community facilities, businesses and industry were identified.Examination of community profiles from national census data identified socioeconomic indicators of each community.This was compared with Vinson's report on Australian disadvantage 19 .Policy and funding documents from several health-focussed state and commonwealth departments mapped the parameters of health and health-related service provision available to the community.Interviews were conducted with policy advisers and managers about the role and implementation of policy in rural areas, including what services were intended to be delivered, how and where.Qualitative semi-structured interviews and focus groups were undertaken with existing community groups to identify felt, expressed, normative and comparative needs.Existing community groups were those that met regularly for work and social purposes.Groups participating included the service group Rotary, mothers' groups, a toddlers' playgroup, school staff meetings, and health and welfare workers' staff and interagency meetings.It was central to the methodology that the participating groups had community knowledge, not necessarily specific health or welfare needs or experience.Participants were asked about their experiences of health services, what unmet needs they had, and what they hoped would be available in the future.Service providers were asked an additional question about innovative service delivery models.Interviews were conducted with four individuals and eight focus groups and were held in each town during March, April and May 2007.A total of 128 community members participated in the qualitative 
data collection, 57 from Thirty and 71 from Seventy.Focus groups were held with existing community groups at their usual meeting place and time.This included local health and community service workers.An additional seven participants who provided health and health-related services, and policy development or implementation were interviewed about service provision in each town.Interviews and focus groups were recorded as minutes and on a digital voice recorder.Minutes and files were transcribed into Microsoft Word documents. Community profiles of each town from the 2001 census show similar levels of population, income, educational attainment, home ownership, ethnic and religious mix and family make-up20 .However,Vinson's report into disadvantage by postcode found that Seventy was significantly more disadvantaged than Thirty 19 .Vinson's measures of disadvantage include, among others, reported child abuse, early school leaving, long-term unemployment, imprisonment and low income.The southern part of Seventy is in a different LGA from other parts, complicating any statistical picture of the population.A closer examination of the 2001 census statistics revealed Seventy has 80 more single parent families than Thirty does, and that Seventy has approximately 200 more people on incomes below $200 per week.There is no public housing in Thirty.There is public housing in Seventy and a number of families are reported to be living in the caravan park.Thirty's caravan park has '2 or3' permanent residents, described as elderly single men.Research participants from Thirty reported little or no disadvantage in the town, perceiving that disadvantaged people moved to the smaller villages surrounding the town and further from the regional centre because rents were cheaper.Research participants from Seventy perceived some unemployment and substance abuse to exist in the town, and noted people moving from other larger towns.This was often attributed to the drug and alcohol residential rehabilitation centre established approximately 15 years ago that can accommodate up to 20 people.Workers from this centre noted that most people attending the centre had been in jail and had multiple problems to address.However, while approximately 30 of their graduates continued to live in the community, the majority of people attending the centre leave the town after leaving the centre.The centre's ex-residents were not perceived to need any particular health services.Current residents were described as frequent users of the town's treatment services, particularly the doctor and the pharmacist.They were also noted to need employment (including literacy) and social skills, although these were not provided by the centre because of funding cuts, or from elsewhere in the town.Public and private health services in Seventy are highly regarded by research participants.They described a large community health centre that provided eighteen different types of services or interventions from resident and visiting workers.The services are provided in the centre, by outreach to smaller villages, in the hospital and during home visits. 
one part of a process and does not determine the type of support that might be delivered.Availability of a type of service depends on what is supported by funding bodies and the ability of local services or the community to implement them.While holistic assessment and working together are generally agreed on principles of the local agencies and workers, competition policy and strict reporting requirements linked to service models are singular and inflexible systems of funding.This approach is not concerned with meeting local need.Nor is a cost-efficient service necessarily an effective one. two communities felt needs were similar but varied Children and elderly people were reported to have the greatest need for health services.Services frequently noted as needed for children included speech pathology and occupational therapy.For elderly people podiatry was specifically noted.Parents had difficulty accessing assessment services for children locally, usually because positions were vacant or limited to part time.Treatment services were described as even more limited.If a child had a disability or required specialist intervention it was frequently not available locally but supplied by a regional or urban service provider.Sometimes these providers visited the town, or children were expected to travel to them.
6,287.4
2007-12-07T00:00:00.000
[ "Political Science", "Medicine", "Economics" ]
A data-driven global observatory addressing worldwide challenges through text mining and complex data visualisation
Observing the world on a global scale can help us better understand the context of problems that engage us all. In this paper, we propose a data-driven global observatory methodology that puts together the different perspectives of media, science, statistics and sensing over heterogeneous data sources and text mining algorithms. We also discuss the implementation of this global observatory in the context of epidemic intelligence, monitoring the impact of the COVID-19 pandemic, and in the context of climate change, with a specific focus on water resource management. Moreover, we discuss the value of this global solution in local contexts and priorities, based on the exchange with stakeholders in municipalities, utilities and governmental institutions.
II. Introduction
The world's globalization has raised awareness of worldwide problems, such as the climate crisis, but has also led to common efforts to find solutions to those problems, as in the case of the several COVID-19 collaborative actions. Many obstacles to such global strategies still remain today, and innovative technology and data-driven solutions can help overcome them. In this paper we propose the concept of a global observatory based on text mining algorithms that is able to answer the wide range of questions that are core to global solutions, using machine learning and Big Data analytic methods over the layered information it ingests, often in real time. The main perspectives of this global observatory are: (i) the monitoring and exploration of news articles and social media feeds; (ii) the analysis of combinations of indicators through time and the stories they can tell; and (iii) the exploration of published scientific knowledge. All of these perspectives can be combined to provide complementary answers on main topics from health to engineering. In this paper we discuss the results obtained from two implementations of this observatory approach: (a) the Coronavirus Watch portal released in 1 , addressing the worldwide spread of COVID-19 2 ; and (b) the NAIADES Water Observatory 3 , focusing on best practices to build water sustainability.
III. Methods
Taking into account the schema in Figure 1, we consider the construction of the Global Water Observatory in phases, going from lower to higher complexity. A similar observatory, dedicated to monitoring COVID-19 2 , was made available with less functionality but also including a diversity of perspectives whose interoperability is a core topic of discussion in this paper. We start by putting together data sources that are meaningful to a range of stakeholders, from engaged citizens to decision makers, and who can leverage the information provided to establish evidence-based policies.
At the data collection phase, we are concerned with properly addressing the challenges posed by the heterogeneous nature of the data, their different frequencies and sizes, as well as the levels of access established by the data providers. Taking these parameters into consideration ensures the appropriate ingestion of the data into the system. The selection of data sources and features to be ingested is done manually, but the ingestion itself is automated when the update frequency requires it. The frequency of update depends solely on the data provider. At this first stage we are collecting data from many different data sources (e.g., worldwide news, the Microsoft Academic Graph, the World Bank, and the United Nations Sustainable Development Goals), according to their relation to the focus topics and priorities. The next stage covers data cleaning, data processing and data integration prior to ingestion. This step is highly important to achieve the data quality needed to obtain useful insights. It includes data curation, where the most meaningful datasets are selected and parsed, as well as exploratory data analysis and some data visualisation for the purpose of prototyping what is later made available in the Water Observatory. The Observatory phase is then possible when the curated data streams of a selection of dynamic data sources are live in the system and can be used to obtain insight on particular topics of interest, monitor Key Performance Indicators associated with business priorities, and allow for a global and local perspective on related topics. These include interactive data visualisations of indicators and statistical data, a dynamic view of the news sources organised by priorities, and user queries over scientific research topics. This allows for insight on the topics under analysis (water topics such as water scarcity and water quality, and public health topics such as Ebola or the new coronavirus), which can then be put into the context of local data reflecting the shared interests of users. The path ahead is a novel concept of a meaningful Digital Twin (i.e., a dynamical model which, given a current state of an observed system, is capable of a digital partial reconstruction of such a system) that builds on the Global Water Observatory to rise above data complexity towards data interoperability. This is usually difficult to achieve in full due to the heterogeneity of the data, the different characteristics of the data sourced (frequency, data types, etc.) and the domain knowledge needed to identify new challenges covering a wide range of business intelligence priorities. Nevertheless, useful aspects of it can be achieved, some of which are already evident from the implementations discussed in this paper. An example of this is to track a topic in the news and its impact on social media, and to explore the range of the problem in the published scientific research, as well as to extract good practices to deal with this problem.
We add a final stage to this diagram that is usually forgotten in a theoretical framework, which is the adaptation of the system to the needs and priorities on the user side. Here we consider the ingestion of local data, the customization of news streams, the availability of exploratory dashboards, the shareable instances for policy makers, and the APIs for 3rd party integration.

The system that is able to access the data sources that relate to the items above is also able to track a term throughout the several phases of popularization. It is also able to show the current status of a particular topic of interest and, optimally, alert for potentially trendy topics in the future. In that particular context, the interactive data visualisation is a key factor to improve the usefulness of the tool and should express visual narratives that comprehend the relevant aspects of the problem. Good examples of such can be consulted for the case of epidemic intelligence in 4 and water intelligence in 5, and will be discussed in the implementations described and explored for the purpose of this study.

IV. On the Coronavirus Watch Observatory
When the World Health Organization (WHO) announced the global COVID-19 pandemic on March 11th 2020 6 , following the rising incidence of SARS-CoV-2 in Europe, the world started reading and talking about the new coronavirus. The arrival of the epidemic in Europe scaled up the volume of news published about the topic, while public health institutions and governmental agencies had to look for existing reliable solutions that could help them plan their actions and the consequences of these. Technological companies and scientific communities invested efforts in making available tools (e.g. the GIS 7 later adopted by WHO), challenges (e.g. the Kaggle COVID-19 competition 8 ), and scientific reports and data (e.g. the repositories medRxiv 9 and Zenodo 10 ).

In March 2020 we released the first implementation of this global observatory as the Coronavirus Watch portal 1 , aiming to contribute to a multinational response to the global crisis. It was made available by the UNESCO AI Research Institute (IRCAI), comprehending several data exploration dashboards related to the SARS-CoV-2 worldwide pandemic. This platform aimed to expose the different perspectives on the data generated and trigger actions that can contribute to a better understanding of the behavior of the disease (see Figure 2).

The portal includes a real-time news monitoring system that can be focused at European and national level, side-by-side with the data on the progress of the pandemic made available by the Worldometer 11 and the Center for Disease Control 12 . The visual representation of the details of those indicators was made available over animations showcasing the live comparisons in 5D (as in Figure 2), the trajectories of the most affected countries, and the details of the progression of the disease. It also included perspectives on mobility, sourced from the Google Community Mobility data 13 , a social distancing simulator, and exploration tools based on the published biomedical research (see 2).
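As an illustration of how a real-time news monitoring stream such as the one just described could flag the "potentially trendy topics" mentioned earlier, the following is a minimal sketch. The window size, threshold factor and mention counts are assumptions made purely for demonstration, not parameters of the actual portal.

```python
# Illustrative sketch of flagging a "potentially trendy" topic from daily
# mention counts. The window and threshold values below are assumptions
# for demonstration, not values used by the observatory.
from statistics import mean

def is_trending(daily_counts: list[int], window: int = 3, factor: float = 2.0) -> bool:
    """Flag a topic when the mean of the last `window` days exceeds
    `factor` times the mean of the preceding baseline period."""
    if len(daily_counts) <= window:
        return False
    baseline = mean(daily_counts[:-window]) or 1e-9  # avoid division by zero
    recent = mean(daily_counts[-window:])
    return recent / baseline >= factor

# Example: daily mentions of a term such as "water scarcity" over ten days.
mentions = [4, 5, 3, 6, 5, 4, 5, 12, 18, 25]
print(is_trending(mentions))  # True: the last three days clearly exceed the baseline
```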
To improve the resolution of these results and to optimise their relevance for European public health agencies, we have developed a set of COVID-19 focused tools on the MIDAS platform 14 . This system was designed for evidence-based decision-making in public health. This approach allows us to validate the usability of the global observatory on a cross-EU level within the COVID-19 context, integrating both health news and biomedical research exploration (see Figure 3).

V. On the Water Observatory
Climate change is a global problem that in recent years has been the focus of European and worldwide strategies. The priorities in the European Union are rapidly changing towards sustainability and environmental efficiency, transversely to most domains of action. The European Commission's Green Deal, aiming for a climate-neutral Europe in 2050 and boosting the economy through green technology 15 , provides a new framework to understand and position water resource management in the context of the challenges of tomorrow 16 . In this context, the NAIADES project 17 aims to improve water resource management in a global context, including European regions where water scarcity is predicted, also dealing with concerns such as, e.g., saline intrusion and groundwater contamination. To contribute to this cause, we deployed a Global Water Observatory 3 that is focusing on water-related aspects, allowing the user to explore the several layers of information it is providing, from news and social media to published science, weather models and indicators. The NAIADES Global Water Observatory does not only contribute to the improvement of European sustainability in water-related matters, but also gives the local actors in water resource management an active role in that, taking into consideration the national and international open data available on water resource management-related topics and priorities (see 18 for more details).

Water is fundamental to all human activity and ecosystem health, and is a topic of rising awareness in the context of the recent discussions on climate change. Water resource management is central to those concerns, with industry accounting for over 19% of global water withdrawal and agricultural supply chains responsible for 70% of water stress 19 . In 2015 the UN established "clean water and sanitation for all" as one of the 17 Sustainable Development Goals, aiming for eight targets to be achieved by 2030 20 . The UN secretary-general pointed out in April 2020 that SDG 6 is "badly off track", compromising the progress on the 2030 Agenda 21 . As noted by the Organisation for Economic Cooperation and Development (OECD), the 'water crisis' has often proven to be a crisis of governance 22 , where water scarcity is largely caused by mismanagement of resources, leading to a global prioritisation 23 .
The intention to globally monitor water resources is not new: already in the late 1960s 24 the first spatially-distributed water resources model appeared, with the first operational uses of satellite observations in water resources developed in the early 1980s 25 . The reliable management of water resources is only possible when adequate qualitative and quantitative information about the state of the water body is available at any moment in time. Taking advantage of the recent technological progress enabling much innovation that was unthinkable a few years ago, the concept of the Digital Twin is increasingly entering the water sector as an innovation driver. Due to the rapidly growing awareness of the sustainability challenges that we are facing in Europe and worldwide in the context of water resource management, much work has been done to develop systems that are able to collect information about the available water and even simulate and forecast it in the near future. These are usually geolocation-based systems ingesting water-related data to enable real-time monitoring of resources and usage 26 . The other typical approach is systems focused on workflows in the water sector, including the management of water distribution networks, hydraulic efficiency or leak/fraud detection, better suited to those companies that already have their infrastructure in place and know well what they want to monitor 27 .

The approach we propose in this paper is novel in many ways. The news monitoring perspective tracks water scarcity and water quality worldwide and, in particular, in the regions surrounding the water resource management agencies it is mainly addressing, together with their audiences. It also includes a Twitter observatory which, added to the already implemented measure of the impact of the monitored news on Facebook, gives the observatory a social media component. The global indicators that are already available for visualisation, sourced from the UN Open Data Portal, the water-focused Sustainable Development Goal 6, and the World Bank Data Portal, can help us understand water-related aspects of the climate crisis.

The important role of scientific research in this context, and the best practices that can be extracted from this data, is explored with a complex data visualisation technology that allows the user to run powerful Lucene-based queries over the articles' metadata and to refine the search by moving a pointer over clusters of related topics (see Figure 3). We will also be including other data analytics technologies to analyse multiple time-series simultaneously, providing interactive exploration tools to understand trends in the weather and water-related impacts of it.

The localisation of this global system entails the customisation of its functionality in news monitoring, the ingestion of local indicators and the exploration of scientific research on observed problems in, e.g., groundwater contamination. In that, the observatory is synchronising with the priorities of regional water providers. These agencies (e.g. Aguas de Alicante) are collecting data on their water resource management services to improve customer satisfaction and optimise their system.
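To make the idea of querying article metadata more concrete, here is a toy keyword index standing in for the Lucene-based queries mentioned above. The sample records and the simple AND semantics are invented for illustration; the real system works over a much richer metadata store with cluster-based refinement.

```python
# A toy keyword index over article metadata, standing in for the Lucene-based
# queries mentioned above. The sample records are invented for illustration.
from collections import defaultdict

articles = [
    {"id": 1, "title": "Groundwater contamination monitoring with sensor networks"},
    {"id": 2, "title": "Saline intrusion in coastal aquifers"},
    {"id": 3, "title": "Deep learning for water quality forecasting"},
]

index: dict[str, set[int]] = defaultdict(set)
for art in articles:
    for token in art["title"].lower().split():
        index[token].add(art["id"])

def search(query: str) -> list[dict]:
    """Return articles whose titles contain every query token (AND semantics)."""
    tokens = query.lower().split()
    hits = set.intersection(*(index.get(t, set()) for t in tokens)) if tokens else set()
    return [a for a in articles if a["id"] in hits]

print([a["title"] for a in search("water quality")])  # -> the forecasting article
```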
VI. Conclusions and future work
The results discussed in this paper show the potential impact of the proposed data-driven global observatory in contexts like public health and climate change preparedness. This integrated system is capable of monitoring in real time the worldwide news and social media, statistics, published science, weather and many other data streams that are identified as useful and can provide complementary value to those already considered. We will be deploying this system in the context of other global problems where there is enough data to provide a useful and meaningful contribution, either in other aspects of the climate crisis to better plan the response, in addressing other epidemiological concerns to serve as early warning, or in addressing a new focus in the context of data science for social good.

We are now working on extending this system to integrate the information retrieved from the topics searched over the internet, provided by Google Trends, regarding issues related to the context in focus. The user will be able to explore a wide range of indicators and compare trends at a global and local level throughout a meaningful timeline. We will also be reusing EC-funded open datasets and initiatives in order to ingest this information as European-level indicators to complement the analysis. Furthermore, we will be further investigating the validity of the localization of this Global Water Observatory, integrating some of the local data that can be provided by the user, and customizing news sources to their own priorities, as well as making available data exploration dashboards that allow for further insight and evidence-based policy.

VII. Ethics and consent
Ethical approval and consent were not required.

VIII. Data availability
For this paper, we used only open data. In particular, we used the MEDLINE dataset 28 and the worldwide news collected by 29, which are freely available online but which we do not have permission to share as a dataset.
Figure 1. The approach used leading from data sensing to the digital twin and its approximation to local priorities.
Figure 3. Exploring scientific research through complex data visualisation.
References (excerpt)
17. Costa JP, Massri MB, Novalija I, et al.: Observing Water-Related Events for Evidence-Based Decision-Making. In: Proceedings of the 2021 Slovenian KDD Conference. Institute Jozef Stefan. Reference Source
18. Varady RG, Albrecht TR, Gerlak AK, et al.: Global water initiatives redux: A fresh look at the world of water. Water. 2022; 14(19): 3093.
28. National Library of Medicine: MEDLINE: Description of the Database. 2022. Reference Source
29. Institute Jozef Stefan: IJS Newsfeed: a clean, continuous, real-time aggregated stream of semantically enriched news articles from RSS-enabled sites across the world. 2022. Reference Source
5,549.6
2022-05-30T00:00:00.000
[ "Computer Science" ]
Development of a dynamic model of transients in mechanical systems using argument-functions There are a number of applied problems in which it is necessary to take into account the dynamic component of the process or phenomenon including the fact that the load is applied not instantaneously but in time. For example, in continuous rolling, such combinations of mechanical systems appear in which action transfer from one rolling stand to another via the strip proceeds with some delay affecting transient processes and the strip gripping capability in the adjacent stands of the continuous mill. The strip between the mill stands is in an elastic state. When the rolls start acting on it during the bite in the next stand, they transfer disturbance to the strip in a form of oscillations or in a form of a stationary action. The aim of this research was to expand the application field of the obtained solutions to satisfy boundary and initial conditions formulated by applied production problems. The wave problem was considered as the process of propagation of the initial deviation and initial velocity. On the basis of the method, the essence of which is the use of argument-functions, solution of dynamic linear and spatial problems of the elasticity theory was shown. In the course of the study, conditions for existence of new solutions for the wave problem, which are limited by the boundary conditions of various processes were shown. The initial differential equations and boundary conditions determine the type of differential equations for the argument-functions that close the solution. Argument-functions can be restricted by the Cauchy-Riemann relations and the corresponding differential invariants on the one hand and the differential relationships which result in that the argument-functions are the same for adjacent coordinate-time dependencies on the other hand. Besides, analytical dependences on the parameters entering into the d’Alembert formula were obtained. Introduction There are a number of applied problems in which it is necessary to take into account the dynamic component of the process or phenomenon including the fact that the load is applied not instantaneously but in time. For example, during continuous rolling, there are such combinations of mechanical systems in which action transfer from one rolling stand to another via the strip occurs with some delay. This is reflected in the transient processes and the strip gripping capacity in adjacent continuous mill stands. The strip between the mill stands is in an elastic state. When the rolls start acting on it during gripping in the subsequent stand, they transmit disturbance to the strip in a form of oscillations or in a form of a stationary action. In this period, strip gage variation appears reducing dimensional accuracy of the rolled product, i. e. the product quality worsens. Literature review and problem statement It is of practical and theoretical interest to consider the wave problem as the process of propagation of the initial deviation and the initial velocity. At the same time, a need of defining general schemes (both linear and spatial ones) of solving dynamic problems arises. Reference book [2] outlines general approaches to solution of the simplest dynamical problems. In one of the classic papers [3], loads that vary in time are considered. In solving the dynamical problem, unknown scalar functions φ and ψ were introduced for consideration. Their choice is determined by solving differential equations of the form: с . 
t ∂ ψ ⋅Ñ ⋅ ψ − = −ψ ∂ Each differential equation corresponds to a certain type of waves. In seismology, such waves are called primary and secondary waves: wave P and wave S (shear wave), respectively. It should be emphasized that the vector solution takes place when the selected functions satisfy the reduced differential equations and actually are argument-functions. However, analysis shows that this is not enough for a number of applied problems. It is necessary to establish a differential relationship between these scalar argument-functions. Solution of the wave equation also assumes dependences on coordinates and time, which ensure a smaller Rayleigh wave amplitude according to the exponential law [3]. Such a structure of solution can be useful in considering dynamic problems in the field of equipment for continuous rolling mills during the roll bite when the pulse action of the rolls applied to the elastic strip in the inter-stand space is represented as a variable in time and space. It It is noted in monograph [4] that the problem in the dynamics of the elasticity theory consists in formulation of the boundary problem variants and assessment of their application field. Homogeneous and inhomogeneous solutions of dynamic problems for hollow bodies were proposed. Although the work repeatedly emphasized the diversity of dynamic problems in the elasticity theory and the variety of boundary conditions to which the new solutions correspond, the solutions were restricted by the dynamic problem for hollow bodies. Another approach to the determination of solutions of the dynamic problem for boundary elements with distributed loads was presented in publication [5]. Influence of impact loading on the indices of the stress-strain state of an elastic medium was shown. The method of boundary integral equations was used. It is indicative that the author limited himself by solution of an applied problem, found necessary theoretical basis and obtained the result acceptable for practice. In this case, there are no recommendations for using the proposed solution in applied problems with other or similar boundary conditions. Quadratures of solutions for the third dynamic problem with mixed boundary conditions were constructed in work [6]. Displacements are specified on one part of the surface, and forces on the other part. Construction of the solution extends the possibilities of its use in applied problems with mixed boundary conditions but such generalizations are not enough for their use in the first and second dynamics problems for boundary conditions associated with a variable damped effect on the elastic medium. Work [7] describes algorithms of the R-function method for solving dynamic problems in the elasticity theory for bodies of finite dimensions deformation of which proceeds in an elastic region. Using the theory of R-functions, the problem of constructing coordinate (trial) functions was solved constructively which made it possible to open the possibilities for practical application of the variational and projection solution methods. Variational and difference methods are used to search for new structures introduced into consideration. As the authors suppose, a universal toolkit is presented that allows one not only solve mechanics of the deformed solid and the problems of mathematical physics but also the problems having relation to the development of new technological processes. 
Very attractive is the fact that a powerful mathematical apparatus for generalizing results was proposed and that it can be applied in finding new solutions for dynamic problems of the elasticity theory. It is necessary to clarify some details of the proposed approach. Coordinate (trial) functions were introduced; they can play the role of the proposed argument-functions. However, their definition by variational or other methods may not be the only option. Less fundamental but more intuitive and practical options are possible.

Spatial self-oscillations of orthotropic plates were considered in the presence of an internal viscous resistance proportional to the velocity of the points of the medium [8]. By applying the asymptotic method, equations for the longitudinal and shear oscillation frequencies were obtained. The use of new boundary conditions determines a new result, and the corresponding equations of longitudinal and shear oscillation frequencies follow from it. The discussed example is a partial result which does not extend to solutions for other boundary conditions.

The problem of determining stresses on the boundary of an elastic half-space from given displacements was shown in [9]. The solution was found by the method of Laplace and Fourier integral transforms. The initial data were reduced to a system of three integral Fredholm equations, and numerical solutions were obtained. Introducing the transforms allows one to approach the problem by a numerical method, as one of the variants of the problem under consideration. Variation of the boundary and initial conditions changes the solution approaches and the result obtained. However, there are no generalizing relations superimposed on the closing equations that would yield a series of partial solutions and allow a broad analysis of the boundary conditions of new applied problems.

In work [10], the abilities of another method are considered: the method of potentials and the theory of multidimensional singular integral equations for solving three-dimensional stationary and non-stationary boundary problems in the theory of elasticity.

Classical solutions of a linear dynamical problem are presented in work [11]. The linear wave equation has the form:

$$\frac{\partial^2 u}{\partial t^2} = a^2 \frac{\partial^2 u}{\partial x^2}. \quad (1)$$

The equation of characteristics for (1) is $dx^2 - a^2\,dt^2 = 0$, which gives two families of characteristics and the new variables

$$\xi = x - at, \qquad \eta = x + at.$$

In the new variables, the oscillation equation (1) is converted to a simpler expression:

$$\frac{\partial^2 u}{\partial \xi\,\partial \eta} = 0. \quad (2)$$

The common integral of equation (2) is:

$$u(x,t) = f_1(x - at) + f_2(x + at). \quad (3)$$

Taking into account the initial conditions $u(x,0) = \varphi(x)$ and $u_t(x,0) = \psi(x)$, expression (3) is transformed to the form:

$$u(x,t) = \frac{\varphi(x - at) + \varphi(x + at)}{2} + \frac{1}{2a}\int_{x-at}^{x+at} \psi(\alpha)\,d\alpha. \quad (4)$$

Expression (4) is called the d'Alembert formula. It should be emphasized that the fraction on the right side of (4) is a function of both the coordinate and the time. In this case, the function φ is not defined, which makes it impossible to instantiate the boundary conditions of the applied problem.

A method of separation of variables, or the Fourier method, is also known. The solution is represented as:

$$u(x,t) = X(x)\cdot T(t).$$

Substitution of the proposed form of solution into (1) gives ordinary differential equations for determination of the X and T functions:

$$X'' + \lambda X = 0, \qquad T'' + a^2\lambda T = 0.$$

A partial solution is known:

$$u_n(x,t) = X_n T_n = \left(a_n\cos\frac{n\pi a}{l}t + b_n\sin\frac{n\pi a}{l}t\right)\sin\frac{n\pi}{l}x. \quad (5)$$

Expression (5) satisfies the boundary and initial conditions and equation (1). In the general case, by virtue of linearity and homogeneity, the sum of the partial solutions of (5) is:

$$u(x,t) = \sum_n u_n(x,t). \quad (6)$$

The constants of integration in (6) are determined by the boundary conditions of the problem.
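As a quick sanity check on the classical formulas reconstructed above, the following SymPy sketch verifies that both the general integral (3) and the Fourier mode (5) satisfy the wave equation (1). The functions f1, f2 and the mode number n are arbitrary placeholders introduced for this check, not quantities from the cited works.

```python
# Symbolic check (SymPy) that the classical solutions above satisfy u_tt = a^2 u_xx.
import sympy as sp

x, t, a, l = sp.symbols("x t a l", positive=True)
n = sp.symbols("n", positive=True, integer=True)

f1, f2 = sp.Function("f1"), sp.Function("f2")
u_dalembert = f1(x - a*t) + f2(x + a*t)                                   # general integral (3)
u_fourier = (sp.cos(n*sp.pi*a*t/l) + sp.sin(n*sp.pi*a*t/l)) * sp.sin(n*sp.pi*x/l)  # one mode of (5)

for u in (u_dalembert, u_fourier):
    residual = sp.simplify(sp.diff(u, t, 2) - a**2 * sp.diff(u, x, 2))
    print(residual)   # both residuals simplify to 0
```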
The class of functions that define boundary and edge conditions for the expression (6) is limited. There are difficulties in using solution (6) in practical problems. It is necessary to develop approaches enabling determination of conditions for existence of several solutions corresponding to the specified boundary and edge conditions of various applied problems. Work [12] expands the scope of solutions but it does not show convincing general schemes for determining required dependencies taking into account practical diversity of the initial and boundary conditions. Monograph [13] does not show the abilities of using complex initial and boundary conditions determining the non-stationary action on the strip such as ones occurring, e. g. during rolling in adjacent continuous mill stands. In applied work [14], non-stationary problems are considered with reference to the metal forming equipment. However, the rolling features in the transient processes associated with loading during the period of strip gripping are not disclosed. It should be mentioned that one of the first works where the method of solving applied problems using argument-functions was applied was paper [15]. The solution is presented in the theory of plasticity with no consideration of the loading dynamic component. The method of argument-functions with examples from the applied theory of plasticity and elasticity was given and generalized in [16], but the possibility of its use in solving dynamic problems of the theory of elasticity was not demonstrated. Complication of the problem with the use of argument-functions [17] indicates potentials of the method but the dynamic problem was not considered. The first generalizations of the dynamics results were given in [18] but their further use for various boundary conditions and obtaining a new result was not stated. As analysis of the papers presented shows, there is a wide use of various approaches and methods for solving dynamic problems in the elasticity theory. These include the method of potentials, the method of integral transformants, variational and asymptotic methods. Besides, the theory of R-functions, the method of boundary integral equations, d'Alembert method, Fourier method, the method of argument-functions, etc. can be mentioned. Most of them are used to solve specific problems while having no mathematical generalizations for their further use. In work [4], one of the most important problems of the dynamic theory of elasticity was emphasized: it is formulation of variants of boundary problems and estimation of the field of their application. Practical implementation of such approaches broadens the possibilities of using the resulting solutions by linking and selecting for them boundary conditions of various technological processes and equipment operating conditions. In the course of their development, works appeared which implement such generalizations and algorithms and not just dynamic tasks, e. g. papers [3,6,7,9,15]. Scalar functions [3], R-functions (tested) [7], integral transformants [9], argument-functions [15] are used, which determine not the functional dependencies themselves but the conditions for their existence. In the latter case, the stages of the further, closing solution are considered. But this is just another problem. In the first approximation, one can restrict his attention to solving the simplest invariant differential relations. They are of interest since in many respects they, as a special case, coincide with the known classical solutions. 
As it follows from works [3][4][5][6][7][8][9][10][11][12][13][14][15][16][17][18], application of argument-functions involves introduction into consideration of the main functional dependences, which can include exponential, trigonometric, hyperbolic, logarithmic, complex, etc. functions. Their arguments are also functions depending on coordinates and time. These closing unknown coordinate-time relationships are determined in the process of solving the problem, more precisely, conditions for their existence are found. Ultimately, the conditions for the existence of a number of basic functions of the problem being solved are determined. With this statement of a question, it becomes possible to define a whole class of unknown functions the implementation of which expands abilities of the method under consideration, which has been repeatedly emphasized in the course of analysis. Examples of successful use of such approaches are listed in the list of literature in question. The study objective and tasks This work objective was to determine common approaches or conditions of existence of various solutions, which are determined by differential equations of the dynamic problem and boundary conditions. To achieve this objective, the following tasks were set: -development of general approaches to solution of differential wave equations using argument-functions, various boundary conditions of problems in the theory of elasticity; -definition of conditions for existence of trailing partial solutions using invariant differential relations and equations for argument-functions; -development of a dynamic model of the transient processes taking place during rolling in adjacent continuous mill stands. 1. Approaches of analytical solution of dynamical problems of the elasticity theory with the use of argument-functions The totality of solutions of concrete differential equations is a practical necessity with the purpose of choosing the mathematical model that satisfies to the right degree desired boundary conditions of the problem. Considering their diversity, which is associated with the variety of applied problems, problems arise in obtaining the necessary solution for the initial data. In this case, it is expedient to obtain not the concrete result of solution but the conditions for its existence, i.e. determine those restrictions that are imposed on the functions from the side of differential equations and boundary conditions. Thus, the final result is not the functions themselves but the conditions for their existence, in other words, the invariants of the differential relations of the argument-functions introduced into consideration. This approach is partially described in literature. However, the possibilities of transition from one boundary condition to another are not indicated. Some examples of using argument-functions are given thereinafter. Solution of a differential equation of hyperbolic type: was proposed to perform with the help of unknown θ, AF argument-functions introduced into consideration [9-12]: С exp sin , τ = ⋅ θ⋅ ΑΦ where θ and AF are the argument-functions of the deformation-zone coordinates. In this case, differential constraints on the argument functions introduced into consideration are shown: The conditions themselves determine the type of equations that must be used to find the unknown coordinate dependences θ, AF. 
The simplest option of solution of the Laplace equation for the AF function is: It should be emphasized that the argument-functions are determined not only by Laplace's equations but to a greater extent by differential relations between adjacent dependencies as well. The last expression θ is also a harmonic function satisfying the Laplace equation. The same approach was used to solve the wave equations of the elasticity theory. Restrictions are imposed on the side of the differential equations themselves and boundary 2. Solution of a dynamic problem using argumentfunctions Solution of the dynamic problem in analytical form was presented in [18]. With the help of argument-functions, solution of a linear dynamic problem of a limited application was presented. Use the approaches formulated in these papers, write a fairly simple relationship and introduce argument-functions θ, AF into consideration: where C and A are constants characterizing the process; Θ, Φ are unknown argument-functions of time and coordi-nate, continuous, having second derivatives in time and a corresponding coordinate. Substitution of expression (7) in (1) gives a differential equation of the form: where following notations were in parentheses t , t , etc. Further analysis shows that equation (8) will be substantially simplified if nonlinearity is eliminated and the brackets taken equal to zero, i.e: In this case, all the summed operators on the left-hand side will be zero. On the basis of the result obtained, taking into account (9), solution can be presented in a more general form: provided that the following relations exist for the argument-functions: The differential relations (9), (11) differ by signs from the above Cauchy-Riemann relations. Hence, differential constraints of the functions introduced for consideration change with the change in the form of the differential equations. The trailing constraints (11) define the basic solution (10). Besides, the unknown argument-functions become known for the shown differential equations. The final result can be represented as a superposition of solutions: (12) resembles Fourier solution since a product of trigonometric functions takes place. However, there are a number of fundamental differences. The arguments of trigonometric functions are functions as well not of one variable as in the method of separation but of two variables. In addition, the differential dependencies of solution (12) show the variants of new solutions without being tied to a specific result but to the boundary conditions of the process. The next version of solution of the linear wave equation (1) can be applied to other boundary conditions, e. g. to damped, periodical influences on the elastic medium. Consider the following variant of partial solution using argument-functions. In this case, the θ argument-function is not in the trigonometric, but in the exponential dependence for the displacement u. where the argument-functions θ, AF are to be determined by solution of the problem. Substitute (13) into (1) There are operators in square brackets: Consider the case when the parentheses of the first operator are zero, that is, (14) also becomes an identity when conditions of (16) are satisfied. Variant 3: In this case, signs between the differential relations are opposite. For the second derivatives in the defining differential equations, the identity is maintained but the identity does not hold in the third one. Really: The relationships (17) do not satisfy equation (14). 
Finally, taking into account the first two variants, the following can be written: provided there are solutions for the argument-functions: 1x 1t a , θ = ±θ Proceed from partial solutions (13), (18) to a general solution in the form: under condition that: Comparison of solutions (7) and (13) shows that the argument-functions satisfy the same type of differential equations (12) and (19). However, the differential relations between the adjacent ones in solution are different. In many cases, this feature is defining and basically distinguishes the solutions shown. Following the trends in developing methods for solving dynamical problems as applied to changeable boundary conditions, consider solution of a more complex problem, the spatial problem [19]. Wave equation for the spatial problem has the form: A unified approach to the solution with the use of argument-functions was formulated above. Represent the general solution u in a form of a superposition of partial solutions u i that simultaneously depend on one of the coordinates and time. The following is obtained: Function: 1 1 1 f (x,t), Α Φ = 1 2 f (x,t), θ = f (y,t), θ = 3 3 5 f (z,t), Α Φ = 3 6 f (z,t). θ = Take the derivatives of (21) taking into account the functional dependences: Substituting (22) Eliminating nonlinearity in (26), obtain variants of simplifications with different signs: 1. ( ) With brackets eliminated, expression (26) will be written as: On the one hand, simplification of equation (26) has taken shape, on the other hand, constraints on unknown argument-functions appeared in the form of (27), (28). Second derivatives of equation (26) are determined from (27) and (28) in the variants: Using (30) and (31), define square brackets in equation (29). It can be shown that they are zero. Indeed, subtraction of the second derivatives gives defining differential relationships of the form: Substitution of (32) into (29) results in a further simplification of the problem. Solution of the partial differential equation (20) is represented as: u C C sin C cos C sin C cos C C sin C cos C sin C cos C C sin C cos C sin C cos , under conditions: ( ) Conditions for existence of various solutions in the form of defining differential equations of hyperbolic type for the arguments of trigonometric functions were shown. In general form, solution (34): C C sin C cos C sin C cos C C sin C cos C sin C cos C C sin C cos C sin C cos under conditions: Thus, the result (35) which was obtained is a superposition of the flat coordinate-time solutions. In this case, each pair is determined by its differential constraints on the argument-function. In this case, complication of the problem is a kind of a generalizing factor of the proposed approach. Comparison of the study results Validity of the presented result is determined not only by finding a solution that satisfies the conditions of the problem but also by comparing it with the solutions already known in literature, assessed and tested. Consider a list of solutions with the use of argument-functions for a flat problem of the plasticity theory, namely (7) and (13). Plasticity theory Restrictions on expressions (36): u C exp C sin A C cos A С exp C sin A C cos A . Restrictions on the expression (38): 1x 1t a , θ = ±θ The following results of solutions are obtained using differential relationships between adjacent argument-functions in all above variants (36)-(38). For each variant, the simplest schemes of solutions of invariant differential equations were considered. x y 2 x y x 1 x y 2 0. 
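To illustrate the kind of verification referred to here, the sketch below checks symbolically that a simple harmonic pair of argument-functions satisfies Laplace's equation and Cauchy-Riemann-type relations. The specific pair AF = x·y and θ = (x² − y²)/2 is assumed only as the simplest illustrative example; the paper's own coordinate dependences are more general.

```python
# Symbolic check that an assumed simplest pair of argument-functions is harmonic
# and satisfies Cauchy-Riemann-type relations between adjacent functions.
import sympy as sp

x, y = sp.symbols("x y")
AF = x * y
theta = (x**2 - y**2) / 2

laplacian = lambda f: sp.diff(f, x, 2) + sp.diff(f, y, 2)
print(laplacian(AF), laplacian(theta))                      # 0 0  -> both functions are harmonic
print(sp.simplify(sp.diff(theta, x) - sp.diff(AF, y)),      # theta_x = AF_y
      sp.simplify(sp.diff(theta, y) + sp.diff(AF, x)))      # theta_y = -AF_x
```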
y There can be another, more complicated solution for the AF function: 1 3 x y x y . 4 2 Substitute argument-functions into Laplace's equations to see that they are identically satisfied: It is possible to obtain a more complex coordinate dependence for the AF function: ( ) 2 2 6 13 x y x y x y ΑΦ = ΑΑ ⋅ ⋅ ± ΑΑ ⋅ ⋅ ⋅ − and so on. Multiple calculations of the stressed state of metal in the processes of plastic metal working showed qualitative and quantitative convergence of the presented results with experimental data and data of other authors [16,17]. It can be seen from the last examples that there can be many solutions of the same differential equations and for each of them there are certain boundary or initial conditions that must also be ensured. Taking into account different boundary conditions for different applied problems, it becomes possible to choose the required solution provided with the method of argument-functions. As noted above, the same problem is encountered in solving dynamic problems of the theory of elasticity. Linear dynamic problem with basic trigonometric functions Expression (37) can be simplified and reduced to the form: then one of the variants of the solution for the AF argument-function will be written as: 1 2 x t. ΑΦ = ΑΑ ⋅ + ΑΑ ⋅ Taking into account differential relationships, pass to θ function: 2 a x f(t). ⋅θ = ΑΑ ⋅ + or x f(t). ⋅θ = ΑΑ ⋅ + Eventually: Expressions for argument-functions satisfy differential equations of hyperbolic type (18), (19) and relationships between adjacent functions. In further simplification, take AA 2 =0 to obtain: Demonstrate now that the expressions (40) are trailing solutions for the functions obtained by the method of separation of variables or by Fourier method. Taking the boundary conditions same as in [11] and relationships (18), (19) presented above, the following is obtained: In this case, the differential relationships (18), (19) are also sustained, that is, which corresponds to the solution (5) shown in work [11]. It follows that the solution obtained by the Fourier method is a particular case of the solution obtained with the help of argument-functions. In this case, simplifications in the compared solution from work [11] are elementary and multiple with respect to the argument-functions. ΑΑ θ = ⋅ ΑΑ ⋅ + ⋅ Factoring out the AA 1 gives the following: to simplify the result, obtain the following: The argument-functions have the same coordinate-time dependence (41). Taking into account the last expressions for the argument-functions, write expression (37) as: sin AA x a t . The solution can be reduced to other coordinate-time dependence if negative sign is used in relationships (18), (19). Indeed, what is obtained is: The last variants of the argument functions (41) and (42) represent a fragment of the d'Alembert formula (4). Thus, a rather unexpected result was obtained. Maximum simplifications of the argument-functions result in a Fourier solution and the simplifications associated with smaller assumptions lead to the d'Alembert solution. Moreover, in the d'Alembert solution itself there are differences in (41) and (42). It is easy to see that the solutions obtained correspond to different boundary conditions. Consider a variant of the analytic definition of the d'Alembert formula using solution (38) and corresponding differential constraints of the argument-functions. 
Linear dynamic problem with basic trigonometric and exponential functions This version of the problem is of particular interest since its solution can successfully represent the damped or increasing effect of the rolling tool on the elastic strip during the unsteady roll bite process. Consider (38). There are elementary differential relationships that make it possible to simplify solution of hyperbolic equations of the form (38 a . Α Φ = ±Α Φ Solving the differential equations presented at the beginning of this variant, one can obtain the following dependences: x t, Α Φ = Α Α ⋅ + Α Α ⋅ The argument-functions for exponentials and trigonometric dependencies in (45) are the same. An analogy is observed for the expressions (37) as well when taking into account (41) However, this general result is represented in expression (4) as the D'Alembert formula [11]. In this case, the ϕ function is represented by a concrete coordinate-time expression (45). Besides, the initial conditions of the form as in [20] fit well into solution (45): ( ) ( ) * o 1 2 u C exp bt C sinkt C coskt . = ⋅ ± ⋅ + This dependence is suitable for characterizing non-stationary impulse action on the elastic strip in the rolling mill. Indeed, expression (45) can be simplified for: 1 6 1 3 x 0,a b, k a, = ⋅Α Α = = Α Α u C exp bt C sin kt C cos kt C exp bt C sin k t C cos ka t . Variants (46) of increasing, decreasing functions or their joint action appear. The latter solution is representative when using argument-functions. In the conditions of transient processes taking place during rolling in adjacent stands, rolls as a system are the source of a damped effect on the strip. The impact is transmitted via the strip to the adjacent stand where a stationary rolling process is realized. A dynamic splash in the last stand appears. This leads to oscillations of the gap between the rolls producing longitudinal thickness variation. Solution (45) makes it possible to evaluate this impact and intervene in the rolling process in the mill stream to eliminate defect formation. Discussion of results: generalization using argument-functions Comparison of the results obtained in solution with the use of argument-functions with known solutions shows that the presented approach is quite acceptable for calculating pulsed stressing of an elastic medium. In this case, it was not the coordinate-time dependences of the argument-functions that were defined but the conditions for existence of various solutions of the problem that can fit any boundary condition. It can be seen from the analysis that the result presented in [11][12][13][14] was the simplest partial solution of differential relationships for argument-functions. There is a prospect of defining new dependencies for new tasks about which, perhaps, nothing is known yet. The initial differential equations and boundary conditions determine the type of differential equations for the argument-functions that close solution. On the one hand, argument-functions can be bounded by the Cauchy-Riemann relations and the corresponding differential invariants and on the other hand, by differential relations which lead to the fact that the argument-functions are the same for adjacent coordinate-time dependencies. Besides, analytic dependences on the parameters entering into the d'Alembert formula were obtained. Conclusions The paper presents development of general approaches to solving differential wave equations using argument-functions. 
The known solutions of the dynamic problem are in accordance with the proposed approaches and are their partial solutions. The result obtained is a superposition of flat coordinate-time solutions. Besides, each pair is determined by its differential constraints on the argument-function. In this case complication of the problem is a kind of generalizing factor of the proposed approach. Conditions for existence of new solutions of the wave problem that are restricted by boundary conditions of different processes were determined using known solutions: plasticity theory, linear dynamic problem with trigonometric solution and constraints, and a linear dynamical problem with basic trigonometric and exponential functions. Invariant differential relationships for argument-functions are the closing element of the solution. A mathematical model of a dynamic problem with an increasing or damped wave action upon an elastic medium was developed which makes it possible to evaluate this effect and intervene in the rolling process in the mill workflow and consequently eliminate defect formation.
7,114.8
2017-06-19T00:00:00.000
[ "Mathematics" ]
Stimuli-Responsive Nanofibers Containing Gold Nanorods for On-Demand Drug Delivery Platforms On-demand drug delivery systems using nanofibers have attracted significant attention owing to their controllable properties for drug release through external stimuli. Near-infrared (NIR)-responsive nanofibers provide a platform where the drug release profile can be achieved by the on-demand supply of drugs at a desired dose for cancer therapy. Nanomaterials such as gold nanorods (GNRs) exhibit absorbance in the NIR range, and in response to NIR irradiation, they generate heat as a result of a plasmon resonance effect. In this study, we designed poly (N-isopropylacrylamide) (PNIPAM) composite nanofibers containing GNRs. PNIPAM is a heat-reactive polymer that provides a swelling and deswelling property to the nanofibers. Electrospun nanofibers have a large surface-area-to-volume ratio, which is used to effectively deliver large quantities of drugs. In this platform, both hydrophilic and hydrophobic drugs can be introduced and manipulated. On-demand drug delivery systems were obtained through stimuli-responsive nanofibers containing GNRs and PNIPAM. Upon NIR irradiation, the heat generated by the GNRs ensures shrinking of the nanofibers owing to the thermal response of PNIPAM, thereby resulting in a controlled drug release. The versatility of the light-responsive nanofibers as a drug delivery platform was confirmed in cell studies, indicating the advantages of the swelling and deswelling property of the nanofibers and on–off drug release behavior with good biocompatibility. In addition, the system has potential for the combination of chemotherapy with multiple drugs to enhance the effectiveness of complex cancer treatments. Introduction On-demand drug delivery systems (DDSs), which are programmable in a patientfriendly manner, can spatially and temporally control drug delivery at a particular site and the rate of drug release over a specific period of time [1][2][3]. Recently, the development of on-demand DDSs from stimuli-responsive nanomaterials, which provide a controlled and pulsatile release of drugs at certain concentrations in the body, has received significant interest [4][5][6]. Conventional DDSs pose some challenges such as difficulty in controlling the drug release rate, unsuitability of the drugs for other body organs, and the production process of the system [7,8]. Several drugs are not appropriate for oral drug delivery, owing to their limitation of drug degradation under the acidic and alkaline conditions of the stomach and intestine, respectively [9,10]. Intravenous injection for drug delivery can resolve some of the problems that occur in oral drug delivery; however, this system also has various issues such as the drug administration requiring professional skill, specific storage the stability of the nanofibers in an aqueous solution [51][52][53]. The PNIPAM nanofibers containing GNRs exhibit optical sensitivity, and the heat generated by the GNRs can control the swelling and deswelling property due to the thermal sensitivity of PNIPAM [54]. This method can be used in various treatments that generate local heat through NIR irradiation, which can penetrate body tissues to up to 10 cm without serious damage to surrounding tissues [55]. The photothermal effect becomes strong by introducing the porous structures and GNRs inside the nanofiber, and the thermal/optical response speed can be increased by rapidly increasing the temperature above the LCST of PNIPAM. 
The crosslinked composite nanofibers can be used as an on-off drug release system by simply irradiating the surface with NIR [56]. PNIPAM nanofibers containing GNRs with fast thermal/optical response, high heating rate, and high structural stability were prepared through the electrospinning method [57]. Electrospun nanofibers provide easy surface functionalization in the space between small fibers and have high surface-area-to-volume and porosity mass ratios [58,59]. In electrospinning, when a high voltage is applied to a solution being discharged at a constant speed through a nozzle, it forms a Taylor cone by electrostatic force. Furthermore, the solvent evaporates instantaneously, forming nanofibers with a large surface area in the collector grounded with the polymer [60,61]. Through a simple electrospinning process, therapeutic drugs can be conveniently introduced into nanofibers [62,63]. Until now, studies have been conducted on nanofibers in which drugs are introduced using various substances such as antibiotic, chemotherapeutic, and vitamin substances [64]. However, a DDS using composite materials emerges as a promising platform because nanofibers have a long-time stability due to the presence of drugs and GNRs, which is convenient for the on-off cyclic profile of the drug release (Scheme 1). This method provides efficient loading of low-solubility drugs into the nanofibers and is suitable for the encapsulation and release of hydrophobic and hydrophilic drugs. This ideal system allows drugs to be safely introduced into DDSs and to control drug release to treat cancers or overcome other complex diseases [65][66][67]. The objective of this study was to achieve a platform that addresses problems with conventional release methods, such as insufficient drug release at targeted sites owing to drug waste at nontargeted sites and externally uncontrolled release due to the treatment period that leads to reopening and painful operations [68]. The embedding of GNRs into the matrix of nanofibers elevates them to a new category of biomaterials capable of reacting to stimulation [69]. Using this approach, we developed a method to treat glioblastoma (GBM), also known as a grade IV astrocytoma, a fast-growing and aggressive brain tumor through the externally controlled release of camptothecin (CPT), which can promote a senescence-like phenotype in brain cancer cells [70]. The direct delivery of chemotherapy agents to the brain is a clinically proven method for treating glioblastoma multiforme, but current technologies have significant limitations, including severe local tissue toxicity and a limited diffusional penetration of agents, which limit its application and effectiveness [71]. CPT-loaded nanofibers can be delivered to a stereotactically specified position in the brain, providing the simultaneous control of drug release location, diffusion, and duration in our new method [72]. This CPT analog can improve the efficacy and stability on the tumor site for more effective local anticancer therapies against brain cancers cells [73]. Therefore, we used the U-87 MG cell line in this study. Hence, this study emerges as a novel approach for externally controlled drug release for efficient therapeutic effects in cancer treatment. 
Preparation of Both Organic and Water-Soluble TMA-GNRs The GNRs used in this study were prepared according to the well-known seed-mediated growth method using CTAB, and the stability of the GNRs in organic solvents was achieved by exchanging the CTAB on their surface for TMA ligands, as described in the Results and Discussion. Characterization of GNRs The morphology of the GNRs was investigated by transmission electron microscopy (TEM) (JEOL JEM-2010, Tokyo, Japan). GNRs were dispersed in ionized water (IW). A single drop of GNR solution was applied on a carbon-coated copper grid (200 mesh) and allowed to dry at ambient temperature before imaging with a 50 nm scale bar. The SPR spectra of the GNRs were examined using UV-Vis spectroscopy (Scinco, USA) in the wavelength range of 400-1100 nm by adding 2 mL of GNR solution to a 4 mL quartz cuvette. The zeta potential was determined by adding a maximum of 400 µL of each of the three different samples to a zeta potential cuvette with a positive and a negative electrode. The equilibration time for each run was set at 120 s, with a total of 50 runs, and three separate tests were carried out under these conditions. Fabrication of Light-Responsive Electrospun Nanofibers To make 5 mL of polymeric solution for electrospinning, 0.5 g (10%) of PNIPAM, 0.15 g (3%) of OpePOSS, and 0.01 g (0.2%) of EMI were dissolved in 4 mL of DMF:THF 1:1. The mixture was stirred for 4 h at room temperature. Finally, 1 mL of TMA-GNR DMF:THF 1:1 solution (200 nM) was added to the obtained solution, and the mixture was stirred for an additional hour to form a uniform solution. A polymer concentration that produces nanofibers of uniform diameter was chosen. If the concentration is too low, the diameter of the resulting fibers becomes nonuniform, beads may form, or continuous fibers may not form at all. Conversely, at higher concentrations the fiber diameter is large; hence, the rate of water penetration is slow, which degrades the response speed of the composite film. The resulting homogeneous polymer solution was injected into a 10 mL plastic syringe. During electrospinning, a direct-current voltage of 13.5 kV was applied to the needle, and the polymer solution was supplied at a flow rate of 0.05 mL/min at room temperature. The distance between the needle and the collection plate was 12 cm. The electrospinning process was performed at 26.4 °C and 45-50% relative humidity (RH), measured with a thermo-hygrometer. The prepared nanofibers were placed in a vacuum oven at 160 °C for 4 h to crosslink the PNIPAM nanofibers. Characterization of Nanofibers The morphologies of the nanofiber scaffolds were analyzed using a scanning electron microscope (SEM) (JEOL JSM-6510, Tokyo, Japan) and confocal laser scanning microscopy (CLSM) (Leica TCS SP8, Solms, Germany). The nanofibers containing GNRs and fluorescein were prepared and cut into circles of 1 cm diameter using a biopsy punch. The nanofiber samples were coated with 250 Å of gold via a Denton Desk V sputter coater. The SEM images were obtained at an accelerating voltage of 20 kV with a 20 µm scale bar. Fiber diameter distribution histograms were quantified from the SEM micrographs; for each sample, 10 random field images were taken, and 10 fibers were measured in each image. Samples from the same nanofibers with a diameter of 1 cm were taken for CLSM.
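The quoted percentages can be cross-checked against the stated masses if they are read as weight-per-volume fractions of the 5 mL batch; the short sketch below performs that arithmetic under this assumed reading (the reading itself is our assumption, not stated explicitly in the protocol).

```python
# Sketch: sanity-check the spinning-dope composition quoted above,
# assuming the percentages are weight/volume fractions of the 5 mL batch.
batch_volume_ml = 5.0
components = {          # grams dissolved in the batch
    "PNIPAM":  0.50,
    "OpePOSS": 0.15,
    "EMI":     0.01,
}
for name, grams in components.items():
    w_v_percent = grams / batch_volume_ml * 100.0
    print(f"{name}: {grams} g in {batch_volume_ml} mL -> {w_v_percent:.1f}% w/v")
# Expected output: PNIPAM 10.0%, OpePOSS 3.0%, EMI 0.2%
```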
In this analysis, two samples were prepared: the original dry nanofiber and the water-treated nanofiber. The CLSM images were obtained under 63× magnification. All of these characterizations were performed at room temperature. On-Demand Drug Release To study the behavior of NIR-responsive drug release at different irradiation times, nanofibers containing GNRs and fluorescein (model drug) were prepared. The nanofibers were cut into circles of 1 cm diameter using a biopsy punch and added to 1.5 mL of IW. To measure the fluorescence intensity of the released drug, the sample tube was placed 10 cm from the center of the laser probe, and the nanofiber sample was irradiated directly with a diode laser (808 nm) at a laser power of 1.6 W/cm² for up to 60 min at 10 min intervals. The drug release from the same sample tube was confirmed by fluorescence spectroscopy after every 10 min. The pulsatile drug release was performed in a cyclic on-off manner with 10 min of no NIR light irradiation followed by 2 min of NIR light irradiation at a laser power of 1.6 W/cm², for up to 60 min, and the solution was quantified with a QM-400 spectrophotometer after each 10 min off phase and each 2 min on phase. Furthermore, cumulative drug release at laser powers of 0.6 W/cm², 1.1 W/cm², and 1.6 W/cm² was measured. In this experiment, four different samples were used, and the laser irradiated each for up to 60 min at 5 min intervals. For all of these drug release experiments, samples containing the same amount of drug (2 µg) were used, and the experiments were performed at room temperature. Biocompatibility and Toxicity Cell studies were performed to evaluate the biocompatibility and toxicity of the nanofibers containing GNRs and fluorescein using the U-87 MG (brain cancer) cell line. The nanofibers were cut into circles of 1 cm diameter using a biopsy punch and attached to the bottom of a 24-well plate. Prior to cell seeding, all wells containing nanofibers were preconditioned overnight in DMEM containing 10% FBS and 1% penicillin/streptomycin at 37 °C in a 5% CO2 incubator. Thereafter, the media were refreshed and cells at a density of 3 × 10³ cells/well were cultured on the nanofibers under the same conditions mentioned above. After 24 h, the supernatant was discarded, and the cells were incubated with trypsin-EDTA for 5 min to detach them from the surface of the nanofibers for cell counting. MTT Assay for Cell Viability PNIPAM nanofibers containing GNRs and CPT were prepared. In addition, 1 cm-diameter circles of the nanofiber samples were used, each of which contained 30 µg of payload. Further, 500 cells/well were incubated in a 96-well plate for 24 h in DMEM containing 10% FBS and 1% penicillin/streptomycin at 37 °C in a 5% CO2 incubator before the experiments. The samples were irradiated with NIR light at different laser powers for 2, 10, and 20 min to release the drug from the nanofibers to the cells in the 96-well plate. After 2-4 h of incubation, the MTT assay was conducted using a microplate reader at 570 nm absorbance. In this analysis, three control groups were prepared: only cells without nanofibers (cells w/o NFs), nanofibers containing GNRs without CPT (NFs+GNRs), and nanofibers containing GNRs with CPT (NFs+GNRs+CPT). The test group analysis was performed at different laser powers; the laser powers used in this experiment were 0.6, 1.1, and 1.6 W/cm².
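For clarity, the cyclic on-off protocol described above can be laid out as a simple schedule; the sketch below is only an illustration of that timing (10 min without NIR followed by 2 min of NIR per cycle, with a reading after each phase, out to about 60 min), not code used in the study.

```python
# Sketch: generate the pulsatile on-off irradiation/measurement schedule
# described above (assumed: 10 min off + 2 min NIR on per cycle, ~60 min total).
def onoff_schedule(total_min=60, off_min=10, on_min=2):
    events, t = [], 0
    while t < total_min:
        t += off_min
        events.append((t, "read after NIR off"))
        if t >= total_min:
            break
        t += on_min
        events.append((t, "read after NIR on (1.6 W/cm^2)"))
    return events

for minute, label in onoff_schedule():
    print(f"t = {minute:2d} min: {label}")
```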
Data and Statistical Analysis All statistical analyses were performed using ANOVA, with the significance level set at p ≤ 0.05. The results of all numerical variables were examined by the statistical mean, standard deviation, and graphical analysis using GraphPad Prism 9.2.0 software (GraphPad Software, San Diego, CA, USA). All cell viability tests were performed with three independent samples from each group for all the different assays. Results and Discussion The plasmon-based photothermal effect of GNRs has become an excellent source of controlled drug release with potential applications and outstanding properties in DDSs. As mentioned earlier, GNRs have strong absorption in the NIR wavelength range; hence, in this study, we irradiated the GNRs with NIR light to generate heat to shrink the nanofibers, which results in drug release. The GNRs used in this study were prepared using CTAB, which is a toxic surfactant and unstable in organic solvents. Hence, surface modification was performed on the GNRs to ensure good stability in organic solvents. For this purpose, the CTAB attached on the surface of the GNRs was exchanged with TMA through a ligand exchange process. The obtained TMA-GNRs exhibited good stability in DMF:THF 1:1, which we used as the electrospinning solution. CTAB-GNRs and TMA-GNRs were characterized through TEM, UV-Vis spectroscopy, and zeta potential measurements (Figure 1). The TEM images (Figure 1a,b) show that the GNRs retained their rod shape even after exchanging the surface functionality from CTAB to TMA. The UV-Vis spectrum (Figure 1c) of the CTAB-GNR solution was confirmed to have specific peaks at 513 nm and 722 nm, whereas in the UV-Vis spectrum of TMA-GNRs the corresponding peaks appeared at 519 nm and 765 nm. Due to the change in the dielectric constant of the environment of each GNR and the sensitivity to the different organic solvents used for the spectrum measurement, the absorption spectra of TMA-GNRs showed pronounced shifts in both the transverse and longitudinal surface plasmon resonance (LSPR) bands after ligand exchange. The zeta potential (Figure 1d) of CTAB-GNRs was measured at 20.1 ± 1.3 mV, whereas that of TMA-GNRs increased to 35.6 ± 2.6 mV owing to the higher charge density after surface modification, which indicates that TMA successfully replaced CTAB on the surface of the GNRs.
In general, the morphology of nanofibers depends not only on the electrospinning solution but also on the parameters of the electrospinning process, such as flow rate, applied voltage, and the distance between the nozzle and the collector. All the parameters used in this method were selected based on the requirements of this study. As shown in Figure 2a, the SEM images show the morphology of the nanofibers after electrospinning, with an average diameter of 600-700 nm [54]. According to the diameter distribution histogram in Figure 2b, 34% of the fibers had a diameter of 700 nm and 29% had a diameter of 650 nm. As shown in Figure 3, the fluorescein-loaded nanofibers were visualized using CLSM. The CLSM images in Figure 3a show optical images of the dry nanofibers, and a strong green fluorescence matrix is observed in Figure 3b,c, indicating that fluorescein was successfully loaded onto the matrix of the nanofibers. The CLSM of the fluorescein-loaded nanofibers was measured in two different states, that is, the original dry nanofibers and the water-treated nanofibers. In Figure 3a-c, the dry nanofibers exhibited uniform nanofibrous structures with an average diameter of 600-700 nm. In addition, Figure 3d-f show that the nanofibers were swollen and their average diameter increased to 2 µm after immersion in water. The crosslinking of PNIPAM and OpePOSS through heat treatment at 160 °C is a promising method for the high and long-term stability of nanofibers in an aqueous solution. Heat treatment at 160 °C for 0, 0.5, 2, and 4 h with four different samples was used to confirm the crosslinking of the prepared nanofibers containing GNRs and fluorescein for on-demand drug release in Figure 4a; the time of 0 h indicates that no heat treatment was provided.
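The diameter statistics quoted above (10 fibers measured in each of 10 SEM images, with the distribution peaking near 650-700 nm) can be tabulated as in the following sketch; the diameter values in it are hypothetical placeholders, not the measured data.

```python
# Sketch: build a fiber-diameter distribution like the one in Figure 2b.
# The diameters below are placeholders; in practice 10 fibers are measured
# in each of 10 SEM images (n = 100 per sample).
import numpy as np

rng = np.random.default_rng(0)
diameters_nm = rng.normal(loc=675, scale=60, size=100)   # placeholder measurements

bins = np.arange(500, 901, 50)                           # 50 nm bins, 500-900 nm
counts, edges = np.histogram(diameters_nm, bins=bins)
for lo, hi, c in zip(edges[:-1], edges[1:], counts):
    print(f"{int(lo)}-{int(hi)} nm: {c:3d} fibers ({c / len(diameters_nm):.0%})")
print(f"mean diameter ~ {diameters_nm.mean():.0f} nm")
```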
As shown in Figure 4a, the nanofibers with 0 h, 0.5 h, and 2 h of heat treatment had low stability, with fluorescein leaking into the water. However, after 4 h of heat treatment, high stability of the nanofibers was observed without any fluorescein leakage. Meanwhile, the stimuli-responsive behavior of the nanofibers containing GNRs was investigated using NIR light (Figure 4b). The NIR laser power of 0.6 W/cm² was considered the minimal power. As shown in Figure 4c, the original nanofibers had a surface area of 0.81 cm²; however, when the nanofibers were exposed to the NIR light, they immediately shrank, and their surface area decreased to 0.3 cm². Moreover, Figure 4c demonstrates that when the NIR light was off, the nanofibers returned to the original surface area. These quick and reversible area changes upon irradiation of the NIR light in a cyclic on-off manner were observed without any significant defects. On the other hand, the nanofibers without GNRs did not respond to the NIR light (Figure 4d), even after increasing the laser power from 0.6 W/cm² to 1.6 W/cm², indicating that the system had a strong photothermal effect owing to the presence of the GNRs. These results also confirmed the presence of GNRs in the matrix of the nanofibers. The characteristics of the controlled drug release from the nanofibers containing GNRs and fluorescein, as a photothermal response to the irradiated NIR light, were confirmed with a fluorescence spectrophotometer (Figure 5). The NIR-triggered drug release from the nanofibers was monitored (Figure 5a). The nanofiber samples of 1 cm diameter with 2 µg of fluorescein (a model drug) were immersed in 1.5 mL of water at room temperature, and the NIR laser with a power of 1.6 W/cm² directly irradiated the nanofibers. The drug release was observed every 10 min, and the released intensity increased after every 10 min of NIR irradiation. The thermal response of the GNRs to the NIR light was thus confirmed.
When the nanofibers were not irradiated with the NIR light, there was only a slight emission of the drug; hence, the slope of the fluorescence curve was low. However, when the nanofibers were irradiated with the NIR light, the drug was released, and the slope of the fluorescence curve increased rapidly. The on-off drug release profile was further confirmed (Figure 5b). The on-off mechanism was performed in a sequence of 10 min of no NIR light irradiation and 2 min of NIR light irradiation (at a power of 1.6 W/cm²). This cyclic process showed that 62.1 ± 1.1% of the drug was released within 60 min. In each step, the nanofibers shrank owing to the increase in temperature upon NIR light irradiation and swelled when the NIR light was turned off. This process demonstrated the swelling and deswelling property of the nanofibers [75]. Different laser powers were used to investigate the drug release behavior from the matrix of the nanofibers; the different laser powers exhibited different release rates and amounts of drug. The laser powers of 0.6, 1.1, and 1.6 W/cm² resulted in drug releases of 31.1 ± 1.4%, 53.7 ± 1.8%, and 90.5 ± 3.5%, respectively, as shown in Figure 5c. In the absence of light, only a small amount of drug release was observed from the nanofibers, indicating that the GNRs played an essential role as a heat-generating source. These findings appear to be appropriate for biomedical practice, with a drug release of more than 90% within 60 min [76]. In addition, the NIR thermal response characteristics of the nanofibers were further confirmed. It was observed that the increase in water temperature depended on the power of the NIR laser light source, and the water temperature increased above 45 °C when irradiated with the NIR laser at a power of 1.6 W/cm². As shown in Figure 5d, upon 15 min of NIR laser irradiation at powers of 0.6, 1.1, and 1.6 W/cm², the water temperature increased to average values of 29.6 ± 0.05, 38.1 ± 0.03, and 47.1 ± 0.03 °C, respectively. These results confirm the capability of hyperthermia therapy for cancer treatment. When the nanofibers were placed in water at room temperature and the NIR laser irradiated the nanofibers, the increase in the water temperature was recorded using an infrared camera (Figure 6). Furthermore, the nanofibers did not increase the water temperature in the absence of GNRs or light.
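Since each 1 cm disc carries 2 µg of fluorescein (see Methods), the cumulative release percentages quoted above translate directly into absolute amounts; the sketch below performs this conversion under that assumption.

```python
# Sketch: convert the cumulative release percentages quoted above into
# absolute amounts, assuming they refer to the 2 ug fluorescein payload
# loaded per 1 cm nanofiber disc (see Methods).
payload_ug = 2.0
release_pct = {0.6: 31.1, 1.1: 53.7, 1.6: 90.5}   # laser power (W/cm^2) -> % released in 60 min

for power, pct in release_pct.items():
    released = payload_ug * pct / 100.0
    print(f"{power} W/cm^2: {pct}% -> {released:.2f} ug released in 60 min")
```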
The biocompatibility and toxicity of the fluorescein-loaded nanofibers were evaluated through cell studies. The nanofiber samples were preconditioned before adding the cells. The U-87 MG cells, at a density of 3 × 10³ cells/well, were cultured in a 24-well plate without the nanofibers as a control and on the surface of the nanofibers in DMEM at 37 °C in a 5% CO2 incubator. The cell growth and morphology were monitored for 24 and 48 h. The cells maintained a proper morphology even on the surface of the nanofibers, similar to that observed in the controls. After 48 h, the supernatant was discarded, and the cells were incubated with trypsin-EDTA for 5 min to detach the cells. The number of adhered cells increased by 508 × 10³ cells/well in the control and 431 × 10³ cells/well on the nanofibers, demonstrating the capability of easy cell growth on the surface of the nanofibers. Furthermore, according to these results, the nanofibers were found to be noncytotoxic in the absence of NIR light owing to successful cell proliferation (Figure 7a-c). The cellular uptake was evaluated by culturing cells at a density of 3 × 10³ cells/well on the nanofibers and incubating them for 24 h under the same conditions mentioned above. The cellular uptake of released fluorescein was measured using CLSM in the absence and presence of NIR light. First, the nanofibers were not exposed to NIR light, which resulted in no release of fluorescein; as shown in Figure 7a-c, fluorescence intensity was not observed in the cells after 6 h of incubation. Then, the NIR light (0.6 W/cm²) irradiated the nanofibers for 5 min, which produced fluorescence intensity inside the cells, indicating the release of fluorescein upon NIR light irradiation (Figure 7d-f) [77,78]. An MTT assay was performed to evaluate the therapeutic potential of our platform through cell viability with U-87 MG cells, and CPT was chosen as the anti-cancer drug. Cells were incubated for 24 h at 37 °C in a 5% CO2 incubator before the experiments. In this analysis, three control groups were prepared: only cells without nanofibers (cells w/o NFs), nanofibers containing GNRs without CPT (NFs+GNRs), and nanofibers containing GNRs with CPT (NFs+GNRs+CPT). Figure 8a shows that there was no substantial decrease in cell viability (approximately 7.6 ± 3.6%) in the case of NFs+GNRs, whereas NFs+GNRs+CPT showed a 10.2 ± 3.9% decrease in cell viability. The nanofibers containing both GNRs and CPT showed maximum cell death upon irradiation at different laser powers after drug release.
According to the results, upon NIR light irradiation at 0.6 W/cm² for 2, 10, and 20 min, the cell viabilities were 85.3 ± 6.4%, 76.6 ± 2.7%, and 62.8 ± 4.7%, respectively. Upon NIR irradiation at 1.1 W/cm² for 2, 10, and 20 min, the cell viabilities were 82.2 ± 2.03%, 70.9 ± 2.4%, and 53.7 ± 4.1%, respectively. Furthermore, upon NIR light irradiation at 1.6 W/cm² for 2, 10, and 20 min, the cell viabilities were 61.8 ± 9.5%, 46.4 ± 14.1%, and 8.5 ± 4.3%, respectively. The highest cell death was achieved upon increasing the irradiation time and laser power. In Figure 8b, the amount of released CPT is reported. As shown in Figure 8b, without NIR light irradiation for 2, 10, and 20 min, no significant amount of drug was released. Upon NIR light irradiation at 0.6 W/cm² for 2, 10, and 20 min, 0.6 ± 0.01, 3.2 ± 0.04, and 5.4 ± 0.07 µg of drug were released, respectively. At 1.1 W/cm² of laser power for 2, 10, and 20 min, the released amounts were 1.2 ± 0.4, 9.0 ± 0.2, and 11.6 ± 0.36 µg, respectively. Moreover, upon NIR light irradiation at 1.6 W/cm² for 2, 10, and 20 min, the released amounts were 3.4 ± 0.06, 14.0 ± 0.2, and 16.9 ± 0.35 µg, respectively. The hyperthermia effect was obtained due to the presence of the GNRs in the nanofibers. Cell viability decreased as the NIR laser power increased, as seen in Figure 9a. As the GNRs were embedded in the matrix of the nanofibers, the hyperthermia effect was weakened owing to the poor exposure of the GNRs to the cancer cells. As a result of the hyperthermia treatment, the number of dead cells increased as the temperature rose above 40 °C. The most severe hyperthermia effects were observed in the case of NIR irradiation at 1.6 W/cm²; that is, as the temperature increased in the range from 41 °C to 45 °C, the cell viability decreased to 88.6 ± 6.2%, 80.7 ± 2.6%, and 72.3 ± 0.05% after 2, 10, and 20 min of NIR light exposure, respectively [79,80]. The toxicity of the NIR light itself was also investigated (Figure 9b). There was no significant reduction in the cell viability when the cells were exposed to the NIR light at 0.6, 1.1, and 1.6 W/cm², thereby indicating that the NIR light was not significantly toxic to the cells. Based on these findings, we conclude that our method appears to be promising for on-demand drug release and therapeutic efficacy in cancer treatment.
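Assuming the 30 µg CPT payload per disc stated in the MTT section, the released amounts above correspond to the payload fractions computed in the following sketch.

```python
# Sketch: express the released CPT amounts above as fractions of the
# 30 ug payload per nanofiber disc stated in the MTT assay section.
payload_ug = 30.0
released_ug = {                      # laser power (W/cm^2) -> ug released at 2, 10, 20 min
    0.6: (0.6, 3.2, 5.4),
    1.1: (1.2, 9.0, 11.6),
    1.6: (3.4, 14.0, 16.9),
}
for power, amounts in released_ug.items():
    fractions = ", ".join(f"{a / payload_ug:.0%} at {t} min"
                          for a, t in zip(amounts, (2, 10, 20)))
    print(f"{power} W/cm^2: {fractions}")
```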
Conclusions Herein, we developed PNIPAM nanofibers containing GNRs and drugs that can control the drug release through NIR light irradiation. As CTAB-GNRs are stable only in water and are not dispersed in organic solvents, TMA-GNRs were prepared through an exchange reaction with TMA ligands that are well dispersed in organic solvents. To prevent the PNIPAM nanofibers from dissolving in water below the LCST, stable PNIPAM nanofibers were prepared through a crosslinking reaction with OpePOSS. The PNIPAM nanofibers containing GNRs and drugs obtained through electrospinning have high thermal/optical responsiveness. In this study, on-demand drug release was achieved through our versatile nanofiber platform. The results showed that the fabricated nanofibers are structurally stable and have a very large surface-area-to-volume ratio for effective delivery of drugs. A strong photothermal effect was observed by introducing the GNRs into the nanofibers. The heat generated by the GNRs upon NIR light irradiation could control the swelling and deswelling property of the nanofibers owing to the thermal sensitivity of PNIPAM, which results in drug release. This method allows both hydrophilic and hydrophobic drugs to be safely introduced into DDSs and the drug release to be controlled to treat cancers and other complex diseases. Through cell studies, good biocompatibility of the nanofibers was confirmed. Furthermore, our method may contribute to the application of the sequential release of multiple drugs, which is the scope of our future studies. Institutional Review Board Statement: Not applicable. Informed Consent Statement: Not applicable. Data Availability Statement: The data presented in this study are available on request from the corresponding author. The data are not publicly available due to institutional policies.
Proof of dispersion relations for the amplitude in theories with a compactified space dimension The analyticity properties of the scattering amplitude in the nonforward direction are investigated for a field theory in the manifold R^{3,1} ⊗ S^1. The theory is obtained from a massive, neutral scalar field theory of mass m_0 defined in flat five dimensional spacetime upon compactification on a circle, S^1. The resulting theory is endowed with a massive scalar field which has the lowest mass, m_0, as of the original five dimensional theory, and a tower of massive Kaluza-Klein states. We derive nonforward dispersion relations for scattering of the excited Kaluza-Klein states in the Lehmann-Symanzik-Zimmermann formulation of the theory. In order to accomplish this objective, we first generalize the Jost-Lehmann-Dyson theorem for a relativistic field theory with a compact spatial dimension. Next, we show the existence of the Lehmann-Martin ellipse inside which the partial wave expansion converges. The scattering amplitude satisfies fixed-t dispersion relations when |t| lies within the Lehmann-Martin ellipse. Introduction This article is a continuation of our investigation of the analyticity properties of the scattering amplitude in a scalar field theory defined in the manifold R^{3,1} ⊗ S^1. First we consider a neutral, massive scalar field theory of mass m_0 in a flat five dimensional Minkowski space. Subsequently, one spatial coordinate is compactified on a circle of radius R. The spectrum of the resulting theory consists of a neutral scalar of mass m_0 (the same as the mass of the original uncompactified theory) and a tower of massive Kaluza-Klein (KK) states carrying the KK charges. We adopt the Lehmann-Symanzik-Zimmermann (LSZ) [1] formalism to construct the amplitude and to study the analyticity property of the scattering amplitude. We had proved the forward dispersion relation for scattering of KK states in an earlier paper [2] (henceforth referred to as I). The present investigation brings our programme to completion. The analyticity properties of the scattering amplitude play a very important role in our understanding of collisions of relativistic particles in the framework of general field theories, without appealing to any specific model. The scattering amplitude, F(s, t), is an analytic function of the center of mass energy squared, s, for fixed momentum transfer squared, t. The fixed-t dispersion relations in s have been proved, when |t| lies within the Lehmann ellipse, in the axiomatic approach in the case of D = 4 field theory, mostly for a single neutral massive field. These results are derived from the general field theories (axiomatic field theories) in the axiomatic approach of Lehmann-Symanzik-Zimmermann (LSZ) [1] and in the more general frameworks of axiomatic formulations of field theories [3][4][5][6][7][8][9][11][12][13][14][15][16]. We recall that some of the fundamental principles of such formulations are locality, microcausality, and Lorentz invariance, to mention a few. There are very strong reasons to believe that if the dispersion relations are violated then the validity of some of the axioms of these generalized relativistic field theories might be in question. The subsequent progress in this field has led to several rigorous theorems which impose constraints on experimentally observable parameters, generally stated as bounds. These bounds have been put to the test in high energy collision experiments and there is so far no evidence of any violation of these bounds.
Notable among them is the Froissart-Martin bound [17][18][19] that restricts the growth of total cross sections at asymptotic energies: σ_t ≤ (4π/t_0)(log s)², where t_0 is determined from first principles for a given scattering process. The experimental data respect this upper bound for diverse scattering processes over a wide energy range. In the event of any experimental violation of the bound, we shall be compelled to reexamine some of the axioms of the general theories. The scattering amplitude in nonrelativistic potential scattering exhibits certain analyticity properties in the momentum k for a large class of potentials, as has been known for a very long time [20][21][22]. We recall that the analyticity of the scattering amplitude in QFT enjoys a very intimate relationship with the principle of microcausality. In contrast, in the context of potential scattering, there is no such deep reason which leads to analyticity of the corresponding amplitude. We recall that the nonrelativistic theory is invariant only under Galilean transformations whereas QFT's are required to be Lorentz invariant. Khuri [27] encountered a situation, in a nonrelativistic potential model, where the amplitude does not satisfy analyticity in the momentum k. The consequences of such a violation of analyticity would not be so serious. In contrast, if the amplitude constructed in the framework of general field theories, based on the LSZ or Wightman axioms, does not exhibit analyticity, then it will raise serious concerns. The role played by higher spacetime dimensional field theories (D > 4) has become increasingly important. One of the primary reasons is that our quest to construct unified fundamental theories has led physicists to explore consistent theories in higher spacetime dimensions, so that the physical phenomena observed in four spacetime dimensions are understood through effective theories. It is worthwhile to recall, in this context, that supersymmetric theories, supergravity theories and string theories, which have been investigated intensively over the past several decades, are consistently defined in higher spacetime dimensions. In order to understand the physics in four dimensions, we adopt the ideas of Kaluza-Klein compactifications in the modern perspective. Thus it is invoked that some of the extra spatial dimensions are compactified in order to facilitate the construction of four dimensional theories, enabling us to comprehend the physical phenomena observed at presently accessible energies. There is a large class of effective four dimensional theories arising from various compactification schemes. Moreover, there are proposals, the so-called large radius compactification schemes, where the signatures of the extra spatial dimensions might be observed in current high energy colliders [23,24]. As a consequence, there have been many phenomenological studies to investigate and build models for possible experimental observations of the decompactified dimensions at present high energy accelerators such as the LHC. Indeed, the scale of the extra compact dimensions is extracted from the LHC experiments, and it puts the compactification scale above 2 TeV [25,26]. The signatures of models of large radius compactification and the number of extra compactified dimensions envisaged in a model go into determining the experimental limits. In some cases the limit could be even higher than 2 TeV, and we refer the reader to the two papers cited here.
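For orientation, the quoted bound σ_t ≤ (4π/t_0)(log s)² can be evaluated numerically; the sketch below is purely illustrative, with a placeholder threshold t_0 (of the order of 4m_π², as used in hadronic applications) and an assumed scale s_0 inserted to make the logarithm dimensionless — neither value is taken from this paper.

```python
# Sketch: numerically evaluate the quoted bound sigma_t <= (4*pi/t0) * (ln s)^2.
# Illustrative only: t0 and the scale s0 are process dependent placeholders.
import math

GEV2_TO_MB = 0.3894          # 1 GeV^-2 expressed in millibarn
t0_gev2 = 0.08               # placeholder threshold scale, ~4*m_pi^2 (assumption)
s0_gev2 = 1.0                # placeholder scale making s/s0 dimensionless

def froissart_bound_mb(s_gev2):
    return (4.0 * math.pi / t0_gev2) * math.log(s_gev2 / s0_gev2) ** 2 * GEV2_TO_MB

for s in (1.0e4, 1.0e6, 1.96e8):             # s in GeV^2
    print(f"s = {s:.2e} GeV^2 -> bound ~ {froissart_bound_mb(s):.0f} mb")
```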
The large radius compactification ideas motivated Khuri [27] to investigate the analyticity properties of the scattering amplitude in a nonrelativistic potential model. He identified a model where the potential is spherically symmetric as a function of the noncompact coordinates and is of short range, while one extra spatial coordinate is compactified on S^1. Khuri [27] discovered that, under certain circumstances, the amplitude does not always satisfy the analyticity properties. He also recalled that the analyticity properties of amplitudes had been investigated earlier [21,22] with noncompact spatial coordinates (the d = 3 case); when there was no S^1 compactification, the amplitude satisfied the dispersion relations. Khuri [27] provided counter examples for a model with the S^1 compactification to demonstrate how the analyticity of the forward scattering amplitude breaks down in the presence of S^1 compactification. This result is based on a perturbation theoretic approach to nonrelativistic potential scattering. We shall very briefly summarize Khuri's result in the next section. It was shown in I that the forward scattering amplitude in a relativistic quantum field theory (QFT) with a compact spatial coordinate satisfies the forward dispersion relation, unlike what Khuri had concluded in his potential model [27]. We had considered a five dimensional massive, neutral scalar field theory in five dimensional Minkowski (flat) space to start with. Subsequently, one spatial coordinate was compactified on S^1. The LSZ formalism was adopted to derive the scattering amplitude. As mentioned earlier, if dispersion relations were violated in such a theory then the foundations of general relativistic quantum field theories would be questioned. However, the proof of dispersion relations in the forward direction does not provide a complete study of the analyticity properties of the theory. It is necessary to prove the nonforward dispersion relations for a general relativistic QFT. We had discussed the requisite steps necessary to accomplish this goal in I. The purpose of this article is to bring to completion the investigation of the analyticity of the four point amplitude. We briefly recall our previous work [28] on the study of analyticity in higher dimensional theories, as those results will be quite useful for the continuation to the present investigation. We proceeded as follows to study the high energy behavior and analyticity of higher dimensional theories. It was shown, in the LSZ formalism, that the scattering amplitude has the desired attributes in the following sense: (i) We proved the generalization of the Jost-Lehmann-Dyson theorem for the retarded function [29,30] for the D > 4 case [31]. (ii) Subsequently, we showed the existence of the Lehmann-Martin ellipse for such a theory. (iii) Thus a dispersion relation can be written in s for fixed t when the momentum transfer squared lies inside the Lehmann-Martin ellipse [32,33]. (iv) The analog of Martin's theorem can be derived in the sense that the scattering amplitude is analytic in the product domain D_s ⊗ D_t, where D_s is the cut s-plane and D_t is a domain in the t-plane such that the scattering amplitude is analytic inside a disk |t| < R, where the radius R of the disk is independent of s. Thus the partial wave expansion converges inside this bigger domain.
(v) We also derived the analog of the Jin-Martin [37] upper bound on the scattering amplitude, which states that the fixed-t dispersion relation in s does not require more than two subtractions. (vi) Therefore, a generalized Froissart-Martin bound was proved. In order to accomplish our goal for the D = 4 theory which arises from S^1 compactification of a D = 5 theory, i.e. to prove nonforward dispersion relations, we have to establish the results (i) to (iv) for this theory. It is important to point out, at this juncture, that (to be elaborated in the sequel) the spectrum of the theory consists of a massive particle of the original five dimensional theory and a tower of Kaluza-Klein states. Thus the requisite results (i)-(iv) are to be obtained in this context, in contrast to the results of the D-dimensional theory with a single massive neutral scalar field. The paper is organized as follows. In the next section (section 2) we recapitulate the main results of Khuri's work [27] without details. This section also contains essential aspects of the LSZ formulation which are utilized to prove the dispersion relations. The third section is devoted to the investigation of the analyticity of the scattering amplitude. Our first step is to obtain the Jost-Lehmann-Dyson representation. Consequently, we obtain the domain free from singularities in the t-plane. Next, we shall outline the derivation of the Lehmann ellipses in the present context. The derivation needs to account for the fact that, unlike the case of the usual derivation for a single scalar theory, there is a KK tower whose presence must be taken into account. Subsequently, we are in a position to write the nonforward dispersion relations. The spectral representations of the retarded function, the advanced function and the causal function play an important role, where we have to sum over a complete set of physical intermediate states. A theory with the KK tower is endowed with an infinite sum (we shall explain this point later). It is natural to ask how to deal with this problem. We shall argue that as long as s is finite, though possibly very large, the number of intermediate KK states contributing to the sum is finite once the unitarity constraint is imposed. One of our important results is that we prove the analog of Martin's theorem, where the unitarity and positivity properties are invoked. Moreover, Martin's theorem leads to constraints on the growth properties of the partial wave amplitudes. We also derive a version of the Froissart-Martin bound for a field theory with S^1 compactification. Another important question is to find out how many subtractions are required to write the fixed-t dispersion relation. This issue is intimately related to the proof of the Jin-Martin bound. We prove that the scattering amplitude requires at most two subtractions. We summarize and discuss our results in section 5. Analyticity property of scattering amplitude and compact spatial dimension In this section, we shall briefly present some of the results which motivated the present investigation. We enlist the important axioms and the relevant kinematical variables. First we summarize the essential results of Khuri's work [27]. The interested reader may go through his paper for details.
Scattering in nonrelativistic quantum mechanics with a compact dimension Khuri [27] studied the analyticity property of the scattering amplitude in a nonrelativistic potential model with a compact spatial dimension. The theory is defined as follows: the potential is V(r, Φ), where r is the radial coordinate, |r| = r, of the three dimensional space and Φ is the compact coordinate, with Φ + 2πR = Φ. The radius of compactification, R, is taken to be very small, R ≪ 1, compared to the scale available in the potential theory (there is no Planck scale here). The perturbative Green's function technique is adopted. The scattering amplitude depends on three variables: the momentum k, the scattering angle, and an integer associated with the periodicity of Φ. The free Green's function satisfies the free Schrödinger equation. The plane wave solution to the Schrödinger equation is Ψ_0(x, Φ) = (1/(2π)^2) e^{ik·x} e^{inΦ}, with n ∈ Z and K^2 = k^2 + n^2/R^2. The closed form expression for the free Green's function has been derived in [27]. A notable feature is that for n^2/R^2 > K^2 the Green's function is exponentially damped as e^{-√(n^2/R^2 - K^2)}. The expression for the scattering amplitude is extracted from the large |x| limit, when one looks at the asymptotic behavior of the wave function; here [KR] is the largest integer less than KR, and Khuri [27] identifies a conservation rule: K^2 = k^2 + n^2/R^2 = k'^2 + m^2/R^2. Moreover, it is argued that the scattered wave has only (2[KR] + 1) components, and those states with m^2/R^2 > k^2 + n^2/R^2 are exponentially damped for large |x|; consequently these do not appear in the scattered wave. Now the scattering amplitude is extracted by Khuri using the standard prescriptions. Note that the condition k^2 + n^2/R^2 = k'^2 + n'^2/R^2 is to be satisfied. Thus the scattering amplitude describes the process where an incoming wave |k, n> is scattered to the final state |k', n'> with the above constraint. Khuri proceeds further to extract the scattering amplitude starting from the full Green's function, which satisfies the Schrödinger equation in the presence of the potential. Here T_B is the Born term. The perturbative Green's function technique is utilized to extract the scattering amplitude order by order. The crucial observation of Khuri [27] is that when he considers the forward amplitude for the case of n = 1, to second order, the amplitude does not satisfy the analyticity property in k, whereas for n = 0 he does not encounter any such problem. He had considered a general class of potentials of the type built from u_m(r) = λ_m e^{-µr}/r, so the potential is short range in nature. Khuri drew attention to the important fact that, in the absence of any compactified coordinates, when the analyticity of the scattering amplitude was investigated in a theory in 3-dimensional space with the same type of potential as above, the amplitude did respect analyticity [21,22]. Remarks. (i) Khuri [27] noted that, in the context of the large radius compactification scenario, if the amplitude exhibits such a nonanalytic behavior in k, there will be serious implications for the physics at LHC energies. (ii) Moreover, it is to be noted that in the framework of nonrelativistic quantum mechanics the analyticity of the scattering amplitude is not so intimately connected with causality, in contrast to the close relationship between the two in relativistic quantum field theory.
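The counting of propagating components quoted above, (2[KR] + 1), follows from the exponential damping of modes with m^2/R^2 > K^2; a small sketch of this counting for illustrative (not Khuri's) parameter values is given below.

```python
# Sketch: count the propagating components of the scattered wave in Khuri's
# compactified potential model.  A mode m survives at large |x| only if
# m^2/R^2 <= K^2, where K^2 = k^2 + n^2/R^2; the parameter values are illustrative.
import math

def open_channels(k, n, R):
    K2 = k * k + (n / R) ** 2
    m_max = math.floor(math.sqrt(K2) * R)      # largest |m| with m^2/R^2 <= K^2
    return 2 * m_max + 1                       # m = -m_max, ..., 0, ..., +m_max

for k, n, R in [(1.0, 0, 0.1), (5.0, 1, 0.1), (25.0, 1, 0.1)]:
    print(f"k = {k}, n = {n}, R = {R} -> {open_channels(k, n, R)} open channels")
```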
In other words, the analyticity of the scattering amplitude in nonrelativistic quantum mechanics is not so sacred as in QFT since analyticity is deeply related with a fundamental principle like microcausality. Recall that the nonrelativistic theory is only invariant under Galilean transformations i.e. they are not required to be Poincaré invariant. The relativistic quantum field theories (QFT) are Poincaré invariant. The principle of microcausality plays a very crucial role in local field theories. Furthermore, microcausality and analyticity are very intimately related. Thus the proof of dispersion relations in QFT very critically depends on microcausality. A violation of dispersion relation would necessarily lead to questioning the foundations of general quantum field theories. (iii) In view of above remarks, we are led to investigate the analyticity property of scattering amplitude in a quantum field theory with a compactified spatial dimension. Quantum field theory with compact spatial dimensions We have shown in I that the forward scattering amplitude of a theory, defined on the manifold R 3,1 ⊗ S 1 , satisfied dispersion relations. This result was obtained in the frame works of the LSZ formalism. We summarize, in this subsection, the starting points of I as stated below. We considered a neutral, scalar field theory with mass m 0 in flat five dimensional Minkowski space R 4,1 . It is assumed that the particle is stable and there are no bound states. The notation is that the spacetime coordinates are,x, and all operators are denoted JHEP06(2020)139 with a hat when they are defined in the five dimensional space where the spatial coordinates are noncompact.The LSZ axioms are [1]: A1. The states of the system are represented in a Hilbert space,Ĥ. All the physical observables are self-adjoint operators in the Hilbert space,Ĥ. A2. The theory is invariant under inhomogeneous Lorentz transformations. A3. The energy-momentum of the states are defined. It follows from the requirements of Lorentz and translation invariance that we can construct a representation of the orthochronous Lorentz group. The representation corresponds to unitary operators,Û (â,Λ), and the theory is invariant under these transformations. Thus there are Hermitian operators corresponding to spacetime translations, denoted asPμ, withμ = 0, 1, 2, 3, 4 which have following properties: Pμ ,Pν = 0 (2.8) IfF (x) is any Heisenberg operator then its commutator withPμ is It is assumed that the operator does not explicitly depend on spacetime coordinates. If we choose a representation where the translation operators,Pμ, are diagonal and the basis vectors |p,α > span the Hilbert space,Ĥ, Pμ|p,α >=pμ|p,α > (2.10) then we are in a position to make more precise statements: • Existence of the vacuum: there is a unique invariant vacuum state |0 > which has the propertyÛ (â,Λ)|0 >= |0 > (2.11) The vacuum is unique and is Poincaré invariant. • The eigenvalue ofPμ,pμ, is light-like, withp 0 > 0. We are concerned only with massive stated in this discussion. If we implement infinitesimal Poincaré transformation on the vacuum state thenPμ from above postulates and note thatMμν are the generators of Lorentz transformations. A4. The locality of theory implies that a (bosonic) local operator at spacetime pointxμ commutes with another (bosonic) local operator atx μ when their separation is spacelike i.e. if (x −x ) 2 < 0. Our Minkowski metric convention is as follows: the inner product of two 5-vectors is given byx. 
The micro causality, for two local field operators, is stated to be It is well known that, in the LSZ formalism, we are concerned with vacuum expectation values of time ordered products of operators as well as with the retarded product of fields. The requirements of the above listed axioms lead to certain relationship, for example, between vacuum expectation values of R-products of operators. Such a set of relations are termed as the linear relations and the importance of the above listed axioms is manifested through these relations. In contrast, unitarity imposes nonlinear constraints on amplitude. For example, if we expand an amplitude in partial waves, unitarity demands certain positivity conditions to be satisfied by the partial wave amplitudes. We summarize below some of the important aspects of LSZ formalism as we utilize them through out the present investigation. Moreover, the conventions and definitions of I will be followed for the conveniences of the reader. (i) The asymptotic condition: according to LSZ the field theory accounts for the asymptotic observables. These correspond to particles of definite mass, charge and spin etc. φ in (x) represents the free field and a Fock space is generated by the field operator. The physical observable can be expressed in terms of these fields. (ii)φ(x) is the interacting field. LSZ technique incorporates a prescription to relate the interacting field,φ(x), withφ in (x); consequently, the asymptotic fields are defined with a suitable limiting procedure. Thus we introduce the notion of the adiabatic switching off of the interaction. A cutoff adiabatic function is postulated such that this function controls the interactions. It is 1 at finite interval of time and it has a smooth limit of passing to zero as |t| → ∞. It is argued that when adiabatic switching is removed we can define the physical observables. (iii) The fieldsφ in (x) andφ(x) are related as follows: By the first postulate,φ in (x) creates free particle states. However, in generalφ(x) will create multi particle states besides the single particle one since it is the interacting field. Moreover, < 1|φ in (x)|0 > and < 1|φ(x)|0 > carry same functional dependence inx. If the factor ofẐ were not the scaling relation between the two fields (2.15), then canonical commutation relation for each of the two fields (i.e.φ in (x) andφ(x)) will be the same. Thus in the absence ofẐ the two theories will be identical. Moreover, the postulate of asymptotic condition states that in the remote futurê JHEP06(2020)139 We may as well construct a Fock space utilizingφ out (x) as we could withφ(x) in . Furthermore, the vacuum is unique forφ in ,φ out andφ(x). The normalizable single particle states are the same i.e.φ in |0 >=φ out |0 >. We do not displayẐ from now on. If at all any need arises,Ẑ can be introduced in the relevant expressions. We define creation and annihilation operators forφ in ,φ out . We recall thatφ(x) is not a free field. Whereas the fieldsφ in,out (x) satisfy the free field equations [ 5 +m 2 0 ]φ in,out (x) = 0, the interacting field satisfies an equation of motion which is endowed with a source current: . We may use the plane wave basis for simplicity in certain computations; however, in a more formal approach, it is desirable to use wave packets. The relevant vacuum expectation values of the products of operators in LSZ formalism are either the time ordered products (the T-products) or the retarded products (the Rproducts). 
We shall mostly use the R-products and we use them extensively throughout this investigation. It is defined as note that Rφ(x) =φ(x) and P stands for all the permutations i 1 , . . . i n of 1, 2 . . . n. The R-product is hermitian for hermitian fieldsφ i (x i ) and the product is symmetric under exchange of any fieldsφ 1 (x 1 ) . . .φ n (x n ). Notice that the fieldφ(x) is kept where it is located in its position. We list below some of the important properties of the R-product for future use [6]: (ii) Another important property of the R-product is that whenever the time componentx 0 , appearing in the argument ofφ(x) whose position is held fix, is less than time component of any of the four vectors (x 1 , . . .x n ) appearing in the arguments ofφ(x 1 ) . . .φ(x n ). JHEP06(2020)139 Therefore, the vacuum expectation value of the R-product dependents only on difference between pair of coordinates: in other words it depends on the following set of coordinate differences:ξ 1 =x 1 −x,ξ 2 =x 2 −x 1 . . .ξ n =x n−1 −x n as a consequence of translational invariance. We may define 'in' and 'out' states in terms of the creation operators associated with 'in' and 'out' fields as follows We can construct a complete set of states either starting from 'in' field operators or the 'out' field operators and each complete set will span the Hilbert space,Ĥ. Therefore, a unitary operator will relate the two sets of states in this Hilbert space. This is a heuristic way of introducing the concept of the S-matrix. We shall define S-matrix elements through LSZ reduction technique in subsequent section. We shall not distinguish between notations likeφ out,in orφ out,in and therefore, there might be use of the sloppy notation in this regard. We record the following important remark en passant. The generic matrix element <α|φ(x 1 )φ(x 2 ) . . . |β > is not an ordinary function but a distribution. Thus it is to be always understood as smeared with a Schwarz type test function f ∈ S. The test function is infinitely differentiable and it goes to zero along with all its derivatives faster than any power of its argument. We shall formally derive expressions for scattering amplitudes and the absorptive parts by employing the LSZ technique. It is to be understood that these are generalized functions and such matrix elements are properly defined with smeared out test functions. We are in a position to study several attributes of scattering amplitudes in the five dimensional theory such as proving existence of the Lehmann-Martin ellipse, give a proof of fixed t dispersion relation to mention a few. However, these properties have been derived in a general setting recently [28] for D-dimensional theories. The purpose of incorporating the expression for the VEV of the commutator of two fields in the 5-dimensional theory is to provide a prelude to the modification of similar expressions when we compactify the theory on S 1 as we shall see in the next section. The compactification of scalar field theory: R 4,1 → R 3,1 ⊗ S 1 . We compile below the relevant materials necessary to proceed further in order to prove the fixed-t dispersion relations. The details are presented in I. One spatial dimension of the 5-dimensional theory is compactified on S 1 . If y is the compact coordinate and x µ are spacetime coordinates, defined on R 3,1 , thenxμ = (x µ , y). The asymptotic field in D = 5 satisfy the free field equation [ 5 + m 2 0 ]φ in,out (x) = 0. 
Due to the periodicity of y, y + 2πR ≡ y, R being the radius of S¹, the fields φ̂_in,out(x̂) admit a KK mode expansion. The equation of motion is satisfied mode by mode, with φ^in,out_n(x, y) = φ^in,out_n(x) e^{iny/R}; the n = 0 term has no y-dependence and is denoted φ₀(x). From now on □₄ is written simply as □, here and everywhere, and m_n² = m₀² + n²/R². Thus we have a tower of massive states. The momentum associated with the y-direction is q_n = n/R; it is quantized in units of 1/R, it is an additive conserved quantum number, and it is designated the Kaluza-Klein (KK) charge. The interacting field φ̂(x̂) admits a similar KK expansion; however, its equation of motion contains a source current ĵ(x, y). This current is expanded in KK modes {j₀(x), J_n(x)e^{iny/R}}, where n runs from −∞ to +∞ and n = 0 is excluded from J_n; note that j₀(x) and J_n(x)e^{iny/R} are the sources associated with φ₀ and φ_n(x)e^{iny/R} respectively. The mode expansion of the current is analogous to (2.25), and each current J_n carries KK charge n. Let us consider the set of asymptotic fields {φ^in_0, φ^in_n}. We can construct the Fock spaces associated with each of these fields from their corresponding creation operators. Let a†(k) and A†(p, q_n) be the creation operators for φ₀(x) and φ_n(x) respectively; the latter is endowed with the KK charge q_n. Each set of operators creates a Hilbert space H_n, n = 0, ±1, …, and the same construction can be carried out with the set of out fields. The spectrum of the compactified theory is thus a field of mass m₀, associated with φ₀, and a tower of Kaluza-Klein (KK) states characterized by a mass and a discrete 'charge', (m_n² = m₀² + n²/R², q_n). The Hilbert space Ĥ of the D = 5 theory is decomposed as a direct sum of Hilbert spaces, each characterized by its quantum number q_n: Ĥ = ⊕_n H_n (2.28). Thus H₀ is the Hilbert space constructed from φ^in_0 with q_n = 0, and the H_n are constructed from the KK fields. Therefore vectors belonging to two spaces with different KK charges are orthogonal to one another: <p, q_n|p′, q_{n′}> = δ³(p − p′) δ_{n,n′} (2.29). Remark: note that n ∈ Z; if there is an additional 'parity' invariance under y → −y, we need only sum over the positive integers {n} in the KK expansions. We introduce the notion of an antiparticle: if a 'particle' carries charge q_n > 0, its 'antiparticle' has negative charge −q_n but positive energy. Thus an intermediate state |q_n, q_{−n}> carries the vacuum quantum number, and so on. Definitions and kinematical variables. In order to investigate the analyticity of an amplitude and determine its analyticity domains we have to define kinematical variables and mass thresholds. There are three different classes of scattering processes: (i) scattering of states with q_n = 0, i.e. scattering of zero modes; (ii) scattering of a zero-mode state with a KK state carrying q_n ≠ 0; (iii) scattering of two states both carrying nonzero KK charge, which is the case studied in what follows. The reactions (i) and (ii) have been dealt with in I for forward scattering; the study of nonforward scattering for reactions (i) and (ii) is a straightforward generalization and is omitted in this work. The states carrying q_n ≠ 0 are denoted by χ_n (from now on a state carrying charge is labelled with a subscript n, and the momenta carried by external particles are denoted p_a, p_b, …). Moreover, we shall consider elastic scattering of states carrying equal charge; the elastic scattering of unequal-charge particles is just elastic scattering of unequal-mass states, due to the mass-charge relationship of the KK states.
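For reference, the mode expansion and the mass-charge relationship just invoked read, schematically (the overall normalization of the modes is a convention assumed here):

φ̂_in(x, y) = Σ_{n=−∞}^{+∞} φ^in_n(x) e^{iny/R} ,   [□ + m_n²] φ^in_n(x) = 0 ,   m_n² = m₀² + n²/R² ,   q_n = n/R ,

so that a state of KK charge q_n is a state of definite mass m_n.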
Let us consider a generic 4-body reaction a + b → c + d, in which all states carry nonzero n. The particles (a, b, c, d), whose corresponding fields are χ_a, χ_b, χ_c, χ_d, carry momenta p_a, p_b, p_c, p_d respectively, and the Lorentz invariant Mandelstam variables are defined in the standard way. Keeping the identities of the four particles distinct facilitates the computation of the amplitude, since we can keep track of which fields are reduced using the LSZ procedure. We list below some relevant kinematic variables which will be required later: M_a, M_b, M_c and M_d, which correspond to the lowest-mass states of two or more particles carrying the same quantum numbers as particles a, b, c and d respectively. We define six more variables, M_ab, M_ac, M_ad, M_bc, M_bd and M_cd: the variable M_ab carries the same quantum numbers as the pair (a, b) and corresponds to states of two or more particles, and similar definitions hold for the other five variables. We define two types of thresholds: the physical threshold s_phys, and the threshold s_thr. In the absence of anomalous thresholds (and for equal mass scattering) s_thr = s_phys. Similarly, we may define u_phys and u_thr, which will be useful when we discuss dispersion relations. We assume from now on that s_thr = s_phys and u_thr = u_phys. Now we outline the derivation of the expression for a four-point function in the LSZ formalism. We start with |p_d, p_c out> and |p_b, p_a in> and consider the matrix element <p_d, p_c out|p_b, p_a in>; next we subtract the matrix element <p_d, p_c in|p_b, p_a in> to define the S-matrix element. We have reduced the fields associated with a and c in (2.34). In the next step we could reduce all four fields, and in such a reduction we would obtain the VEV of the R-product of four fields acted upon by four Klein-Gordon operators. However, this form of the LSZ reduction (with all fields reduced) is not very useful when we want to investigate the analyticity property of the amplitude in the present context; in particular, our intent is to write the nonforward dispersion relation. Thus we abandon the idea of reducing all four fields. Remark. Note that on the right hand side of equation (2.34) the operators act on Rχ_a(x)χ_c(x′), and there is a θ-function in the definition of the R-product. Consequently, the action of K_x K_x′ on Rχ_a(x)χ_c(x′) will produce a term RJ_a(x)J_c(x′). In addition, the operation of the two Klein-Gordon operators will give rise to δ-functions, derivatives of δ-functions and some equal-time commutators, i.e. there will be terms whose coefficients are δ(x⁰ − x′⁰). When we take Fourier transforms, the derivatives of these δ-functions turn into powers of the momentum variables. However, the amplitude is a function of Lorentz invariant quantities; thus one obtains only finite polynomials in such variables, as has been argued by Symanzik [38], whose argument is that in a local quantum field theory only a finite number of derivatives of δ-functions can appear. Moreover, there are some equal-time commutators, many of which vanish when we invoke locality arguments. Therefore, we shall use the relation (2.36), keeping in mind that derivatives of δ-functions and some equal-time commutators might be present. Since the derivative terms give rise to polynomials in Lorentz invariant variables, the analyticity properties of the amplitude are not affected by their presence. This will be understood whenever we write an equation like (2.36).
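For later use, the standard Mandelstam variables for the reaction a + b → c + d, together with the constraint appropriate to four external particles of equal mass m_n, are (conventional definitions, assumed consistent with those of the text):

s = (p_a + p_b)² ,   t = (p_a − p_c)² ,   u = (p_a − p_d)² ,   s + t + u = 4m_n² ,

and in the centre-of-mass frame t = −2k²(1 − cosθ) with k² = s/4 − m_n², θ being the c.m. scattering angle.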
Nonforward elastic scattering of n ≠ 0 Kaluza-Klein states. We envisage elastic scattering of two equal-mass, m_n² = m₀² + n²/R², hence equal-charge KK particles, and we take n positive. Our first step is to define the scattering amplitude for this reaction (see (2.34)), with K̂_x = (□ + m_n²) the corresponding Klein-Gordon operator. We let the two Klein-Gordon operators act on R̂(x; x′) inside the VEV, and the resulting equation is (3.3). Here J_a(x) and J_c(x′) are the source currents associated with the fields χ_a(x) and χ_c(x′) respectively. We arrive at (3.3) from (3.1) with the understanding that the r.h.s. of (3.3) contains additional terms; as alluded to earlier, however, these terms do not affect the study of the analyticity properties of the amplitude. We shall define three distributions which are matrix elements of products of currents; the importance of these functions will become evident in the sequel. |Q_i> and |Q_f> are states which carry four-momenta; these momenta are held fixed and treated as fixed parameters. Let us focus attention on the matrix element of the causal commutator defined in (3.6). The prescription is to open up the commutator of the currents and introduce a complete set of physical states. Let us assign KK charge n to the initial and final states; the conservation of KK charge then dictates which intermediate physical states are permitted. The complete sets of physical states are Σ_n Σ_{α_n} |P_n, α_n><P_n, α_n| = 1 and Σ_n Σ_{β̄_n} |P̄_n, β̄_n><P̄_n, β̄_n| = 1, where {α_n, β̄_n} stand for the quantum numbers permitted for the intermediate states, and the momenta P_n, P̄_n ∈ V⁺, V⁺ being the forward light cone. The matrix element defining F_C(q), (3.7), then assumes the form (3.8). In order to derive the spectral representation for (3.8) the following steps are used: we implement judicious translation operations to remove the z-dependence of the currents and then carry out the ∫d⁴z integration, which leads to a δ-function. The details of the derivation are given in I. Finally, F_C(q) is expressed in the spectral form (3.9). Consequences of microcausality. The Fourier transform of F_C(q), F̃_C(z), vanishes outside the light cone. Moreover, F_C(q) also vanishes as a function of q wherever A_s(q) and A_u(q) vanish simultaneously. Furthermore, the spectral conditions restrict the supports of A_s(q) and A_u(q): each is nonvanishing only when the corresponding combination (Q ± q) lies in V⁺ and its invariant square lies above the relevant threshold. These two conditions imply the existence of minimum mass parameters, M₊ and M₋, governing the nonvanishing of A_s(q) and A_u(q). Let us discuss how the theory with KK states differs from the one with a single scalar field. In the spectral representations of A_u(q) and A_s(q) we sum over all physical intermediate states, which means the sum includes the KK states as long as their quantum numbers satisfy KK charge conservation (depending on the charges of |Q_i> and |Q_f>). On the other hand, for a theory with a single scalar field the intermediate states correspond to physical multiparticle states. Naturally, this begs the question of whether the entire (infinite) KK tower contributes. The issue cannot be resolved within the 'linear program' for studying analyticity in the framework of axiomatic field theory; we shall return to this question and provide a resolution in the next section. In order to derive a fixed-t dispersion relation we have to identify a domain in the t-plane which is free from singularities. The first step is to obtain the Jost-Lehmann-Dyson representation for the causal commutator F_C(q) for the case of equal-mass elastic processes with n ≠ 0; the distributions involved are summarized schematically below.
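For orientation, the three distributions referred to above are of the following form (a schematic sketch; the overall factors and the symmetric splitting of the current arguments are conventions assumed here, chosen so that F_R − F_A = F_C):

F_R(q) = i ∫ d⁴z e^{iq·z} θ(z⁰) <Q_f| [ĵ(z/2), ĵ(−z/2)] |Q_i> ,
F_A(q) = −i ∫ d⁴z e^{iq·z} θ(−z⁰) <Q_f| [ĵ(z/2), ĵ(−z/2)] |Q_i> ,
F_C(q) = i ∫ d⁴z e^{iq·z} <Q_f| [ĵ(z/2), ĵ(−z/2)] |Q_i> = F_R(q) − F_A(q) = A_s(q) − A_u(q) ,

where A_s and A_u arise from the two orderings of the currents once the complete sets of intermediate states are inserted.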
For a causal distribution with these support and spectrum properties, the technique of Jost and Lehmann [29] is quite adequate; we do not have to resort to the more elegant and general approach of Dyson [30]. We present the results concisely and refer to [28] for details. As noted in (3.12) and (3.13), F_C(q) is nonvanishing in those domains; we designate this region as R̄ (see (3.14)), where Q = (Q_i + Q_f)/2 and V⁺ is the future light cone. There is no need to repeat the derivation of the Jost-Lehmann representation here. The present case differs from the single-field case in the following way. We are looking for the nearest singularity in order to determine the singularity-free region, and the presence of the towers of KK states is to be viewed in this perspective. Since we consider equal-mass scattering, the location of the nearest singularity is decided by the lowest values of M₊ and M₋. Let us elaborate this point. There is a tower of KK states appearing as intermediate states (see (3.10) and (3.11)), so each new threshold could create a region of singularity of F_C(q). Since we are concerned with identifying the singularity-free domain, it is the lowest two-particle intermediate-state threshold, consistent with the required constraints, that controls the determination of this domain of analyticity. Therefore, for the equal-mass case, the Jost-Lehmann representation for F_C(q) is such that it is nonzero in the region R̄. Note that u here is also a four-vector (not the Mandelstam variable u). The domain of integration of u is the region S specified below, and Φ(u, Q, χ²) is arbitrary; χ² is to be interpreted as a mass parameter. Moreover, recall the assumptions about the features of the causal function stated above. Since the retarded commutator involves a θ-function, using an integral representation for it (see [29]) we derive an expression for the retarded function; for F_R(q) the corresponding Jost-Lehmann representation of [29] then follows. Note that these integral representations are written for the case where the integral converges. It is well known in the LSZ framework that the integrand has at most polynomial growth, which follows from the fact that the matrix elements are tempered distributions; in any case, these properties of the integrand do not affect the analyticity of F_R(q). One important observation is that the singularities lie in the complex q-plane. We provide below a short and transparent discussion for the sake of completeness. The locations of the singularities are found by examining where the denominator in (3.18) vanishes. We conclude that the singularities lie on the hyperboloid given by (3.19), and those of its points which belong to the domain S, as defined in (3.16), are called admissible. Moreover, according to our earlier definition, the domain R̄ is where F_C(q) is nonvanishing (see (3.14)). There is then a domain containing a set of real points where F_C(q) vanishes; call it R, the complement of the real elements of R̄. From the above arguments we conclude that F_C(q) = 0 for every real point belonging to R, and these are therefore the real points in the q-plane where F_R(q) = F_A(q), since F_C(q) = 0 there. Recalling the definition of R̄, (3.14), we identify the coincidence region to be the domain bordered by the two parabolae.
It is obvious from the above discussion that the set S is defined by the range of values that u and χ² assume on the admissible parabola; those values belong to a subset of the (u, χ²) values of all the parabolas (recall equation (3.19)), see [8] and [29, 30]. In order to discuss the location of a singularity transparently, let us go through a few short steps as a prescription to illustrate the essential points. We have discussed the identification of the admissible parabola. The amplitude is a function of Lorentz invariant kinematical variables; therefore it is desirable eventually to express the constraints and equations in terms of those variables. Let us focus on Q ∈ V⁺ and choose a Lorentz frame such that the four-vector Q = (Q⁰, 0), where 0 stands for the three spatial components of Q. The next step is to choose the four-vector q appropriately, so as to exhibit the location of the singularity in a simple way. This is achieved as follows: choose one spatial component of q in which to identify the position of the singularity, and treat q⁰ and the remaining components of q as parameters held fixed [8]. We remind the reader that all the variables appearing in the Jost-Lehmann representations for F_C(q) and F_R(q) are Lorentz invariant objects; thus going to a specific frame does not alter the general attributes of these generalized functions. Solving for q₁² in (3.19), after obtaining the expression for q² in this frame, exhibits the singular points explicitly. We recall that the set of points {u⁰, u₁, u₂, u₃; χ²_min = min χ²} lies in S. The above exercise has enabled us to identify the domain where the singularities might lie, with the choices we have made for the variables Q and u. We are dealing with the equal-mass case, and we note that the locations of the singularities are symmetric with respect to the real axis. We now examine a further simplified scenario in which the coincidence region is bounded by the two branches of the hyperboloids with M₊² = M₋² = M². The singular points then follow from the condition (Q + q)² = (Q − q)² = M² for the case under consideration. The above result paves the way to prove the existence of the Lehmann ellipses. It is important to recognize the essential difference between the present investigation (i.e. the presence of the KK towers) and the results derived for a single massive scalar field: we have to deal with the appearance of several thresholds in identifying the coincidence regions. These thresholds are the multiparticle states in the various channels, discussed earlier and introduced through the two equations (2.32) and (2.33); their relevance is already reflected in the spectral representations (3.10) and (3.11), where complete sets of intermediate states were introduced. We remark that the presence of the excited KK states does not shrink the singularity-free region; the domain we have obtained is therefore the smallest domain of analyticity. Nevertheless, we feel that in order to arrive at this conclusion the entire issue had to be examined ab initio. The Lehmann Ellipses. Our goal is to derive fixed-t dispersion relations. Note that as s → s_thr, cosθ goes out of the physical region −1 ≤ cosθ ≤ +1 (θ being the c.m. scattering angle) if we wish to hold t fixed. We choose the following kinematical configuration in order to derive the Lehmann ellipse for the case at hand, i.e. elastic scattering of equal (nonzero) charge KK states, hence particles of equal mass. Here (a, b) and (c, d) are respectively the incoming and outgoing particles.
They are assigned energies and momenta in the c.m. frame in the standard fashion. Although all the particles (a, b, c, d) are identical, we keep labeling them individually, for a purpose that will become clear shortly. Thus E_a = E_b and E_c = E_d, and k̂·k̂′ = cosθ. It is convenient to choose a definite coordinate frame for the ensuing discussion. With these definitions of q and P, examining the conditions for the nonvanishing of the spectral representations of A_s and A_u leads to the condition specifying the coincidence region. We are dealing with the equal-mass case; therefore M₊² = M₋² = M². From the energy-momentum conservation constraints (using the expressions for P and q) we conclude that p_c² = (P − q)² < M_c² and p_d² = (P + q)² < M_d² in this region; moreover, (p_a − p_c)² = (P − q − p_a)² < M_ac² and (p_a − p_d)² = (P + q − p_a)² < M_ad². We also note that (P − q) ∈ V⁺ and (P + q) ∈ V⁺. The admissible hyperboloid is (q − u)² = χ²_min + ρ, ρ > 0, with ((p_a + p_b)/2 ± u) ∈ V⁺. χ²_min assumes a definite form for the equal-mass case, given in (3.29). Notice that the M appearing in the second term of the curly bracket in (3.29) is the mass of the states of two or more particles carrying the quantum numbers of particle c, whereas the M appearing in the third term inside the curly bracket is the mass of the states of two or more particles carrying the quantum numbers of particle d; in the present case M carries the same quantum numbers as the incoming state of KK charge n. Thus, in this sector, we can proceed to show the existence of the small Lehmann ellipse (SLE); it is not necessary to present the entire derivation here. The extremum of the ellipse is given by (3.30). We note that M_c = m_n + m₀ is the mass of the lowest multiparticle state (one particle with KK charge n and another with KK charge zero); moreover, M_c = M_d, and the denominator appearing there is k²s. It is a straightforward exercise to derive the properties of the large Lehmann ellipse (LLE) by reducing all four fields in the expression for the four-point function, as is the standard prescription; note also that the value of cosθ₀(s) depends on s. Important remark. The first point to note is that in the presence of the other states of the KK tower we have to carry out the same analysis as above for each sector. Notice, however, that each multiparticle state composed of KK states has to carry the quantum numbers of c (the same as d, since we consider elastic channels of equal-mass scattering); thus, if c carries charge n, a possible KK intermediate state could have charges satisfying q + l + m = n, since KK charges can be positive or negative. The second point is that when we derive the value of cosθ₀ for each such case, it is rather easy to work out that this value lies farther away than the original expression (3.30). Thus the nearest singularity in the cosθ plane is given by the expression (3.31), although there will be Lehmann ellipses associated with the higher KK towers. We expand the scattering amplitude in partial waves (in the Legendre polynomial basis) in the domain of convergence. This domain of analyticity is enlarged (earlier it comprised only the physically permitted values of cosθ) to a region which is an ellipse whose semimajor axis is given by (3.31). Moreover, the absorptive part of the scattering amplitude has a domain of convergence beyond cosθ = ±1; it converges inside the large Lehmann ellipse (LLE).
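It is useful to translate the ellipse condition into the variable t. With t = −2k²(1 − cosθ), the foci cosθ = ±1 of the Lehmann ellipse map to t = 0 and t = −4k², and the requirement that cosθ lie inside the ellipse of semimajor axis cosθ₀ reads

|cosθ − 1| + |cosθ + 1| < 2 cosθ₀   ⟺   |t| + |t + 4k²| < 4k² cosθ₀ ,

which is exactly the domain quoted in (3.32) below.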
Therefore, we are able to write fixed-t dispersion relations as long as t lies in the domain |t| + |t + 4k²| < 4k² cosθ₀ (3.32). The absorptive parts A_s and A_u, defined on the right-hand and left-hand cuts respectively, for s > s_thr and u > u_thr, are holomorphic in the LLE. Thus, assuming no subtractions, the fixed-t dispersion relation follows; we shall settle the important issue of the number of required subtractions in the next section. Although we have not proved crossing symmetry explicitly, it would not be hard to provide a proof following the arguments of [28]: essentially, one either follows the procedures employing the techniques of Bremermann, Oehme and Taylor [34] or those of Bros, Epstein and Glaser [35]. We shall only indicate the steps followed in I, and the interested reader may consult I for the details. We have not investigated the consequences of unitarity so far. Unitarity is a nonlinear relation and imposes strong constraints. Let us define the T-matrix from the S-matrix, S = 1 − iT; the unitarity of S implies (T† − T) = iT†T. Let us consider the matrix element <p_d, p_c, in|T† − T|p_b, p_a, in> and examine the matrix element <p_d, p_c, in|T†T|p_b, p_a, in>. We introduce a complete set of physical states, Σ|N><N| = 1, where the states |N> comprise all admissible physical states consistent with energy-momentum conservation and KK-charge conservation; at this stage, therefore, the entire KK tower is to be included. After going through a series of steps we arrive at an expression (see I for details) whose essential feature is the presence of δ-functions on the r.h.s. The δ-function implies p_c + p_d = P_n; thus (p_c + p_d)² = M_n², where M_n² is the mass-squared of the intermediate physical state, while (p_c + p_d)² = s. Therefore unitarity constrains the number of KK states that can contribute to the sum: the entire infinite tower is not allowed. Similarly, it is easy to see that the second term corresponds to the crossed channel. To recall, the linear program is unable to cut off the contributions of the entire KK tower; unitarity, the nonlinear relation, resolves the issue. In other words, the entire KK tower does not contribute to the spectral representations (3.10) and (3.11); an analogous argument holds for the 'crossed channel' contribution, as the detailed derivations in I showed. Let us now turn our attention to the partial wave expansion of the amplitude and the power of the positivity property of its absorptive part. We recall that the scattering amplitude admits a partial wave expansion in Legendre polynomials, where k = |k| is the c.m. momentum and θ is the c.m. scattering angle. The expansion converges inside the Lehmann ellipse with foci at ±1 and semimajor axis 1 + const/(2k²). Unitarity leads to positivity constraints on the partial wave amplitudes. As is well known, the semimajor axis of the Lehmann ellipse shrinks as s grows, and the derivation of the Lehmann ellipse is based on the linear program. Martin [33] has proved an important theorem, known as the procedure for the enlargement of the domain of analyticity: he demonstrated that the scattering amplitude is analytic in the topological product of domains D_s ⊗ D_t, defined by |t| < R̄, with R̄ independent of s, and s outside the cut s_thr + λ = 4m_n² + λ, λ > 0. In order to recognize the importance of this result, we briefly recall the theorem of BEG [36].
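For later reference, the partial wave expansion and the unitarity constraints just mentioned have the standard form (the normalization of the partial wave amplitudes f_l(s) is the conventional one and is assumed here):

F(s, t) = Σ_{l=0}^{∞} (2l + 1) f_l(s) P_l(cosθ) ,   cosθ = 1 + t/(2k²) ,

with 0 ≤ |f_l(s)|² ≤ Im f_l(s) ≤ 1 for s above the physical threshold.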
The BEG theorem is essentially a statement about the analyticity of the scattering amplitude F(s, t): it was shown that in the neighborhood of any point (s₀, t₀), with −T < t₀ ≤ 0 and s₀ outside the cuts, the amplitude F(s, t) is analytic in s and t in a region |s − s₀| < η₀(s₀, t₀), |t − t₀| < η₀(s₀, t₀) (4.4). Note the following features of the BEG theorem: it identifies a domain of analyticity, but the size of this domain may vary as s₀ and t₀ vary; in particular, the size might shrink to zero, i.e. η(s) may tend to zero for some values of s. The importance of Martin's theorem lies in his proof that η(s) is bounded from below, i.e. η(s) ≥ R̄, with R̄ independent of s. It is unnecessary to repeat the proof of Martin's theorem here; the interested reader may consult [28]. Instead, we summarize the conditions to be satisfied by the amplitude, as stated by Martin [33]. II. F(s, t) is an analytic function of the two Mandelstam variables s and t in a neighborhood of s̄, for s̄ in an interval below the threshold, 4m_n² − ρ < s̄ < 4m_n², and also in some neighborhood of t = 0, |t| < R(s̄); this statement holds due to the work of Bros, Epstein and Glaser [35, 36]. III. Holomorphy of A_s(s′, t) and A_u(u′, t): the absorptive parts of F(s, t) on the right-hand and left-hand cuts, with s′ > 4m_n² and u′ > 4m_n², are holomorphic in the LLE. IV. The absorptive parts A_s(s′, t) and A_u(u′, t), for s′ > 4m_n² and u′ > 4m_n², satisfy positivity properties, where k is the c.m. momentum. Then F(s, t) is analytic in the quasi-topological product of the domains D_s ⊗ D_t: (i) s lies in the cut plane, the cut starting at s = 4m_n² (i.e. s = 4m_n² + ρ, ρ > 0, on the cut), and (ii) |t| < R̄; there exists some R̄ such that the dispersion relations are valid for |t| < R̄, independent of s. We may follow the standard method to determine R̄. The polynomial boundedness in s can be asserted by invoking the simple arguments presented earlier; consequently, a dispersion relation can be written down for F(s, t) in the domain D_s ⊗ D_t. The importance of Martin's theorem is appreciated from the fact that it implies that the η of BEG is bounded from below by an s-independent R̄; moreover, the value of R̄ can be determined by the procedure of Martin (see [8] for the derivations). We outline a few more results as corollaries, without providing detailed computations. (I) The amplitude F(s, t) satisfies the following properties: (i) polynomial boundedness, i.e. |F(s, t)| < s^N with N a finite integer, which follows from the fact that the LSZ-reduced amplitudes are tempered distributions, provided |t| lies within the Lehmann-Martin ellipse; (ii) the partial wave expansion converges inside the Lehmann-Martin ellipse and the positivity conditions (4.3) are satisfied by the partial wave amplitudes. Then the Froissart-Martin bound is proved. We sketch a pedagogical proof of the Froissart-Martin bound on the total cross section σ_t of interest to us. Consider the absorptive part of the scattering amplitude, A_s(s, t) = Im F(s, t); it admits a partial wave expansion which converges inside the large Lehmann ellipse. Consider A_s(s, t) at the right extremity value of t, i.e. at t = t₀, on the ellipse of convergence. Notice the properties: (i) A_s(s, t) < s^N, the polynomial boundedness for t inside the ellipse; (ii) P_l(x) > 1 for x > 1; (iii) 0 ≤ Im f_l(s) ≤ 1 from partial wave unitarity.
Furthermore, since each term in the partial wave expansion (4.7) is positive, each is bounded individually, (2l + 1) Im f_l(s) P_l(1 + t₀/(2k²)) < s^N k/√s (4.8). We recall that for x > 1, P_l(x) > [c̄/(2l + 1)] (1 + (2x − 2)^{1/2})^l. Since P_l(x) grows exponentially with l for x > 1, Im f_l(s) must be damped exponentially beyond some cutoff value of l, Im f_l(s) < C exp[N log s − l (t₀/(2k²))^{1/2}] (4.9), if the polynomial boundedness (4.8) is to be respected. Thus the effective cutoff value is L₀ ∼ √s log s. We can split the partial wave expansion into two parts: a sum from 0 to L₀, and the remainder from L₀ + 1 to ∞. Now consider the imaginary part of the forward amplitude, F(s, t = 0) (so far we have been considering the absorptive part at t = t₀); it is bounded by the sum of the first L₀ + 1 terms plus a remainder. We use the unitarity bound on Im f_l(s), setting Im f_l(s) = 1 (and P_l(1) = 1 in any case); the last term on the r.h.s., the sum from L₀ + 1 to ∞, is negligible for large s. Therefore Im F(s, t = 0) ≤ L₀² = C s log²s (4.11), with C = 4π/t₀ fixed from first principles. Therefore, from the optical theorem, σ_t ≤ C log²s (4.12). This is a quick derivation of the Froissart bound. In our case t₀ is the lowest threshold for the t-channel process, t₀ = 4m₀²; thus the constant C also gets fixed. (II) We have proved the analog of the Jin-Martin bound [37]. The arguments are as follows. The scattering amplitude F(s, t) is polynomially bounded for |t| lying inside the ellipse of convergence, and it admits the partial wave expansion (4.2). Note that |F(s, t)| is bounded by the sum Σ_l (2l + 1)|f_l(s)| P_l(1 + |t|/(2k²)) (4.13), since P_l(1 + x) ≤ P_l(1 + |x|) and the Legendre polynomials are positive for arguments greater than 1. Utilizing partial wave unitarity and positivity (i.e. 0 ≤ |f_l(s)|² ≤ Im f_l(s) ≤ 1), it follows that |F(s, t)| ≤ constant · s log²s. The above inequality holds on the r.h.s. cut and also on the l.h.s. cut; therefore, by invoking the Phragmén-Lindelöf theorem [39], one concludes that |F(s, t)| is bounded by constant · s log²s in the complex s-plane. Therefore the fixed-t dispersion relations do not need more than two subtractions, as long as |t| lies inside the Lehmann ellipse. We would like to draw the reader's attention to the fact that a field theory defined on the manifold R^{3,1} ⊗ S¹, whose spectrum consists of a massive scalar field and a tower of Kaluza-Klein states, satisfies nonforward dispersion relations. This statement begs certain clarifications. The theory satisfies the LSZ axioms. The analyticity properties can be derived in the linear program of axiomatic field theory, which leads to the proof of the existence of the Lehmann ellipses; the role of the KK tower is to be assessed within this program. Once we invoke the unitarity constraint, stronger results follow and the enlargement of the domain of analyticity in the s and t variables can be established. 5 Summary and discussions. We summarize our results in this section and discuss their implications. The objective of the present work is to investigate the analyticity property of the scattering amplitude in a field theory with a spatial dimension compactified on a circle, i.e. the so-called S¹ compactification. We were motivated to undertake this investigation by the work of Khuri [27], who considered potential scattering with a compact spatial coordinate and showed the lack of analyticity of the forward scattering amplitude under certain circumstances. Naturally, it is important to examine what the situation is in relativistic field theories.
As has been emphasized by us before, a lack of analyticity of the scattering amplitude in a QFT would be a matter of concern, since analyticity is derived from very general axioms of QFT. Thus a compactified spatial coordinate in a theory with flat Minkowski spacetime coordinates should not lead to unexpected, drastic violations of the fundamental principles of QFT. In this paper, initially, a five-dimensional neutral massive scalar theory of mass m₀ was considered in flat Minkowski spacetime; subsequently, we compactified one spatial coordinate on S¹, leading to the spacetime manifold R^{3,1} ⊗ S¹. The particles of the resulting theory are a scalar of mass m₀ and the Kaluza-Klein towers. In this work we have focused on the elastic scattering of states carrying equal, nonzero KK charges, n ≠ 0, in order to prove fixed-t dispersion relations. We have left out the elastic scattering of n = 0 states, as well as the elastic scattering of an n = 0 state with an n ≠ 0 state, for nonforward directions; these two cases can be dealt with, without much difficulty, starting from the present work. Moreover, our principal task was to prove analyticity for the scattering of n ≠ 0 states and thus complete the project we started, in order to settle the issue related to analyticity raised by Khuri [27] in the context of potential scattering. We showed in I that the forward amplitude satisfies dispersion relations. However, it is not enough to prove dispersion relations only for the forward amplitude; a fixed-t dispersion relation is desirable. We have adopted the LSZ axiomatic formulation, as was the case in I, for this purpose. Our results, consequently, do not rely on perturbation theory, whereas Khuri [27] arrived at his conclusions using perturbative Green's function techniques suitable for a nonrelativistic potential model. Thus the work presented here has, in some sense, explored more than what Khuri investigated in potential scattering. We have gone through several steps, as mentioned in the discussion section of I, to accomplish our goal. The principal results of this investigation are as follows. First we obtained a spectral representation for the Fourier transform of the causal commutator, F_C(q), and discussed the coincidence region, which is important for what followed. In order to identify the singularity-free domain, we derived the analog of the Jost-Lehmann-Dyson theorem; a departure from the known theorem is that several massive states appear in the spectral representation, and their presence has to be taken into consideration. We thus identified the singularity-free region, i.e. the boundary of the domain of analyticity. Next, we derived the existence of the Lehmann ellipse and were able to write down fixed-t dispersion relations for |t| lying within the Lehmann ellipse. We then proceeded further: it is not enough to obtain the Lehmann ellipse, since the semimajor axis of the ellipse shrinks as s increases, so it is desirable to derive the analog of Martin's theorem [33]. We appealed to unitarity constraints following Martin, utilized his arguments on the attributes of the absorptive amplitude, and showed that Martin's theorem can indeed be proved for the case at hand. As a consequence, the analog of the Froissart-Martin upper bound on total cross sections is obtained for the present case.
The convergence of the partial wave expansion within the Lehmann-Martin ellipse, together with the polynomial boundedness of the amplitude F(s, t) for |t| within the Lehmann-Martin ellipse, leads to the Jin-Martin upper bound [37] for the problem addressed here. In other words, the amplitude F(s, t) does not need more than two subtractions to write fixed-t dispersion relations in the domain D_s ⊗ D_t. We have made two assumptions: (i) the existence of stable particles in the entire spectrum of the theory defined on the R^{3,1} ⊗ S¹ geometry — our arguments are based on the conservation of the discrete KK charge q_n = n/R, the momentum along the compactified direction; and (ii) the absence of bound states. We have presented some detailed arguments in support of (ii). To put it very concisely, we argued that this flat-space D = 4 theory with an extra compact S¹ results from toroidal compactification of a five-dimensional theory defined in flat Minkowski space. In the absence of gravity in D = 5, the lower dimensional theory has no massless gauge field and consequently BPS-type states are absent; it is unlikely that the massive scalars (even with KK charge) would form bound states. This is our judicious conjecture. Another interesting aspect needs further careful investigation. Let us start with a five-dimensional Einstein theory minimally coupled to a massive neutral scalar field of mass m₀; we are unable to fulfill the requirements of the LSZ axioms for the five-dimensional theory in curved spacetime. Let us then compactify this theory to the geometry R^{3,1} ⊗ S¹, so that the resulting scalar field lives in flat Minkowski space with a compact dimension. We now have an Abelian gauge field in D = 4, which arises from the S¹ compactification of the 5-dimensional Einstein metric. The spectrum of the theory can be identified: (i) a massive scalar of mass m₀ descending from the D = 5 theory, accompanied by a KK tower of states; (ii) a massless gauge boson and its massive KK partners; (iii) if we expand the five-dimensional metric around the four-dimensional Minkowski metric when we compactify on S¹, we are likely to have massive spin-2 states (the analog of the KK towers). We may construct a Hilbert space in D = 4, i.e. with geometry R^{3,1} ⊗ S¹. It will be interesting to investigate the analyticity properties of the scattering amplitudes and to examine the high energy behavior. Since only a massless spin-1 particle with Abelian gauge symmetry appears in the spectrum, it looks as if the analyticity of the amplitudes will not be affected; however, there might be surprises, since a massive spin-2 particle is present in the spectrum. Khuri [27] was motivated by the large extra dimension scenario to undertake his problem. He raised the question of what the consequences of his conclusions (obtained in the potential scattering model) would be if the dispersion relation were indeed not valid at LHC energies. In the field theory considered here, however, the dispersion relations are proved for fixed t. It will be worthwhile to undertake phenomenological analyses to check whether there are violations of the Froissart-Martin bound at extremely high energies. We have noticed that, so far, the issue of the validity of the Froissart-Martin bound has not received adequate attention. Data on σ_t are accumulating from the LHC experiments; if the experiments unambiguously confirm that the energy dependence of the total cross sections shows a clear deviation from the (ln s)² behavior, then we have to resolve an important problem.
The important question would then be whether a violation of the Froissart bound challenges the axioms of local quantum field theory. Alternatively, one might propose that a violation of the bound indicates that, at LHC energies, the extra dimensions are decompactified, as envisaged in the large-radius compact extra dimension scenario. If there were evidence in favor of the latter scenario, we would witness the emergence of new physical phenomena.
16,448.6
2020-06-01T00:00:00.000
[ "Mathematics", "Physics" ]
Influence of Electric Fields and Boundary Conditions on the Flow Properties of Nematic-Filled Cells and Capillaries From a rheological point of view, nematic liquid crystals are interesting because they exhibit unique flow properties. Although some of these properties have been known for a long time, they continue to attract the attention and interest of the scientists. As a result, a large amount of theoretical, numerical, and experimental work has been produced in recent years. In particular, a number of publications treat the behavior of nematic liquid crystals in shear and Poiseuille flow fields (Denniston, Orlandini, and Yeomans 2001, Vicente Alonso, Wheeler, and Sluckin 2003, Marenduzzo, Orlandini, and Yeomans 2003, Marenduzzo, Orlandini, and Yeomans 2004, Guillen and Mendoza 2007, Medina and Mendoza 2008, Mendoza, Corella-Madueno, and Reyes 2008, Reyes, Corella-Madueno, and Mendoza 2008, Zakharov and Vakulenko, 2010). Introduction From a rheological point of view, nematic liquid crystals are interesting because they exhibit unique flow properties. Although some of these properties have been known for a long time, they continue to attract the attention and interest of the scientists. As a result, a large amount of theoretical, numerical, and experimental work has been produced in recent years. In particular, a number of publications treat the behavior of nematic liquid crystals in shear and Poiseuille flow fields (Denniston, Orlandini, and Yeomans 2001, Vicente Alonso, Wheeler, and Sluckin 2003, Marenduzzo, Orlandini, and Yeomans 2003, Marenduzzo, Orlandini, and Yeomans 2004, Guillen and Mendoza 2007, Medina and Mendoza 2008, Zakharov and Vakulenko, 2010. On the other hand, it has been shown that the influence of an electric field strongly modifies the rheology of liquid crystals. This has considerable interest due to its possible application in microsystems since homogeneous fluids, like liquid crystals, present some advantages over conventional electrorheological fluids. This is mainly due to the fact that liquid crystals, in contrast to other active fluids, do not contain suspended particles, which is of particular importance for microsystems since small channels are easily obstructed by suspended particles. Also, they prevent agglomeration, sedimentation and abrasion problems (de Volder, Yoshida, Yokota, and Reynaerts 2006). In this chapter we review recent theoretical results on the rheology of systems consisting of a flow-aligning nematic contained in cells and capillaries under a variety of different flow conditions and under the action of applied electric fields. In particular, we revise steadystate flows and the behavior of viscometric quantities like the local and apparent viscosities and the first normal stress differences. Among the important issues that were recently studied by us and by others is the possibility of multiple steady state solutions due to the competition between shear flow and electric field that give rise to a complex non-Newtonian response with regions of shear thickening and thinning. From these results one can construct a phase diagram in the electric field vs. shear flow space that displays regions for which the system may have different steady-state configurations of the director's field. The selection of a given steady-state configuration depends on the history of the sample. Interestingly, as a consequence of the hysteresis of the system, this response may be asymmetric with respect to the direction of the shear flow. 
Possible applications of these phenomena are also discussed together with future research. Fundamentals Liquid crystal systems (De Gennes P.G. and Prost J. 1993) are well defined and specific phases of matter (mesophases) characterized by a noticeable anisotropy in many of their physical properties as solid crystals do, although they are able to flow. Liquid crystal phases that undergo a phase transition as a function of temperature (thermotropics), exist in relatively small intervals of temperature lying between solid crystals and isotropic liquids. Liquid crystals are synthesized from organic molecules, some of which are elongated and uniaxial, so they can be represented as rigid rods; others are formed by disc-like molecules (Chandrasekhar S. 1992). This molecular anisotropy in shape is manifested macroscopically through the anisotropy of the mechanical, optical and transport properties of these substances. Liquid crystals are classified by symmetry. As it is well known, isotropic liquids with spherically symmetric molecules are invariant under rotational, O(3), and translational, T(3), transformations. Thus, the group of symmetries of an isotropic liquid is O(3)×T(3). However, by decreasing the temperature of these liquids, the translational symmetry T(3) is usually broken corresponding to the isotropic liquid-solid transition. In contrast, for a liquid formed by anisotropic molecules, by diminishing the temperature the rotational symmetry is broken O(3) instead, which leads to the appearance of a liquid crystal. The mesophase for which only the rotational invariance has been broken is called nematic. The centers of mass of the molecules of a nematic have arbitrary positions whereas the principal axes of their molecules are spontaneously oriented along a preferred direction n, as shown in Fig. 1. If the temperature decreases even more, the symmetry T(3) is also partially broken. The mesophases exhibiting the translational symmetry T(2) are called smectics (see Fig. 1), and those having the symmetry T(1) are called columnar phases (not shown). The elastic properties of liquid crystals determine their behavior in the presence of external fields and play an essential role in characterizing many of the electro-optical and magnetooptical effects occurring in them. In this work we shall adopt a phenomenological approach to describe these elastic and viscous properties. A liquid crystal will be considered as a continuum, so that its detailed molecular structure will be ignored. This approach is feasible because all the deformations observed experimentally have a minimum spatial extent that greatly exceed the dimensions of a nematic molecule. The macroscopic description of the Van der Waals forces between the liquid crystal molecules is given in terms of the following formula (Frank F. C. 1958) for the elastic contribution to the free-energy density: Here the unit vector n is the director, the elastic moduli K , K , and K describe, respectively, transverse bending (splay), torsion (twist), and longitudinal bending (bend) deformations. The free energy of the LC cylinder has, in addition to the above elastic part, also an electromagnetic part due to the applied electrostatic field. As we have already discussed, the first contribution is given by Eq. (1). The electromagnetic free energy density, in MKS units, Here ij δ is the Kronecker delta, a εεε ⊥ =−  is the dielectric anisotropy of the LC, ε ⊥ and ε  represent the dielectric constants perpendicular and parallel to the director. 
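For reference, a standard way of writing the two contributions just described is (a sketch consistent with the description of Eq. (1) and of the uniaxial dielectric tensor; the sign convention is the usual one in which, for ε_a > 0, the electric term lowers the energy when n aligns with E):

F_elastic = (1/2) K₁ (∇·n)² + (1/2) K₂ (n·∇×n)² + (1/2) K₃ |n×(∇×n)|² ,
F_electric = −(1/2) ε₀ ε_⊥ E² − (1/2) ε₀ ε_a (n·E)² ,   ε_a = ε_∥ − ε_⊥ .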
Also, and are the dielectric permittivity and magnetic permeability constants in vacuum. Nematodynamics The hydrodynamic description of complex condensed matter systems like superfluids, ferromagnets, polymeric solutions, etc. has been possible thanks to the deep understanding of the role played by the symmetries and thermodynamic properties of the system (Kadanoff andMartin P.C. 1963, Hohenberg andMatin P.C., Kalatnikov I. M. 1965). The extension of this linear hydrodynamic to liquid crystals has been started in the seventies, (Parodi O. 1970, Forster D. 1975, and in recent years it has been generalized to the nonlinear case and to more complex liquid crystal phases (Brand H. R. & Pleiner H. J. 1980). www.intechopen.com The key idea of the hydrodynamic formalism is based on the observation that for most complex condensed matter systems in the limit of very large temporal and spatial scales, only a very small number of slow processes, compared with the enormous number of microscopic degrees of freedom, survives. The evolution of these processes is described by the evolution of the corresponding hydrodynamic variables that describe cooperative phenomena that are not to be relaxed in a finite time for a spatially homogeneous system. That is to say, the hydrodynamic variables are such that their Fourier transform satisfy the relation: ω(k→0)→0. Moreover the hydrodynamic variables can be identified uniquely by utilizing conservation laws (global symmetries) and symmetry breaking assumptions, for spatio-temporal scales such that the microscopic degrees of freedom have already been relaxed. For these scales, the description of the systems is exact. When the microscopic degrees of freedom reach thermodynamic equilibrium (local equilibrium) one can use thermodynamics to follow the evolution of the slow variables. Thus, one has to consider a thermodynamic potential, for instance, the internal energy as a function of the system variables (Pleiner H. 1986, Pleiner H. 1988. In a second step we obtain the dynamics of the system by expressing the currents or thermodynamic fluxes in terms of their corresponding thermodynamic forces, which are the gradients of the conjugated thermodynamic variables, and performing a series expansion of the fluxes in powers of the forces. This expansion will be expressed in terms of dynamical phenomenological coefficients (transport coefficients) which can be determined only from an experiment or a microscopic theory. Then, we separate the fluxes in those for which the entropy is conserved (reversible part) and those that make the entropy to increase (irreversible part) and use classical thermodynamic laws to find the evolution equations for the hydrodynamic variables. After obtaining these equations for the liquid crystal, it is possible to include the effects of external fields like electromagnetic fields, stresses, thermal gradients, etc. In what follows we sketch the steps of this theoretical formalism for nematics. The first class of hydrodynamic variables is associated with local conservation laws which express the fact that quantities like mass, momentum or energy cannot be locally destroyed or created and can only be transported. If ρ(r,t), g=ρv(r,t) and (r,t), where v is the hydrodynamic velocity, denote respectively, the density of these quantities, the corresponding conservation equations are ( (Landau L.D. and Lifshitz E. 1964). Here // ii dd t tv =∂ ∂ + ∇ denotes the hydrodynamic velocity, ij σ is the nematic's stress and e i j is the energy flow. 
When a phase transition to the liquid crystal state occurs after reducing the temperature, the rotational symmetry O(3) is broken spontaneously and the number of hydrodynamic variables increase. Any rotation around an axis different from n  transforms the system to a different and distinguishable state form that without rotation. This rotational symmetry broken is called spontaneous since the energy is a rotational invariant and there is no energy that favors one www.intechopen.com orientation of n  with respect to any other. This is equivalent to say that the state of the system becomes infinitely degenerate. Under these conditions, one soft variation of the degeneracy parameter is related to a slow relaxation of the system that increases as q→0. This type of behavior is the basic content of the Goldstone Theorem (Forster D. 1975). Therefore, the degeneracy parameter is related to the order parameter of the liquid crystal and adopts different structures for different mesophases. For a nematic phase the order parameter has the following form where S is the degree of order, i. e., S=0 for the isotropic phase and S=1 for a nematic phase having the molecules completely aligned. In agreement with this statement the dynamics of ij Q is determined by that of n. In summary, the macroscopic state of a nematic can be described by means of two scalar variables that can be chosen as ρ(r,t), (r, ) et , one vectorial variable, g=ρv(r,t) and one tensorial variable ij Q , that can be selected, for instance, as the anisotropic part of the dielectric tensor. Since n  is related to a conservation law, its balance equation is a dynamical equation of the where i Y is not a current, since its surface integral is not a flux, but a quasi-current. This quantity must be orthogonal to n to fulfill the nematic symmetry n→-n; however, there are other contributions to i Y which does not come from the symmetries but from thermodynamic requirements. If a specific physical situation is given, the state of the system can be described in terms of an appropriate thermodynamic potential. This can be chosen, for example, as the total free energy E, (Callen H. B. 1985) (, , , where V denotes the volume of the system and σ is the entropy per unit of volume. From this assumption and using Euler's relation, we can derive the Gibbs' expression and the Gibbs-Duhem's relation Here is the chemical potential, Φ ij y h i are called the molecular fields, which are defined as the partial derivatives of the thermodynamic potential with respect to the corresponding conjugated variable. Since in equilibrium the state variables are constants, any inhomogeneous distribution of these variables takes the system out of equilibrium. For this reason the gradients of these quantities are taken as thermodynamic forces. Hence, the presence of ∇ , ∇T, j i n ∇ and ∇₁Φ ij give rise to irreversible processes in the system. The dynamical part of the hydrodynamic equations is obtained by expressing the currents σ ij , j i e , and Y i in terms of the thermodynamic variables T, , v i , and Φ ij . If additionally we separate in these expressions the reversible part, which does not generate entropy increase and it is www.intechopen.com invariant under temporal inversion, from the irreversible part, which increases the entropy and is not invariant under the transformation t→-t, we obtain the following expressions for the fluxes (Landau L. D. andLifshitz E. 1986, Plainer H. 
1988) In these equations the superscript indexes R and D denote, respectively, the reversible and irreversible or dissipative parts, and Here K , K y K are the elastic constants of the nematic and ijk is the totally antisymmetric tensor of Levy-Civitta. The projector tensor is lm ik i k nn δδ ⊥ =− and kji can be expressed as (1 ) (1 ) . In this expression = / , is the reversible parameter, also called flux alignment parameter, being and two of the five independent viscosities of the nematic. The molecular field h k , that we have already defined as h k ≡h i ′-∇₁Φ ij , turns out to be explicitly It should be mentioned that a different choice for this tensor has been done in the ELP formulation (Ericksen, J. L. 1960, Leslie, F. M. 1966, Parodi, 0. 1970. The complete stress tensor for this formulation is given by Eq.(39) which for this case replaces Eq.(13). The second law of thermodynamics establishes that any irreversible process that occurs in the system should increase the entropy. Thus, the entropy obeys the following balance equation where R is the dissipation function for irreversible processes. This quantity can be interpreted as the energy per unit of volume dissipated by the microscopic degrees of freedom and divided by the temperature (R/T), represents the entropy production of the nematic. If, as we did previously, we relate Eq.(21) with Eqs. (6), (7), (8) and (9), by using the Gibbs' expression (11) and the expressions (13)-(21), we obtain an explicit formula for R, that is where γ ⁻¹ is the rotational viscosity and the tensor ij describes the heat conduction (thermal conductivity). The second law of thermodynamics requires R to be a definite positive form, which in turns implies that every single coefficient of the previous expression is positive. Notice that Eq. (22) implies as well that the dissipative currents and quasicurrents are given by the partial derivatives of the dissipation function, that is In summary equations (6), (7), (8), (9) and (22) constitute a complete set to describe the irreversible dynamics of a low molecular weight nematic (thermotropic) in absence of external fields. Constitutive equations It is usual that applied external fields like electric and magnetic fields, gravity, temperature gradients, pressure and concentration, shear and vortex flows carry out the nematic to a new equilibrium state so that these fields must be included in the hydrodynamic equations. It is well known that for any polarizable medium an electric field E induces a polarization P =D-E, where D is the displacement electric vector. Now, in a nematic the molecular dipolar moments are oriented approximately parallel with respect to the long axis of the molecules. Thus, the induced polarization gives rise to a director orientation. In contrast the influence of the magnetic field in a nematic is much weaker and in general, the induced magnetization can be neglected. A very well known result based on conventional thermodynamic arguments establishes that the work associated to an electric field E =-∇Φ, is which should be added to the Gibbs' expression (11) and to the Gibbs-Duhem's relation (12). By modifying these expressions and using a procedure completely analogous to the one we followed in the last section, it is possible to show that in the presence of an electric field Eq. (7) where the charge density is given by ρ E = div D. To linear order in the thermodynamic forces, the expression for σ ij has to include in addition the electric contributions, so that we replace Eq. 
(13) by the expression . Analogously, the currents (14), now are given by where σ ij E is the electric conductivity, and in consequence the entropy current is Here the material tensors of second rank, ij E and σ ij E have uniaxial form and each one should be expressed in terms of two dissipative transport coefficient, that is, On the other hand, the third order tensor kji E is irreversible and contains a dynamical coefficient, the flexoelectric coefficient E , ( Most of the parameters involved in the hydrodynamic and electrodynamic equations for a nematic have been measured for different substances that show a uniaxial nematic phase. Among these one can mention the elastic constants (Blinov L. M. and Chigrinov V. G. 1994); specific heat, the flux alignment parameter and the viscosities i , i=1,2...5, the inverse of the diffusion constant γ , the thermal conductivity (Ahlers, Cannell, Berge and Sakurai 1994), and the electric conductivity σ ij E . Finally the dynamical equations for a nematic in an isothermal process can be obtained by inserting Eqs. (27) and (28) in Eqs. (7) and (9). This leads to www.intechopen.com Apparent viscosity The viscosity function or apparent viscosity connects the force per unit area and the magnitude of the local shear (Carlsson T. 1984 where α , α ,α ,α and α are the Leslie coefficients (Parodi O. 1970). Since the orientation angle is given by Eq. (35), from the above equation it follows that the dependence of on indicates that the system is non-Newtonian in its behavior, in the sense that is strongly dependent on the driving force. If we integrate the result over the cross section area of the flow we obtain the averaged apparent viscosity where A t is the total area of the cross section. First normal stress difference One of the distinctive phenomena observed in the flow of liquid crystal polymers in the nematic state is that of a negative steady-state first normal stress difference, N , in shear flow over a range of shear rates. N is zero or positive for isotropic fluids at rest over all shear rates, which means that the force developed due to the normal stresses, tends to push apart the two surfaces between which the material is sheared. In liquid crystalline solutions, positive normal stress differences are found at low and high shear rates, with negative values occurring at intermediate shear rates (Kiss G. and Porter R. S. 1978). On the other hand, Marrucci et al (Marrucci G. and Maffettone 1989) have solved a two dimensional version of the Doi model for nematics (Doi M. and Edwards S.. F. 1986), in which the molecules are assumed to lie in the plane perpendicular to the vorticity axis, that is, in the plane parallel to both, the direction of the velocity and the direction of the velocity gradient. Despite this simplification, the predicted range of shear rates over which N is negative, is in excellent agreement with observations. This result opens up the possibility that negative first normal stress differences may be predicted in a two dimensional flow. We shall now examine the effects produced by the stresses generated during the reorientation process by calculating the viscometric functions that relate the shear and normal stress differences. For a planar geometry and using the convention in (Bird R. et al 1971) Here A ij ≡(1/2)(∂v i /∂x j +∂v i /∂x j ) is the symmetric part of the velocity gradient and Ω≡dn/dt-(1/2)∇×v×n represents the rate of change of the director with respect to the background fluid. 
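A standard explicit form of the viscosity function referred to above is, in the Ericksen-Leslie-Parodi theory (a sketch following, e.g., Carlsson 1984; the convention that θ is the angle between the director and the velocity-gradient direction is an assumption here, and interchanging sin²θ and cos²θ gives the convention in which θ is measured from the flow direction):

η(θ) = α₁ sin²θ cos²θ + (1/2)(α₅ − α₂) cos²θ + (1/2)(α₃ + α₆) sin²θ + (1/2) α₄ ,

which reduces to the Miesowicz viscosities (α₄ + α₅ − α₂)/2 and (α₃ + α₄ + α₆)/2 for the director along the velocity gradient and along the flow, respectively.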
The α i for i=1,..6, denote the Leslie coefficients of the nematic. The integration of the first normal stress difference, Eq.(38) over the whole cell and along the velocity gradient direction renders the net force between the plates as a function of the Reynolds number, which is proportional to N A positive force exerted by the fluid motion tends to push the plates apart, or otherwise, if the force is negative, the fluid tends to pull the plates close together. Nematic cells under shear flow In this section we study the flow properties of nematic-filled cells under shear flow. The cell geometry is important because many micro-fluidic devices are designed with channellike shapes and its mathematical treatment is simpler as compared to the case of capillaries. Liquid crystals and their electrorheological properties under flows with a constant shear rate over the height of the channel have been treated in a number of papers. However, in all of these papers the studies have focused on situations in which the anchoring was the same at all boundaries. Only recently, a systematic study of the influence of different boundary conditions on the shear flow was treated together with the influence of an applied electric field. Hybrid-aligned nematic cell First we are going to present the case of a hybrid-aligned nematic (HAN) cell since it is a common geometry used in devices (Guillen and Mendoza 2007). In this geometry, the director is aligned perpendicularly (also called homeotropic alignment) to one of the boundaries of the confining cell while it is parallel (also called homogeneous alignment) to the opposite boundary as shown in Fig. 2. The separation between the plates l is small compared to the transverse dimensions L of the cell, which is under the action of a perpendicular electric field E. The director's configuration is given by (41) where θ(z) is the angle with respect to the z axis. We assume strong anchoring conditions at the plates of the cell (42) As shear flow is applied as depicted in Fig. 2 www.intechopen.com A second torque that acts on the LC molecules is due to the electric field (46) An elastic torque can be derived form the Frank-Oseen elastic energy to give (47) Finally, the rotational inertia and viscous damping gives the following contribution to the torques (48) All the above contributions result in the differential equation for the director´s orientation that describes the equilibrium of torques www.intechopen.com In Fig. 3 we show the orientational profile for various values of q and m. We observe a tendency of the molecules to align with the direction of the electric field. In contrast, θ www.intechopen.com increases as the value of m increases, for m>0, which means that the molecules tend to be aligned with the direction of the flow. On the other hand, for m<0, a remarkable difference is observed. In this case the cell shows two different regions, in the lower part of the cell, where the effect of the flow dominates, the molecules are tilted to the left whilst on the upper part, where the anchoring dominates, they are tilted to the right. The averaged apparent viscosity [Eq. (37)] is depicted in Fig. 6. We observe a moderate electrorheological effect and an interesting non-Newtonian behavior with alternate regions of shear thickening (shaded region) and thinning. Finally, the first normal stress difference (54) www.intechopen.com is plotted in Fig. 7 and the corresponding averaged value is shown in Fig. 
8 Homogeneous nematic cell In this subsection we study the flow of a homogeneous nematic cell as depicted in Fig. 9. The only difference of this cell as compared to the HAN cell is that here the alignment is homogeneous at both plates. At first sight one may think that this small difference may only produce slight changes in the rheological behavior of the cell. However, this is not the case and a completely different behavior arises. The most striking feature is the appearance of multiple steady-state configurations for certain combinations of the applied electric field and shear flow (Medina and Mendoza 2008). www.intechopen.com Using again the theory of Ericksen, Leslie, and Parodi together with the momentum conservation we obtain the differential equations that govern the steady state of the system (55) Here α i are the Leslie viscosities and κ≡K 3 /K 1 , with the homogeneous (56) and non-slip boundary conditions. The stationary configuration of the nematic's director can be found by solving Eq. (55) numerically using the "shooting" method. Results are presented for 5CB as before. In Fig. 10 we show the nematic's configuration for different values of the applied electric field, q, without flow (a) and with flow (b). In this last case two sets of solutions of Eq. (55) are shown. www.intechopen.com In Fig. 11 we show the nematic's configuration for q=2 and different values of the shear flow. The case m=5 lies in a region where exist only one solution of Eq. (55) while the case m=10 lies in the region where Eq. (55) accepts multiple solutions. A phase diagram in the q-|m| space is also shown in Fig. 11 that separates the region with only one solution from the region with multiple solutions. In the lower right panel we sketch the steady-state nematic's configuration for these cases. The selection of one of the configurations over the other depends on the history of the sample as exemplified in Fig. 12. In this figure, we have recasted the phase diagram drawing the positive and negative parts of the m-axis and considered two different processes depicted by the arrows in the phase diagram. The two processes start at zero applied electric field, but with opposite starting shear flows (points A and A' in the diagram). Then, following the processes depicted by the arrows in the phase diagram they arrive to the same final q, m pair with different configurations (point B'). In Fig. 13 we show the averaged viscosity as function of m for the trajectory starting at point A in Fig. 12. We observe an interesting non-Newtonian behavior with alternate regions of shear thickening and thinning. The second trajectory (the one starting at A' in Fig. 12) would produce the same curves for the viscosity but interchanging m with -m. A moderate electrorheological effect is also evident in this figure. Nematic capillaries In this section we study the flow properties of nematic-filled capillaries under the action of an electric field for two different flow conditions. In first place we treat the case of capillaries subjected to a pressure gradient and in second place we consider the case of a Couette flow. Hybrid nematic capillary under Poiseuille flow We consider a capillary consisting of two coaxial cylinders whose core is filled with a nematic liquid crystal subjected to the simultaneous action of both a pressure gradient applied parallel to the axis of the cylinders (Poiseuille flow) and a radial low frequency electric field as depicted in Fig. 14 . 
The nematic's director in cylindrical coordinates can be written as (57) with the hybrid hard anchoring conditions (58) The constant pressure drop along the axis of the cylinders produces a flow profile given by (59) with the non slip boundary conditions (60) The nematodynamic equations adopt a more involved look, as compared to the case of the cells. The reader can find the appropriate expressions in ). Here we just present the relevant results using as in the previous sections a 5CB nematic liquid crystal. In Fig. 15 we show the nematic's configuration as function of x ≡ r/R 2 , parametrized with q, the ratio of the electric and elastic energies, and Λ, the ratio of the hydrodynamic and elastic energies . The undistorted state corresponding to Λ = 0 and q = 0 is similar to the escaped configuration. For q = 50, is much more aligned with the radial direction than for q = 0. This is so because the director tends to be parallel to the electric field. For positive Λ > 0, corresponding to negative velocity, tends to be axially aligned, whereas for negative Λ < 0 the trend is the opposite. In contrast, for q = 50 the influence of the pressure gradient is influenced by the electric field for regions near the inner cylinder. In Fig. 16 we show a typical velocity profile for a given value of the electric field and different values of Λ. This figure exhibits a clear difference in the magnitude of the velocity between forward and backward flows, which is a consequence of the asymmetry of the undistorted director's configuration (so called escaped configuration). Moreover, the extreme of the curves, representing a vanishing shear stress, are closer to the inner cylinder for all the curves, with no significant dependence on the value of Λ. This behavior is different from a Newtonian fluid for which the maximum is approximately at the middle of the distance between both cylinders. Fig. 17 presents the averaged apparent viscosity for this configuration. Note the non-Newtonian behavior of the system and the non-symmetric response with respect to the direction of the flow. In particular, in Fig. 17b we observe that for a given value of the electric field and for the range of flow considered the viscosity decreases as Λ increases. This means that for backward flow (Λ>0) the viscosity decreases as the magnitude of the flow increases whereas for forward flow (Λ<0) the viscosity increases as the magnitude of the flow increases. Therefore, we have flow thinning in one direction and flow thickening in the other. This directional response is due to the fact that the initial undistorted nematic configuration is asymmetrical. Even more, for the forward case most of the mechanical energy is elastically accumulated in distorting the nematic's configuration instead of being used to move the fluid, as compared to the backward case. In this sense the undistorted configuration is working like a biased spring inherent to the liquid, stiffer in one direction than in the other. The averaged first normal stress difference is shown in Fig. 18. Panel (a) shows that N 1 depends almost linearly on q for backward flow, 50 Λ=− , whereas it has a minimum in q = www.intechopen.com 10 for forward flow 50 Λ= . Panel (b) displays clearly the contrast between forward and backward flows for small values of q where a local minimum moves to the right as increases. This shows that the directional dependence of this confined nematic can be electrically controlled. 
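To make the kind of calculation described in the preceding sections more concrete, the following is a minimal numerical sketch for the simplest geometry discussed earlier: a planar cell under steady shear with strong hybrid anchoring and no electric field. It is an illustration only and rests on several assumptions not taken from the chapter: a one-constant elastic approximation, a director confined to the shear plane, a uniform imposed shear rate, the standard Miesowicz-type angular dependence of the effective viscosity, and rough literature values for the Leslie coefficients of 5CB. It also uses a generic boundary-value solver in place of the shooting method mentioned above, and a plain gap average as a stand-in for the cross-section average of Eq. (37).

```python
# Minimal sketch: steady director profile and apparent viscosity of a sheared
# hybrid (planar/homeotropic) nematic layer at zero electric field.
# Assumptions (not from the chapter): one-constant elasticity, director in the
# shear plane, uniform shear rate, rough literature Leslie coefficients for 5CB.
import numpy as np
from scipy.integrate import solve_bvp

# Approximate 5CB parameters (Pa*s and N); illustrative values only.
a1, a2, a3, a4, a5, a6 = -0.0060, -0.0812, -0.0036, 0.0652, 0.0640, -0.0208
K = 6.0e-12       # one-constant Frank elastic constant, N
l = 10e-6         # cell gap, m
gdot = 1.0        # imposed shear rate, 1/s

def eta(theta):
    """Effective shear viscosity for a director at angle theta from the flow
    direction, in the shear plane (Miesowicz-type angular dependence)."""
    eta_flow = 0.5 * (a3 + a4 + a6)       # director parallel to the flow
    eta_grad = 0.5 * (-a2 + a4 + a5)      # director parallel to the gradient
    s, c = np.sin(theta), np.cos(theta)
    return eta_flow * c**2 + eta_grad * s**2 + a1 * s**2 * c**2

def rhs(z, y):
    # y[0] = theta, y[1] = dtheta/dz; steady torque balance
    # K * theta'' + gdot * (a3 cos^2 theta - a2 sin^2 theta) = 0
    theta, dtheta = y
    visc = a3 * np.cos(theta)**2 - a2 * np.sin(theta)**2
    return np.vstack([dtheta, -gdot * visc / K])

def bc(ya, yb):
    # strong anchoring: planar (0) at z = 0, homeotropic (pi/2) at z = l
    return np.array([ya[0], yb[0] - np.pi / 2])

z = np.linspace(0.0, l, 101)
y_guess = np.vstack([np.pi / 2 * z / l, np.full_like(z, np.pi / 2 / l)])
sol = solve_bvp(rhs, bc, z, y_guess)

theta_z = sol.sol(z)[0]
eta_avg = np.trapz(eta(theta_z), z) / l   # plain gap average, stand-in for Eq. (37)
print(f"converged: {sol.success}; gap-averaged apparent viscosity ~ {eta_avg*1e3:.1f} mPa*s")
```

Passing the computed director profile through the viscosity function reproduces qualitatively the non-Newtonian trends described above; a quantitative comparison would require the full torque balance, including the electric-field contribution and the actual boundary conditions of each geometry.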
Homogeneous nematic capillary under a Couette flow In this subsection we are going to present the case of a homogeneous nematic capillary subjected to a Couette flow and a radial electric field as shown in Fig. 19 . The inner cylinder is rotating with angular velocity Ω 1 and the outer cylinder with angular velocity Ω 2 . This case resembles the one corresponding to the homogeneous cell, and in fact the phenomenology is very similar. Fig. 19. Sketch of a nematic liquid crystal confined by two rotating coaxial cylinders and subjected to a radial electric field. Adapted from Reyes, Corella-Madueño, and Mendoza 2008. According to the figure, the director can be written as www.intechopen.com (61) and the velocity as (62) As in the previous situations, we are considering hard anchoring and non-slip boundary conditions at the cylinders (63) and (64) The orientational configuration for 5CB is shown in the left panel of Fig. 20. In (a) for q=20 and different values of ΔΩ. The angle grows from zero up to a maximum value, then, it decreases to zero at the outer cylinder. This maximum increases as we increase the value of ΔΩ. This simply means that the nematic's molecules tend to be more aligned with the flow www.intechopen.com as it increases. As we can see, for the largest value of ΔΩ shown, there are two possible stationary configurations. In (b) we plot the same as in (a) but for q=50. In this case for any value of ΔΩ the system may adopt multiple steady-state solutions. Here, we have plotted two different possible solutions. In the right panels we plot the corresponding velocity profiles. In Fig. 21 we present the average viscosity as a function of |ΔΩ|. Notice that the electrorheological effect is less pronounced for larger values of the shear flow since the cylinder's rotation turns the nematic perpendicularly to the electric field and as a consequence its influence is reduced. Conclusion We have presented a series of results that characterize the flow behavior of a flow-aligning thermotropic liquid crystal (5CB) under the action of an applied electric field in a variety of different flow geometries and boundary conditions. It is clear from these results that the influence of the boundary is enormous and may lead to completely different behaviors. Among the interesting results we can mention the existence of a rich non-Newtonian response with regions of shear thinning and thickening, a moderate electrorheological effect and a history dependent directional response. Acknowledgment CIM acknowledges partial financial support provided by DGAPA-UNAM through grant DGAPA IN-115010.
7,761.6
2012-03-07T00:00:00.000
[ "Physics" ]
The effects of the steel's surface quality on the properties of anti-corrosion coatings The requirements placed on painted vehicle construction in terms of corrosion resistance apply not only to cars but also to heavy-duty lorries, trucks and agricultural machinery. Phosphating is the most commonly used pre-treatment method before painting ferrous metals in the vehicle manufacturing industry. The task of the phosphate layer between the metal surface and the paint is to protect the metal from corrosion under the paint film and to promote the adhesion of the paint film to the metal substrate. In this study the zinc phosphate conversion layer was deposited onto the steel surfaces from a bi-cationic (nickel-free) phosphating bath by dipping, using identical technological parameters. The developed crystal structure and morphology were examined in relation to the surface roughness and blasting quality of the metal. The surface quality of the metal was tested by digital light microscopy (LM), and the structure of the phosphate coating was investigated by scanning electron microscopy (SEM). Introduction Surface preparation of steel parts is one of the main factors influencing the effectiveness and longevity of anti-corrosion coatings. Improperly prepared surfaces prevent the formation of an even conversion layer, which means insufficient paint-layer adhesion, leading to corrosion under the protective layer and continuous peeling-off of the coating [1,3,4,5]. In addition to cleaning, the surfaces must also be modified prior to applying the anti-corrosion layers. A zinc phosphate layer is added onto metal surfaces to ensure good adhesion of the subsequent layers to the raw metal surface and to protect the base metal from water, carbon dioxide, sulfur oxides, ozone, aggressive ions and other harmful substances penetrating through damaged paint coatings [1,4,5]. The performance of a zinc phosphate layer on steel depends on the fraction of the total surface area that it covers. This coverage fraction is affected by the process parameters and chemical composition of the phosphating bath and by the surface morphology of the deposit [6]. Besides chemical surface cleaning, blasting is used for cleaning, surface preparation and surface treatment as well. This process reduces the susceptibility to corrosion and seals porous surfaces. A blasted surface can be a very clean surface providing excellent mechanical adhesion. Media selection plays an important role in effective blasting. Shot blasting is widely used to remove rust, oxides, oils and mill scale, and even welding beads, welding silicate layers and welding seams from the surface of steel parts, structures and even complex assemblies. Its main purpose is to provide surfaces free of adhesion-preventing materials. In addition, increased surface roughness provides good adhesion for the first layer of the coating through the interfacial reactions [2,4,5]. The quality of the phosphate layer is influenced by the material quality [2], the surface roughness, the method of surface preparation, the parameters of the phosphating process (concentration of ingredients, temperature, time, etc.) [1,4,5], and by structural defects and the presence of contaminants during crystallization [7]. The purpose of this work is to investigate the properties of the zinc phosphate conversion layer on blasted steel surfaces with different surface roughness after blasting, under the same technological parameters.
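Since the study hinges on comparing surfaces of different roughness, it may help to recall how the standard profile parameters behind such comparisons are computed. The short sketch below is not part of the paper's methodology and uses synthetic height samples as placeholders; it evaluates Ra, the arithmetic mean deviation from the mean line, and a simplified Rz, the mean peak-to-valley height over five equal segments.

```python
# Sketch: computing Ra and a simplified Rz from a 1D surface height profile.
# The profile values are hypothetical; a real profile would come from the
# light-microscope / profilometer export.
import numpy as np

def roughness_parameters(z, n_segments=5):
    """z: 1D array of surface heights (micrometres) along the evaluation length."""
    z = np.asarray(z, dtype=float)
    z = z - z.mean()                      # reference to the mean line
    ra = np.abs(z).mean()                 # arithmetic mean deviation, Ra
    # Simplified Rz: average peak-to-valley height over n equal segments
    segments = np.array_split(z, n_segments)
    rz = np.mean([seg.max() - seg.min() for seg in segments])
    return ra, rz

# Hypothetical height samples (micrometres) for a blasted and a smooth surface
blasted = 4.0 * np.sin(np.linspace(0, 40, 500)) + np.random.default_rng(0).normal(0, 1.5, 500)
smooth  = 0.3 * np.sin(np.linspace(0, 40, 500)) + np.random.default_rng(1).normal(0, 0.1, 500)

for name, prof in [("blasted", blasted), ("smooth", smooth)]:
    ra, rz = roughness_parameters(prof)
    print(f"{name}: Ra = {ra:.2f} um, Rz = {rz:.2f} um")
```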
Metal substrates preparation As a substrate for zinc-phosphate coatings deposition both sides of a S420MC steel sheet (70 mm × 150 mm × 10 mm) and for comparison a cold rolled steel Q-Panel (102 × 152 × 0,81 mm) were used. Chemical composition of S420MC steel panel by EN 10149-2 (High-strength steels for coldforming, thermomechanically-rolled, normalized): max. 1.6 % manganese, max. 0.12 % carbon, max. 0.50 % silicon, max. 0.025 % phosphorus, max. 0.015 % sulfur. This steel is used for cold-formed components, easy to cut and bend, suitable for welding, and can be easily machined. Due to the favorable mechanical properties this is a commonly used sheet material in different sheet thicknesses in BPW axle and chassis manufacturing. Table 1 lists the chemical composition of samples. Prior to deposition of zinc-phosphate coatings, the S420MC test panel was blasted with steel shots in an automated equipment. One side of the panel was properly blasted ("good side"), while the other wasn't ("not good side"). The two different sides under the same conditions received the surface pretreatment. Pretreatment and zinc phosphating process The tests were done in day-to-day production conditions on a working production line of the company and within the limits of process capability. Free acid and total acid were determined by titrating of 10 ml bath sample against 0. l N NaOH and total alkalinity was determined against 0.1 N H2SO4 using indicators. The classical volumetric titration methods were performed with Schilling burettes, in Erlenmeyer flasks. After shot blasting (Q-Panel without shot blasting) the test panels were pre-degreased at 60 °C temperature for 5 min. in 20 g/l of Gardoclean S5165 and 2 g/l Gardobond Additiv H7375 alkaline degreasing dip bath (pH = 12.7; total alkalinity = 21.5 points). Degreasing was done at 60 °C temperature for 5 min. in 40 g/l of Gardoclean S5165 and 3 g/l Gardobond Additiv H7375 alkaline degreasing dip bath (pH = 11.7; total alkalinity = 23.8 points) and dip rinsed with tap water in three successive cascade flushing baths. Treatment time was 1minute in each bath by stirring the rinsing water, at ambient temperature. The conductivity of the rinsing baths during the progress of the technological process were: 1871 µS/cm and 875 µS/cm and 717 µS/cm. Zinc-phosphate coatings were formed chemically after dip activation with 2 g/l Gardolene V6599 bath (pH = 9,17), by immersion in phosphating bath (pH = 2,84 ; free acidity = 2,0 points; total acidity = 23.4 points; accelerator = 5.5 points) for 4 min, at same temperatures of the nickel-free double cation Zn/Mn phosphating bath (58 °C). It should be noted here that nickel has long been known to significantly improve paint adhesion and corrosion protection. However, nickel compounds are noxious and closely regulated in the effluent stream. Nickel-free processes, therefore, are desirable to satisfy health and environmental demands. NaNO2 -acting as an accelerator -was added to the phosphating bath (5.5 points). Finally, all plates were dip rinsed with tap water (pH > 4,5), three times in a successive cascade flushing bath. The conductivity of the rinsing baths during the progress of the technological process were < 300 μS/cm and < 150 μS/cm and < 50 μS/cm) and finally they were also rinsed by dipping and by spraying in deionized water (conductivity < 20 µS/cm). Treatment time was 1 minute in each bath by stirring the rinsing water, the spraying was done with fresh deionized water at ambient temperature. 
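The bath-control titrations described above report free acid, total acid and total alkalinity in "points". Assuming the common shop-floor convention that one point corresponds to one millilitre of 0.1 N titrant consumed per 10 ml bath sample (an assumption; the paper does not state the conversion), a small helper like the one below can turn the titration readings into points and flag values drifting outside a tolerance window. The tolerance limits shown are hypothetical placeholders, not the production line's actual specification.

```python
# Sketch: converting titration volumes to "points" and checking bath parameters
# against a tolerance window. Conversion convention and limits are assumptions,
# not taken from the paper.
from dataclasses import dataclass

@dataclass
class BathCheck:
    name: str
    measured_ml: float        # ml of 0.1 N titrant per 10 ml bath sample
    low: float                # lower tolerance limit, points (hypothetical)
    high: float               # upper tolerance limit, points (hypothetical)

    @property
    def points(self) -> float:
        # assumed convention: 1 point = 1 ml of 0.1 N titrant per 10 ml sample
        return self.measured_ml

    def in_tolerance(self) -> bool:
        return self.low <= self.points <= self.high

checks = [
    BathCheck("phosphating free acid", measured_ml=2.0, low=1.5, high=2.5),
    BathCheck("phosphating total acid", measured_ml=23.4, low=20.0, high=26.0),
    BathCheck("degreasing total alkalinity", measured_ml=23.8, low=18.0, high=28.0),
]
for c in checks:
    status = "OK" if c.in_tolerance() else "OUT OF RANGE"
    print(f"{c.name}: {c.points:.1f} points ({status})")
```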
Total acidity/alkalinity and free acidity were determined by acid-base titration. All solutions were prepared with chemicals from Chemetall Ltd, according to the technical data sheets, with deionized or tap water. The pretreatments were applied in a dipping process, and all technological parameters were set to the specified mean value within the allowable tolerances. Examination of the appearance after the pretreatment process showed areas of different tones on the "not good" side (figure: 2D and 3D micrographs and surface roughness of the S420MC steel panel on the "not good" and "good" sides). There is a separating boundary between the properly blasted and the improperly blasted surface. Scanning electron micrographs show irregular crystal growth at this border (see Figure 4). Surface and surface coverage measurements The weight per unit area (g/m 2) of the phosphate coatings was determined by chemical dissolution of the layers in a solution of 4 % TEA + 12 % Na2EDTA x 2H2O + 9 % NaOH + 75 % water at 70 °C. The phosphate conversion coating weight of the Q-Panel was 2.65 g/m 2, which corresponds to the regulatory mean value of the line. The 2D and 3D structure of the surfaces was visualized with a KEYENCE digital light microscope Model WHX J20T. Morphology, crystal shape and size, as well as orientation, were examined with a ThermoFisher/FEI Apreo S scanning electron microscope (see Figure 5). The structure of the metal surface, its micro- and macrostructure, is just as important as the material composition. As the surface roughness increases, so does the calculated surface area per coating weight, and better coverage is achieved on steel surfaces owing to reactions taking place in the boundary layer; corrosion resistance in the case of a damaged top coating is thereby improved. Increased surface roughness correlates with a finer structure of the coating, as smooth surfaces react more weakly to phosphating. If a surface contains many indentations and microcracks, the acid attack is amplified during phosphating, leading to stronger layer adhesion but also changing the layer's microstructure. Conclusions The crystallite size and crystal type were found to depend on the surface roughness after blasting. Surfaces properly prepared by shot blasting have a regular and evenly covered microstructure (Figure 5A). Improper or incomplete shot blasting causes irregular grain growth in the corrosion-protection layer, as shown in Fig. 5B, and decreases the fraction of the total surface area covered by the deposit. On the sample plates representing the "ideal" surface to be phosphated, the structure of the zinc phosphate does not match the phosphate layer structure formed on the vehicle-manufacturing steels exhibiting different surface roughness and quality. Irregular crystal growth on work pieces caused by improper shot blasting, i.e. resulting from defective mechanical surface preparation, causes subsequent corrosion problems because the base metal surface is not completely covered by the zinc phosphate crystals. A completely different zinc phosphate crystal structure was deposited onto the surface of the Q-Panel (Figure 5C). When evaluating SEM micrographs of zinc-phosphate conversion coatings on steel
2,286.2
2020-08-26T00:00:00.000
[ "Materials Science" ]
Steric control in the metal – ligand electron transfer of iminopyridine – ytterbocene complexes † b A systematic study of reactions between Cp* 2 Yb(THF) (Cp* = η 5 -C 5 Me 5 , 1 ) and iminopyridine ligands (IPy = 2,6- i Pr 2 C 6 H 3 N v CH(C 5 H 3 N-R), R = H ( 2a ), 6-C 4 H 3 O ( 2b ), 6-C 4 H 3 S ( 2c ), 6-C 6 H 5 ( 2d )) featuring similar electron accepting properties but variable denticity and steric demand, has provided a new example of steric control on the redox chemistry of ytterbocenes. The reaction of the unsubstituted IPy 2a with 1 , either in THF or toluene, gives rise to the paramagnetic species Cp* 2 Yb III (IPy) (cid:129) − ( 3a ) as a result of a formal one-electron oxidation of the Yb II ion along with IPy reduction to a radical-anionic state. The reactions of 1 with substituted iminopyridines 2b – d , bearing aryl or hetero-aryl dangling arms on the 6 position of the pyridine ring occur in a non-coordinating solvent (toluene) only and a ff ord coordination compounds of a formally divalent ytterbium ion, coordinated by neutral IPy ligands Cp* 2 Yb II (IPy) 0 ( 3b – d ). The X-ray di ff raction studies revealed that 2a – c act as bidentate ligands; while the radical-anionic IPy in 3a chelates the Yb III ion with both nitrogens, neutral IPy ligands in 3b and 3c participate in the metal coordination sphere through the pyridine nitrogen and O or S atoms from the furan or thiophene moieties, respectively. Finally, in complex 3d the neutral IPy ligand formally adopts a monodentate coordination mode. However, an agostic interaction between the Yb II ion and an ortho C – H bond of the phenyl ring has been detected. Imino-nitrogens in 3b – d are not involved in the metal coordination. Variable temperature magnetic measurements on 3a are consistent with a multicon fi gurational ground state of the Yb ion and suggest that the largest contribution arises from the 4f 13 -radical con fi guration. For complexes 3b and 3c the data of magnetic measurements are indicative of a Yb II -closed shell ligand electronic distribution. Complex 3d is characterized by a complex magnetic behavior which does not allow for an unambiguous estimation of its electronic structure. The results are rationalized using DFT and CSSCF calculations. Unlike diazabutadiene analogues, 3a does not undergo a solvent mediated metal – ligand electron transfer and remains paramagnetic in THF solution. On the other hand, complexes 3b – d readily react with THF to a ff ord 1 and free IPy 2b – d . Introduction Since the pioneering studies of Cloke and Edelmann 1 in the early 1980s who introduced diimines in organolanthanide chemistry, this field of coordination chemistry has received a large boost due to the diversity of coordination modes offered by this class of ligands along with their intriguing redox properties. 2 The idea to combine redox active diimine ligands 3 and ytterbium ions possessing two stable oxidation states 4 has proven to be particularly fruitful 5 thus leading to the discovery of a series of challenging phenomena such as solvent mediated redox transformations, 5c,6 sterically induced reduction 7 and redox isomerism. 7,8 The course of the reactions between ytterbocenes and diimine ligands has been found to be affected by a number of factors. 
Depending on the nature of cyclopentadienyl-type ligands (Cp = cyclopentadienyl, indenyl, fluorenyl), their binding mode to the Yb II ion ( propensity to Cp-haptotropic rearrangements), 9 and the steric demand of a diimine ligand, the reactions of ytterbocenes with diimines can result in either Yb II /Yb III oxidation, 6,10 formation of new C-C bonds 11 or C-H bond activation at the diamine skeleton. 12 The one-electron oxidation of ytterbocenes by diazabutadienes proved to be a sterically governed redox process. 13 Depending on the steric hindrance of the ytterbium coordination sphere, Yb III complexes containing either radical-anionic diazabutadiene or covalently bonded imino-amido ligands can be formed. Iminopyridine (IPy) ligands belong to the same family of redox active diimine systems but their coordination chemistry with lanthanide ions still remains only moderately investigated 14 and certainly much less studied than that of their saturated aminopyridine (APy) counterparts. 14 In recent years our group has shown how the reaction of variably hindered ytterbocenes [L] 2 Yb(THF) n (L = η 5 -C 13 H 9 (fluorenyl); η 5 -C 9 H 7 (indenyl); η 5 -C 5 Me 5 ; η 5 -C 5 H 4 Me) with a bidentate IPy ligand [2,6-i Pr 2 C 6 H 3 NvCH(C 5 H 4 N)] can lead to a dramatic rearrangement of the metal coordination sphere through unusual reactivity paths together with a permanent change of the ion oxidation state (Fig. 1). The Yb II bis(fluorenyl) complex (η 5 -C 13 H 9 ) 2 Yb(THF) 2 in the presence of an excess of IPy, proceeds through the complete oxidative cleavage of the two η 5 -coordinated fragments, affording the paramagnetic Yb III species coordinated by three iminopyridyl radical-anions (I). 11b Similarly, the Yb II bis (indenyl) complex (η 5 -C 9 H 7 ) 2 Yb(THF) 2 undergoes an unusual NvC bond insertion into a formal η 1 -Yb-C 9 H 7 bond thus leading to the half-sandwich and oxidized compound (II). 11b When bis(cyclopentadienyl) ytterbium complexes Cp 2 Yb(THF) n (Cp = η 5 -C 5 Me 5 , η 5 -C 5 H 4 Me) are reacted with two equivalents of the same IPy a similar oxidative cleavage at one Yb-Cp bond takes place and Yb III species (III) coordinated by two iminopyridyl radical-anions are formed. 11a, 15 For all these reactions, steric factors and the inherent tendency of the η 5 -coordinated ligands to undergo haptotropic rearrangement are claimed to play a fundamental role in the whole transformation. Despite the intriguing reactivity issues observed in the reaction of (η 5 -C 9 H 7 ) 2 Yb(THF) 2 with the bidentate IPy ligand, 6-(hetero)aryl substituted analogues [2,6-i Pr 2 (C 6 H 3 )NvCH (C 5 H 3 N)-6-R′ (R′ = 2-furyl, 2-thiophenyl, phenyl)] behave unexpectedly as neutral (κ 2 -or κ 3 -coordinated) systems by replacing THF molecules from the metal coordination sphere, without affecting the metal oxidation state of the targeted coordinative compounds (IV). 16 The reason for the inhibition of metal-toligand electron transfer in the latter compounds is likely due to the lack of convergence between the ytterbium center and the bulkier substituted IPy ligands as a consequence of a metal ion size decrease in the case of a Yb II /Yb III oxidation. 17 To gain additional insight into the steric regulation of these redox processes and to clarify the role of the steric and electronic properties of ligands bound to the ytterbium ion, hereafter we performed the study of the model ytterbocene complexes Cp* 2 Yb(THF) (Cp* = η 5 -C 5 Me 5 ) with unsubstituted and 6-(hetero)aryl-substituted iminopyridine ligands. 
A series of di-and trivalent ytterbium complexes coordinated by neutral or radical anionic ligands have been synthesized and fully characterized. General considerations and material characterization All experiments were performed under an inert atmosphere, using standard Schlenk-tube and glove-box techniques. After drying over KOH, THF was purified by distillation from sodium/benzophenone ketyl and hexane and toluene were dried by distillation from sodium/triglyme benzophenone ketyl prior to use. (η 5 -C 5 Me 5 ) 2 Yb(THF), 18 IPy 2a 19 and IPy 2b-d 13,16,20 were prepared according to literature procedures. Unless otherwise stated, all commercially available chemicals were used as provided from commercial supplies. 1 H and 13 C { 1 H} NMR spectra were obtained on either a Bruker Avance-II 400 MHz NMR spectrometer or a Bruker DPX 200 MHz NMR spectrometer. Chemical shifts (δ) are reported in ppm relative to tetramethylsilane (TMS), referenced to the chemical shifts of residual solvent resonances ( 1 H and 13 C). IR spectra were recorded as Nujol mulls on a Bruker-Vertex 70 spectrophotometer. The N, C, and H elemental analyses were carried out in the microanalytical laboratory of the IOMC by means of a Carlo Erba Model 1106 elemental analyzer with an accepted tolerance of 0.4 unit on carbon (C), hydrogen (H), and nitrogen (N). Lanthanide metal analysis was carried out by complexometric titration. 21 General procedure for the synthesis of 3a-d In a typical procedure, a toluene solution (20 mL) of (η 5 -C 5 Me 5 ) 2 Yb(THF) (1) (0.50 g, 0.97 mmol) was treated drop- Magnetic characterization Magnetic measurements of crystalline samples of 3a-d were carried out by using an MPMS-XL SQUID magnetometer in the temperature range of 1.8-300 K with an applied magnetic field of 1000 Oe (up to 40 K) and 10 000 Oe (from 40 K to 350 K), to avoid saturation effects. Samples were prepared in a glove box by wrapping them in Teflon and then loaded in the SQUID magnetometer in less than 30″. The susceptibility is evaluated in the low field limit as χ m = M m /H. The raw data have been corrected for diamagnetic contribution of the sample holder, and the intrinsic diamagnetism of the sample, estimated by using Pascal's constant to obtain the paramagnetic susceptibility. X-ray crystallography The X-ray data for 3a-d were collected on Bruker Smart Apex (3b,d), Bruker D8 Quest (3a) and Agilent Xcalibur E (3c) diffractometers (MoK α -radiation, ω-scans technique, λ = 0.71073 Å, T = 100(2) K) using SMART, 22 APEX2 23 and CrysAlis PRO 24 software packages. The structures were solved by direct and dualspace 25 methods and were refined by full-matrix least squares on F 2 for all data using SHELX. 25a SADABS 26 and CrysAlis PRO were used to perform area-detector scaling and absorption corrections. All non-hydrogen atoms were found from Fourier syntheses of electron density and were refined anisotropically. Hydrogen atoms H(10A) in 3b,c and H(3A) in 3d also were found from Fourier syntheses of electron density and were refined isotropically. Other hydrogen atoms in 3a-d were placed in calculated positions and were refined in the "riding" model with U(H) iso = 1.2 U eq of their parent atoms (U(H) iso = 1.5 U eq for methyl groups). 1552992 (3a), 1552993 (3b), 1552994 (3c), 1552995 (3d) † contain the supplementary crystallographic data for this paper. The crystallographic data and structure refinement details are given in Table S1. 
† Computational details The calculations have been carried out with the Gaussian09 software 27 at the DFT level using the hybrid functional B3PW91. 28 Ytterbium atoms were treated with the small-core pseudopotential from the Stuttgart group, 29 that explicitly account for the 4f electrons. Oxygen, carbon and hydrogen atoms have been treated with the all electron 6-31G** Pople basis set. 30 The geometry optimization has been performed without any symmetry constraints taking the geometry obtained from the X-ray diffraction measurement of the product. Analytical calculations of the vibrational frequencies of the optimized geometry were done to confirm that it's a minimum. CASSCF calculations were also carried out based on the SCF orbitals obtained for the triplet state and the active space choice is detailed in the manuscript. Results and discussion When a dark-red THF solution of Cp* 2 Yb(THF) (1) is treated at room temperature with an equimolar amount of IPy ligand 2a, the colour of the reaction mixture changed rapidly to deep brown thus indicating that a reaction takes place. Following the process by 1 H NMR (THF-d 8 , 293 K) spectroscopy proved the rapid oxidation of the metal ion to Yb III and the subsequent formation of a paramagnetic species. This result contrasts with earlier observations dealing with the reactivity of ytterbocenes in the presence of N,N-disubstituted diazabutadienes 7 where only non-coordinating aromatic and aliphatic hydrocarbons 5,6,16 allowed the reaction to take place, whereas no interaction occurred in the presence of coordinating solvents (i.e. THF, py). Hereby, 2a reacts with ytterbocene 1 in THF to give the paramagnetic complex (C 5 Me 5 ) 2 Yb[κ 2 -2,6-i Pr 2 (C 6 H 3 )NCH(C 5 H 4 N)] •− (3a) as a result of a formal oneelectron oxidation of the metal ion and IPy ligand reduction to the radical-anionic state (Scheme 1). In the present case, THFto-IPy ligand displacement at the ytterbium center is likely facilitated by the reduced steric hindrance of IPy 16 compared to the more commonly used and sterically demanding N,N-disubstituted diazabutadiene systems. 7 3a was isolated in 76% yield as dark brown, air-and moisturesensitive, microcrystals. Single crystals suitable for X-ray study were obtained from a concentrated toluene solution of the compound, cooled down to −20°C for days. A perspective view of the 3a structure is given in Fig. 2 along with a selection of the main structural details [bond lengths (Å) and angles (°)]. The structure refinement data are given in Table S1 (see the ESI †). The ytterbium ion is η 5 -coordinated by two Cp* ligands together with the two nitrogen atoms from the IPy framework that formally increase the final complex coordination number to eight (assuming CN 3 assigned for the Cp ligand). The Yb-C bond lengths in 3a [Yb-C mean : 2.651(6) Å; Yb-Cp centre : 2.345(2) Å] are shorter than those measured on both the IPy-free ytterbocene 1 (Yb-C mean 2.69 Å; Yb-Cp centre : 2.41 Å) 31 and the diamagnetic eight-coordinated pyridine adduct (Cp*) 2 Yb(Py) 2 (Yb-C mean 2.74 Å). 32 On the other hand, they are in good accord with Yb-C bond distances measured in related eight-coordinated Yb III complexes of the state-of-the-art {i.e. (η 5 -C 5 Me 4 H) 2 YbI(THF) [Yb-C mean 2.60(4) Å], 33 . All these data taken together largely support the hypothesis of a trivalent oxidation state for the ytterbium ion in 3a as a result of one-electron metal oxidation along with a reduction of the coordinated IPy ligand to the state of a radical-anionic species. 
11b, 15,37 However, all our attempts to detect the paramagnetic radicalanionic iminopyridine ligand in 3a (both in the solid state and in solutionhexane, toluene; 203-293 K) by the EPR technique failed. This outcome is in line with our previous studies where we demonstrated that Yb III complexes coordinated by paramagnetic radical-anionic diimino ligands are EPRsilent. 6,10 In addition, this behavior was not surprising at all given the fast electronic relaxation characterizing Yb III complexes and the even number of unpaired electrons of the resulting species. 38 The temperature dependence of the magnetic moment and of the molar susceptibility of 3a is reported in Fig. 3. A maximum in χ M vs. T data is observed, together with the presence of an unavoidable paramagnetic impurity of Yb III coordinated to diamagnetic ligands, which is responsible for the paramagnetic tail observed at T < 50 K (Fig. 3A). By a fit of the low temperature region using a Curie law this is estimated to be 8.1%. This allowed correction of the data of Fig. 3A by following the procedure reported by Booth and co-workers. 39 The corrected curves shown in Fig. 3B present a maximum in χ M at 170 K along with a temperature independent paramagnetism Scheme 1 Reaction scheme for the generation of the paramagnetic species 3a. contribution of 5.4 × 10 −3 emu mol −1 . Both these values are consistent with a multiconfigurational ground state 39 and the observed μ eff value at the highest measured temperature (4.19 at 350 K) suggests that the largest contribution is from the 4f 13 -radical configuration (expected μ eff = 4.8 for uncoupled spins), in agreement with the results from the X-ray diffraction data. The 1 H NMR spectroscopy is a sensitive tool that allows for determining the magnetism of complexes in solution. The proton NMR spectrum can qualitatively indicate whether the complex is diamagnetic or paramagnetic. The 1 H NMR spectrum of 3a (benzene-d 6 , 293 K) shows a set of broadened signals that are substantially shifted with respect to those of related diamagnetic species (Fig. S1, ESI †) thus giving an additional proof of its paramagnetic nature. The methyl protons of the C 5 Me 5 ring appear as a slightly broadened singlet at 3.41 ppm, in good agreement with the chemical shifts measured for related paramagnetic species containing Yb III ions coordinated by radical-anionic bipyridine, phenanthroline or related ligands. The protons of the radicalanionic iminopyridine ligand give rise to a set of signals in a broad interval from −70 to 80 ppm. Finally, the room-temperature electronic absorption spectra of 3a (recorded in nonane), the radical-anionic potassium derivative [2,6-i Pr 2 (C 6 H 3 )NCH(C 5 H 4 N)] •− K + (recorded in THF) and of the neutral Ipy 2a (recorded in hexane) were put at comparison for the sake of completeness. The spectra are shown in Fig. S3 (see the ESI †). The UV-Vis spectrum of 3a shows two absorption bands at 355 and 455 nm, respectively. While the neutral IPy ligand 2a displays a single absorbance at 340 nm, its potassium radical-anionic derivative [2,6-i Pr 2 (C 6 H 3 )NCH (C 5 H 4 N)] •− K + also displays two main absorbances at 355 and 525 nm, respectively. This comparative analysis is in line with the presence of a radical-anionic iminopyridine ligand in solution and corroborates with the NMR data on the paramagnetic nature of 3a. 
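The corrections quoted above rest on two standard relations: a Curie-law fit of the low-temperature tail to estimate the fraction of Yb III bound to diamagnetic ligands, and the cgs conversion μ_eff = (8 χ_M T)^1/2. The sketch below illustrates both steps on synthetic data; it is not a re-analysis of the published curves, and the correction procedure of Booth and co-workers is more elaborate than this. The free-ion Yb III moment of 4.54 μB is used for the impurity Curie constant.

```python
# Sketch: estimating a Yb(III) paramagnetic-impurity fraction from the
# low-temperature Curie tail and converting molar susceptibility to mu_eff.
# Synthetic data; the published correction (Booth et al.) is more elaborate.
import numpy as np

T = np.linspace(2, 350, 200)                     # K
# Hypothetical "measured" chi_M (emu/mol): constant intrinsic part + impurity Curie tail
chi_intrinsic = 5.4e-3 * np.ones_like(T)         # temperature-independent stand-in
C_Yb3 = 4.54**2 / 8.0                            # Curie constant of free-ion Yb(III), emu K/mol
x_imp_true = 0.081                               # 8.1 % impurity, as in the text
chi_meas = chi_intrinsic + x_imp_true * C_Yb3 / T

# Fit the low-T region (T < 20 K) to C_fit / T to estimate the impurity fraction
low = T < 20
C_fit = np.polyfit(1.0 / T[low], chi_meas[low], 1)[0]   # slope of chi vs 1/T
x_imp = C_fit / C_Yb3
chi_corr = chi_meas - x_imp * C_Yb3 / T

mu_eff = np.sqrt(8.0 * chi_corr * T)             # cgs relation: mu_eff = sqrt(8 chi T)
print(f"estimated impurity fraction: {x_imp*100:.1f} %")
print(f"mu_eff at 350 K (corrected, synthetic data): {mu_eff[-1]:.2f} mu_B")
```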
With the aim of gaining additional insight into the role of steric factors driving the metal-to-ligand electron transfer in the above-mentioned coordinative compounds, we studied the reaction of 1 in the presence of bulkier IPy ligands (2b-d), bearing coordinating or not-coordinating (hetero)aryl fragments attached to the six-position of the pyridine core (Scheme 2). This systematic investigation takes advantage of the previous outcomes of our study of the coordination modes and electronic effects resulting from the reaction of the same class of IPy ligands with the bulkier bis(indenyl) ytterbium complex (η 5 -C 9 H 7 ) 2 Yb II (THF) 2 . 16 In that case, the coordination of 6-(hetero)aryl-substituted IPy ligands occurred with the generation of diamagnetic species as a result of the simple replacement of the two coordinated THF molecules. Unlike the above-mentioned reaction with the bidentate IPy 2a, Cp* 2 Yb(THF) (1) does not react in THF with the more sterically demanding ligands 2b-d. This was preliminary confirmed by the absence of any appreciable color change in the solutions of 1 upon treatment with an equimolecular amount of 2b-d. In addition, monitoring the reaction course over several hours by 1 H NMR (THF-d 8 , 293 K) spectroscopy has showed only signals from unreacted starting materials. On the contrary, the addition of 2a-d to a toluene solution of 1 at room temperature (Scheme 2) resulted in the immediate change of the solution colour from dark-red to brownishblack. After slow concentration of each solution, brownishblack microcrystals of complexes (C 5 Me 5 ) 2 Yb[κ 2 -2,6-i Pr 2 C 6 H 3 NCH(C 5 H 4 N)-6-(C 4 H 3 O)] (3b), (C 5 Me 5 ) 2 Yb[κ 2 -2,6-i Pr 2 C 6 H 3 NCH (C 5 H 4 N)-6-(C 4 H 3 S)] (3c), (C 5 Me 5 ) 2 Yb[κ 1 -2,6-i Pr 2 C 6 H 3 NCH (C 5 H 4 N)-6-Ph] (3d) were obtained in 70, 48 and 32% isolated yields, respectively. Complexes 3b-d were isolated as highly air-and moisture-sensitive solids showing moderate solubility in aromatic and aliphatic hydrocarbons thus hampering their full NMR characterization ( 13 C{ 1 H} NMR spectra). Crystals suitable for single-crystal X-ray diffraction studies were grown by slow concentration of the respective toluene solutions at ambient temperature under a gentle stream of nitrogen. Complex 3b crystallizes as a solvate with one molecule of toluene while crystals of 3c and 3d do not contain any crystallization solvent. ORTEP representations of the three crystal structures are given in Fig. 4-6 along with a selection of the main structural details [bond lengths (Å) and angles (°)]; Table S1 † lists their main crystal data and structural refinement details. At odds with previously reported bis(indenyl)ytterbium/IPy coordination derivatives 16 where ligands 2b and 2c-d behaved as tridentate and bidentate systems, respectively, the reaction of 1 with 2b-d leads to complexes 3b-d featuring different denticity of the coordinated IPys. As shown in Fig. 4 and 5, the potentially tridentate ligands 2b-c coordinate the metal ion as bidentate (κ 2 ) systems involving donor atoms of both heterocycles thus resulting in a formal coordination number of eight for the ytterbium ion, whereas imino nitrogens point away from the metal coordination sphere. In the case of the bidentate ligand 2d, only the pyridine core coordinates the ytterbium ion . Therefore, 2d formally behaves as a monodentate (κ 1 ) system, again keeping away the imino nitrogen from the metal center as for previously described complexes 3b-c. 
The increased steric demand of η 5 -Cp* with respect to indenyl in the ytterbocene complex is largely responsible of the different IPy coordination mode around the metal ion. Although Yb-C bond distances in 3b-d are reasonably longer than those measured in precursor 1 [d(Yb-C mean ) for 1: 2.651(6) Å; for 3b: 2.689(8) Å; for 3c: 2.711(9) Å, for 3d: 2.715(7) Å] as well as in the (Cp*) 2 Yb(bipy) adduct from the literature 40 (Yb-C mean : 2.62 Å), an appreciable shortening is measured with respect to another eight-coordinate ytterbocene/pyridine adduct of the state-of-the-art [d(Yb-C mean ) for (Cp*) 2 Yb(Py) 2 : 2.74 Å]. 32 In spite of close stereo-electronic and redox properties of IPy ligands of this series (2b-d) 16 such a shortening effect is more evident in 3b compared to the other related complexes 3c,d. This trend is also observed for the Yb-N bond distances. Indeed, complex 3b presents a Yb-N bond length [2.535(2) Å] that is noticeably shorter than that measured on the related compounds 3c,d [3c: 2.623(2) Å; 3d: 2.602(2) Å]. The magnetic properties of 3b-d are outlined in Fig. 7 as μ eff vs. T. For compounds 3b and 3c only a weak paramagnetic contribution is observed, likely due to the presence of an Yb III containing impurity. This behavior is in agreement with structural features observed for these systems, which point to an Yb II -closed shell ligand electronic distribution. On the other hand, 3d is characterized by a much more complex magnetic behavior. It presents a μ eff value of 2.4 up to 280 K that is much lower than that expected for a single Yb III -species and a radical anionic ligand, even assuming them to be antiferromagnetically coupled. Such a μ eff value is also completely different from that presumed for a diamagnetic Yb II -closed shell system. On the other hand, no maximum in χ M vs. T, which would be indicative of a multiconfigurational ground state, is observed (see Fig. S17 of the ESI †). Notably, heating 3d from 270 K to 350 K gives rise to a clear-cut increase of the μ eff value to 3.55. Such a variation is irreversible as witnessed by repeating the measurement after cooling down the sample. The value observed at high temperature is consistent with that expected for a strong antiferromagnetically coupled Yb IIIradical system, resulting in a 1 F 3 state due to the combination of the 2 F 7/2 term of Yb III and the 2 S 1/2 term of a radical-anionic ligand. 40 It is noteworthy that the μ eff vs. T profile obtained after the first heating/cooling cycle can be perfectly rescaled, up to 270 K, on the pristine curve (see Fig. S18 of the ESI †). This result suggests that the species formed after heating above 270 K is also responsible for the intermediate μ eff value measured before heating. Such a behavior can be reasonably explained by considering that the process leading to the increased μ eff is already operative at room temperature (since the increase is visible above 270 K). Therefore, transformation already partially occurred when the sample is initially cooled down for measuring its magnetic properties. In other terms, the results of the magnetic investigation coupled with the outcomes of X-ray diffraction suggest that what we are actually measuring is not the structurally characterized form of 3d, but rather a product which evolves from 3d at T > 270 K. In this framework, it is tempting to attribute to 3d a diamagnetic character and to the evolution product a Yb III -radical one. 
However, in the absence of definitive clues or clear-cut experimental evidence, this is a purely speculative hypothesis. Finally, we wish to stress here that a simple redox isomeric transition involving 3d cannot be invoked to explain the observed behaviour, since this should favour a closed shell configuration at high temperature. 41 However, if a structural rearrangement takes place on heating, this might favour the Yb III -radical charge distribution; such a coupled structural/ electronic transition would be in analogy to what is often observed in the case of Co II complexes presenting similar temperature dependence of magnetic moment. 19,42 In contrast to 3a, the 1 H NMR spectra of 3b-d (benzene-d 6 , 293 K) prove their diamagnetic nature in solution. The methyl protons of C 5 Me 5 rings in 3b, 3c and 3d appear as singlets at 1.92, 2.11 and 2.24 ppm respectively, and they are in line with the chemical shifts earlier reported for diamagnetic complexes (C 5 Me 5 ) 2 Yb(L) n with coordinated O-, N-, and P-containing Lewis bases. 43 At the same time, the signals of IPy ligands (2b-d) coordinated to the Yb ions in the 1 H NMR spectra give rise to the expected sets of signals with typical chemical shifts (see the ESI †). Finally, the UV-Vis spectra of 3b-d are consistent with the diamagnetic nature of these complexes in solution. Calculations were carried out in order to determine the electronic nature of complexes 3a,b,d. The computational strategy is the same as that was developed by one of us (LM) to study multireference ground state complexes in bipyridine/ phenanthroline Yb complexes. 39,40,44 In summary, first DFT (B3PW91) geometry optimization using small core Relativistic Effective Core Potentials (RECP) was carried out and in a second step CASSCF calculations were performed. For complex 3a, within the precision of the DFT calculations, the displacement of a THF molecule from Cp* 2 Yb(THF) by the ligand 2a is expected to generate a Yb III complex through an exothermic process (3.8 kcal mol −1 ) whereas the formation of a Yb II species is virtually thermoneutral (0.7 kcal mol −1 ). 45 Due to this very small energy difference between the Yb II and Yb III complexes, CASSCF calculations were carried out. As in previous CASSCF calculations on ytterbium, different active spaces were used. A first one distributing 14 electrons into the 8 orbitals (the 7 4f of the Yb and the π* one of the ligand) was used and leads to similar results to the 8 electrons/5 orbitals (4 4f + π*) one. Finally, a reduced active space of 4 electrons/3 orbitals (2 4f + π*) was tested and gave the same qualitative results as the two others. Therefore, only the results obtained with the last one are discussed here for the sake of simplicity. Two singlet states, a closed shell one (f 14 ) and an open-shell one (multiconfigurational state f 14 + f 13 ), and a triplet state were computed. The open-shell singlet state was found to be the ground state with the triplet state only 4.1 kcal mol −1 higher in energy and the closed-shell singlet is 5.3 kcal mol −1 higher in energy than the ground-state. The open-shell singlet state is formed by 78% of Yb III and 22% of Yb II . This is in line with the magnetic measurement that indicates that complex 3a is exhibiting a multiconfigurational character in the ground state with a major contribution from Yb III . In the same way, calculations were carried out on complex 3b. 
Unlike complex 3a, the coordination of the iminopyridine is not exothermic but rather slightly endothermic (5.8 kcal mol −1 ). The formation of the Yb III complex from the Yb II one is also endothermic by up to 8.9 kcal mol −1 , making this highly unfavorable. To ensure this value, similar CASSCF calculations Fig. 7 μ eff vs. T curve for 3b (blue triangles), 3c (green rhombus) and 3d. For the latter, after the first heating cycle (red and white circles) the sample has been cooled down to 2 K and heated again up to 350 K (red squares). were carried out and the closed-shell singlet [Yb II ] turned out to be the lowest by up to 10.6 kcal mol −1 with respect to the triplet state (3b); the open-shell singlet being even higher in energy at 12.3 kcal mol −1 . This is again in line with the magnetic measurement that indicates that the ground state of 3b is diamagnetic. Finally, the calculations were conducted on the interesting 3d complex. At the DFT level, the coordination of the iminopyridine is found to be even more endothermic (14.6 kcal mol −1 ) and the formation of the Yb III complex is endothermic by 3.3 kcal mol −1 from the Yb II one. Unlike the other cases, it is the coordination that appears to be complicated, it might not be the CASSCF calculations that would yield the answer. Indeed, at this level, the open-shell singlet is also found to be the ground state but with the triplet state almost degenerate (1.2 kcal mol −1 ) and the closed-shell singlet only 5.3 kcal mol −1 higher in energy. With such low energy differences between the states, it is hard to conclude on the nature of the ground state. For a family of metallocene-type Yb III complexes coordinated by radical-anionic diazabutadiene ligands, the existence of a reversible solvent mediated ligand-to-metal electron transfer has been formerly discovered. 5c,6 This phenomenon consists of the displacement of the radical-anionic diazabutadiene ligand by molecules of a coordinating solvent (THF, DME, Py) followed by electron transfer from the diazabutadiene radical anion to the ytterbium ion: as a whole, the process results in the oxidation of the ligand to the neutral diazabutadiene and the reduction of Yb III to Yb II (Scheme 3). In order to check whether this behavior applies to related coordination compounds featuring IPy ligands, the reaction of 3a with THF was carried out. Unlike diazabutadiene congeners, 3a remains paramagnetic in THF-d 8 solution and no redox replacement of the iminopyridine ligand by THF occurs (Scheme 4). In contrast to 3a, the addition of a stoichiometric amount of THF-d 8 to benzene-d 6 solutions of 3b-d resulted in the immediate replacement of the IPy ligands in the Yb coordination sphere by THF. The 1 H NMR spectra of 3b-d recorded in THF-d 8 /benzene-d 6 mixtures have unambiguously demonstrated the generation of (C 5 Me 5 ) 2 Yb(THF) n and free IPy ligands 2b-d (Scheme 4). This result highlights the different reactivity of complexes 3b-d from 3a with respect to the replacement of IPy ligands by THF most likely caused by the different nature of metal-ligand interaction in these complexes. In addition, it demonstrates that metallocene type Yb III complexes coordinated by chelating radical-anionic diazabutadiene and IPy ligands behave differently in the presence of a coordinating solvent, despite their similar nature. 
It can be inferred that such a different behavior between diazabutadienes and IPys is reasonably ascribed to the higher steric demand of the former class of ligands that is claimed to weaken the metal-ligand interaction. In 3a the energy of formation for Cp* 2 Yb II -THF is not sufficient to compensate for the energy loss for the cleavage of Coulombic interaction between the Yb III ion and the radical-anionic iminopyridine ligand. At the same time, the quick IPy displacement by stoichiometric amounts of THF in complexes 3b-d coordinated by a neutral IPy ligand is in line with the energies of coordination bonds in Yb II complexes with N-and O-containing ligands. Conclusions In this paper we have described a new example of sterically controlled metal-ligand electron transfer by reacting (C 5 Me 5 ) 2 Yb(THF) with IPy ligands featuring very similar electron accepting properties but variable denticity and steric Scheme 3 Solvent mediated ligand-to-metal electron transfer on a metallocene Yb III complex coordinated by a radical-anionic diazabutadiene ligand. Scheme 4 Solvent mediated ligand-to-metal electron transfer on coordination compounds 3a-d. Dalton Transactions Paper This demand. The reactions of (C 5 Me 5 ) 2 Yb(THF) with ligands 2a-d proceed with the displacement of the coordinated THF molecule by means of an IPy framework. The observed outcomes highlight an important effect related to the steric demand of the iminopyridine systems on both their coordination mode and on their ability to foster metal-to-ligand electron transfer phenomena. Thus, the less sterically crowded 2a coordinates the metal ion as a bidentate system and a metal-to-ligand electron transfer takes place with the formation of a trivalent ytterbium species coordinated by a radical anionic IPy ligand. With the bulkier 2b and 2c, potentially featuring as tridentate systems, the coordination to the metal ions takes place through the pyridine nitrogen and either O or S atoms of the heteroaryl substituents, only. For these neutral coordinative Yb II -complexes (3b and 3c) the imine nitrogen points away from the metal coordination sphere. For the bidentate 2d ligand, featuring steric hindrance similar to 2b-c, a monodentate coordination to the Yb II ion is accomplished through the pyridine nitrogen only. The reason which blocks metal-toligand electron transfer in compounds 2b-d is likely the impossibility of a close approach between the ytterbium center and the bulkier substituted IPy ligands as a consequence of a metal ion size decrease in the case of a Yb II /Yb III oxidation. We note that while this trend is clearly defined in solution by NMR and in the solid state by X-ray data, magnetic analysis in the solid state confirms it only for 3a-c. The former complex provides a magnetic response that is as expected for a multiconfigurational ground state with the largest contribution from the 4f 13 -radical configuration, while 3b and 3c are diamagnetic and thus consistent with the closed-shell singlet ground state configuration. On the contrary, the magnetic behavior of 3d turned out to be extremely complex, suggesting the instability of its structure on heating above 270 K. 
Unlike indenyl ligands, cyclopentadienyl analogues are not prone to haptotropic rearrangements, and the use of Cp*2Yb(THF) provided us with an opportunity to demonstrate clearly and unambiguously that the bulkiness and electronic properties of the aromatic carbocyclic ligands coordinated to the Yb(II) ion are also decisive for the occurrence of intramolecular metal-to-ligand electron transfer and for the type of coordination adopted by IPy ligands. The bidentate coordination of the less sterically demanding IPy 2a allows for a metal-to-ligand electron transfer, resulting in a shortening of the Yb-Cp* bonds and the formation of rather short Yb-N bonds (Yb(1)-N(1) 2.325(2) Å). The introduction of bulky substituents into the Py ring excludes the possibility of such structural changes associated with oxidation and blocks the metal-to-ligand electron transfer. Indeed, the resulting IPy complexes 3b-d feature Yb-Cp* and Yb-N distances characteristic of Yb(II) species. Conflicts of interest There are no conflicts to declare.
8,288
2018-01-30T00:00:00.000
[ "Chemistry" ]
Antibacterial, Antioxidant Activities, GC-Mass Characterization, and Cyto/Genotoxicity Effect of Green Synthesis of Silver Nanoparticles Using Latex of Cynanchum acutum L Green synthesis of nanoparticles is receiving more attention these days since it is simple to use and prepare, uses fewer harsh chemicals and chemical reactions, and is environmentally benign. A novel strategy aims to recycle poisonous plant chemicals and use them as natural stabilizing capping agents for nanoparticles. In this investigation, silver nanoparticles loaded with latex from Cynanchum acutum L. (Cy-AgNPs) were examined using a transmission electron microscope, FT-IR spectroscopy, and UV-visible spectroscopy. Additionally, using Vicia faba as a model test plant, the genotoxicity and cytotoxicity effects of crude latex and various concentrations of Cy-AgNPs were studied. The majority of the particles were spherical in shape. The highest antioxidant activity using DPPH was illustrated for CAgNPs (25 mg/L) (70.26 ± 1.32%) and decreased with increased concentrations of Cy-AGNPs. Antibacterial activity for all treatments was determined showing that the highest antibacterial activity was for Cy-AgNPs (50 mg/L) with inhibition zone 24 ± 0.014 mm against Bacillus subtilis, 19 ± 0.12 mm against Escherichia coli, and 23 ± 0.015 against Staphylococcus aureus. For phytochemical analysis, the highest levels of secondary metabolites from phenolic content, flavonoids, tannins, and alkaloids, were found in Cy-AgNPs (25 mg/L). Vicia faba treated with Cy-AgNPs- (25 mg/L) displayed the highest mitotic index (MI%) value of 9.08% compared to other Cy-AgNP concentrations (50–100 mg/L) and C. acutum crude latex concentrations (3%). To detect cytotoxicity, a variety of chromosomal abnormalities were used, including micronuclei at interphase, disturbed at metaphase and anaphase, chromosomal stickiness, bridges, and laggards. The concentration of Cy-AgNPs (25 mg/L) had the lowest level of chromosomal aberrations, with a value of 23.41% versus 20.81% for the control. Proteins from seeds treated with V. faba produced sixteen bands on SDS-PAGE, comprising ten monomorphic bands and six polymorphic bands, for a total percentage of polymorphism of 37.5%. Eight ISSR primers were employed to generate a total of 79 bands, 56 of which were polymorphic and 23 of which were common. Primer ISSR 14 has the highest level of polymorphism (92.86%), according to the data. Using biochemical SDS-PAGE and ISSR molecular markers, Cy-AgNPs (25 mg/L) showed the highest percentage of genomic template stability (GTS%), with values of 80% and 51.28%, respectively. The findings of this work suggest employing CyAgNPs (25 mg/L) in pharmaceutical purposes due to its highest content of bioactive compounds and lowest concentration of chromosomal abnormalities. Introduction The origin of the word nano is the Greek noun "nano", meaning "dwarf". Thus, nanoparticles are considered to be the primitive form of structures with sizes in the nm range. Any collection of atoms bonded together with a structural radius of 1-100 nm can be considered as a nanoparticle [1,2]. Nanoparticles display completely new or enhanced properties related to particular characteristics of size, distribution, and morphology [3,4]. There are several methods for nanoparticles to be synthesized: physical, chemical, and biological methods [5]. 
The weakness of using physical and chemical approaches for nanoparticles production is related to the high costs and also requiring of hazardous chemicals, so a risk of toxicity to the environment will increase and the synthesized nanoparticles are thought to be harmful [6]. To avoid utilization of harmful chemicals and eradicate the production of undesirable or baneful products, attention was turned to improve a clean, stable, benign and environment-friendly green strategy to synthesize nanoparticles [7,8]. Synthesizing of nanoparticles through plants is a relatively valuable and more profitable manner competing with using of other biological identities [9,10], Phenols, alkaloids, tannins, flavonoids, and saponins, among others, are examples of reagents that act as reductants and stabilizers during the synthesis of nanoparticles that are obtained from plant extracts [11,12]. In addition to other plants, the Cynanchum genus' latex and leaf extract were employed to create silver nanoparticles (AgNPs) with antioxidant, cytotoxic, and anti-Gram-positive and anti-Gram-negative bacterial activity [13,14]. On the authority of the World Health organization (WHO), as many as 80% of the world's people trust in plant traditional medicine for their essential healthcare requirements [15], mainly based on healing with medicinal plants [16]. Medicinal plants can be defined as any plant comprising special compounds that can be used for therapeutic aspirations and drugs production in one or more of its organs [17]. Latex-producing plants have been reported as a valuable medical supply in several countries due to their representative latex constituents [18]. Although herbal medicines have great benefits, there are many complications such as possibility of reducing bioavailability and little oral immersion. Nanotechnology is the promising way to overcome these shortages as nanoparticles may enhance transferring of herbal drugs for better treatment [19]. The development of nanoparticles from medicinal plants gives great chances for the enhancement of therapeutic treatments [20]. Latex is a liquid with a milky feature involving very small droplets of organic matter scattered in an aqueous medium, and so it is considered as a natural colloidal suspension [18]. Laticifers are the reservoirs of plant latex [21,22]; as latexes are established to have a defensive purpose in plants, they may have strong antimicrobial activity and thus plants can provide a good source of antimicrobial compounds [23]. The bioactive chemicals in latex showed different biological activities such as antiproliferative, vasodilatory, antimicrobial, antiparasitic [24], proteolytic [25], insecticidal [26], anti-inflammatory [27], antioxidant [28], and anticancer activities [29]. Cynanchum acutum L. is a latex-producing plant with high medical importance belonging to the family Asclepiadaceae. The medical importance behind several other application prospects of Cynanchum species allows it to serve as an important taxonomic group in the Asclepiadaceae family [30]. Crude extracts of several parts from C. acutum are supposed to be useful for the treatment of ulcers, representing a functional anti-ulcer agent [31]. In addition, Estakhr et al., [32] confirmed that C. acutum ethanol extract (200 mg/kg) exhibits anti-inflammatory actions which are related to the dose. The antimicrobial and anti-inflammatory effects of C. acutum were also indicated [33]. In addition, several phar-maceutically essential compounds have been identified from C. 
acutum seeds and they were supposed to have been used as sources of new and useful anticancer chemical entities [34]. Several metals have been used to synthesize nanomaterials, such as copper, zinc, titanium [35], magnesium, gold [36], and silver, for biological activities; in addition, alginate is a polysaccharide used as catalyst support [37]. Specifically, silver nanoparticles have proved to be most effective due to their specific characteristics of chemical stability, excellent conductivity, and most importantly, antibacterial, antiviral, antifungal, and anti-inflammatory features [38]. It was demonstrated that the great promise of silver nanoparticles does not prevent the occurrence of unknown risks which have not been properly estimated prior to their huge industrialized employment [39]. Proposed toxicological effects of silver nanoparticles result from the numerous ways of exposure such as domestic wastewater and chemical manufacturing, or during environment remediation efforts and crop improvement [40], in addition to the ability of AgNPs to penetrate the systemic circulation and reach several organs [41]. The negative effect of silver nanoparticles has been confirmed in several research studies such as their pathological effect in the liver by altering of liver morphology [42,43] and inflammatory effect [44]. Further, it was demonstrated that AgNPs can disturb kidney function and increase creatinine levels [45,46]. The climbing vine C. acutum is indigenous to Asia, Africa, and Europe. It is a plant that frequently grows in Egypt and is referred to by locals as olliq, modeid, or libbein. Insecticidal, anti-diabetic, antioxidant, antibacterial, anti-cancer, anti-inflammatory, analgesic, and antipyretic properties have been attributed to the alcoholic extract of C. acutum leaves [47,48]. On the other hand, the extensive exposure to nanoparticles which proceeds by the way of water, nutrition, cosmetics, medications, and drug delivery devices can lead to a broad variety of toxicological results [49]. In several investigations, the importance of plant latex extract in stabilizing the biogenic particles and reducing metal ions to nanoelements was underlined. Using latex extract, nitrate (AgNO3) was reduced to AgNPs having antibacterial, antioxidant, and anticancer properties [50,51]. The main objective of the current research was to determine antioxidant activity, phytochemical analysis, and antibacterial activity of different concentrations of Cy-AgNPs compared to crude latex. In addition, the lowest cytotoxic and genotoxic concentration of Cy-AgNPs able to inhibit bacterial growth was estimated. Furthermore, this work aimed to study genomic template stability using SDS-PAGE and ISSR molecular markers for different concentrations of Cy-AgNPs compared to crude latex. Results 2.1. Characterization of the Synthesized Silver Nanoparticles 2.1.1. UV/Visible Spectroscopy UV-Visible spectroscopy was used to approve the reduction process of aqueous extracts by silver ions and the formation of silver nanoparticles. Silver nanoparticles started to be synthesized from C. acutum latex extract rapidly after 35 min of incubation. Under UV-vis spectrocopy, silver nanoparticles expressed the absorption band at λ = 432 nm, which indicates that the particle size of Cy-AgNPs was less than 100 nm [52] (Figure 1). 
Transmission Electron Microscope Analysis (TEM) Transmission Electron Microscopy (TEM) is a critical characterization technique for imaging nanomaterials to obtain quantitative estimation of particle size, size distribution, and morphology. Figure 2 showed TEM measurements of the synthesized nanoparticles from C. acutum latex and show the shape and size of the AgNPs. The greater number of the particles was spherical in shape; rare were irregular silver nanoparticles and their average size was 14.2 ± 0.84 nm. [54]. The observed bands at 673.32 cm −1 for C. acutum latex and AgNPs are due to C-H bands (aromatic) [54]. Phytochemical Analysis Bioactive components of C. acutum Latex and its different nanoparticle concentrations, such as total phenolic content, total flavonoid content, tannin content, and total alkaloid content, were estimated in Figure 4. The Cy-AgNPs (25 mg/L) had the highest content of bioactive compounds with values of 24.24 mg/g DW for phenolic content, 12.86 mg/g DW for flavonoid content, 11.53 mg/g DW for tannin content, and 75.77 mg/g DW for alkaloid content. With increasing the concentration of Cy-AgNPs, bioactive compounds decreased. GC-MS Composition of C. acutum Latex The GC-MS analysis results of C. acutum latex showed bioactive compounds as listed below in Table 1. In particular, lupeol, hexadecanoic acid, neophytadiene, octadecanoic acid, and phytol showed the highest percentages of constituents present in C. acutum latex, which were 15.36%, 10.72%, 9.15%, 8.78%, and 6.51% respectively. The chart of GC Mass for latex of C. acutum is presented in Figure 6. Antioxidant Activity (DPPH Scavenging Capacity (%)) The antioxidant activity for C. acutum crude latex and different concentrations from Cy-AgNPs (25, 50 and 100 mg/L) were estimated and illustrated in Figure 7. The highest activity of DPPH was presented in Cy-AgNPS (25 mg/L) with value 70.26 ± 1.32% followed by C. acutum crude latex (Cy 3%) with value 55.43 ± 1.76%. Activity of DPPH decreased with increasing the concentration of Cy-AgNPs, where the lowest activity was obtained for Cy-AgNPs (100 mg/L) with value 43.76 ± 1.02%. Antioxidant activity of C. acutum crude latex (3%) and its different AgNP concentrations using DPPH. Bars with different letters indicate significant differences between treatments at p ≤ 0.05. Data are expressed as the mean of three replicates ± SDs. Antibacterial Activity Different Cy-AgNP concentrations (25,50, and 100 mg/L) were tested for their in vitro antibacterial effects against bacterial strains (Bacillus subtilis, Escherichia coli, and Staphylococcus aureus). Figure 8 shows the inhibition zones of different Cy-AgNP concentrations compared to 10 µg gentamicin standard antibiotics as the positive control and the untreated experimental control. After 24 days of incubation at 28 ± 2 • C, the highest inhibition zone was 24 ± 0.014 mm for 50 mg/L Cy-AgNPs against B. subtilis compared to gentamicin (22 ± 0.15). The highest inhibition zones against E. coli and S. aureus were presented for 50 mg/L Cy-AgNPs with values 19 ± 0.12 mm and 23 ± 0.015 mm, respectively. Cytotoxic Effect Cytological effects of 3% C. acutum latex and its different concentrations of silver nanoparticles on the mitotic cell division of Vicia faba root tips. The cytotoxic effect was detected at mitotic indices (MI %), phases indices (PI %), and total abnormalities (Tab %) levels and illustrated in Table 2. The highest value of (MI %) was 9.08% at 25 ppm which, was the nearest value to MI% of control (10.70%). 
Compared to 3% crude latex, the MI% value was 7.98%, lower than 25mg/L and higher than 50 mg/L of Cy-AgNPs. Generally, Cy-AgNPs (100 mg/L) showed the lowest mitotic index value (4.04%). Crude latex treatment showed the value of chromosomal aberrations (25.96%). On the opposite side, 25 mg/L of Cy-AgNPs showed a significant decrease in chromosomal aberration concentrations (23.41%) compared to control and related directly with concentration, where with increasing nanoparticle concentration, the chromosomal aberrations percentage increased. The highest percentages of abnormalities were recorded at 100 mg/L (68.14%) and 50 mg/L (66.73%), compared to the control. Types of chromosomal aberration appeared in treated Vicia faba seeds are given in Figures 9 and 10. The micronucleus was recorded at interphase for all treatments and binucleated cells were observed only for the 50 mg/L Cy-AgNPs treatment. At metaphase the most common abnormalities were expressed as stickiness, non-congression, two groups, oblique, chromosome ring, fragmentation, star-metaphase, and disturbed metaphase. Late separation, bridge, and disturbed were recorded at anaphase stage. At telophase stage, bridge, diagonal, late separation, and disturbed were reported. Biochemical Study Using Seed Protein Profile Electrophoresis of the Treated Vicia Faba Seeds The protein profile in Figure 11 represents the banding patterns of the SDS-PAGE gel of treated V. faba seeds with 3% C. acutum latex and different concentrations of Cy-AgNPs. In total, sixteen bands are distinguished from the scanning of the seed protein gel of the treated V. faba seeds ranging between 5 and 100 KDa. All treatments caused disappearance of a band with molecular weight 17 KDa compared with the control. A band of molecular weight 19 KDa disappeared only from treated V. faba with Cy-AgNPs (100 mg/L); it can be used as a negative marker for this treatment. From the total thirteen bands there are ten monomorphic bands and six polymorphic bands, resulting in 37.5% polymorphism percentage among control and different treatments. The polymorphic bands divided into one unique and five non-unique bands as shown in Table 3. Molecular Analysis Using ISSR Marker ISSR analysis was operated to identify DNA alterations produced in V. faba cells treated with varied concentrations of Cy-AgNPs (25, 50 and 100 mg/L) and 3% latex relating to untreated sample (control). Eight ISSR primers were used and yielded banding profiles are illustrated in Figure 12. The highest percentage of polymorphism was recorded for ISSR 14 primer (92.86%) with thirteen polymorphic bands and the lowest was for ISSR 12 (30%) with seven monomorphic bands and three polymorphic bands (Table 4). Genomic Template Stability Percentage of genomic template stability as an indicator for the changes in SDS-PAGE and ISSR was calculated and presented in Tables 3 and 5. For SDS-PAGE, the highest percentage of genomic stability was recorded for V. faba treated with 25 mg/L Cy-AgNPs (80%) compared to control (100%). The percentage of genetic template stability was decreased by increasing the concentration of Cy-AgNPs; the percentage of GTS of treated V. faba with Cy-AgNPs (50 mg/L) showed genetic template stability percentage (73.33%) similar to GTS% for 3% C. acutum crude latex, where the lowest percentage of GTS recorded in Cy-AgNPs (100 mg/L) was 60% compared to control (100%). 
According to the molecular results in Table 7, the maximum percentage of polymorphic bands was 35 bands for the treatment with 100 mg/L Cy-AgNPs and the minimum 29 bands for crude latex 3% treatment. Percentage of genetic template stability (GTS) showed highest value in Cy-AgNPs (25 mg/L) was 51.28% compared to control (100%); with increasing concentration of Cy-AgNPs, the percentage of GTS decreased compared to the control to 15.38% and 10.26% for 50 mg/L Cy-AgNPs and 100 mg/L Cy-AgNPs, respectively. The treatment with 3% latex showed genomic stability was 25.64% compared to control. Discussion The green synthesis of silver nanoparticles depends on the reduction of silver ions by phytochemicals as the primary step in the generation of nanoparticles, and these phytochemicals also play a vital role in stabilizing and fixing the shape and size of the synthesized nanoparticles [55,56]. Silver nanoparticles started to be synthesized from C. acutum latex extract rapidly after 35 min of incubation, similar to the results of synthesized nanoparticles from addition of Ficus sycomorous latex to AgNo 3 [57]. UV-visible spectrophotometry is a useful technique that allows direct recognition and characterization of silver nanoparticles. Strong detected absorbance in the range 400-500 nm band known as surface plasmon resonance (SPR) resulted from the interaction between light and mobile surface electrons of silver nanoparticles [58,59]. Especially, it was supposed that recording of an absorbance band at the range of 400 nm to 450 nm represented an indicator to prove the reduction process of Ag + to metallic Ag 0 [60][61][62]. Prepared silver nanoparticles using C. acutum latex showed a plasmon resonance band at 432 nm similar to the green synthesized AgNPs using blackberry fruit extract which showed a broad absorption peak at λ = 435 nm [63]. Through Transmission Electron Microscopy (TEM), spherical silver nanoparticles with few irregular shapes were noticed and it was demonstrated that variability of the shape and size of nanoparticles synthesized through green approaches is very accepted [64,65]. The nanoparticles average size was 14.2 ± 0.84 nm. Thus, the C. acutum latex extract as a reductant yielded small AgNPs. This result is in parallel with green synthesized silver nanoparticles from Coriandrum sativum seed extract, which were in an average range of 13.09 nm [66]. The same outcome has been reported when Citrullus lanatus fruit extract was used to synthesize AgNPs with an average diameter of 17.96 nm [67]. The size and shape of nanoparticles play a critical role to be used in biotechnological and biomedical applications, and it was supposed that a smaller size is more preferred than a bigger size [68]. Dakal et al. [69] demonstrated that silver nanoparticles of spherical shape are characterized by better antimicrobial effect as it has higher surface to volume ratio to interfere with the cell walls of pathogens. The main feature of FTIR is to give an overview about the biochemical components without any disturbance in the biological sample [70]. The great matching between each latex spectrum and the AgNP spectrum of the same plant with a decrease in intensity and a slight shift in the position of peaks indicates the role of biomolecules in the formation and stabilization of silver nanoparticles. Biomolecules such as flavonoids, ketones, aldehydes, tannins, carboxylic acids, phenols, and proteins of the plant extracts are responsible for the production of AgNPs. 
The detected functional groups such as O-H and = C-H play a critical role in the reduction of silver ions [71]. It was reported that biological components interact with metal salts and mediate reduction processes of these functional groups [72]. The proteins could most possibly form a coat covering metal NPs (i.e., capping of AgNPs) for the prevention of agglomeration of the particles and stabilizing in the medium [73]. The detection of plant latex biomolecules in the biosynthesis of silver nanoparticles including OH and CO groups confirms their vital role in reduction and stabilization of NPs [74]. The antibacterial activity of Cy-AgNPs concentrations was determined against Bacillus subtilis, Escherichia coli, and Staphylococcus aureus. The highest activity with highest inhibition zone was recorded for Cy-AgNPs (50%). This effect against bacterial strains may be due to the antibacterial compounds binding to bacterial DNA after entering the inner cells through the membrane, according to a described mechanism of how the antibacterial peptides inhibit or destroy bacteria. This result is similar to research revealing that Canarium species' latex was capable of acting as an antioxidant, an antibiotic, an anti-inflammatory, and a blood sugar regulator in addition to these other functions [75]. Results revealed from GC-analysis showed that C. acutum latex had highest content of lupeol, hexadecanoic acid, neophytadiene, octadecanoic acid, and phytol. Lupeol in many papers showed the highest antioxidant, antimicrobial, antihypoglycemic, and antitumor activity [76]. Hexadecanoic acid was present in C. acutum latex and it was also found in many plants such as Scutellaria diffusa aerial portion (30%), Lycium chinense fruits (62.89%), and Prunella vulgaris L. flowers (70.0%) [77]. The presence of n-alkanes such as n-tetradecane (tetradecanoic acid), n-hexadecane (hexadecanoic acid), n-nonadecane, neicosane, and n-octadecane was found in the studied latex. He [78] claimed that some alkanes have effective antibacterial properties, particularly against Escherichia coli and Staphylococcus aureus. Plants are known to be rich in a large number of phytochemicals, which could be purified and used to cure some types of health-related diseases in addition to their nutritional value for producing dietary supplements and nutrients [79]. Evaluation of the phytochemical constituents of a medicinal plant could be considered as the most significant first step in the studies of medicinal plants [80] as it will allow great knowledge about the functional groups which enhance their medicinal properties [81]. All bioactive compounds from total phenolic, alkaloids, flavonoids and tannin content were found in Cy-AgNPs (25 mg/L) compared to C. acutum crude latex because NPs may differ from the bulk material and they can have improved bioactive features based on their sizes, shape, and structure [82]. In addition, NPs can induce reactive oxygen species (ROS) and secondary signaling messengers that lead to transcription regulation in plant secondary metabolism [83]. ROS and calcium ions (Ca2+) are important second messengers leading to the up-regulation of transcriptional regulators of secondary metabolites [84]. Estimation of cytotoxicity and genotoxicity has been recommended from ISO standards 10993-3 [85] in addition to 10993-5 [86] as an essential part of the evaluation process. Plants have been used to indicate environmental mutations and also demonstrate genotoxic agents [87]. 
Plant bioassays are preferred for being more simple, quick, efficient, and inexpensive. In addition, mutation behavior of plant cells is correlative to human and animal cells [88]. Plant models are approved to be perfect bioassays to estimate the probable genotoxicity of nanomaterials, being highly susceptible to nanotoxicity and possibly exposed to NPs by several ways such as soil, water, and air [89]. Cytotoxicity of silver nanoparticles synthesized using C. acutum latex was expressed through mitotic index value. The mitotic index was used as an indicator to estimate cell division frequency and has been a guideline to detect the cytotoxic effect of different agents [90,91]. By increasing silver nanoparticles' concentration, the mitotic index was decreased. The present conclusion was in agreement with Kumari et al. [92] and Patlolla et al. [39]. It was suggested that the interference effect of highest concentration of AgNPs on the mitotic activity resulted from a delaying of cells to enter S phase (DNA synthesis) and stoppage of G2 phase; by increasing toxicant treatment, it causes cell death [93,94], which results from high concentration of Cy-AgNPs (100 mg/L). Sobieh et al. [95] suggested that the decreasing of MI was the impact of nano silver, resulting from the effect of the test agent on the growth frequency by the ability to reduce or close off the construction of metabolites required for a normal sequence of mitosis. It was clearly observed that 3% crude latex treatment leads to an arrest of metaphase stage and abortion of anaphase and telophase stages. Metaphase arrest results from the incorrect attachment of the chromosomes to the spindle, causing inactivation of the anaphase promoting complex (APC), thus delaying the separation of sister chromatids and arresting the cell at metaphase stage and allowing anaphase to be omitted [96]. When comparing different concentrations of silver nanoparticles, anaphase and telophase stage percentages decrease with increasing concentration. The increase in metaphase index coupled with decrease in anaphase and telophase indices was related to spindle disturbance and it was supposed that the highest concentration of AgNPs had a considerable impact on spindle and therefore metaphase/anaphase transition [97]. Chromosome aberration percentage was found in the lowest concentration of Cy-AgNPS (25 mg/L) compared to Cynanchum crude latex; with increasing the concentration of Cy-AgNPs, the chromosome aberrations increased. This result showed the lowest concentration of Cy-AgNPs increased bioactive components to a limit, and with increasing the concentrations of nanoparticles, bioactive components decreased. So, chromosome aberrations may be decreased for the lowest concentrations of Cy-AgNPs. Any change in the structure of chromosome is expressed as a chromosomal aberration. There are several ways to breed changes in chromosome structure such as DNA cracking, blockage of synthesis and replication of DNA [98]. Results of this study showed that the chromosome abnormalities were indicated in all treatments. Latex treatment showed the lowest value of CA. On the other hand, it showed a significant increase with increasing AgNP concentrations. Higher concentrations of AgNPs and AlO 2 NPs increased the percentage of chromosomal aberrations in Allium sativum root tips compared to the control [99]. 
The highly frequency of mitotic abnormalities may be associated with the effects of nanosilver on mitotic spindles which change the position and coordination of chromosomes at several phases of the cell cycle; silver nanoparticles also fuse the chromatin fibers, which may be related to the chromosomes' forming stickiness and performing breaks that cause the loss of some chromosomal fragments [100]. The cytotoxic behavior of AgNPs and their ability to increase damages at chromosomes was previously shown in experiments [101]. Micronuclei were observed at interphase stage, which may be referring to spindle fibers' malformation [102] and it was observed previously as a genotoxic effect of nano-silver [103,104]. Microscopic examinations revealed that stickiness was a major abnormality observed at metaphase stage. Sticky chromosomes were highly recorded as a genotoxic effect of AgNPs in green pea root tips [105]. The ability of silver nanoparticles to enter through the plant structure and interrupt the chemical composition of the internal components affects cell division and damages it. The toxic action of nanoparticles can be described in two different ways. The first is chemical toxicity depending on the chemical composition and ability to excrete toxic ions; secondly, tension may result from the surface, size, and/or shape of the particles [106]. SDS-PAGE was previously used in several studies to estimate the reflection of environmental stress on protein profiles [107,108]. Vannini et al., [109] indicated that some proteins affected by AgNP exposure which make protein profiles a good choice for comprehensive studies aiming to explain the molecular mechanisms highlighting the effect of AgNPs on plants. Protein profiles of treated seeds of Vicia faba plants showed a great variation regarding the untreated control. Generally, any alteration in protein bands between treated sample and control including disappearance of some bands may be due to the mutational potential within the regulative genes that interrupt or delay transcription [110]. DNA fingerprinting provides several effective biomarker assays in the evaluation of genotoxicity [98,99]. The inter-simple sequence repeats (ISSRs) technique is considered to be the simplest and widely used marker among the polymerase chain reaction (PCR)-based molecular techniques [111]. ISSRs are DNA-based markers based on detection of polymorphisms in inter-microsatellite loci [112]. ISSRs were successfully used to estimate the effect of heavy metals in Hordeum vulgare and Pistia stratiotes in terms of DNA [113,114]. Assessment of genomic template stability (GTS) has been used to investigate many several types of DNA destruction and mutations in animals, plants and bacterial cells [115]. GTS has been calculated as a qualitative measurement to explore genotoxicity of silver nanoparticles [116,117] and zinc nanoparticles [118]. The obtained data showed changes in band pattern by variation in the number of newly appeared bands or loss of normal bands and band intensity (increase or decrease in the intensity of amplified bands). Genomic template stability using SDS-PAGE recorded the highest for the treatment with 3% latex and the lowest for 100 mg/L silver nanoparticles; a similar observation was detected using ISSRs. Genomic template stability is connected to the frequency of DNA destruction and also the capacity of DNA to recover [119]. GTS percentage was indicated to be correlated with variations in other criteria [120]. 
The present results revealed that GTS% values are compatible with cytological results. All studied parameters indicate that cytotoxic and genotoxic effects of silver nanoparticles are higher than those of crude latex and that Cy-AgNPs are suitable for use at their lowest concentration. Plant Material Milky latex of C. acutum L. was collected from Mansoura University campus, Dakahlia Governorate early in the morning. The green stems were split and the white milky latex was collected in sterile bottles ( Figure 13). The 3% aqueous solution of latex was prepared using distilled deionized water and stored in a freezer maintained at −4 • C until use. Green Synthesis of Silver-Latex Nanoparticles In a conical flask, 10 mL of latex (3%) was added and heated at 60 • C with continuous stirring for about 15 min using a water bath. Separately, 50 mL of AgNO 3 solution (1 mM) was heated at 60 • C also with continuous stirring for 15 min in a water bath. Secondly, latex solution was added to AgNO 3 solution and heated at 80 • C for 30 to 45 min and silver-latex nanoparticles were obtained gradually [57,121]. Characterization of Silver-Latex Nanoparticles UV-visible spectral analysis was performed by detecting of the optical density (OD) using a "T80" UV/VIS spectrometer (Bruker Corporation, Billerica, MA, USA). Measurements were performed at room temperature between 200 and 800 nm ranges. The baseline was established by using silver nitrate (1 mM) as a blank. Transmission electron microscopy (JEOL JEM-2100 instrument, (JEOL Ltd., Tokyo, Japan)) was utilized to explore the morphology and size of silver nanoparticles. The sample was equipped by bringing a drop of them on a carbon-coated copper grid and using a lamp to dry it. Fourier transform infrared (FTIR) spectroscopy measurements were used to confirm the AgNPs synthesis and also to estimate the possible bioactive components in the plant latex that enhance the reduction of the Ag + ions and play roles in stabilization of the synthesized nanoparticles [122]. Both crude latex and silver nanoparticle samples were ground to dry semisolid form and mixed with Kbr and analyzed using a Nicolet TM iS TM 10 FTIR spectrometer (Thermo Scientific, Inc., Waltham, MA, USA). The results were detected in the range of 4000-400 cm −1 at a resolution of 8 cm −1 at 25 • C. Phytochemical Analysis 4.4.1. Total phenolic Contents The total phenol components were evaluated using the Folin-Ciocalteu procedure improved by Wolfe et al. [123] and Issa et al. [124] (that included using of gallic acid as a standard). The total phenolic constituents in the latex samples were quantitated as equivalents in milligrams of gallic acid/dried plant latex extract in grams concerning the standard curve (y = 0.0062x, r 2 = 0.987). Total Flavonoid Contents Flavonoids in the studied taxa's latex were valued by a colorimetric estimation using aluminum chloride [125] and catechin as a standard. The calculated values of flavonoid constituents were quantified as equivalents of catechin in milligram per dried latex samples in grams regarding the standard curve (y = 0.0028 x, r 2 = 0.988). Total Tannin Contents Estimation of the total tannin components in plant latex was performed using a vanillin-hydrochloride assay [126,127] and the resulting values of the samples were quantified as equivalents of tannic acid in grams/gram dry sample. Total Alkaloid Contents Fifty mL of 10% acetic acid in ethanol was added to 1 g of the sample and covered, then allowed to stand for 4 h. 
After that, it was filtered and the sample was concentrated in a water bath. Concentrated ammonium hydroxide was then added on top wisely to the sample till the precipitation was finished. The solution was allowed to settle and the precipitate obtained, then it was washed using diluted ammonium hydroxide. Finally, it was filtered and dried to a constant weight [128] Acid digestion About 0.4 gm of each plant latex was taken to be digested using 8 mL of concentrated sulfuric acid in the presence of (2.14 gm) digestion mixture [1 kg potassium sulphate and 60 gm of mercuric oxide (red)] [129]. b. Atomic absorption spectrophotometer analysis The prepared aliquot mixtures were used to estimate the concentration of cadmium (Cd), cobalt (Co), cobber (Cu) and iron (Fe) using an atomic absorption spectrophotometer (Buck Scientific Accusys 211 series, USA) by an air/acetylene flame system. The concentration of metals in each latex sample was estimated in mg/L [130]. GC-MS of C. Acutum Latex Latex constituents of C. acutum were screened using GC-MS-QP2010 Ultra analysis equipment (Shimadzu Europa, Duisburg, Germany). The oven temperature was started at 50 • C, held for 3 min, then rose by 8 • C/min to 250 • C and held for 10 min. In electron impact mode, the spectrophotometer was used. The injector, interface, and ion source were maintained at 250, 250, and 220 • C, respectively. Helium served as the carrier gas for the split injection, which used a split ratio of 1:20 and a column SLB-5ms (silphenylene polymer, virtually equivalent to poly (5% diphenyl/95% methylsiloxane)) flow rate of 1.5 mL/min to inject a 1 µL diluted sample in n.hexane (1:1, v/v). The main single components were identified using WILEY and National Institute of Standards and Technology (NIST08) libraries based on their relative indices and mass spectra. Antioxidant Activity By observing the disappearance of DPPH at 520 nm, antiradical activity was quantified spectrophotometrically using a UV-visible spectrophotometer. The reaction mixtures for each treatment were made up of 3.9 mL of 0.1 mM DPPH dissolved in ethanol and 100 µL of supernatant. All treatments were incubated at room temperature for 30 min. Each treatment was measured three times. The sample without an antioxidant served as the control, while ethanol was utilized as a blank. The DPPH activity was expressed as a percentage of inhibition and calculated using the following Equation [131]: where AB = absorbance of control sample (t = 0 h) and AS = absorbance of a tested sample after the reaction (t = 1 h). Antibacterial Activity of Cy-AgNPs The antibacterial activity of C. acutum latex 3% extract and different Cy-AgNP concentrations (25 mg/L and 50 mg/L) against Bacillus subtilis, Escherichia coli, and Staphylococcus aureus bacterial strains in vitro were compared with 10 g gentamicin standard antibiotics per 5 mm paper disc, using the disc diffusion method [132]. Preparation of Silver-Latex Nanoparticles Immediately after synthesis and characterization of Ag-NPs, they were suspended in deionized water and dispersed using ultrasonic vibration (100 W, 30 KHz) for 30 min in order to prepare three different concentrations at 25 mg/L, 50 mg/L, and 100 mg/L. V. faba seeds were treated with different latex nanoparticle concentrations in addition to crude latex (3%); untreated samples (control) were operated using distilled H 2 O. Samples were coded per data of Table 6. After treatment of V. 
faba seeds for 24 h, the preparation of slides was demonstrated in Figure 14 and illustrated as follows: The root tips of bean seeds were fixed in glacial acetic acid/ethanol with ratio 1:3 (Carney's solution) and stored in a refrigerator for at least 48 h or until use. Roots were soaked in distilled water for 5 min for washing and then hydrolyzed in 1N HCl at 60 • C for 6-8 min. Next, the root tips were rinsed in water and stained by an aceto-orcein stain [133] for 2-4 h to prepare a slide. The dark stained root tips were erased in one drop of 45% acetic acid on a clean slide and squashed under a cover glass to disperse the cells. Electric microscope (Olympus CX 40) was used to record normal and aberrant cells which were registered in different stages of mitosis. Data Analysis The cytotoxic potential was studied by demonstrating of the mitotic index (MI), phase indices (PI), and total abnormality percentage at different phases of cell division. The data were statistically analyzed using t-tests in order to estimate the alteration among different treatments and the untreated sample [134]. Biochemical Study (Protein SDS-PAGE) Polyacrylamide gel electrophoresis in the presence of sodium dodecyl sulphate (SDS-PAGE) was operated to obtain proteins electrophoretic profiles of treated V. faba seeds in order to estimate the genotoxic effect of latex extract and different concentrations of its silver nanoparticles. The method for the discontinuous SDS-PAGE technique was based on that of Laemmli [135] and modified by Studier [136]. Molecular Study (ISSR Marker) Eight primers were tested to amplify the isolated DNA from treated V. faba. Table 7 shows the primers and their sequences. Extraction of DNA was done using EZ-10 spin Column genomic DNA minipreps kit handbook (plant) (BIO BASIC INC.). Estimation of Genomic Template Stability (GTS%) Genotoxicity was observed in the SDS-PAGE and ISSR profiles by recording disappearance of normal bands and appearance of new bands. Only clear and reproducible bands were observed in order to assess any disorder in DNA and demonstrate the genomic template stability percentage (GTS%). Polymorphisms recorded in the SDS-PAGE and ISSR profiles included disappearance of a normal band and appearance of a new band compared with the control profile [137]. The GTS% was calculated for each sample of treatments according to the formula of Sukumaran and Grant [55] as: where "a" indicates the polymorphic profiles in each sample and "n" is the number of total bands in the control. Conclusions This study was conducted mainly to investigate the effect of different concentrations of silver nanoparticles from C. acutum latex and its crude latex on biochemical and molecular DNA level and mitotic division using Vicia faba seeds. The reducing effect of the MI% was clearly observed by increasing Cy-AgNP concentrations (50 and 100 mg/L, where Cy-AgNPs (25 mg/L) treatment showed moderate decrease in MI% compared to C. acutum latex (3%) and control. Generally, all treatments showed increasing chromosomal abnormalities, but Cy-AgNPs (25 mg/L) expressed the lowest percentage, and by increasing the concentration of AgNPs, the percentage increased. Genomic template stability percentage (GTS%) by using biochemical protein SDS-PAGE and molecular ISSR markers showed the highest GTS% in the 25 mg/L Cy-AgNPs treatment (80% in SDS-PAGE and 51.28% in ISSR marker). 
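Two displayed equations referred to in this Methods section appear to have been lost in extraction: the DPPH scavenging formula (the Equation cited as [131]) and the genomic template stability formula of Sukumaran and Grant. The standard forms consistent with the variable definitions given in the text (A_B, A_S, "a", "n") are, as a reconstruction rather than a verbatim quotation:

\mathrm{DPPH\ scavenging\ (\%)} = \frac{A_B - A_S}{A_B}\times 100, \qquad \mathrm{GTS\ (\%)} = \left(1 - \frac{a}{n}\right)\times 100

As an arithmetic illustration, the reported ISSR results are mutually consistent if the control profile is taken to contain n = 39 bands (a value inferred here from the quoted numbers, not stated in the text): a = 19 polymorphic bands gives GTS = (1 − 19/39) × 100 ≈ 51.3%, a = 29 gives ≈ 25.6%, and a = 35 gives ≈ 10.3%, matching the percentages reported for the 25 mg/L Cy-AgNP, 3% latex, and 100 mg/L Cy-AgNP treatments, respectively.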
Finally, this paper concludes that 25 mg/L Cy-AgNPs have the highest content of bioactive constituents (TPC, TFC, tannins, and alkaloids) and show the lowest cytotoxicity and genotoxicity. The highest antioxidant activity by the DPPH method was recorded for Cy-AgNPs (25 mg/L) (70.26 ± 1.32%). The highest antibacterial activity was shown by Cy-AgNPs (50 mg/L) against Bacillus subtilis, Escherichia coli, and Staphylococcus aureus. GC-MS characterization revealed that constituents such as tetradecanoic acid and hexadecanoic acid contributed most to the antimicrobial effect, while the presence of lupeol contributed to the antioxidant activity of the studied latex. The use of AgNPs at low concentration gave a higher MI% than the higher concentrations, which can stimulate plant growth and development. Using Cy-AgNPs at high concentrations leads to the opposite result, increasing chromosomal abnormalities and reducing GTS%. Therefore, the use of nanoparticles must be strictly supervised by health authorities, with concentration limits to reduce the risks to human populations.
9,034.2
2022-12-30T00:00:00.000
[ "Environmental Science", "Chemistry", "Materials Science" ]
Convexity at finite temperature and non-extensive thermodynamics Assuming that tunnel effect between two degenerate bare minima occurs, in a scalar field theory at finite volume, this article studies the consequences for the effective potential, to all loop orders. Convexity is achieved only if the two bare minima are taken into account in the path integral, and a new derivation of the effective potential is given, in the large volume limit. The effective potential has then has a universal form, it is suppressed by the space time volume, and does not feature spontaneous symmetry breaking as long as the volume is finite. The finite temperature analysis leads to surprising thermal properties, following from the non-extensive expression for the free energy. Although the physical relevance of these results is not clear, the potential application to ultra-light scalar particles is discussed. Introduction For a scalar theory with several degenerate vacuua, it is usually assumed that spontaneous symmetry breaking (SSB) occurs and that one specific vacuum is chosen. This is actually true for infinite volume, where the tunnel effect between different vacuua is completely suppressed. But for finite volume, even the slightest tunneling possibility between different degenerate vacuua should allow these to play an equivalent role at equilibrium, for the true vacuum of the dressed theory to be a superposition of the bare vacuua. It has been known for a long time that the effective potential is then convex [1], which is a consequence of its definition in terms of a Legendre transform [2]. It has been shown, from the early days of effective potential methods [3], that convexity cannot be achieved when quantisation is based on one vacuum only, if the bare potential has several degenerate vacuua [4]. The effective potential actually becomes convex as a consequence of the competition of the different non-trivial saddle points [5], and is thus a non-perturbative effect. Gauge fixing could also impose a specific vacuum for the scalar field, therefore avoiding convexity of the effective potential. Nevertheless, a construction of a convex effective Higgs potential is given in [6], where gauge fixing picks two points on the manifold of vacuua of the bare potential. It is shown that a convex effective potential can then be obtained from a linear interpolation between these two vacuua, to any loop order. The explicit form of the effective potential is not given though. An explicit construction of the convex saddle point effective potential (ignoring loop corrections) is given for the first time in [7], where the effective potential is derived as an expansion in the classical field, up to the fourth order, for a finite spacetime volume V (4) . In this work, the degenerate vacuua of an O(N)-symmetric scalar theory, with | φ vac | = v, all contribute to the saddle point approximation for the partition function. The path integral quantisation is then followed step by step, where all the quantities are expanded in either the source or the classical field. The resulting effective potential is a convex polynomial, which is suppressed by V (4) , as a consequence of an interpolation between the different bare vacuua. Therefore it becomes flat in the limit of infinite volume, and SSB is reached only in this limit, where the true vacuum is an arbitrary point of a flat N-ball with radius v. Although not studied in [7], Goldstone modes then arise, which should stay massless to all orders in perturbation theory. 
This is shown in [8], using an improved Conwall-Jackiw-Tomboulis effective action [9]. The present article (restricted to a single real scalar field) shows an alternative construction, which is not based on an expansion in fields, but in (v 4 V (4) ) −1 instead. This approach leads to an effective potential which is valid to all orders in the classical field, and whose Taylor expansion to the fourth order is consistent with [7]. The true vacuum of the (dressed) theory is located at φ = 0, and SSB does not occur as long as V (4) is finite. This result is first obtained in the saddle point approximation for the partition function, at zero temperature, and we show that one-loop corrections do not change the functional form of the effective potential, but only redefine the mass scale v. We show then that these results hold at finite temperature too, as long as one is below a critical temperature. Concerning the true vacuum of a theory, we note that a systematic construction of effective theories is done in [10], where tadpoles are removed consistently with the true vacuum. In the situation where the bare vacuua are not degenerate, one can also consider the famous problem of false vacuum decay [11], for which radiative corrections are considered in [12], and gauge invariance is shown in [13]. But these studies assume a time dependence of the ground state of the theory, whereas we consider here the equilibrium situation in the case of a symmetric potential. This article is structured as follows. The explicit construction of the effective potential at zero temperature, within the saddle point approximation, is done in section 2. The oneloop corrections are calculated in section 3, and the results are expected to be identical at higher order loops, up to a redefinition of the mass scale v of the theory. The extension to finite temperature is done in section 4, where the effective potential is suppressed by the three-dimensional space volume, as long as one stays below the usual critical temperature. This suppression holds in an interval defined by the temperature-dependent renormalised mass scale corresponding to the vev v. An important consequence is that the free energy is not extensive, as well as the entropy, and the latter happens to be a constant which can be simply interpreted. The conclusion of the article discusses potential physical applications of this volume-suppressed convex effective potential, which could be relevant in a cosmological context. The article ends with four Appendices: (i) Appendix A gives a general argument for convexity of the effective potential; (ii) Appendix B shows that the effective potential obtained from the Legendre transform, as defined in this article, and the Wilsonian effective potential are identical in the limit of infinite volume; (iii) Appendix C shows that bounce saddle points between the two bare vacuua do not play a role for the effective potential, if one starts with a symmetric bare potential; (iv) Appendix D treats the example of a cosine bare potential, for which it is shown that the saddle point effective potential is completely flat. This result is expected from the assumptions of convexity, periodicity and differentiability. Saddle point approximation We start from the bare potential and we are interested in the effective potential only, such that the source j appearing in the partition function is chosen as a constant. 
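The displayed equations that originally followed this sentence (the bare potential, eq. (1), and the Euclidean action and partition function referred to next) did not survive extraction. A plausible reconstruction, consistent with the quantities used later in the text (the critical source j_c = λv³/(9√3) and the combination λV⁽⁴⁾v⁴/24) but relying on the conventional normalization assumed here rather than taken from the paper, is

U_{\mathrm{bare}}(\phi) = \frac{\lambda}{24}\left(\phi^2 - v^2\right)^2, \qquad
S_E[\phi;j] = \int d^4x\left[\frac{1}{2}(\partial_\mu\phi)^2 + U_{\mathrm{bare}}(\phi) + j\,\phi\right], \qquad
Z[j] = \int \mathcal{D}[\phi]\, e^{-S_E[\phi;j]}.

With this form, the uniform saddle points satisfy (λ/6)\,\phi(\phi^2 - v^2) + j = 0, whose number of real solutions changes exactly at |j| = j_c = λv³/(9√3), reproducing the two regimes described below; the sign with which the source enters depends on the convention adopted in the paper.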
We note, in Euclidean metric, and the partition function Z[j] will be approximated by the sum over the dominant saddle points φ n . We will consider the uniform configurations only, since we show in appendix B that the "bounce" solutions, originally considered in [11] for the calculation of tunneling rates, are negligible. The uniform saddle points of the partition function are solutions of the equation and the number of these solutions depends on the source j [7]: • if |j| > j c , where j c = λv 3 /(9 √ 3), then eq.(3) has one real solution only, that we note φ 0 (j); • if |j| < j c , then eq.(3) has the three solutions, one of which is a maximum and the two others are the local minima relevant for the partition function In what follows we are looking at these two situations separately. large source For the situation where |j| > j c , the saddle point partition function is Since the source j is uniform, functional derivatives with respect to j are replaced by partial derivatives with respect to V (4) j δ δj where V (4) is the spacetime volume, such that the classical field is The saddle point effective action is then leading to the saddle point effective potential This expected result corresponds to the situation where φ c is outside the minima v 1 = v and v 2 = −v: the saddle point approximation does not modify the bare potential. Small source For uniform fields, we have seen that the relevant source is actually k ≡ V (4) j, which will allow to make an expansion in the small parameter ǫ ≡ (v 4 V (4) ) −1 . For a small source |j| < j crit , the saddle point partition function is given by where φ 1,2 are given by eqs. (4). One then expands the arguments of the exponentials in ǫ << 1 where n = 1, 2, and the partition function (10) is then Note that such an expansion in ǫ is not valid when the partition function is dominated by one minimum only: in the situation where k > 0 for example, the term of order ǫ can actually be dominant e −kv << O(ǫ), and it is only the sum e −kv + e kv which is always large compared to O(ǫ), whatever the sign of k is. Neglecting terms of order ǫ, the classical field obtained from eq.(12) is then and the latter relation can be inverted to obtain The saddle point effective action is then and the relation (14) gives The saddle point effective potential is finally obtained after dividing by the spacetime volume The latter potential is convex, and matches the bare potential at φ c = ±v, leading to an overall continuous saddle point effective potential. For small values of the field |φ c | << v, a Taylor expansion gives which was found in [7], apart from the constant shift − ln 2/V (4) , arising from an overall factor 1/2 in the partition function defined in [7]. In flat spacetime, this shift is not relevant, but in curved space time, if one imposes a vanishing vacuum energy in the bare potential, this shift induces a vacuum energy in the effective theory, for the true vacuum φ c = 0. One can note that the approach given in [7] is based on an expansion in the source up to the order j 4 , and therefore φ 4 c , but taking into account all the orders in (V (4) ) −1 . The approach adopted here, on the other hand, is based on an expansion in (V (4) ) −1 , which has the advantage to provide the resummation (17) to all orders in the classical field φ c . The effective potential (17) is universal in the sense that it depends on the bare vev v only, and not on the explicit form of the bare potential. 
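Since the displayed equations of this derivation are also missing from the extracted text, it may help to record the closed form that the stated properties pin down: convexity, a minimum at φ_c = 0 of depth −ln 2/V⁽⁴⁾, agreement with the bare potential at φ_c = ±v, and the quoted small-field expansion. Starting from Z[j] ≈ e^{−kv} + e^{+kv} with k = V⁽⁴⁾ j, one obtains, as a reconstruction rather than a verbatim copy of eq. (17),

U_{\mathrm{eff}}(\phi_c) = \frac{1}{V^{(4)}}\left[\frac{1+x}{2}\ln(1+x) + \frac{1-x}{2}\ln(1-x) - \ln 2\right], \qquad x \equiv \frac{\phi_c}{v}, \quad |\phi_c| \le v,

whose Taylor expansion for |φ_c| ≪ v is

U_{\mathrm{eff}}(\phi_c) \simeq \frac{1}{V^{(4)}}\left[-\ln 2 + \frac{\phi_c^2}{2v^2} + \frac{\phi_c^4}{12\,v^4} + \dots\right].

This expression is convex for |φ_c| < v, vanishes at φ_c = ±v where it matches the bare potential, and depends only on v and V⁽⁴⁾, which makes the universality statement above explicit.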
The coupling constant λ of the original model (1) does not appear in the final expression (17), because it cancels out in the limit of large volume, for which the large parameter is λV (4) v 4 /24 ∝ λ/ǫ, as shown in [7]. Therefore, as expected, the present construction is not valid in the limit λ → 0. Finally, we also note that, although continuous at φ c = ±v, the effective potential (17) is not analytical at these points. This is because the limits φ c → ±v correspond to the transition between small and large source, where the saddle point approximation for the partition function is not sufficient. As explained in [13], in this situation there is an overlap of the wave functions corresponding to the two ground states dominating the partition function, which is then not well approximated by the sum of two independent terms. Maxwell construction and Wilsonian approach For finite spacetime volume, the true minimum of the system is at φ c = 0, as a result of a tunnel effect between the two bare minima. In the limit of infinite spacetime volume V (4) → ∞ though, the effective potential (17) is exactly flat between the bare vacuua, the true vacuum is not unique anymore, and SSB occurs. The vacuum of the system consists then in a superposition of the two bare vacuua, with a weighted average φ c which can be in an arbitrary position between the bare vacuua. This situation is similar to the so-called Maxwell construction in the study of the Van der Waals equation of state, where naive isothermal curves in the plane (P, V ) present a region of negative compressibility, which is not physical. The origin of the problem is that the Van der Waals equation of state describes only one phase of the fluid, and the solution to the problem is to split the fluid in two phases, liquid and vapour. There is then only one independent state variable, and since the temperature T is already fixed, the pressure is also fixed at the saturated vapour pressure P S (T ): the true isotherm trajectory features a plateau, avoiding the negative compressibility. This plateau is representative of the coexistence of two phases, and the classical analogy in the present case is the following [14]: the flat potential in the limit V (4) → ∞ corresponds to the coexistence of bubbles of different vacuua ±v, with arbitrary sizes. The effective potential defined in the present article corresponds to the one-particle irreducible -1PI -effective potential. It is identical to the Wilsonian effective potential in the limit of infinite volume only (see Appendix B), such that there are similarities and differences with the latter, few of which are discussed here. Convexity in the Wilsonian approach was originally obtained in [15] through the contribution of a non-trivial saddle point to the evolution equation, in the framework of the average effective action. Similarly, using a sharp cut off though, the contribution of a nontrivial saddle point in every infinitesimal blocking step lead to the explicit construction of the flat Wilsonian effective potential [16]. The Wilsonian approach consists in averaging over different microscopic configurations, with momentum typically larger than some scale k, and gradually build the effective potential in the infrared limit k → 0 (IR). In the present work, averaging over different microscopic configurations is the essential point at the origin of convexity, as explained previously, which reflects in a way the Wilsonian approach. 
But the analogy must be taken carefully, for the following reasons: (i) The Wilsonian running potential is convex in the limit k → 0 only; (ii) The convex Wilsonian effective potential (for k = 0) is flat for any space time volume, whereas the 1PI effective potential considered here, although convex for any volume, is flat for V (4) → ∞ only; (iii) The semi-classical construction of the partition function (10) averages over two classical vacuua, without the contribution of quantum fluctuations. On the other hand, the Wilsonian approach automatically involves quantum fluctuations above the two vacuua. In the present construction, convexity is already obtained at the semi-classical level, and quantum fluctuations only modify the mass parameter of the model (see next section). One-loop corrections (zero temperature) We show here that the overall picture is not modified by one-loop corrections, apart from corrections to the mass scale v. Higher order loop corrections would have the same effect, and the existence of the volume-suppressed part of the effective potential is expected to be an exact result. Large source In the situation of large source described in subsection 2.1 for the saddle point approximation, the one-loop corrections are obtained by considering quadratic fluctuations around the minimum φ 0 (j) and the usual steps of path integral quantisation lead to the known expression where M is an arbitrary mass scale defining the zero of the potential. But this expression is consistent as long as φ 2 c > v 2 /3 only, which corresponds to the inflexion points of the bare potential. It becomes singular when φ 2 c → v 2 /3, and contains an imaginary part for φ 2 c < v 2 /3, which is the sign that some wrong step was taken. The problem with the expression (20) when φ 2 c ≤ v 2 /3 is the cancellation of restoration force for quantum fluctuations ξ with momentum p µ satisfying p 2 − λv 2 /6 + λφ 2 0 /2 < 0 in the path integral (19), such that this partition function is not correct anymore. As shown in the next subsection, the problem is cured by taking into account fluctuations around both saddle points of the partition function. But the expression (20) can be used to calculate the one-loop correction v (1) to the mass scale v, since it is perturbatively away from the bare vev and thus must satisfy v (1) > v/ √ 3. We have then such that where Λ is an ultraviolet cut off. Although not obviously visible from the latter expression, quantum corrections are indeed perturbative: if the bare potential is written then v 2 = 6µ 2 /λ, such that the correction in Λ 2 /v 2 is actually proportional to λΛ 2 /µ 2 . As we will see below, the expression (22) is also obtained from the one-loop effective potential for small source. Small source We show here that one-loop corrections do not change the functional form of the effective potential (17), but only redefine the value of the mass scales v n . An important message is that there is no imaginary part generated by quantum corrections, because of the interpolation between the two vacuua. One-loop quantum corrections to the effective potential (17) are obtained after taking into account quadratic fluctuations around each saddle points As shown in the previous section, only the linear term in j is relevant for the first order in (V (4) ) −1 . 
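Before evaluating the fluctuations around both saddle points, it is useful to recall the Gaussian structure behind the large-source expression (20). The following is a schematic Coleman-Weinberg-type form, with the fluctuation mass read off from the quadratic operator p² − λv²/6 + λφ₀²/2 quoted above and M the arbitrary scale of the text:

m^2(\phi_c) = \frac{\lambda}{2}\,\phi_c^2 - \frac{\lambda}{6}\,v^2 = \frac{\lambda}{6}\big(3\phi_c^2 - v^2\big) ,

U^{(1)}(\phi_c) \simeq U_{\rm bare}(\phi_c) + \frac{1}{2}\int\!\frac{d^4p}{(2\pi)^4}\,\ln\!\left(\frac{p^2 + m^2(\phi_c)}{M^2}\right) .

This expression is real only for m²(φ_c) ≥ 0, i.e. φ_c² ≥ v²/3, which is the inflexion-point criterion mentioned above; below that value both saddle points must be kept, which is what the small-source construction does.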
We have which, compared to eq.(11), shows that the only change to the effective potential (17) is a redefinition of the mass scales One can conjecture that any higher order loop correction would have the same effect, with a different coefficient for the first order of the expansion in the source j. From the minima (4) we have and eq.(26) leads then to a result consistent with eq.(22) v (1) The overall one-loop effective potential is therefore continuous and convex: • For small fields φ 2 c < (v (1) ) 2 , the one-loop effective potential U (1) in (φ c ) is given by eq. (17), with the change v → v (1) ; • For large fields φ 2 c > (v (1) ) 2 , the one-loop effective potential U A convex one-loop effective potential therefore arises naturally from consistently taking into account both minima of the bare potential, from the very beginning of path integral quantisation. We finally note an apparent mismatch of the field-range for which to choose either the potential U c < v 2 , although in this range the true effective potential is U (1) in (φ c ) (we neglect in this discussion the corrections to the bare vacuua). The reason is that an effective potential which is both convex and differentiable can be obtained only if the volume-suppressed part U (1) in (φ c ) holds in the whole range between the two bare vacuua. This is confirmed by the Wilsonian approach, which takes into account non-trivial saddle points in each infinitesimal blocking step [15,16]. Finite temperature analysis At finite temperature T = β −1 , the one-loop analysis is similar, besides the fact that the integration over frequencies p 0 is replaced the summation over discrete Matsubara modes where p µ = (p 0 , k). The saddle point approximation involves only the classical theory, and is therefore identical to the one described in section 2, with the replacement In what follows we therefore directly go to one-loop corrections. Large source Following the steps described in section 3, the one-loop effective potential is, in the situation of large source and the one-loop correction v (1) to the mass scale v is given by, in the large-temperature regime λv 2 β 2 << 1, The summation over Matsubara modes is done using the known identity and leads to, up to corrections of order λv 2 β 2 , The first integral corresponds to the zero-temperature correction, whereas the second integral corresponds to the finite-temperature contribution, which is not divergent: We note that the zero-temperature correction is twice the one obtained in eq.(22), which is due to a different regularisation of the loop integral. Indeed, at finite temperature, there is no restriction on the amplitude of the Matsubara modes, whereas at zero temperature the integration over frequencies is restricted by the cut off. One then defines the renormalised zero-temperature mass scale v 0 by and the corresponding renormalised finite-temperature mass scale is thus given by, From the previous result, one can define the critical temperature at which v T → 0, and the transition to larger temperatures is discussed further down. We finally note that the temperature-dependent part of the effective potential (31) is, for high temperatures [17], where dots denote higher order terms in λ 1/2 βv. Small source The effective potential (31), derived for large source, is valid for |φ c | ≥ v T . As discussed in section 3, for |φ c | < v T , quantum corrections consist in replacing the mass scale v by its renormalised value, which is v T here, in the volume-suppressed effective potential (17). 
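Since the extracted expressions for v_T and T_c are not legible here, a minimal sketch of what the standard high-temperature expansion gives for this bare potential may be useful; it assumes that only the leading T² U''(φ)/24 thermal correction is retained and uses the renormalised zero-temperature scale v₀ defined above, so the numerical factors below are indicative only:

\Delta U_T(\phi) \simeq \frac{T^2}{24}\,U''(\phi) = \frac{\lambda T^2}{48}\,\phi^2 - \frac{\lambda T^2 v^2}{144} ,

v_T^2 \simeq v_0^2 - \frac{T^2}{4} , \qquad T_c \simeq 2\,v_0 .

In this approximation the interval [−v_T , v_T] supporting the volume-suppressed part of the potential indeed shrinks and closes as T → T_c, as described below.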
Replacing also V (4) by βV (3) , the effective potential for |φ c | < v T is then: One can note that the expression for the renormalised temperature-dependent mass scale v T can also be obtained for small source. Indeed, the finite-temperature one-loop correction (26) becomes and the summation over Matsubara modes leads to which is consistent with the result (35) (one must keep in mind that, in the perturbative context, v is large compared to quantum corrections). Zeroth order phase transition The effective potential features the volume-suppressed part (40) in the interval [−v T , v T ] as long as T < T C . As the temperature increases and reaches the critical temperature T → T c , this interval shrinks and vanishes: the effective potential (31) becomes then valid for all the values of the classical field. In both temperature regimes though, the ground state is φ c = 0, provided the volume V (3) is finite. The limit T → T c thus does not corresponds to a spontaneous symmetry breaking for the vacuum, but rather to a different scaling with the volume V (3) . More specifically, let us study the effective mass, the free energy and the entropy, defined in the ground state φ c = 0 (or vanishing source j = 0): where the Boltzmann constant is set to 1. The transition is of zeroth order, since the free energy F is discontinuous: • For T ≥ T c , the effective potential is given by the expression (31), and field fluctuations around the vacuum φ c = 0 see the effective mass with the expected form which vanishes in the limit T → T c . The free energy and the entropy of the ground state φ c = 0 are obtained from the hightemperature expansion (39), whose leading term gives the known expressions • For T < T c , the effective potential features the volume-suppressed part (40) in the interval [−v T , v T ]. The effective renormalised mass, defined above the true vacuum φ c = 0, is given by and diverges at the critical temperature, if the volume V (3) is finite. This divergence could be an artifact arising from the weakness of the saddle point approximation for |φ c | → v T , and a more detailed study would be necessary to investigate this limit. The free energy and the entropy, obtained from the potential (40), read in T (0) = −T ln 2 (48) S < = ln 2 , and are not proportional to the volume, as a consequence of the volume-suppressed form of the effective potential (40). The expression (48) for S is the Boltzmann entropy for a system with two degenerate microscopic states, which correspond to the two bare vacuua. Due to the expressions (48) under the critical temperature, both the pressure and the internal energy vanish in the ground state. Therefore, in the low temperature regime T < T c , the thermodynamical properties of the system are frozen, due to the specific form (40) of the effective potential, where only the parameter v T is modified by quantum corrections. Also, the effective potential (40) has been derived independently of a large-temperature assumption, such that its features should remain valid for all temperatures below T c . Conclusion: physical relevance? As we have seen, convexity arises from the interplay of the two degenerate minima of the bare potential, when both are taken into account in the definition of the partition function. This is the reason why the Coleman-Weinberg potential (20) is different, since it is based on the quantisation over one minimum only. 
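For concreteness, the "known expressions" invoked above for T ≥ T_c are, at leading order in the high-temperature expansion for a single real scalar field (a standard result quoted here only for comparison):

F_{>} \simeq -\frac{\pi^2}{90}\,T^4\,V^{(3)} , \qquad S_{>} = -\frac{\partial F_{>}}{\partial T} \simeq \frac{2\pi^2}{45}\,T^3\,V^{(3)} ,

to be contrasted with F_< = −T ln 2 and S_< = ln 2 below T_c; the finite jump of the free energy at T_c is precisely what makes the transition zeroth order.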
The present article assumes a finite volume and tunnel effect between the two minima of the bare potential: quantisation leads then to a convex effective potential, without imaginary part and without SSB. But the order in which quantisation is done and the volume is taken to infinity is important: if one first assumes an infinite volume, then tunnel effect is completely suppressed and quantisation occurs above one minimum only. Convexity is then not a property of the effective theory, because the partition function is a partial one: it does not take into account the whole space of field configurations, and the proof shown in Appendix A is not valid. In this case, the partial partition function takes into account fluctuations above one ground state only, consistently with SSB. It is therefore not obvious to see in what physical situation the present construction can be relevant. Nevertheless, a potential application for the dynamical generation of ultra-light scalar Dark Matter [18] is proposed in [19]. In order to obtain a coherence length of the size of a typical galactic halo, the mass of these particles should typically be of the order 10 −23 eV. It has been shown in [19] that such a mass is provided by the effective potential (18), where v is the Higgs vev and V (4) = L 4 , with L the particle horizon at the time of the Electroweak phase transition. A common mechanism with the Higgs mechanism is therefore proposed, which could explain how such a small mass can arise from a typical Standard Model mass scale. The extension to finite temperature is planned for a future work. and corresponds to a 4-dimensional bubble of arbitrary radius R. In the thin bubble wall limit Rv √ λ >> 1, an approximate family of solution of eq.(60), parametrised by the radius R, is given by In this approximation, the bounce action is dominated by the kinetic term and reads which thus gives a negligible contribution to the partition function. As explained more generally in [11], the radius of the bounce solutions which minimises the bounce action is proportional to the inverse of the difference in energy ∆U corresponding to the two vacuua. The bounce action B is then proportional to (∆U) −3 , and becomes infinite in the symmetric limit we consider here, therefore not contributing to the saddle point approximation for the partition function. The bounce solution is actually invariant under spacetime translation, and one should sum over the positions of the centre of the bounce. As shown in [11], the contribution of n bounces, taking into account the different locations of the centres of each bubble, leads to the final contribution to the partition function of the form The latter expression does not take into account the case n = 0, and is therefore valid for one bounce at least. As can be seen, when ∆U → 0 and thus B → ∞, then Z b → 0: bounces do not play a role in the situation of a symmetric bare potential. Appendix D: Cosine bare potential We consider here the bare potential, which is twice differentiable, where M and f are two mass scales. This potential leads to a converging path integral, since it provides quartic restoration forces for large fluctuations |φ| >> (2N + 1)f . For finite N we follow the same steps as those shown in section 2.2 to find the classical field in terms of the source k = V (4) j. In order to recover the full cosine potential, we will then take the limit N → ∞, where it is shown that the effective potential becomes flat. 
The minima of the bare potential (64) are given by φ n = (2n + 1)πf , for the integers |n| ≤ N. The saddle point partition function is then, up to terms proportional to (V (4) ) −1 , The classical field is then given by which, in the large N limit reads φ c πf + 2s(N + 1) ≃ coth(πkf ) , s ≡ sign(k) , N >> 1 . Inverting this relation gives 2πf k = ln [2s(N + 1) which, in the limit where N → ∞ implies k = 0. Although the source is in principle a free parameter, we can see that the limit N → ∞ implies a strong constraint on the system, which shows that the one-to-one mapping between the source and the classical field is lost. From the generic relation ∂Γ ∂φ c = −k , we finally find, for all values of the classical field φ c , such that the effective potential corresponding to the bare potential (64) is a constant in the limit N → ∞.
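The mechanism is the same weighted-average construction as in the two-minima case. Schematically, and assuming the source enters the saddle-point weights as e^{kφ_n} (the sign convention follows the small-source discussion of section 2.2),

\phi_c = \frac{\sum_n \phi_n\, e^{k\phi_n}}{\sum_n e^{k\phi_n}} , \qquad \phi_n = (2n+1)\pi f ,

so the classical field is a convex combination of the minima. As N → ∞ any finite φ_c is reached only for k → 0, and the generic relation ∂Γ/∂φ_c = −k quoted above then forces a flat effective potential over the whole field range, in analogy with the infinite-volume limit of the double-well case.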
6,983.2
2016-03-04T00:00:00.000
[ "Physics" ]
Processing of Flavor-Enhanced Oils: Optimization and Validation of Multiple Headspace Solid-Phase Microextraction-Arrow to Quantify Pyrazines in the Oils An efficient and effective multiple headspace-solid phase microextraction-arrow-gas chromatography-mass spectrometry (MHS-SPME-arrow-GCMS) analytical protocol is established and used to quantify the flavor compounds in oils. SPME conditions, such as fiber coating, pre-incubation temperature, extraction temperature, and time were studied. The feasibility was compared between SPME-arrow and the traditional fiber by loading different sample amounts. It was found that the SPME-arrow was more suitable for the MHS-SPME. The limit of detection (LODs) and limit of quantitation (LOQs) of pyrazines were in the range of 2–60 ng and 6–180 ng/g oil, respectively. The relative standard deviation (RSD) of both intra- and inter-day were lower than 16%. The mean recoveries for spiked pyrazines in rapeseed oil were in the range of 91.6–109.2%. Furthermore, this newly established method of MHS-SPME-arrow was compared with stable isotopes dilution analysis (SIDA) by using [2H6]-2-methyl-pyrazine. The results are comparable and indicate this method can be used for edible oil flavor analysis. Introduction Good natural flavor is one of the major drivers for consumers to purchase edible oils, especially in Asia. Flavor-enhanced edible oils are normally produced by high roasting temperature in mechanical pressing. Under high temperatures in the processing, a lot of flavor volatiles are created through various reactions, in particular the Mallard reactions. Pyrazines are a group of such volatiles that attract the most importance. A lot of studies have shown that pyrazines are formed by roasting, baking, or thermally processing through Maillard reactions and impart cocoa, peanut, roasted nut-like flavors to various foods [1][2][3]. The above pyrazines were also found to be aroma-active compounds in flavorenhanced rapeseed oils and roasted pumpkin seed oils [10,11]. Interestingly, almost the same group of pyrazines was found in various oils, and variation in the concentration of individual compounds made their flavor different from one to another. This is in line The aim of this study is to set up a novel and more accurate MHS-SPME-arrow-GC-MS method to quantify pyrazines in flavor-enhanced edible oils. This is the first time applying MHS-SPME-arrow to quantify flavor compounds in oils combined with internal standard (ISTD) while carried out in organic solvent for quantitative calibrations. There are three steps needed to develop the quantitative method based on MHS-SPME-arrow: (1) Optimizing the HS-SPME conditions for the maximum extraction efficiency and sensitivity improvement; (2) finding the appropriate sample loading suitable for MHS-SPME and clarifying the feasibility of SPME-arrow fiber in replacement of the traditional SPME fiber; and (3) validating the MHS-SPME-arrow method combined with the internal standard in solvent by selected conditions, in which the MHS-SPME-arrow and SIDA were compared using [ 2 H 6 ]-2-methyl-pyrazine. The refined pyrazine-free and flavor-enhanced rapeseed oils were obtained from the local supermarket, thus as the real samples of 3 flavor-enhanced oils, including peanut oil, sesame oil, and rapeseed oil. All oil samples were stored at 4 • C. Ethyl acetate was chromatographic grades. Preparation of Standard Solutions Standard solutions were prepared using ethyl acetate as the solvent. 
The combined stock solution of target analytes (13 pyrazines) and internal standard (3-methyl-pyridine) were prepared by weighing and storing in sealed amber glassware at 0 • C in the dark. The concentration of analytes (pyrazines and 3-methyl-pyridine) in stock solutions were approximately 1000 mg/L, and the standard calibration solution (approximately 25 mg/L) was prepared by diluting the stock solution accordingly. Sample Preparation A stock solution of the internal standard 3-methyl-pyridine (2000.0 mg/kg) was prepared by adding 20.0 mg of 3-methyl-pyridine to 10.0 g of ODO. Then, the stock solution of the internal standard was diluted to 50.0 mg/kg with ODO. The resulting solutions were stored at 0 • C in the dark. The resulting internal standard solution (50.0 mg/kg) of 50.0 mg was added to 1.0 g of the oil sample, homogenized by vortex mixer afterward. The oil samples with internal standard (50.0 mg) were weighed into a 20 mL headspace vial sealed with a PTFE/silicone septum screw-cap. The vials were placed in the autosampler tray for HS-SPME analyses. The procedure for stable isotope dilution assay was the same as above. MHS-SPME Conditions For the HS-SPME method, the oil samples were pre-incubated at 80 • C for 20 min with the agitation speed of 450 rpm to release the volatile compounds prior to extraction, and then the fiber was exposed in the headspace of the vial at 50 • C for 50 min for equilibrium extraction. After the extraction of volatiles from the oil onto the fiber, the analytes were thermally desorbed from the fiber in the injector port of the chromatograph for 80 s and transferred to the chromatograph column where volatile compounds were separated. Finally, the analytes were identified and quantified by a mass spectrometer. After extraction and desorption, the SPME fiber was conditioned at 230 • C for 3 min. In the MHS-SPME method, the oil samples were taken 4 times at equal time intervals (of about 70 min). GC-MS Conditions A Combi PAL ingenious sample handling system (Ingenious Lab, Zwingen, Switzerland) used as an autosampler was mounted into the gas chromatograph. The change of liquid injector tool and SPME-arrow tool were the robotic steps for the analytical process. The gas chromatograph system was an Agilent 8890 coupled with a 5977B mass spectrometer (Agilent Technologies, Paolo Alto, CA, USA). A DB-FFAP analytical column (60 m × 0.25 mm, 0.25 µm) from Agilent Technologies was carried out to separate analytes. The SPME fiber device, which was used as an injector, was desorbed at high temperatures in the gas chromatography injector port to transfer analytes to gas chromatography device. MSD conditions: MS was operated in EI mode (70 eV); acquisition was carried out in full scan and selected ion monitoring (SCAN&SIM) mode; and the selected ions were reported in Table 1. Other conditions include ion source temperature: 230 • C; quadrupole temperatures: 150 • C; and transfer line temperature: 280 • C. Data were collected and processed using MassHunter software (Agilent Technologies). Analytes were identified by retention time and selected ions, which are listed in Table 1. Quantification of MHS-SPME MHS-SPME is a process based on stepwise extraction of the analytes from the same sample, which can be seen as a combination of multiple headspace and headspace solidphase microextraction [31,32]. 
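Before turning to the quantification equations, a small worked check of the internal-standard level implied by the sample-preparation masses above may be useful; the numbers follow directly from the stated 50.0 mg/kg solution, 50.0 mg spike, 1.0 g oil sample, and 50.0 mg vial loading, and the variable names are illustrative only:

# Worked check of the internal-standard level implied by the sample preparation
istd_solution_mg_per_kg = 50.0    # diluted 3-methyl-pyridine solution in ODO
istd_solution_added_mg = 50.0     # mass of that solution spiked into the oil
oil_mass_g = 1.0                  # oil sample mass receiving the spike
vial_oil_mass_g = 0.050           # 50.0 mg of spiked oil weighed into the 20 mL vial

istd_added_ug = istd_solution_added_mg * (istd_solution_mg_per_kg / 1e6) * 1000.0  # ~2.5 ug
istd_level_ug_per_g = istd_added_ug / oil_mass_g                                   # ~2.5 ug per g oil
istd_in_vial_ng = istd_level_ug_per_g * vial_oil_mass_g * 1000.0                   # ~125 ng per vial
print(istd_level_ug_per_g, istd_in_vial_ng)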
After the same sample was consecutively extracted 4 times, the total peak area of analytes (A T ) can be calculated by Equation (1): where i is the number of extractions, A i is the peak area of analytes in the ith extraction, A 1 is the peak area of analytes in the first extraction, β is a constant between zero and one (0 ≤ β < 1), which can be calculated from Equation (2): There is a linear relationship between ln A i and i − 1, where ln β is the slope of the linear plot and can be calculated from a limited number (3 or 4) of extractions. Thus, the total amount of analytes in the system can be quantified by ISTD. Calculation of ISTD The detector response factors were calculated from the ethyl acetate solution of pyrazines and the internal standard. A liquid injection mode of the GC-MS was used in this process. The response factors were calculated before samples were analyzed by MHS-SPME. Statistical Analysis Results were analyzed using ANOVA carried out using SPSS Statistical Software 18.0 (SPSS, Chicago, IL, USA), and the confidence interval was taken as 95%. All figures were generated using Origin 9.0, Adobe Illustrator, and Adobe Photoshop. Optimization of HS-SPME Conditions The experiments were performed on the flavor-enhanced rapeseed oil. Variables such as type of SPME-arrow fiber, pre-incubation temperature, extraction temperature, and time were studied to optimize the performance of HS-SPME. The HS-SPME conditions were optimized on the basis of peak areas of analytes, and each measurement was repeated three times. Figure 1 shows the influence of the type of the fiber coating, pre-incubation temperature, extraction temperature, and time on the total peak area of target pyrazines. The sensitivity and selectivity of the extraction method were determined by the properties of adsorbents on SPME fibers. The selection of a suitable fiber was the first step in developing the HS-SPME method.
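Returning to the quantification scheme at the start of this passage, a minimal sketch of Equations (1) and (2) in code may help; the function and the four-point series are illustrative and not taken from the paper, and the geometric-series form A_T = A_1/(1 − β) is the standard multiple-headspace-extraction result assumed here:

# A minimal sketch of the MHS-SPME quantification described above (Equations (1)-(2));
# function and variable names are illustrative, not from the paper.
import numpy as np

def mhs_total_area(areas):
    """Total peak area A_T from consecutive extractions of the same vial.

    Fits ln(A_i) = ln(A_1) + (i - 1) * ln(beta) and sums the geometric series
    A_T = A_1 / (1 - beta), with 0 <= beta < 1.
    Returns (A_T, beta, R^2 of the linear fit).
    """
    areas = np.asarray(areas, dtype=float)
    x = np.arange(areas.size)                       # i - 1
    slope, intercept = np.polyfit(x, np.log(areas), 1)
    beta = np.exp(slope)
    r2 = np.corrcoef(x, np.log(areas))[0, 1] ** 2
    a_total = np.exp(intercept) / (1.0 - beta)      # fitted A_1 used for robustness
    return a_total, beta, r2

# Hypothetical four-extraction series for one pyrazine and the internal standard
a_pyrazine, beta_p, r2_p = mhs_total_area([12400.0, 7600.0, 4700.0, 2900.0])
a_istd, beta_i, r2_i = mhs_total_area([9100.0, 5400.0, 3200.0, 1900.0])

# Convert the total-area ratio into an analyte mass via the ISTD and the
# liquid-injection response factor; istd_in_vial_ng as in the worked check above.
rf = 1.10                     # hypothetical relative response factor
istd_in_vial_ng = 125.0
pyrazine_ng = (a_pyrazine / a_istd) * istd_in_vial_ng / rf
pyrazine_ng_per_g_oil = pyrazine_ng / 0.050         # 50.0 mg oil in the vial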
The performance of SPME fibers for the target compounds depends on the polarity of the analytes and the physical and chemical properties of coating types [16]. In this study, the extraction efficiency of the four commercial SPME fibers with different polar coatings was compared under the same conditions. The results showed significant differences among the various fiber coatings (p < 0.05) ( Figure 1A). PDMS/DVB/CAR, which is produced with a cross-linked coating and contains bipolar Life 2021, 11, 390 6 of 12 coatings thus that can be used for the extraction of polar and non-polar VOCs, had the best performance (p < 0.05). The single-phase of PDMS (non-polar) and PA (polar) showed the poorest performance. These results were consistent with previous studies [36]. Therefore, a PDMS/DVB/CAR (120 µm × 20 mm) fiber was chosen for the optimization of the HS-SPME conditions and further used in this study. After the type of coating was selected, the second important parameter was preincubation temperature. The role of the pre-incubation procedure is to volatilize the aroma compounds from the sample and build equilibrium between the sample and headspace. At the same pre-incubation time, the increase of temperature can promote the distribution of weak volatile components in the headspace and accelerate the building of equilibrium. However, higher temperatures may lead to the conversion and degradation of unstable substances. Five pre-incubation temperatures were studied: 20, 40, 60, 80, and 100 • C. The results ( Figure 1B) showed that the higher pre-incubation temperature indeed contributed to the volatilization and equilibrium of pyrazines. However, the total peak area was not significantly different between 80 • C and 100 • C. Thus, 80 • C for pre-incubation temperature was chosen as the SPME condition for further studies. Extraction temperature is an important factor affecting the extraction efficiency of aroma compounds. The increasing temperature is conducive to the release of aroma compounds from the matrix at the same extraction time. However, high temperatures may also cause the degradation of components in the food matrix and decrease the absorbent ability of SPME fibers. Five extraction temperatures were studied: 40, 50, 60, 70, and 80 • C. According to Figure 1C, the extraction temperature of 50 • C was selected. The effect of different extraction times (30,40,50, 60, and 70 min) was also evaluated at 50 • C. The optimum time was required to reach equilibrium in three phases: The fiber coating, the headspace, and the sample. SPME under equilibrium is an important condition for carrying out the MHS-SPME operation. Compared with extraction without equilibrium, extraction under equilibrium has better repeatability [32]. As shown in Figure 1D, 50 min was chosen as the optimal extraction time, which ensured extraction efficiency and established the equilibrium of pyrazines in three phases. On that basis of the above experiments, the PDMS/DVB/CAR (120 µm × 20 mm) fiber coating was chosen, and the optimal HS-SPME conditions were pre-incubation temperature: 80 • C; extraction temperature: 50 • C; and extraction time: 50 min. Amount of Oil Sample The basic operation of MHS-SPME involves repeating extraction several times from the same sample, while the peak area decreases exponentially with the number of extractions. 
Therefore, the amounts of the sample should not only meet the LOQs but also ensure the linearity between ln A i and the extraction numbers (i − 1), where ln A i shows linear decay with the number of extractions (Equation (2)). In order to satisfy that requirement, 0.4 < β < 0.95 should be fulfilled, which indicates that the slope of the linear plot ln A i versus i − 1 must be less than −0.0513 and more than −0.9163 [37]. Compared with the traditional SPME fiber, the SPME-arrow device has a larger sorbent phase and showed higher sensitivities and extraction amounts [20]. In this study, the effects of seven sample amounts and two fiber types on β and the coefficient of determination (R 2 ) were studied on flavor-enhanced rapeseed oils using the 20 mL HS vial. For MHS-SPME, two conditions should be met: (1) R 2 > 0.75, (2) 0.4 < β < 0.95, indicating that ln A i decayed linearly with the number of extractions. Figure 2A shows the sample amounts meeting R 2 > 0.75, while Figure 2B shows those meeting 0.4 < β < 0.95. As shown in Figure 2, the traditional SPME fiber exhibited the most pyrazines that met the requirements when the sample amount was 20.0 mg, while the SPME-arrow fiber showed the highest number at 20.0 mg and 50.0 mg samples. This was in line with the advantage of the SPME-arrow fiber that a larger sorbent phase volume gives a higher extraction capacity. When the sample amount was greater than 200 mg, almost no β satisfied the MHS-SPME applicability requirement, regardless of whether the traditional SPME fiber or the SPME-arrow fiber was used. The reason might be that the headspace was saturated in the multiple headspace extractions. However, a low sample amount may lead to poor repeatability because the mass of analytes is close to the LOQs, which means that the SPME-arrow fiber was more suitable for the quantitation of pyrazines in flavor oils by MHS-SPME. Therefore, 50.0 mg was selected as the sample mass in the following study. Validation of MHS-SPME-Arrow Method The analytical performance of the MHS-SPME-arrow-GC-MS was evaluated in terms of LOD, LOQ, inter-day precision, intra-day precision, and recovery, as shown in Tables 1 and 2. This method allows the analysis of a variety of different pyrazines, and the selected internal standard is a pyridine with a similar structure to pyrazines. Therefore, good selectivity is a necessary requirement for the correct identification and quantification of all analytes. The identification and quantitation of target compounds, including the internal standard, were based on the retention time and different selected ions, which are listed in Table 1.
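Returning to the applicability criteria above (R 2 > 0.75 and 0.4 < β < 0.95), the helper sketched after the quantification section can be reused to test a measured series; the series shown is again illustrative, and numpy and mhs_total_area are those from the earlier sketch:

# Applicability check for the criteria quoted above
a_total, beta, r2 = mhs_total_area([12400.0, 7600.0, 4700.0, 2900.0])
slope = np.log(beta)
applicable = (r2 > 0.75) and (0.4 < beta < 0.95)
# equivalently: ln(0.4) ~ -0.9163 < slope < ln(0.95) ~ -0.0513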
For the LOD and LOQ determinations, the refined rapeseed oil ("pyrazine-free") was used as the matrix to prepare standard solutions. The suitable concentration was estab- lished as LOD and LOQ of pyrazines with the signal-to-noise ratio (S/N) of pyrazines being 3.0 and 10.0, respectively [38]. As shown in Table 1, the LODs and LOQs of 13 pyrazines were in the range of 2-60 ng/g and 6-180 ng/g, respectively, indicating that MHS-SPME can be used in routine quantitation of pyrazines. To evaluate the precision of the method, five parallel experiments were carried out on commercial flavor-enhanced rapeseed oils to calculate the intra-day precision. The inter-day precision was determined by analyzing the same sample five times a week for three weeks. Both intra-day precision and inter-day precision were expressed as the relative standard deviation (RSD). The results showed that the RSD of intra-day and inter-day were both lower than 16%, verifying the good precision of the MHS-SPME-arrow method. Model experiments were also carried out to evaluate accuracy. Three spiked concentration levels in refined rapeseed oil were also studied by the standard addition method (SA). The results in Table 2 showed that the recoveries of pyrazines analyzed were in the range of 90% to 115% in all cases, indicating that the method was reliable and accurate within the concentration range of the model experiments. The stable isotope dilution analysis (SIDA) has proven to be very precise in the model experiments, even at a very low extraction yield [39]. The main reason is that SIDA uses the most suitable internal standard: Stable isotopes of the analytes, which can fully be recovered for the losses. This study compared the quantitative results of SIDA and MHS-SPME of flavor-enhanced rapeseed oil, using 2-methyl-pyrazine labeled 2 H 6 as the stable isotope internal standard. The concentration of 2-methyl-pyrazine was 1.21 ± 0.13 µg/g for SIDA and 1.43 ± 0.11 µg/g for MHS-SPME-arrow, respectively. There was no difference (p >0.05) by statistical analysis. Thus, the data showed that the MHS-SPME-arrow could be applied to the analysis of these pyrazines in oil samples. By this conclusion, the method has been set up after condition optimization and SIDA method verification. An efficient and effective MHS-SPME-arrow-GCMS analytical protocol is established and can be used to quantify the flavor compounds in oils. Analysis of Real Samples Flavor-enhanced oils are usually produced by the traditional pressing process, mainly including roasting, pressing, and filtering. Heterocyclic volatiles are commonly produced by the Maillard reactions among proteins, amino acids, and sugars during oilseed roasting [40]. Pyrazines are one group of the main Maillard reaction products and also give the main source of roasting-like aroma in flavor oils [6]. Flavor-enhanced sesame and peanut oils are the main representatives of such oils, while rapeseed oil also has a good market in China for its unique spicy and roasted flavor, occupying more than 30% of the entire Chinese rapeseed oil market [41]. For the quantification of pyrazines in the three market Life 2021, 11, 390 9 of 12 product samples, MHS-SPME-arrow-GC-MS were carried out as set up above. Prior to analysis of the samples, the standard calibration solution of pyrazines and 3-methyl-pyridine with known concentrations were analyzed to calibrate the detector responses. 
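A minimal sketch of this detector-response calibration step follows; the peak areas are hypothetical, and the approximately 25 mg/L level is the calibration solution mentioned in the standard-solution preparation:

# Hypothetical liquid-injection calibration of the detector response factor
# (concentrations and areas are illustrative, not values from the paper)
c_analyte_mg_l = 25.0      # pyrazine concentration in the calibration solution
c_istd_mg_l = 25.0         # 3-methyl-pyridine concentration in the same solution
area_analyte = 184000.0    # hypothetical peak area of the pyrazine
area_istd = 167000.0       # hypothetical peak area of the internal standard

# Relative response factor used later to convert MHS-SPME area ratios into masses
rf = (area_analyte / c_analyte_mg_l) / (area_istd / c_istd_mg_l)
print(rf)  # ~1.10 for these illustrative numbers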
Taking 2-ethyl-5-methyl-pyrazine in flavor-enhanced peanut oil as an example, Figure 3 shows the change of peak areas for four successive extractions. An exponential decrease in the target compound with the number of extractions was observed. The average concentrations and standard deviations of pyrazines, together with R 2 and the slope of the linear plot ln A i versus i − 1, are shown in Table 3. Thirteen pyrazines were found in the three oils. According to the discussion in Section 3.2, the optimal sample amount was taken as 50.0 mg. To apply the MHS-SPME method, the following requirements must be met: R 2 > 0.75 and −0.9163 < slope < −0.0513. The results showed that the R 2 of pyrazines in the three samples was greater than 0.98, of which more than 90% was greater than 0.99. All the slopes were in the range of −0.9163 to −0.0513. This shows that ln A i decreased linearly with the number of extractions and that the linear relationship was good, fitting the MHS-SPME-arrow method. There were significant differences in the concentrations of pyrazines in the different types of oils (p < 0.05). The total concentrations of pyrazines in the three samples were, in order from high to low: sesame oil, peanut oil, and rapeseed oil. The total amount of pyrazines in sesame oil was close to 14 times that of rapeseed oil. The pyrazines with the highest concentrations were 2-methyl-pyrazine and 2,5-dimethyl-pyrazine in all three oils. As described by previous studies [1,11], pyrazines may be the best indicator to measure roasted flavor intensity. It is now also clear that sesame oil has the strongest roasted flavor, as its concentration of pyrazines is the highest. Conclusions In summary, a reliable MHS-SPME-arrow-GC-MS method combined with ISTD for the quantitation of 13 pyrazines in flavor oils was developed. The SPME-arrow fiber was found to be more suitable for the MHS-SPME method, and the PDMS/DVB/CAR fiber coating was selected. The highest efficiency was achieved under the selected HS-SPME conditions. The novel method was verified against the stable isotope dilution analysis method and showed high sensitivity and accuracy.
Additionally, the pyrazines of three market product samples were analyzed and quantified using the new method, which proved that the MHS-SPMEarrow-GC-MS was suitable to quantify pyrazines in oils. In the future development, this method has great opportunities in the quantitation of other aroma active compounds such as alcohols and pyrroles. At the same time, as an absolute quantitative method to eliminate the matrix effect, this method also has an obvious limitation. That is that it requires multiple extractions for a single analysis, which means it takes a much longer time than the conventional HS-SPME-GC-MS. Therefore, there are some aspects for further improvement for this newly proposed method.
5,427.8
2021-04-26T00:00:00.000
[ "Chemistry", "Environmental Science" ]
Neuroimmune Semaphorin 4A in Cancer Angiogenesis and Inflammation: A Promoter or a Suppressor? Neuroimmune semaphorin 4A (Sema4A), a member of semaphorin family of transmembrane and secreted proteins, is an important regulator of neuronal and immune functions. In the nervous system, Sema4A primarily regulates the functional activity of neurons serving as an axon guidance molecule. In the immune system, Sema4A regulates immune cell activation and function, instructing a fine tuning of the immune response. Recent studies have shown a dysregulation of Sema4A expression in several types of cancer such as hepatocellular carcinoma, colorectal, and breast cancers. Cancers have been associated with abnormal angiogenesis. The function of Sema4A in angiogenesis and cancer is not defined. Recent studies have demonstrated Sema4A expression and function in endothelial cells. However, the results of these studies are controversial as they report either pro- or anti-angiogenic Sema4A effects depending on the experimental settings. In this mini-review, we discuss these findings as well as our data on Sema4A regulation of inflammation and angiogenesis, which both are important pathologic processes underlining tumorigenesis and tumor metastasis. Understanding the role of Sema4A in those processes may guide the development of improved therapeutic treatments for cancer. Introduction Angiogenesis is a complex physiologic process which is tightly controlled by several proteins such as VEGF, FGF (fibroblast growth factor), PDGF (platelet-derived growth factor), angiopoietin-1 and -2, ephrin-B2, and others [1]. Under physiological conditions the blood vessels in adults are already formed and rarely branch or sprout [1,2]. However, when a blood vessel is damaged, a complex repair process is activated in which several types of cells and signals coordinate the functions of endothelial and muscle cells involved in repair [1][2][3]. There are important steps in angiogenesis: (1) protease production which includes matrix metalloproteases (MMPs), a desintegrin and metalloprotease domain (ADAMs), a desintegrin and metalloprotease domain with trombospondin motif (ADAMTs), cysteine proteases such as cathepsins, and serine proteases such as tissue plasminogen activator (tPA); (2) endothelial cell migration and proliferation; (3) vascular tube formation; (4) connections of newly formed tubes; (5) synthesis of a new basement membrane; and (6) incorporation of pericytes and smooth muscle cells [1,3]. In addition to pro-angiogenic stimuli named above, several angiogenesis inhibitors-such as angiostatin, endostatin, vasostatin, TIMP (tissue inhibitor of metalloproteinases), platelet factor-4, osteopontin, and others-halt angiogenesis by stopping a formation of new blood vessels or even promoting blood vessel removal [1,3]. These opposing stimuli tightly regulate vascular homeostasis. Angiogenesis is also a vitally important process for tumor development and progression [3]. In order to grow, a tumor needs oxygen and nutrients which are supplied by new blood vessels. These vessels can form by an influence of angiogenic factors made by tumor cells themselves or by other surrounding cells which are stimulated by tumor cells to generate such factors [3]. The inhibitors of angiogenesis have been long considered as clinically important cancer-fighting agents. 
Most FDA (Food and Drug Administration) approved and clinically-useful anti-cancer therapeutics with anti-angiogenic effects are based on either inhibition or blockade of VEGF and its receptors. These include axitinib (tyrosine kinase inhibitor selective to VEGF-R1, -R2, -R3) for renal cell carcinoma [4]; bevacizumab (anti-VEGF humanized Ab) for several types of cancer, including lung, colorectal and cervical cancers [5]; sunitinib (triple-blocker, Abs to VEGF-R2, PDGF-Rb, and c-kit) for gastrointestinal stromal tumor, pancreatic, and renal cancers [6]; and several others [1,3]. However, more recent studies have shown some alarming side-effects in patients being treated with these drugs [7][8][9]. These undesirable consequences include an acute aortic dissection in a patient with liver tumors after a sixth round of sunitinib [7] or a jaw necrosis after axitinib treatment of a patient with renal cell carcinoma [9]. In some cases, the use of bevacizumab in patients with prostate cancer led to a confirmed anti-tumoral activity without a concomitant improvement in survival [8]. Moreover, targeting just one pathway in angiogenesis-e.g., VEGF-could be insufficient to disrupt cancer angiogenesis as other VEGF-unrelated pathways would stay intact. In addition to that, VEGF itself acting in tissues induces the expression of other molecules which can express either pro-or anti-angiogenic qualities thus promoting or compensating its direct effects. As an example of the above scenario, we previously have shown that the lung tissue VEGF expression induced a local inflammatory response characterized, in part, by a formation of new blood vessels, lung resident cell activation, and their upregulated expression of several neuroimmune proteins [10,11]. Among those upregulated proteins in lung DC (dendritic cells) was a member of Class IV semaphorin subfamily Sema4A. Thus, VEGF-induced lung tissue alterations can be, at least in part, Sema4A-mediated. However, whether Sema4A acts as an antior pro-angiogenic factor, remains to be determined as currently available publications examining its function came to opposite conclusions. Sema4A-receptor pathways form a complex system of intracellular and extracellular signals which regulate different physiological and pathological tissue processes. For example, Sema4A regulates proper retina formation [29], correct guidance of hippocampal neurons [30], angiogenesis [18,31], and adaptive immune response [12][13][14]16,19]. On the other hand, the Sema4A pathways are dysregulated in different diseases such as retinal degenerative diseases (retinitis pigmentosa type 35 and cone-rod dystrophy type 10) [29], allergy [10,14,19,22,23], infectious [14,32] and autoimmune diseases [14,26,28], and certain types of cancer [16,33]. The individual impact of each Sema4A-receptor pair in disease pathogenesis and/or progression needs to be dissected separately for the whole picture of Sema4A impact to be envisioned. This could be done, first of all, in vitro by applying the receptor knock-out or specific receptor blocking techniques in cells of interest, and in vivo using individual Sema4A receptor-deficient mice and their inter-crosses in the experimental models of certain diseases. Sema4A and Anti-Angiogenic Therapy in Cancer Tumor progression and metastasis require a growth in local tumor angiogenesis where new blood vessels form in order to supply cancer cells with growth nutrients. 
Tumor cells themselves and tumor-associated stroma secrete angiogenesis-promoting factors such as angiopoietin-2, follistatin, G-CSF (granulocyte colony-stimulating factor), HGF (hepatocyte growth factor), IL-8 (Interleukin 8), leptin, PDGF-BB, PECAM-1, VEGF, and MMP-1, -2, -3, -7, -9, -10, -12, and -13 [34]. It has been shown that VEGF mRNA expression was mainly targeted to primary colorectal tumor cells whereas angiopoietin-2 and HGF mRNA expression was targeted to tumor-adjacent stromal cells [34]. Interestingly enough, recent studies have shown that many tissue-specific tumors can grow alongside the blood vessels without a formation of new ones [35], thus abating effects of anti-angiogenic therapies in such tumors. Nevertheless, several angiogenic factors such as VEGF-A, VEGF-B, angiopoietin-1, osteopontin, fibroblast growth factor, MMPs, and others currently serve as targets in cancer treatment with FDA-approved inhibitors which all are being used in conjunction with chemotherapy [1,3]. The main target for angiogenesis-based cancer therapy is VEGF [36]. Currently, there are several small molecule inhibitors and monoclonal antibodies targeting the VEGF-A pathway, with their side-effects analyzed and reported [36]. Bevacizumab (avastin, recombinant humanized monoclonal Ab to VEGF) is currently used for treatment of metastatic colorectal cancer, non-sqamous non-small cell lung cancer, glioblastoma, metastatic renal cell carcinoma, metastatic or recurrent cervical cancer (in combination with chemotherapy), platinum-resistant recurrent epithelial ovarian, fallopian tube, or primary peritoneal cancer in combination with chemotherapy. Cabozantinib (Cabometyx, Cometriq, a small molecule inhibitor of the tyrosine kinases c-Met and VEGFR2) and pazopanib (Votrient, an inhibitor of three VEGF receptors) are used to treat advanced renal carcinoma. This type of cancer is also treated by sorafenib (Nexavar, a small inhibitor of several tyrosine protein kinases, including VEGFR) which also demonstrates therapeutic effects toward un-resectable advanced hepatocellular carcinoma and progressive differentiated radioactive iodine-resistant thyroid carcinoma. More recently developed Zif-Aflibercept (Eylea, Zaltrap, VEGF-Trap, a hybrid fusion protein of VEGFR-1 and VEGFR-2 binding domains) is used for metastatic colorectal cancer that is resistant to an oxaliplatin-containing regimen. The key side effect of anti-VEGF therapy is interference with the normal angiogenesis process where wound healing is highly disrupted (either delayed or incomplete). Indeed, when patients with metastatic colorectal carcinoma were treated with bevacizumab (Avastin, anti-VEGF-A mAb) they showed impaired wound healing and postoperative wound complications [37]. The reported side effects for bevacizumab include sensory neuropathy, hypertension, fatigue, and neutropenia [36]. Neutropenia and hypertension were also reported for ramucirumab use in addition to an increased risk of pneumonia. Thus, hypertension is the most known side-effect of VEGF inhibition as the ability of VEGF to decrease a blood pressure is well-documented. Other reported problems include an increased risk of arterial thromboembolic events caused by a disturbed regenerative capacity of endothelial cells [36]. As Sema4A is downstream of VEGF-induced signaling in the lung tissues [10,11], its effects on angiogenesis and tumor progression were of significant research interest. 
The role of Sema4A in angiogenesis has been previously evaluated in vitro and in vivo using either Sema4A-Fc fusion protein, recombinant human Sema4A, or/and Sema4A −/− mice [18,38]. In developing mouse embryos, a co-expression of Sema4A and Plexin D1 in the intersomitic blood vessels was detected, suggesting the potential role of this ligand-receptor pair in vascular formation [18]. To evaluate such effect, the authors studied HUVEC (human umbilical vascular endothelial cells) migration in transwell chamber using VEGF alone or in combination with several semaphorins. They found that VEGF-induced cell migration was suppressed by Sema4A-Fc. Furthermore, Sema4A-Fc and Sema3E-Fc, but not Sema4D-Fc, inhibited VEGF-induced tubular structure formation by HUVEC in the in vitro angiogenesis assay ( Figure 1). Interestingly enough, Sema4D showed the opposite to Sema4A effect on HUVEC, although these two semaphorins share Plexin D1 receptor [39,40]. This suggests the potential competition for the receptor binding between two semaphorins. However, it has been reported previously that the binding sites on Plexin D1 are different for each individual semaphorin ligand [40]. Nevertheless, there is a possibility that the binding of one Sema4 molecule could induce Plexin D1 modification, leading to another Sema4 molecule binding site to be hidden or inaccessible. Another study, however, has shown that a pro-angiogenic effect of Sema4D on endothelial cells is mediated by a different plexin family member, Plexin B1 [41], which is also a binding partner for Sema4A [38,39] (Figure 1). The signaling events occurring in endothelial cells under Sema4D exposure were dependent on a COOH-terminal PDZ-binding motif of Plexin B1, which binds two guanine nucleotide exchange factors for the small GTPase Rho, PDZ-RhoGEF and LARG, and were mediated by activation of Rho-initiated pathways. The signaling events under Sema4A exposure have never been examined in details. induced cell migration was suppressed by Sema4A-Fc. Furthermore, Sema4A-Fc and Sema3E-Fc, but not Sema4D-Fc, inhibited VEGF-induced tubular structure formation by HUVEC in the in vitro angiogenesis assay ( Figure 1). Interestingly enough, Sema4D showed the opposite to Sema4A effect on HUVEC, although these two semaphorins share Plexin D1 receptor [39,40]. This suggests the potential competition for the receptor binding between two semaphorins. However, it has been reported previously that the binding sites on Plexin D1 are different for each individual semaphorin ligand [40]. Nevertheless, there is a possibility that the binding of one Sema4 molecule could induce Plexin D1 modification, leading to another Sema4 molecule binding site to be hidden or inaccessible. Another study, however, has shown that a pro-angiogenic effect of Sema4D on endothelial cells is mediated by a different plexin family member, Plexin B1 [41], which is also a binding partner for Sema4A [38,39] (Figure 1). The signaling events occurring in endothelial cells under Sema4D exposure were dependent on a COOH-terminal PDZ-binding motif of Plexin B1, which binds two guanine nucleotide exchange factors for the small GTPase Rho, PDZ-RhoGEF and LARG, and were mediated by activation of Rho-initiated pathways. The signaling events under Sema4A exposure have never been examined in details. interaction with corresponding NRP-1 or -2, whereas Sema3G-NRP-2 interaction leads to angiogenesis inhibition. Sema4A-Plexin D1 interaction demonstrated anti-angiogenic effects in vivo in one study [18]. 
Sema3A is a direct target gene for miRNA-362 and it functions as angiogenesis and metastasis inhibitor [42]. Sema3E-Plexin D1signaling also demonstrated anti-angiogenic effects, however, promoted cancer invasion and metastasis. Both class IV semaphorin molecules, Sema4A and Sema4D, functionally interact with Plexin B1, which is expressed on EC. Sema4D-Plexin B1 interaction has been shown to promote cancer-related angiogenesis, cancer invasion, and metastasis [41]. The role of Sema4A-Plexin B1 interaction in these processes is unclear. In addition to that, it is not known if Plexin B2, with which Plexin B1 forms a functional heterodimer [43], is also expressed and functions on EC. Currently existing and future cancer therapies based on the inhibition of individual components of the Sema-Plexin-NRP-VEGF complex have to take in account different cancer histotypes where distinct semaphorins and their either individual or cross-binding receptors are differently expressed and regulated. The in vivo effect of rSema4A on vascularization in chick embryos has proven its indispensable role in blood vessel formation [18]. Chorioallantoic membrane (CAM) assays were used to evaluate such effects where gelatin sponges were inserted into chick embryos for three days. When examined thereafter, pre-treated with rSema4A sponges contained lower numbers of preformed blood vessels as compared to isotype control-pretreated sponges thus again proving the inhibitory role of Sema4A in angiogenesis. Pre-treatment of HUVEC with siRNA specific for individual Plexin family members, such as Plexin B1, D1, and A1, before rSema4A exposure determined Plexin D1 as its functional receptor on endothelial cells which mediates its anti-angiogenic activity (Figure 1) [18]. All of the discussed above results define Sema4A as a potent anti-angiogenic molecule and pave the way to its evaluation in cancer immunotherapy. However, a recent research by Meda and associates [31] has shown a pro-angiogenic role of Sema4A ligating Plexin D1 (Figure 1) on macrophages and stimulating their migration, VEGF-A production, and VEGF-R1 expression. Moreover, this Sema4A-VEGF-A pathway has been shown to be involved in macrophage activation and recruitment during inflammatory processes such as the experimental models of peritonitis and cardiac inflammation. Thus, considering the opposite effects of Sema4A on endothelial cells and macrophages, the identification of additional mechanisms of its action should be an important focus of future research aimed to develop of Sema4A-based therapeutic strategies to target cancer angiogenesis. We previously reported that VEGF expression in lungs induces potent angiogenesis and edema formation [11]. Staining of mouse lung tissues with Lycopersicon esculentum lectin demonstrates a normal arrangement of blood vessels in the tracheas and intrapulmonary bronchi of wild-type mice. These blood vessels formed cascades with capillaries crossing between arterioles and venules. In contrast, we observed multiple endothelial sprouts, mostly arising from the venules, in VEGF transgenic mice as early as on day 3 of transgene expression induction. The vascular density (the percent of the airway covered with vessels) reached its maximum on day 7 and remained elevated for at least a month thereafter. The newly formed blood vessels were larger than the capillaries of the VEGF-unaffected control airways. 
The endothelial cells of these vessels were thin, had occasional fenestrations, and were enveloped by pericyte processes and basement membranes. Besides angiogenesis, we studied the effect of lung VEGF expression on local immune cells. We have shown that lung DC were activated by VEGF-A and upregulated Sema4A and Plexin D1 expression [10]. Thus, for DC and macrophages, there is a positive feedback loop between VEGF-A and Sema4A, which bind the corresponding receptors, Plexin D1 and VEGF-R1, and mediate this loop's signaling pathways. However, as has been shown previously and stated in the Introduction, Sema4A uses different receptors on different cell types to regulate their activation and function. For example, it uses Neuropilin-1 to mediate mouse Treg cell phenotype stabilization and function [16], Plexin B1 to induce such an effect in human Treg cells (our unpublished observations), Tim-2 to co-stimulate mouse CD4 + T cells into the Th1 phenotype [14,15], Plexin B2 for an optimal differentiation of CD8 + T cells [21], and ILT-4 to co-stimulate human CD4 + T cells into the Th2 phenotype in vitro [19]. We did not detect Plexin B1 or Tim-2 expression on lung endothelial cells in mouse tissues in either steady-state or inflammatory conditions [10]. However, no such study was performed for human lung tissues. We analyzed the expression of Sema4A and Plexin D1 on human lung cancer tissue arrays using immunohistochemistry with the corresponding Abs (Figure 2). We found that blood vessels in cancer-associated inflammatory sites expressed both molecules (marked with red arrows in Figure 2). Thus, it is quite possible that Sema4A exerts its pro- or anti-angiogenic activity on pulmonary endothelial cells through the Plexin D1 receptor. This statement, however, requires extended, focused testing. Figure 2. Immunohistochemistry of human lung cancer-adjoined tissue on BioChain arrays was performed as a four-step assay. Primary Ab for Sema4A (sc-46258) and Plexin D1 (E-13) were obtained from Santa Cruz Biotech. Streptavidin-HRP (Abcam) was used as a detection enzyme, and a DAB peroxidase substrate kit (SK-4100, Vector) was used for staining visualization. Biotinylated rabbit anti-goat IgG was used as the secondary Ab. Red arrows indicate marker expression on endothelial cells. Panels on the left represent ×20 magnification; panels on the right show a magnification of ×40.
Sema3A, which shares the NRP-1 receptor with Sema4A, was identified as a potential anti-cancer semaphorin with anti-angiogenic signaling (Figure 1) [44]. Sema3A expression was analyzed in vitro in human cancerous cells and tissues and in vivo in three different genetically engineered mouse models of carcinogenesis [44]. The anti-tumor effects of Sema3A were directed toward the pruning and remodeling of abnormal blood vessels and increasing their coverage with pericytes, all of which led to a stable vascular normalization. Based on these activities, Sema3A was termed 'an endogenous angiogenesis inhibitor'. Moreover, the observed progressive decrease of Sema3A expression in endothelial cells, starting from pre-malignant lesions to actual tumors, suggested its potential as a prognostic biomarker for cancer progression. Another potential therapeutic target in anti-angiogenic tumor management is Sema3G (Figure 1), which also shares the NRP-1 receptor with Sema3A and Sema4A [45]. The transcriptomic profiling of different tissues linked Sema3G expression to endothelial cells during angiogenesis and development, which led to it being termed 'a vascular semaphorin'. The full-length Sema3G molecule (p87) has been shown to bind NRP-2 selectively, whereas the molecule processed by furin proprotein convertases (p61) binds both NRP-1 and NRP-2. Unlike Sema3E, which inhibits angiogenesis by an NRP-independent binding of its receptor Plexin D1 [46], Sema3G-NRP-2 signaling positively affects angiogenesis [45] (Figure 1). NRP-1 is required as a co-receptor for VEGF165 signaling through its canonical tyrosine kinases VEGFR-1 and -2 [47,48]. Notably, NRPs serve as cell surface receptors for multiple semaphorin molecules with either pro- or anti-angiogenic effects. Therefore, currently existing and future cancer therapies based on the inhibition of individual components of the Sema-Plexin-NRP-VEGF complex have to take into account the different cancer histotypes in which distinct semaphorins and their individual or cross-binding receptors are differentially expressed and regulated. Previously published data suggested that other Class III semaphorin members, Sema3B and Sema3F, could act as tumor suppressors, as they bind antagonistically to NRP-1 and NRP-2 and inhibit angiogenesis (Figure 1) [49]. More detailed examination of their actions supported a suppressive role for Sema3B in lung and renal cancers [50] and for Sema3F in oral squamous cell carcinoma [51]. We were interested in defining Sema4A's effect on VEGF-induced lung vascularization and inflammation. The main question was whether Sema4A further deepens VEGF-induced lung pathologies, acting as a pro-angiogenic and pro-inflammatory factor similarly to the earlier described effects of fatty acid binding protein 4 (FABP4, adipocyte-FABP, aP2) [52], or whether it is produced as a compensatory protective molecule aimed at diminishing or dampening VEGF-mediated tissue damage. FABP4 is an intracellular lipid chaperone which is induced in endothelial cells by VEGF exposure. It exhibits pro-angiogenic functions in vitro and in vivo by promoting endothelial cell proliferation, migration, survival, and morphogenesis. The generated VEGF tg/FABP4 −/− mice showed that FABP4 deficiency significantly reduced VEGF-induced airway angiogenesis and lung tissue inflammation. Sema4A and Anti-Inflammatory Therapy in Cancer We previously reported that lung VEGF-A expression induced local conventional DC (cDC) maturation and direction toward the DC2 phenotype [11].
These VEGF-stimulated cDC upregulated Sema4A expression [10]. To assess the role of Sema4A in allergen-induced lung inflammation, we used the OVA model of asthma in Sema4A −/− mice, where we found an exaggerated lung allergic response as compared to WT mice [22]. This suggests that Sema4A is a suppressive molecule for the in vivo Th2 response. We next crossed VEGF tg mice [11] with Sema4A −/− mice [14] and found that this semaphorin deficiency led to an increased inflammatory cell infiltration in the lungs of VEGF tg mice when transgene expression was turned on by doxycycline-containing water (Figure 3). As we have shown previously, lung bronchial epithelial expression of the VEGF transgene leads to an asthma-like phenotype with inflammation, parenchymal remodeling, increased vascularization, edema formation, mucous cell and myocyte hyperplasia, and airway hyperreactivity [11]. The observed lung tissue inflammatory response and vascularization in our VEGF tg/Sema4A −/− mice were more pronounced than those found in transgenic mice alone (Figure 3). In addition, we observed higher local levels of the Th2 cytokine IL-13 in VEGF tg mice with Sema4A deficiency (Figure 3B). In fact, IL-13 was a signature Th2 cytokine, in contrast to the unchanged levels of IL-4 and IL-5, upregulated in the Sema4A −/− lungs and spleens after allergen exposure [22]. Based on the well-established role of VEGF in angiogenesis and tumor pathogenesis, our preliminary data for the mouse models of experimental asthma in Sema4A −/− and VEGF tg/Sema4A −/− mice, and the publications discussed above on the inhibitory role of Sema4A in VEGF-induced angiogenesis, we suggest that Sema4A may act as a tumor suppressor interfering with at least three critical pathways in tumor development, progression, and metastasis: (1) immune cell activation and function; (2) inflammation; and (3) angiogenesis. Based on all of the above, Sema4A is a suppressive molecule for both allergen-induced and VEGF-mediated lung tissue responses, making it an attractive target for allergic disease immunotherapy. Indeed, when recombinant Sema4A protein was introduced into the allergic murine lungs, it significantly suppressed all features of an inflammatory Th2 response, such as lung eosinophilia, mucus hypersecretion, and proinflammatory and Th2 cytokine production [22]. We and others have shown that Sema4A affects Treg cells in vitro and in vivo [16,22], as the local lung Treg cell number decreases under inflammation in the setting of Sema4A deficiency [22]. Moreover, Sema4A acting through NRP-1 in mice [16] and Plexin B1 in humans (our unpublished observation) stabilizes Treg cell number and function.
Therefore, Sema4A serves as a downregulatory molecule for allergic diseases, suppressing allergen-dependent and -independent responses in part by upregulating the Treg cell response. However, a recent article by Lu and colleagues [19] has demonstrated a costimulatory effect of Sema4A on T cell, especially Th2 cell, activation and function. Further studies are warranted to elucidate the roles of Sema4A-ILT-4 in different diseases, including cancer. As a translational part of our research, we obtained human lung cancer tissue arrays (Z7020065, BioChain) and assessed them for Sema4A and corresponding receptor expression using commercially available antibodies. Tissue photomicrographs were taken using CoolSnap image capturing software (Roper Scientific Inc.) with a Nikon Eclipse E400 (Japan) microscope. The tissue arrays consisted of: (1) adenocarcinoma, Stages I to III (Stage I: the tumor is only present in the lungs; Stage II: the cancer has invaded the lymph nodes; Stage III: the cancer has invaded other organs); (2) bronchioalveolar carcinoma; (3) papillary carcinoma; (4) squamous cell carcinoma; and (5) small cell lung cancer. We found low-to-absent Sema4A expression in bronchioalveolar, papillary, and small cell lung cancer but stage-dependent increased levels of Sema4A in adenocarcinoma and squamous cell carcinoma (Figure 4). This observation supports the previous notion that the effects of different semaphorins on cancer progression are broad and context-dependent [40,53,54]. Cumulative recent findings define Sema3A as a potent anti-angiogenic and anti-malignant molecule in different types of cancer [42,44,55,56]. An initial study defined Sema3A as an endogenous angiogenesis inhibitor whose expression in cancerous tissues gradually declined with disease progression [44]. Moreover, recently published research showed that downregulation of Sema3A expression promoted cancer metastasis [42]. The latter study focused on non-small cell lung carcinoma (NSCLC), where miRNA-362-5p overexpression was associated with Sema3A downregulation as opposed to their expression levels in normal tissues. The demonstrated direct Sema3A-miRNA-362 interaction affected NSCLC invasion, migration, and colony formation. This study suggests that Sema3A is a direct target gene for miRNA-362 (Figure 1) and that it functions as an angiogenesis and metastasis inhibitor. Another recent study associated multiple myeloma (MM) and leukemia with low expression of Sema3A as compared to normal controls [57]. Serum Sema3A concentration inversely correlated with the MM stage, which makes this semaphorin molecule a prospective prognostic marker for the disease course. In parallel with the gradual replacement of healthy Sema3A-producing bone marrow cells by malignant cells, there was a concurrent increase in their VEGF expression, which further complicated MM. The value of the Sema3A expression level in different types of cancer as a marker of disease prognosis was discussed in a study designed to generate a safe tumor-suppressive Sema3A point mutant isoform [55]. For instance, this study showed that Sema3A binding to NRP-1 can increase vascular permeability without an inhibition of tumor growth (Figure 1) [55]. Moreover, it demonstrated that the main anti-angiogenic and anti-tumorigenic Sema3A activities were independent of NRP-1 binding and were instead the result of Sema3A-Plexin A4 signaling. Therefore, efforts were made to design a Sema3A mutant which binds Plexin A4 but not NRP-1 [55].
Such a mutant, Sema3A_Ig-b, was effective in vasculature normalization, tumor inhibition, slowing metastasis, and improving chemotherapy. Figure 4. Immunohistochemistry of human lung cancerous tissue arrays (BioChain) was visualized via a four-step staining procedure using anti-Sema4A (sc-46258, Santa Cruz) as the primary Ab, biotinylated rabbit anti-goat IgG (sc-2774) as the secondary Ab, streptavidin-HRP (Abcam) as the detection enzyme, and a DAB peroxidase substrate kit (SK-4100, Vector) for visualization (100× magnification). The stages of cancer are shown in Roman numerals. Staining specificity was compared with a goat IgG control Ab stain (not shown). In contrast to the Sema3A tumor suppressor effects, Sema3E, which shares the Plexin D1 receptor with Sema4A and Sema4D, has consistently demonstrated pro-tumoral effects [40,46,58]. The Sema3E-Plexin D1 interaction promoted tumor growth and metastasis (Figure 1) [46,58], and the expression of this ligand-receptor pair correlated positively with the metastatic progression of colon, liver, and melanoma cancers [46]. Moreover, knocking down either Sema3E or Plexin D1 hampered the metastatic potential of several human cancer cells upon xenotransplantation, indicating the importance of these molecules in the metastatic process. Extensive analysis of Sema3E gene expression in human colon carcinomas demonstrated its 88% association with metastatic disease. Analysis of human breast cancer showed elevated Sema3E expression in metastatic breast cancer as well [58]. Sema3E knock-down in breast cancer cell lines triggered their apoptotic death, which was rescued by the addition of rSema3E to the cell cultures.
The same effect was observed in cultures treated with a synthetic blocker of the Sema3E-Plexin D1 interaction, SD1, which is a soluble recombinant protein containing the Sema3E-binding Plexin D1 domain. Unexpectedly, two contrasting roles of Sema3E toward cancer have been demonstrated. It acts as an angiogenesis inhibitor by limiting tumor vessel density and as a metastasis promoter by stimulating tumor cell invasiveness, transmigration, and extravasation [46]. Interestingly, the Sema3E molecule displays a Sema4A-like function in allergic asthma as an inflammation inhibitor [59,60]. Furthermore, similarly to the reported effects of recombinant Sema4A administration to allergic lungs [61], intranasal instillations of Sema3E protected mice from allergen-induced airway inflammatory responses [59,60]. Therefore, certain functional parallels can be drawn between these two distinct semaphorins, Sema3E and Sema4A, but to this day it is unclear whether their seeming functional likeness in some diseases, asthma for example, is directly related to their signaling through Plexin D1. Given the many potential impacts of Sema4A on tumors, its detailed investigation will be beneficial for basic and clinical cancer research. Conclusions The data presented and discussed here show that Sema4A functions as a VEGF-opposing molecule in lung inflammation; however, its role in tumorigenesis and metastasis is not clearly defined. The currently proven association of a Sema4A mutation with Familial colorectal cancer type X (FCCTX) was first reported in 2014 [33,34]. A more recent study connected increased Sema4A expression in breast cancer tissues with disease progression [62]. Surprisingly, no new data for other types of cancer have been reported since then. Considering the multiple receptors translating Sema4A effects into different cells, the individual and/or dominating receptor function needs to be identified and analyzed first. Therefore, additional functions of Sema4A are likely to emerge in the near future. Funding: The research was supported by the NIH/NIAID R21AI076736 grant. Svetlana P. Chapoval is supported by the NIH/NIAID R01AI122631 grant and by SemaPlex LLC.
7,315
2018-12-30T00:00:00.000
[ "Medicine", "Biology" ]
Numerical Investigation of Distributed Velocity Feedback Control of the Radiated Noise of Curved Plates under Turbulent Boundary Layer Excitation: The control of decentralized velocity feedback on curved aircraft plates under turbulent boundary layer excitations is numerically investigated in this paper. Sixteen active control units are set on the plate to reduce the vibration and sound radiation of the plate. The computational results from the element-based model and the modal summation method are compared to verify the accuracy of the numerical model. The plate kinetic energy and the radiated sound power under turbulent boundary layer and control unit excitations are analyzed. The influences of control unit distribution, plate thickness, and curvature on the radiated sound are discussed. Unlike a flat plate, the control of the lower-order, highly radiating modes of a curved plate under TBL excitations is critical, since these modes dominate the sound radiation. The control of these modes, however, is sensitive to the ratio of the stiffness associated with the membrane tensions to the stiffness associated with the bending forces. This ratio implies that the plate curvature and the thickness play an important role in the control effect. When the plate is thinner and the radius is smaller, the control is less effective. Introduction The noise problem caused by the interaction between the turbulent boundary layer (TBL) pulsating pressure and aircraft side plates is one of the most representative problems in vibro-acoustics [1,2]. The many efforts devoted to the problem of TBL-induced structural noise can be summarized in three aspects. The first concerns wavenumber-frequency spectrum models quantifying the TBL excitation. Some well-known semi-empirical formulations, such as Corcos [3], Efimtsov [4], Williams [5], and Chase [6,7], have been obtained by fitting large amounts of experimental data and statistical turbulence theory. The second concerns the prediction of the vibration and radiated noise of a plate caused by TBL excitations. Graham [8,9] proposed a model to predict the TBL-induced noise of aircraft side and trim plates, in which the modal excitation terms are expressed analytically and the advantages of different TBL wavenumber-frequency spectrum models are discussed. Liu et al. [10] predicted the TBL-induced noise of a stiffened plate using the receptance method. It was found that the stiffeners perpendicular to the direction of the incoming flow have an obvious effect on the radiated noise. Rocha and Palumbo [11] investigated the sensitivity of the sound power radiated by aircraft plates to TBL parameters, and discussed the finding by Liu [12] that ring stiffeners may increase TBL-induced noise radiation significantly. Liu [13] further compared predicted TBL-induced vibrations with the in-flight measured data of the P180, providing a simplified double integral for the calculation of the modal excitation term. The third aspect is the passive methods for the control of the radiated noise. It has been reported that passive damping is always effective in controlling the vibration and noise caused by TBL. However, the reduction in vibration level is more significant in comparison with the radiated noise level, which implies that the radiation efficiency of the plate increases with increasing damping treatment. Kou et al. [14] described formulas to include the influence of structural damping on the radiation efficiency of finite and infinite plates.
Thus, the phenomenon that the radiation efficiency of a plate increases with the increase in the damping treatment is explained. Kou et al. [15] also concluded that the modal averaged radiation efficiency increases significantly with the increase in the convection velocity below the hydrodynamic coincidence frequency, and the damping effect is more significant with the increase in the flow velocity. In addition to passive methods, active methods have great potential for the control of TBL-induced plate noise. Among them, the control strategy based on distributed velocity feedback has received much attention for acoustically induced or TBL-induced noise [16,17]. The simulation results given by Elliott et al. [18] and Jayachandran et al. [19] show that distributed velocity feedback is unconditionally stable over a large range of gain coefficients, which makes it a relatively robust control method. However, the force driver needs a certain large mass to generate the reaction force, and when a large force is required in the low-frequency range, the force driver will be relatively large and heavy. In practice, it is more convenient to use piezoelectric patch actuators integrated with the plates. Gardonio et al. [20][21][22] used piezoelectric patch actuators and acceleration sensors to analyze in detail the control effect of distributed velocity feedback control and the existence of an optimal gain coefficient from theoretical and experimental perspectives. These works further show that distributed velocity feedback control is easy to implement and that its control effect is approximately optimal. Since it is usually not convenient to obtain the physical information of the structure, it is difficult to obtain the optimal gain coefficient. To solve this problem, Cao et al. [23,24] proposed the concept of the virtual absorption energy of piezoelectric patches, which uses the maximum virtual absorption energy to obtain the best gain coefficient and is easier to measure compared to the kinetic energy or the acoustic radiation power. Distributed velocity feedback control is applicable not only to diffuse sound field excitation but also to random excitation and TBL excitation. Rohlfing et al. [25] specified the mesh density of finite elements on the plate and investigated the effectiveness of negative velocity feedback control of uniform and lightweight sandwich panels under random excitation and TBL excitation. The control effects of a series of ideal negative velocity feedback control loops on a homogeneous plate and a lightweight sandwich plate were compared. Alouf et al. [26] developed a new active control mechanism for aircraft cabin windows using an active structural acoustic control strategy that provides a significant improvement in acoustic attenuation performance at low frequencies. The effects of voltage, actuator position, and actuator number on the sound transmission characteristics were analyzed. Yuan et al. [27] investigated the decentralized velocity feedback control of thin plates under TBL excitation based on a newer semi-empirical TBL model, and the results showed that the pre-stress effect and hydrodynamic coincidence have a large effect on plate vibration, which has an important influence on the vibro-acoustic performance of the plate and the selection of the number of control channels. Ma et al.
[28] investigated the decentralized velocity feedback control of a ribbed plate using inertial actuators and discussed the effects of the feedback gain and the number of actuators on the control performance, further demonstrating the existence of an optimal gain for decentralized velocity feedback control. Typical aircraft plates generally exhibit unidirectional curvature. A typical case is that, when an aircraft plate is excited by the TBL, the direction of the air velocity is perpendicular to the curved direction of the plate. The sound radiation properties of curved and flat plates can be significantly different. As pointed out in reference [10], the curvature results in a convergence of the resonance frequencies of the plate caused by the interaction of the bending forces and the membrane tensions in the shell. This convergence not only increases the modal density of the curved plate around the ring frequency but also increases the sound radiation efficiency of these modes by shifting them to relatively higher frequencies. Although the active control of flat plates can be found in many works in the literature, there are few studies on the acoustic characteristics of actively controlled curved plates under TBL excitations. Graham [8] studied the induced noise of aircraft wall panels under TBL excitation, elucidating that the presence of panel membrane tension causes a shift of the lowest resonant frequency toward higher frequencies. Nourzad et al. [29] used inertial actuators to control the vibration and radiation of doubly curved plates and analyzed the effect of curvature on their vibration response. In this paper, the control effect on a curved thin plate under TBL excitations is numerically investigated. Sixteen active control units are distributed over the plate, and each active control unit includes a piezoelectric actuator, an acceleration sensor, and a feedback controller. The kinetic energy and radiated sound power of the plate are discussed in detail for different curved plate thicknesses, bending curvatures, and active control unit distributions. Mathematical Model and Theoretical Calculation The decentralized velocity feedback control of a simply supported plate through active control units is illustrated in Figure 1, where the plate can be flat or curved, with an air medium on either side of the plate and random TBL excitations on one side. Sixteen active control units are uniformly distributed on the rectangular plate. The element-based model divides the plate into a series of small rectangular elements, the dimensions of which are l_xe = L_x/(4M) and l_ye = L_y/(4N), where L_x and L_y are the length and width of the plate, respectively, and M and N are the highest numbers of calculated modes. The mass density of air is ρ_0 = 1.21 kg·m⁻³, and the speed of sound is c_0 = 340 m·s⁻¹. The perturbations acting on the plate are assumed to be harmonic. For the sake of brevity, the time-harmonic term is omitted from the complex forms of the velocity and force, so ẇ(t) = Re{ẇ exp(jωt)} and f(t) = Re{f exp(jωt)} are replaced by ẇ and f, respectively. The modal summation method is used to solve the acoustic and vibration response of a simply supported rectangular thin plate under turbulent boundary layer excitation.
The cross-power spectral density function of the velocity response of the plate at any two points r_1 and r_2, S_vv(r_1, r_2, ω), is defined in [13], where ω is the angular frequency, S_pp(s_1 − s_2, ω) is the cross-power spectral density function of the TBL excitation at two points s_1 and s_2, and H(r, s, ω) is the frequency response function between an excitation point s and a response point r, which can be expressed as a modal summation in which (m, n) is the mode order in the horizontal and vertical directions. The characteristic function φ_mn(r) is orthogonal and satisfies the same boundary conditions as the plate; for the rectangular simply supported plate it is a product of sine functions in the x and y directions. The modal displacement W_mn(ω) is given by Equation (4), where m_s is the surface density of the plate, A is the area of the plate, and η is the loss factor. Equation (5) gives the (m, n)-th mode resonance frequency of the flat plate, and Equation (6) gives that of the curved plate [30], where D_b is the bending stiffness of the plate, E is the Young's modulus, ρ is the density of the plate, and R_y is the radius of curvature of the plate in the y-axis direction. For calculating the response and radiation of plates with decentralized velocity feedback control, element-based models are more commonly used [25]. It can be assumed that each small rectangular element of the plate has a uniform transverse vibration velocity equal to that of its center point, v_e(ω), and that the transverse vibration velocity picked up by a velocity sensor is v_c(ω). These two variables can be written in vector form, where R is the total number of small elements on the plate and S is the number of active control units. The TBL force at the center point of each small element, as well as the force at each control point, can likewise be expressed as a vector. The closed-loop velocity feedback block diagram is shown in Figure 2. Assuming that the system is linear, the response to the TBL excitation can be linearly superimposed on the response to the active control unit excitation. The transverse vibration velocities at the element center points and at the control points can therefore be expressed in terms of four mobility matrices: Y_ee, from the TBL forces to the element center points; Y_ec, from the control forces to the element center points; Y_ce, from the TBL forces to the control points; and Y_cc, from the control forces to the control points, where (x_s, y_s) is the position of the piezoelectric patch. l_xe and l_ye are the length and width of each small element, respectively, and the size of the piezoelectric patch is l_sx × l_sy = 25 mm × 25 mm. When the feedback control unit acts, the velocity of the control point is fed back, so the feedback control force is f_c = −h·v_c, where h is the gain coefficient.
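Because the bodies of Equations (3) through (6) did not survive the text extraction, the following minimal sketch reconstructs the simply supported mode shapes and the flat- and curved-plate resonance frequencies from standard thin-plate and shallow-shell theory. The membrane-stiffness term, the nominal aluminium properties (E = 71 GPa, ρ = 2700 kg·m⁻³, ν = 0.33), and the normalization of the mode shape are assumptions; the exact expressions used in reference [30] may differ.

```python
import numpy as np

def plate_mode_shape(m, n, x, y, Lx, Ly):
    """Mode shape of a simply supported rectangular plate (unnormalized):
    phi_mn(x, y) = sin(m*pi*x/Lx) * sin(n*pi*y/Ly)."""
    return np.sin(m * np.pi * x / Lx) * np.sin(n * np.pi * y / Ly)

def natural_frequency(m, n, Lx, Ly, h, E, rho, nu, Ry=None):
    """Resonance frequency (rad/s) of mode (m, n).

    Flat plate:   omega^2 = (D_b/m_s) * ((m*pi/Lx)^2 + (n*pi/Ly)^2)^2
    Curved plate: an extra membrane-stiffness term, here assumed to take the
    shallow-shell form (E*h/Ry^2) * kx^4 / (kx^2 + ky^2)^2, is added before
    dividing by the surface density m_s.
    """
    Db = E * h**3 / (12.0 * (1.0 - nu**2))   # bending stiffness D_b
    ms = rho * h                             # surface density m_s
    kx, ky = m * np.pi / Lx, n * np.pi / Ly
    k2 = kx**2 + ky**2
    stiffness = Db * k2**2                   # bending contribution
    if Ry is not None:
        stiffness += (E * h / Ry**2) * kx**4 / k2**2  # membrane contribution (assumed)
    return np.sqrt(stiffness / ms)

# Illustration with the 0.55 m x 0.5 m x 1 mm panel used later in the paper
# and nominal aluminium properties (assumed values, not the paper's Table 1)
E, rho, nu = 71e9, 2700.0, 0.33
f_flat = natural_frequency(1, 1, 0.55, 0.5, 1e-3, E, rho, nu) / (2 * np.pi)
f_curv = natural_frequency(1, 1, 0.55, 0.5, 1e-3, E, rho, nu, Ry=2.0) / (2 * np.pi)
print(f"(1,1) mode: flat {f_flat:.0f} Hz, curved R_y = 2 m {f_curv:.0f} Hz")
```

With the curvature term included, the (1,1) mode moves to a much higher frequency, consistent with the behavior of the curved plates discussed below.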
Substituting Equation (13) into Equation (10), the velocity at the control points after feedback control is obtained; the control force in Equation (13) can then be written in terms of the TBL forces and, substituting it into Equation (9), the transverse vibration velocity at the element center points of the controlled plate is obtained, where G_ee(ω) is the mobility matrix of the element center points of the plate after the control is applied. The model proposed by Corcos [3] has been widely used to describe the TBL excitation of plates. According to the Corcos model, the cross-power spectral density of the TBL surface pressure decays exponentially with the streamwise and spanwise separations and carries a convective phase along the x-axis, where U_c is the convection velocity of the TBL, U_c = 0.7·U_∞, and U_∞ is the flow velocity of the air. The decay parameters γ_1 and γ_3 in the equation are dimensionless numbers determined from experimental data; here, γ_1 = 0.116 and γ_3 = 0.7, as given by Finnveden [31], are used. The quantity r_xi,j is the x-direction distance between the center points of the i-th and j-th elements, and r_yi,j is the y-direction distance between them. The element-based model then solves for the power spectral density of the vibration velocity, where A_e is the area of each small element. The kinetic energy power spectral density of the plate can be expressed through Equation (20), where m_e is the mass of each small element, G_ee(ω) is the mobility matrix of the element center points after the control is applied, as mentioned above, tr[·] represents the trace of a matrix, and S_ff is the power spectral density matrix of the TBL forces on the elements, given by Equation (21), in which A_e is the area of each element and Φ_pp is the auto-power spectral density of the pulsating pressure in the turbulent boundary layer. The radiated sound power is given by Equation (22), where R_rad is the radiation resistance matrix [32]. In its expression, k_0 = ω/c_0 is the acoustic wavenumber; on the diagonal of the radiation matrix r_i,j = 0, so the diagonal terms are indeterminate, but L'Hopital's rule gives lim_{x→0} sin x/x = 1. Model Validation and Response and Radiation of Plates To validate the model, an aluminum plate with the same parameters and excitations as in reference [22] is considered. The length of the plate is 0.278 m, the width is 0.247 m, the thickness is 1.6 mm, and the flow velocity is 225 m·s⁻¹. The physical property parameters of the aluminum plate are shown in Table 1. The kinetic energy and radiated sound power of the plate due to TBL excitations are calculated and shown in Figure 3, where a comparison with the modal summation method is provided. In the calculated frequency range, the results based on the element model in this paper are only slightly lower than those of Gardonio [25] and are in very good agreement with those of the modal summation method [13]. The slight difference is due to the fact that the self-power spectral density of the TBL pulsating pressure is ignored in the calculations. These results verify the correctness of the current model. When 16 active control units are uniformly distributed on the plate, the kinetic energy and radiated sound power of the plate decrease markedly with the increase in the gain coefficient, as shown in Figure 4.
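The expressions in Equations (14) through (23) are only partially recoverable from the extracted text, so the sketch below assembles the element-based pipeline in the conventional way: a Corcos force cross-spectral matrix built with the quoted coefficients γ_1 = 0.116 and γ_3 = 0.7, the closed-loop mobility for the decentralized feedback law f_c = −h·v_c, and the kinetic energy and radiated sound power obtained from matrix traces. The element-radiator form of R_rad, the 1/4 factor in the kinetic energy, and the unit single-point pressure spectrum Φ_pp are assumptions rather than expressions taken from the paper.

```python
import numpy as np

def element_centres(Lx, Ly, M, N):
    """Centres of the 4M x 4N elements of the element-based model
    (element sizes l_xe = Lx/(4M), l_ye = Ly/(4N)); returns centres and Ae."""
    lxe, lye = Lx / (4 * M), Ly / (4 * N)
    xs = (np.arange(4 * M) + 0.5) * lxe
    ys = (np.arange(4 * N) + 0.5) * lye
    X, Y = np.meshgrid(xs, ys, indexing="ij")
    return np.column_stack([X.ravel(), Y.ravel()]), lxe * lye

def corcos_force_psd(centres, Ae, omega, Uinf, phi_pp=1.0, g1=0.116, g3=0.7):
    """S_ff = Ae^2 * Phi_pp * Corcos coherence between element centres;
    x is taken as the streamwise direction and Uc = 0.7 * Uinf."""
    Uc = 0.7 * Uinf
    rx = centres[:, None, 0] - centres[None, :, 0]
    ry = centres[:, None, 1] - centres[None, :, 1]
    coh = np.exp(-(g1 * np.abs(rx) + g3 * np.abs(ry)) * omega / Uc)
    return Ae**2 * phi_pp * coh * np.exp(-1j * omega * rx / Uc)

def radiation_matrix(centres, Ae, omega, rho0=1.21, c0=340.0):
    """Element-radiator radiation resistance matrix (assumed standard form):
    off-diagonal terms proportional to sin(k0*r_ij)/(k0*r_ij), diagonal terms
    equal to the limit 1, scaled by omega^2*rho0*Ae^2/(4*pi*c0)."""
    k0 = omega / c0
    d = centres[:, None, :] - centres[None, :, :]
    kr = k0 * np.linalg.norm(d, axis=-1)
    sinc = np.where(kr > 0.0, np.sin(kr) / np.where(kr > 0.0, kr, 1.0), 1.0)
    return (omega**2 * rho0 * Ae**2 / (4.0 * np.pi * c0)) * sinc

def closed_loop_mobility(Yee, Yec, Yce, Ycc, h):
    """G_ee = Y_ee - h * Y_ec (I + h*Y_cc)^(-1) Y_ce: the mobility seen by the
    TBL forces once every control unit applies the feedback force f_c = -h*v_c."""
    S = np.eye(Ycc.shape[0]) + h * Ycc
    return Yee - h * Yec @ np.linalg.solve(S, Yce)

def kinetic_energy_and_power(Gee, Sff, Rrad, me):
    """Kinetic-energy PSD (1/4 factor assumed for the time-averaged energy)
    and radiated sound power PSD from the controlled velocity PSD matrix."""
    Svv = Gee @ Sff @ Gee.conj().T
    return 0.25 * me * np.real(np.trace(Svv)), np.real(np.trace(Rrad @ Svv))
```

With these pieces, the controlled velocity cross-spectral matrix is S_vv = G_ee S_ff G_ee^H, from which kinetic energy curves of the Figure 3 type and radiated power curves of the Figure 4 type can be generated once the mobility matrices are supplied by the modal model.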
In Figure 4, a comparison of different gain coefficients and different amounts of passive damping is also provided to show the equivalent damping mechanism of the velocity feedback control. In the frequency range of 50−1000 Hz, the control effects at gain coefficients of 5, 10, and 20 agree very well with the effects at loss factors of 12.5%, 23%, and 45%, respectively. This reveals that velocity feedback control can be regarded as a form of active damping, and the plate is heavily damped after the control. Next, an aluminum plate is considered whose length is 0.55 m, width is 0.5 m, and thickness is 1 mm, with a loss factor assumed to be 0.01. To illustrate the influence of actuator position, two placements of the active control units are considered, as shown in Figure 5. The squares in Figure 5 represent the positions of the active control units when they are uniformly distributed in 4 rows and 4 columns, and the corresponding response and sound radiation of the plate are shown in Figure 6(a1,a2). The circles mark actuator positions slightly offset from the squares to avoid the influence of the modal nodal lines, and the results are shown in Figure 6(b1,b2). Below 200 Hz, the kinetic energy and radiated acoustic power of the plate are well controlled for both placements. As the frequency increases, the control provided by the uniformly distributed active control units becomes less effective. The poorly controlled resonant frequencies in Figure 6(a1,a2) correspond to plate modes of order (5, n). The active control unit positions correspond exactly to the nodal lines of these higher-order modes, making the control of these modes ineffective. When the actuator positions deviate slightly from the modal nodal lines, the control effect at higher frequencies is significantly improved, as shown in Figure 6(b1,b2). Response and Sound Radiation of Curved Plates Compared with a flat plate, it is well known that curvature has a significant influence on the radiated sound around the ring frequency. Therefore, the effect of curvature on the velocity feedback control results is of interest. In this section, several curved plates with different curvatures and thicknesses are considered, as shown in Table 2. Compared with the flat plate in Figure 6, the vibration and radiated sound of the curved plate are significantly different, as shown in Figure 7(a1,a2). When the active control is not applied, it is evident that the presence of curvature reduces the plate response at low frequencies, but the radiated sound power near the ring frequency increases significantly. In particular, due to the curvature, the (1,1) mode moves to a higher frequency and dominates the sound radiation. As for the flat plate, distributed velocity feedback performs better below 200 Hz; however, the control is no longer effective when the frequency is above 200 Hz. Especially for the (1,1) mode, the radiated sound power does not decrease significantly with the increase in the gain coefficient. Even if the active control units deviate slightly from the uniform distribution to avoid the influence of the modal nodal lines, the control effect on the (1,1) mode is not satisfactory, as shown in Figure 7(b2). If the radius of curvature is increased, as shown in Figure 8 for the curved plates C2 and C3, the (1,1) mode is shifted toward lower frequencies, while the effect of the control is improved.
When the radius of curvature is 1 m, the sound power of the (1,1) mode can be reduced by about 6 dB after control, and when the radius of curvature is 3 m, the reduction can reach 13 dB. These results imply that the control effect is related to the radius of curvature. When the radius of curvature is large, the curved plate is close to a flat plate, while when the radius of curvature is small, the control is not very effective for the dominant lower-order modes. In addition, the results for curved plates with different thicknesses are shown in Figure 9. When the radius of curvature is 2 m and the thickness is 1 mm, the control effect for the (1,1) mode can reach about 8 dB, and when the thickness is 1.6 mm and 2 mm, the sound power of the (1,1) mode can be reduced by 9 dB and 14 dB, respectively. This implies that the control effect is also sensitive to the thickness of the curved plate. Figure 9. Kinetic energy and radiated sound power of the curved plates C4 and C5 for a uniform distribution of actuators; (a1,a2) with a thickness of 1.6 mm, (b1,b2) with a thickness of 2 mm; (a1,b1) kinetic energy, (a2,b2) radiated sound power. To further explain the above control effect on the curved plate, the characteristic frequency of the curved plate described in Equation (6) can be rewritten so that the first term in the second bracket is the stiffness corresponding to the bending forces and the second term is the stiffness corresponding to the membrane tensions (Equations (24) and (25)). The equivalent total stiffness of the curved plate is then the sum of the two terms (Equation (26)), and the ratio of the second term to the first term in Equation (26) is defined as χ (Equation (27)). The quantity ∆S_pp,ω(1,1) indicates the control effect of the active control units on the (1,1) mode and is defined as ∆S_pp,ω(1,1) = S_pp,ω(1,1)(h = 0) − S_pp,ω(1,1)(h) (Equation (28)). The existence of the membrane tension of a curved plate not only increases the radiated sound power of the (1,1) mode significantly but also weakens the control effect. To describe this phenomenon, the variation of the control effect with χ and with the gain coefficient h for the (1,1) mode is shown in Figure 10. In Figure 10a, the variation of χ is caused by the change in thickness, while in Figure 10b, the variation of χ is caused by the change in the radius of curvature. It can be concluded that the overall trend is for the control effectiveness to decrease as the ratio χ increases. When χ is less than 70, the control is effective and mostly greater than 10 dB. When χ is larger than 70, the control is less effective, and when it is greater than 100, the control effect can be less than 5 dB. These results indicate that the control effect is sensitive to the ratio of the stiffness related to the membrane tensions to the stiffness related to the bending forces. Table 3 shows the χ values of the first 12 modes when the thickness is 1 mm, 1.6 mm, and 2 mm, respectively. It can be seen that the χ values of the (1,1) and (2,1) modes are significantly greater than those of the other modes, and this is caused by the stiffness related to the membrane tensions expressed in Equation (24). Table 3 also shows that the χ values decrease as the thickness of the curved plate increases. The value decreases from 105.28 to 26.32 when the thickness increases from 1 mm to 2 mm. According to Figure 9(a2), the larger the value of χ, the less effective the active control is. When χ is 105.28, the control deteriorates progressively as the gain coefficient increases from 60 to 120, which explains why the curved plate in Figure 7(a2) is not controlled well at higher gain coefficients.
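A compact way to reproduce the stiffness ratio χ of Equation (27) is to take the ratio of the assumed membrane-tension stiffness to the bending stiffness used in the frequency sketch above. With nominal aluminium properties this gives (1,1)-mode values close to the 105.28 and 26.32 quoted in Table 3 for the 1 mm and 2 mm plates, which supports the reconstruction; the exact constants in Equations (24) through (26) nevertheless remain assumptions.

```python
import numpy as np

def stiffness_ratio_chi(m, n, Lx, Ly, h, E, nu, Ry):
    """chi = (membrane-tension stiffness) / (bending-force stiffness) for
    mode (m, n), using the same assumed shallow-shell membrane term as above."""
    Db = E * h**3 / (12.0 * (1.0 - nu**2))
    kx, ky = m * np.pi / Lx, n * np.pi / Ly
    k2 = kx**2 + ky**2
    k_bend = Db * k2**2                       # bending-force stiffness
    k_mem = (E * h / Ry**2) * kx**4 / k2**2   # membrane-tension stiffness (assumed)
    return k_mem / k_bend

# chi of the (1,1) mode for the 0.55 m x 0.5 m panel, R_y = 2 m, three thicknesses
for h in (1e-3, 1.6e-3, 2e-3):
    chi = stiffness_ratio_chi(1, 1, 0.55, 0.5, h, 71e9, 0.33, 2.0)
    print(f"h = {h * 1e3:.1f} mm -> chi = {chi:.1f}")
```

Note that χ written this way is independent of the Young's modulus, so only the thickness, the curvature radius, the panel dimensions, and the Poisson ratio control it, which is consistent with the trends reported in Figure 10 and Table 3.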
It can be seen that the curvature introduces membrane stress in the bending cross-section of the plate; this membrane stress is uniformly distributed along the bending direction and increases the stiffness of the plate in the low-frequency region, and the lower the resonance frequency, the more significant the increase in stiffness caused by the membrane stress. The above study shows that the magnitude of χ has a large effect on the control effect of the active control units on the curved plate. The modes corresponding to larger values of χ are more sensitive to changes in the gain coefficient, and an optimal gain exists. To illustrate the optimal gain for controlling the sound power of the curved plate, Figure 11 shows a three-dimensional plot of the control effect versus frequency and gain coefficient. The left figure shows a three-dimensional view of the radiated sound power of the curved plate, and the right figure is the side view of the three-dimensional plot, showing the radiated sound power of the (1,1) mode. The radiated sound power of the curved plates first decreases and then increases with the increase in the gain coefficient. For the (1,1) mode, the curved plate with the 1 mm thickness has the best control effect at a gain of 58, and the maximum sound reduction is about 9 dB. The 1.6 mm thick curved plate reaches its best control at a gain of 140, with about 16 dB of sound reduction, and the 2 mm thick curved plate achieves its largest sound reduction of about 20 dB at a gain of 315. The above results show that, as the thickness of the curved plate increases, the corresponding optimal gain increases, and the control effect of the active control units at the optimal gain is significantly improved. Table 3. χ as a function of thickness and curvature, listed by modal order number and curved plate thickness. Figure 11. Variation of the radiated sound power with frequency and gain coefficient; the 3D view on the left, the side view of the 3D plot on the right; sixteen actuators uniformly distributed; (a1,a2) with a thickness of 1 mm, (b1,b2) with a thickness of 1.6 mm, (c1,c2) with a thickness of 2 mm. Conclusions The control effects of distributed velocity feedback on flat and curved plates under TBL excitation are investigated. For the flat plate, the results show that the mechanism of distributed velocity feedback is equivalent to passive damping. The effective control frequency band can be significantly widened when the actuators deviate slightly from the uniform distribution. For the curved plate, even if the active control units deviate slightly from the uniform distribution, the control effect is not significantly improved for the highly radiating (1,1) mode. The curvature and thickness of the curved plate have a large effect on the active control of the (1,1) mode: increasing the radius of curvature from 1 m to 3 m increases the control effect of the active control units from 6 dB to 13 dB, and increasing the thickness from 1 mm to 2 mm increases it from 7 dB to 14 dB. The changes in curvature and thickness have a significant effect on the ratio χ of the stiffness associated with the membrane tensions to the stiffness associated with the bending forces, and the control effect is sensitive to the magnitude of χ.
When χ is less than 70, the control effect is mostly greater than 10 dB; when χ is greater than 70, the control effect is poor; and when χ is greater than 100, the control effect may be less than 5 dB. For the (1,1) mode, the value of χ decreases as the thickness of the curved plate increases, while the optimal gain coefficient increases and the control of the radiated sound power is improved. The optimal gain coefficient for a 1 mm thick curved plate is 58, with a control effect of approximately 9 dB; for a 1.6 mm thick curved plate, the optimal gain coefficient is 140, with a control effect of approximately 16 dB; and for a 2 mm thick curved plate, the optimal gain coefficient is 315, with a control effect of approximately 20 dB. The element-based model quantifies the effects of different curvatures and thicknesses on the decentralized feedback control of curved plates, which is useful for guiding the design of wall panels for a passenger aircraft cruising at high speed. Conflicts of Interest: The authors declare no conflict of interest.
6,666.8
2023-04-19T00:00:00.000
[ "Engineering", "Physics" ]
Whey Protein Isolate as a Substrate to Design Calendula officinalis Flower Extract Controlled-Release Materials The use of natural active substances and the development of new formulations are promising directions in the cosmetic and pharmaceutical industries. The primary purpose of this research was the production of microparticles based on whey protein isolate (WPI) and calcium alginate (ALG) containing Calendula officinalis flower extract and their incorporation into films composed of gelatin, WPI, and glycerol. Both swollen and dry microparticles were studied by optical microscopy and their sizes were measured. Water absorption by the microparticles, their loading capacity, and the release profile of the flower extract were also characterized. The films were analyzed by mechanical tests (Young's modulus, tensile strength, elongation at break), swelling capacity, contact angle, and moisture content measurements. The presented data showed that the active ingredient was successfully enclosed in spherical microparticles and completely released after 75 min of incubation at 37 °C. The incorporation of the microparticles into polymer films caused a decrease in stiffness and tensile strength, simultaneously increasing the ductility of the samples. Moreover, the films containing microparticles displayed higher swelling ability and moisture content compared to those without them. Hence, the materials prepared in this study with Calendula officinalis flower extract encapsulated into polymeric microspheres can be a starting point for the development of new products intended for skin application; advantages include protection of the extract against external factors and a controlled release profile. Introduction Whey is a by-product of cheese manufacturing from bovine milk. Whey proteins are the main protein component of ruminant milk after caseins, and they constitute 20% of all proteins in milk. Whey protein occurs in three main forms: isolate (WPI), concentrate (WPC), and hydrolysate (WPH). These fractions differ in the percentages of proteins, lipids, and carbohydrates [1]. During the purification process, fat and lactose are removed from whey protein, yielding WPI, whose protein content is at least 90%. The main proteins consist of β-lactoglobulin, α-lactalbumin, glycomacropeptide, immunoglobulins, bovine serum albumin, lactoferrin, lysozyme, proteose peptones, and others [2]. However, their content varies depending on the season and type of produced cheese [3,4], the composition and type of milk [5], and the nature of the WPI purification process (e.g., membrane separation, filtration processes, ion-exchange chromatography) [6,7]. Exposure of whey proteins to elevated temperatures above 60 °C initiates structural changes in the proteins, which lead to the formation of extensive hydrogel networks [8]. Irreversible heat-induced gelation results from peptide denaturation and aggregation processes through covalent intermolecular bonds and other non-covalent intermolecular interactions, such as hydrophobic and electrostatic interactions [9]. The pH of the solution and the ionic strength have a significant impact on the spatial structure of the protein and are thus of great importance during protein hydrogel formation [10]. WPI exhibits wide functionalities due to its emulsifying, gelling, foaming, and water-binding properties [11,12]. WPI is becoming an increasingly popular functional and active food ingredient because it is produced in very large amounts and demonstrates numerous health benefits to humans. WPI has been
applied not only in the food industry, but also in the cosmetic and pharmaceutical industries and the preparation of biomaterials [13][14][15]. WPI, as a dairy industry by-product, constitutes a relatively cheap and versatile material for various uses, such as encapsulation and thin polymeric film preparation [16][17][18][19]. Microparticles are spherical particles intended to enclose various substances, such as extracts [20], drugs [21], vitamins [22], dyes [23], perfumes [24], etc., in a polymeric matrix, depending on their application. Different methods can be employed for the production of microparticles, such as emulsion [25], extrusion [26], coacervation [27], or spray drying [28,29]. The main determinants for selecting the proper production method and wall material are the morphology and physicochemical properties of the microparticles and the type of encapsulated substance [30]. The main advantages of encapsulation include the protection of enclosed substances from external factors and undesired reactions (e.g., oxidation or deactivation). Hence, encapsulation fulfills a dual function in that it simultaneously increases and maintains the stability of these substances. Further reasons for encapsulation are control and modification of the release rate of substances, separation of incompatible materials, as well as masking of organoleptic properties of substances such as color, taste, and odor [31,32]. To date, various research studies have been carried out to enhance the properties of thin polymer films by combining different polymers [33], adding plasticizers [34,35], or even microparticles [36,37]. However, to the best of our knowledge, there is no report on the incorporation of Calendula officinalis flower extract into microparticles made from WPI and the modification of films by the addition of such microparticles. Calendula officinalis, also known as pot marigold, is an annually flowering plant belonging to the Compositae family. Although it is native to the Mediterranean and the Middle East, it is grown in many countries and sometimes grows as a wild plant. The composition of its extract is complex; it mainly comprises carbohydrates, lipids, terpenoids, carotenoids, and phenolic compounds, including phenolic acids, tannins, coumarins, and flavonoids [38,39]. For this reason, Calendula officinalis preparations possess multiple activities, including antioxidant, antibacterial, antifungal, antiviral, anti-inflammatory, and wound healing activities [40,41]. The aim of the present study was the production of microparticles with Calendula officinalis flower extract and thin films using WPI and the investigation of their morphological and physicochemical properties. Microparticles were obtained from WPI and sodium alginate using an extrusion method and Ca2+ as a crosslinking agent; conversely, films were fabricated using gelatin, WPI, and glycerol, and further modified by calcium alginate microparticles (ALG). The ultimate goal is the development of new, highly effective materials intended for skin application. The isolation of the pot marigold flower extract in microspheres will enable its release in a controlled manner. These types of materials can form the basis for the design of new cosmetics (such as cosmetic masks) or new carrier systems for dermatological applications.
Characterization of Microparticles The appearance of dry and swollen microparticles based on WPI and ALG containing Calendula officinalis flower extract is shown in Figure 1. The morphological observations showed that the swollen microparticles were spherical in shape. They became less regular after drying. The swollen and dry samples possessed smooth surfaces. On the basis of the optical microscope images, the appearance of the samples appeared to be independent of their composition. The prepared microparticles were characterized by water absorption, loading capacity of the plant extract, and measurement of their sizes (Table 1). Based on the presented data, the diameter of the WPI/ALG microparticles decreased by a factor of more than two after drying. The size of the swollen samples was approximately 2250 µm. Moreover, the analysis demonstrated that the obtained microparticles revealed a high water absorption capacity. The release profile of the pot marigold extract embedded in the microparticles based on WPI and calcium alginate in acetate buffer at 37 °C is shown in Figure 2. As one can see from Figure 2, the active substance encapsulated in the prepared microparticles was completely released after 75 min. On the basis of the analyses of the prepared microparticles, samples consisting of 4% WPI and 0.5% calcium alginate were selected for inclusion in polymer films.
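The working formulas behind the water absorption percentages and loading capacities in Table 1 are not given in the text; the short sketch below shows the gravimetric and spectrophotometric relations that are conventionally used for these quantities. The conventions (dry-mass normalization, gallic acid equivalents) and the numerical inputs are assumptions chosen only to illustrate the calculation, not measurements from the study.

```python
def water_absorption_percent(swollen_mass_g, dry_mass_g):
    """Gravimetric water absorption relative to the dry microparticle mass
    (assumed convention)."""
    return 100.0 * (swollen_mass_g - dry_mass_g) / dry_mass_g

def loading_capacity_mg_per_g(extract_mg_per_ml, extract_volume_ml, dry_particle_mass_g):
    """Loading capacity in mg of encapsulated extract (quantified
    spectrophotometrically as gallic acid equivalents) per g of dry microparticles."""
    return extract_mg_per_ml * extract_volume_ml / dry_particle_mass_g

# Illustrative numbers only, chosen to match the orders of magnitude reported
print(water_absorption_percent(0.93, 0.10))           # about 830 %
print(loading_capacity_mg_per_g(1.465, 10.0, 0.05))   # about 293 mg/g
```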
Materials Characterization 2.2.1. Mechanical Properties The values of Young's modulus, tensile strength, and elongation at break during the stretching of dry films and films soaked in PBS buffer (pH = 5.7) are shown in Table 2. The film thickness was measured before testing. The thickness of the films without microparticles was approximately 0.16 mm, whereas samples containing microparticles displayed a thickness of 0.21 mm. The measurements revealed that the mechanical properties differed due to changes in the films' composition. Dry films composed of gelatin and glycerol had lower values of Young's modulus (497 ± 78 MPa) and tensile strength (29 ± 2 N), as well as higher elongation at break (17 ± 2%), which indicates that they were more flexible and broke later than the samples containing WPI (669 ± 83 MPa, 33 ± 4 N, and 4 ± 1%, respectively). Incorporating microspheres into both GEL and GEL/WPI films led to a slight decrease in the values of Young's modulus and tensile strength, while the values of elongation at break were slightly higher. Thus, the samples without the microspheres (GEL and GEL/WPI) were fractionally stiffer than those with the addition of microspheres (GEL + M(WPI 4% + ALG 0.5%) and GEL/WPI + M(WPI 4% + ALG 0.5%), respectively). Considering the samples before soaking in PBS buffer, the highest Young's modulus (669 ± 83 MPa) and tensile strength (33 ± 4 N) values were displayed by the samples composed of gelatin, WPI, and glycerol, whereas the lowest values were displayed by the film containing gelatin, glycerol, and microparticles; in this case, Young's modulus and tensile strength were 474 ± 62 MPa and 23 ± 3 N, respectively.
Swelling Tests Figure 3 shows the swelling percentage ratios of films prepared from gelatin, WPI, and glycerol with and without the addition of WPI microparticles, measured during 3 h of incubation in PBS buffer (pH = 5.7). The swelling took place at a constant rate. After 15 min, the protein-based films had absorbed PBS buffer, increasing their weight by 260-280%. Three hours later, their weight had increased by up to 530% for the film based on gelatin and WPI and by 580-590% for the gelatin films and gelatin/WPI films containing microparticles. Contact Angle Results The results of contact angle measurements with diiodomethane (D) and glycerol (G) for the protein-based films are presented in Table 3. It was impossible to measure the contact angles of the films containing microparticles due to their high surface roughness. The polymeric film composed of gelatin and glycerol displayed higher contact angles for both liquids, glycerol and diiodomethane (75.8 ± 0.4° and 52.8 ± 1.4°, respectively), than the film containing WPI, gelatin, and glycerol (71.4 ± 0.4° for glycerol and 50.4 ± 0.8° for diiodomethane). The addition of WPI to the film composition led to a change in the non-covalent forces between the first monolayer of the film and the liquid and, therefore, decreased the contact angles. The results of moisture content measurements after drying the samples in an oven at 110 °C to a constant weight are shown in Figure 4. Moisture content is a parameter connected with the volume occupied by water molecules in the microstructural network of the film.
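The same gravimetric logic is assumed for the film measurements: the swelling ratio is expressed relative to the initial film mass and the moisture content relative to the mass before oven drying at 110 °C. These conventions are inferred from the reported magnitudes rather than stated in the text, and the example numbers are purely illustrative.

```python
def swelling_percent(swollen_mass_g, initial_mass_g):
    """Swelling ratio of a film after incubation in PBS (pH 5.7),
    relative to the initial film mass (assumed convention)."""
    return 100.0 * (swollen_mass_g - initial_mass_g) / initial_mass_g

def moisture_content_percent(initial_mass_g, dried_mass_g):
    """Moisture content from drying at 110 C to constant weight,
    relative to the initial film mass (assumed convention)."""
    return 100.0 * (initial_mass_g - dried_mass_g) / initial_mass_g

# Illustrative numbers only
print(swelling_percent(0.69, 0.10))              # about 590 %
print(moisture_content_percent(0.200, 0.162))    # about 19 %
```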
The results of the moisture content measurements, obtained after drying the samples in an oven at 110 °C to a constant weight, are shown in Figure 4. Moisture content is a parameter connected with the volume occupied by water molecules in the microstructural network of the film. According to the results, the film composition affects the samples' moisture content. The highest moisture content was observed for the film composed of gelatin, glycerol, and microparticles (19%). In contrast, the lowest moisture content value was displayed by the sample containing gelatin, WPI, and glycerol (12.5%).

Discussion
The samples composed of 5% WPI and 0.5% ALG displayed the highest water absorption rate (approximately 830%). In turn, the lowest water absorption was displayed by the M(WPI 4% + ALG 1%) samples (approximately 670%). The loading capacity of Calendula officinalis flower extract in the variously formulated carriers was determined by the spectrophotometric method. As mentioned in other studies, the use of alginate alone leads to a low encapsulation efficiency [42,43], owing to diffusion through the porous structure of the hydrogels. However, combining alginate with proteins improves the encapsulation of the active substance [43,44]. The results showed that the composition of the samples impacted the incorporation efficiency of the active ingredient. The largest amount of plant extract was entrapped in the M(WPI 5% + ALG 0.5%) microparticles (approximately 293 mg/g based on gallic acid). A lower content of calcium alginate in the microspheres is related to a higher loading capacity.

Phenolic compounds have been encapsulated in both polymeric micro- and nanoparticles in order to control their release rate in various media [45,46], although targeted release remains difficult to achieve. Therefore, studying the behavior of the ALG/WPI microparticles containing pot marigold extract in an acidic environment is of great importance for a better understanding of their potential application in dermatology and cosmetics. The composition of the microspheres influenced the plant extract release rate. The microparticles composed of 0.5% calcium alginate showed a faster release of the active ingredient than the samples made from 1% of this polysaccharide. A two-stage release profile was observed for the samples containing 1% calcium alginate: after 60 min, there was a rapid increase in release from these microparticles. In contrast, the samples with 0.5% of polysaccharide exhibited a smooth release rate.
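The release profiles compared above are built from the per-interval phenolic readings of the in vitro test, in which the buffer is replaced at each sampling time. A minimal sketch of that bookkeeping is shown below; all numerical values are hypothetical and are not taken from the study.

```python
import numpy as np

# Sketch: turn per-interval Folin-Ciocalteu readings (buffer replaced each time)
# into a cumulative release curve. All numbers here are hypothetical.
times_min = np.array([15, 30, 45, 60, 75])
released_per_interval_mg = np.array([0.8, 0.6, 0.5, 1.4, 1.1])  # gallic acid equiv.
loaded_mg = 5.9                     # total extract loaded in the weighed sample

cumulative_mg = np.cumsum(released_per_interval_mg)  # intervals simply add up
release_pct = 100.0 * cumulative_mg / loaded_mg

for t, p in zip(times_min, release_pct):
    print(f"{t:3d} min: {p:5.1f}% released")
```

A sharp jump in one interval (as for the 1% alginate samples after 60 min) shows up as a step in the cumulative curve, whereas a roughly constant per-interval amount gives the smooth profile described for the 0.5% samples.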
As expected, the soaking of the materials led to a significant decrease in Young's modulus and tensile strength values due to their hydration. The wet samples were significantly less stiff than the samples prior to soaking. The findings of the present study are consistent with other studies investigating the mechanical properties of protein-based films. The microstructural and physical properties of films composed of WPI and gelatin have been investigated [47]. It was noted that WPI exhibited a more twisted network microstructure (compared to the organized network of gelatin), which could improve the film's mechanical strength and reduce its ductility. During the mixing of WPI and gelatin, the particles' size may be reduced by electrostatic attraction and hydrogen bonding (between the WPI amido and the gelatin carboxyl groups); hence, the formed chains could be thickened. Cao et al.
evaluated the effect of the soy protein isolate/gelatin ratio on the mechanical properties of composite films [48]. They also attributed the changes in tensile strength to protein/protein intermolecular interactions determined by hydrogen bonds, by electrostatic interaction, and/or by hydrophobic nature. The sequence of amino acid residues and the three-dimensional network influence these interactions. Pérez-Gago analyzed the influence of WPI denaturation time and temperature on the physical properties of WPI films plasticized with glycerol [49]. They found that with increasing heat-denaturation time (from 5 to 20 min) and temperature (from 70 to 100 °C), Young's modulus, tensile strength, and percentage elongation increased. This was attributed to the covalent disulfide bonding of the heat-denatured whey protein films created during the unfolding of the globular whey protein, which resulted in stronger films that withstand greater deformations.

It can also be noted that the addition of microparticles caused a decrease in the values of Young's modulus and tensile strength and a rise in elongation at break (for both dry and soaked films). This indicates that the addition of microparticles modified or disrupted the original structure of the polymeric matrix. The same observations have been reported in other papers [37,50]. Gelatin films modified with papaya peel microparticles showed a lower Young's modulus and tensile strength than the control sample due to a lack of cohesion of the residues with gelatin [36]. The authors of the latter study emphasized the cohesion of the polymer matrix constituents, arising from a good interaction between the microparticles and the polymer matrix, as the predominant source of the film's mechanical strength.

All prepared samples also contained glycerol, a plasticizer that reduces intermolecular hydrogen bonding while increasing the intermolecular spacing and mobility of biopolymer chains [51]. It is assumed that the protein-protein interactions are replaced by polymer-plasticizer hydrogen bonds created by the plasticizer's polar groups (−OH) [48]. Therefore, these interactions may be affected by the plasticizer's molecular size, configuration, total number of functional hydroxyl groups, and compatibility with the selected polymer. Glycerol has been found to be one of the most effective plasticizers: due to its small size, it can penetrate more easily between the polymer chains and weaken the interactions between them, thus increasing the material's flexibility and extensibility [52].

The materials were not soluble in water; thus, it was possible to carry out the swelling measurements. The swelling degree is an indicator of the degree of protein cross-linking. Swellability depends on the structure and properties of the solvent and the polymer, as well as the interactions between them [53]. It can be seen that the films composed of gelatin and glycerol displayed slightly higher swelling properties than the films with the addition of WPI. Moreover, the films containing microparticles also displayed higher water uptake. The insolubility of WPI may cause lower water uptake by the protein-based films due to the intermolecular disulfide bonds formed during the heat-denaturation process [54]. Corresponding swelling ratios were observed in research performed by Amjadi et al. on WPI-based films containing nanoemulsions of orange peel essential oil for packaging purposes [55]. They observed a swelling ratio of ~1000% after 24 h of immersion in water. Esteghlal et al.
investigated how the physical and mechanical properties of gelatin/carboxymethyl cellulose (CMC) films are affected by the electrostatic interactions between the biopolymers [56]. They found that the swelling properties are influenced by the mixing ratio and by different pH values (the swelling ratio ranged from 240 to 585%). Moreover, Cao et al. noticed that with increasing gelatin content in gelatin/soy protein isolate films, the swelling capacity increased (from 400 to 950%) owing to the higher swelling properties of gelatin compared to the soy protein isolate (SPI) [48].

The surface free energies and their polar and dispersive components were determined using the Owens-Wendt method (Table 3). It can be seen that the samples containing WPI had higher polar (6.8 mJ/m²) and dispersive (28.5 mJ/m²) components compared to the gelatin film (5.2 mJ/m² for the polar and 28.0 mJ/m² for the dispersive component). The surface free energies for the gelatin and gelatin/WPI films were 33.2 mJ/m² and 35.3 mJ/m², respectively. Based on the low values of the polar components, it is concluded that both films possessed less-hydrophilic surfaces; however, the film surface of the sample containing WPI displayed a slightly higher polarity. This can be ascribed to the intermolecular interactions between gelatin and WPI, which interfered with the orientation of polar groups toward the film surface. Glycerol, as well as the hydroxyl, amino, and carboxyl groups of the two polymers, participated in the formation of hydrogen bonding. Furthermore, WPI and gelatin molecules could form compact aggregates through electrostatic interactions [57,58].

Films containing WPI showed a lower moisture content than the samples without WPI. However, the introduction of WPI/ALG microparticles into the polymer films led to a higher moisture content. Other researchers have made similar observations; Shams et al. evaluated the moisture content of WPI/gelatin films modified by nanoclay and orange peel extract, and the control film had a moisture content of approximately 34% [59]. The effect of glycerol, xylitol, and sorbitol on the physical properties of WPI films has also been investigated [60]. It was observed that the samples plasticized with glycerol displayed the highest moisture content (from 40 to 60%, depending on the plasticizer/protein ratio), whereas the addition of xylitol and sorbitol resulted in a moisture content of 15-20%. WPI-based films have also been reported to display a moisture content of ~16.5% [54], whereas films containing gelatin displayed a moisture content of ~14.5% [61].

Preparation of Microparticles
Microparticles (M) consisting of WPI and calcium alginate (ALG) were prepared using an encapsulator (B-395 Pro, BÜCHI Labortechnik AG, Flawil, Switzerland). Microparticles were prepared from solutions with different concentrations of WPI and sodium alginate: a WPI solution with a concentration of 4% or 5% and a sodium alginate solution with a concentration of 0.5% or 1% were used. First, a mixture of WPI and sodium alginate with the addition of 0.5% marigold flower extract was prepared. The ingredients were mixed on a magnetic stirrer for an hour at room temperature and then left without stirring for 2 h to ensure the complete hydration of the proteins. After this time, the polymer solutions with the plant extract were heated for 40 min at 80 °C to denature the proteins contained in the WPI. The resulting solutions were cooled overnight at room temperature [63].
The production of microparticles using the encapsulator started by transferring the WPI and sodium alginate solution containing Calendula officinalis flower extract to a pressure bottle. Then, the mixture was forced through a 1000 µm diameter nozzle and separated into droplets by an electrical field. The formation of the microparticles took place in a bath with a crosslinker solution (0.5 M CaCl2), which was continuously stirred to prevent the agglomeration of the microparticles. The produced calcium alginate microspheres were kept in the bath with the crosslinking solution for 15 min. The collected microparticles were rinsed with distilled water and immersed in the extract.

Imaging of Microparticles
The appearance and sizes of the prepared microparticles were observed with an SMZ-171 BLED optical microscope (Motic, Hong Kong, China) at a magnification of ×10. Imaging of both swollen and dry polymer microspheres was performed. Drying of the samples lasted 72 h at room temperature. The images and diameters of the samples were recorded using Motic Images Plus 3.0 software.

Water Absorption of Microparticles
Each type of the obtained microparticles was weighed after drying for 72 h and after immersion in phosphate saline buffer (pH = 5.7) for 2 h. The test was performed in triplicate for all microparticle types. The water absorption capacity, Equation (1), was defined as the ratio of the increase in weight of the swollen microparticles (Ww) to the initial weight of the dry microparticles (Wd).

Loading Capacity of Microspheres
The loading capacity of the microspheres was determined by quantifying the phenolic compounds contained in the Calendula officinalis flower extract enclosed in the microspheres. For this purpose, the spectrophotometric method with the Folin-Ciocalteu reagent was used [64]. The microspheres were weighed and immersed in 2 mL of 1 M NaOH for 1 h. After centrifuging the samples, the supernatant solution was collected. A total of 20 µL of the sample with the extract was mixed with 1.58 mL of distilled water, and 100 µL of Folin-Ciocalteu reagent was added. After 4 min, 300 µL of saturated Na2CO3 solution was added. The mixture was incubated for 30 min at 40 °C to obtain the typical blue color. The absorbance was measured at a wavelength of 725 nm using a UV-Vis spectrophotometer (UV-1800, Shimadzu, Kyoto, Japan). The presented results were calculated based on gallic acid using the standard curve equation. Three measurements were made for each type of sample.

In Vitro Release
The release of the extract entrapped in the microparticles was also investigated by evaluating the phenolic content using the spectrophotometric method. Each type of microsphere was weighed and placed in acetate buffer (pH = 5.4). Samples were incubated at 37 °C. The solution was collected after 15, 30, 45, 60, and 75 min, and a new portion of acetate buffer was added to the microspheres. Samples for measurement were prepared as in Section 4.3.3, using the Folin-Ciocalteu reagent. Absorbance was measured at 725 nm with a UV-Vis spectrophotometer (UV-1800, Shimadzu, Kyoto, Japan) [65].
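The body of Equation (1) referenced in the Water Absorption subsection above did not survive extraction. A plausible reconstruction, assuming the water absorption capacity is expressed as a percentage weight gain relative to the dry mass (consistent with the ~670-830% values reported), is:

```latex
% Hypothetical reconstruction of Equation (1), not reproduced from the source.
% W_w = weight of the swollen microparticles, W_d = weight of the dry microparticles
WA\,(\%) = \frac{W_w - W_d}{W_d} \times 100
```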
Preparation of Films with Microspheres
The films were fabricated from gelatin, WPI, and a plasticizer (glycerol) using a solution casting technique [66]. The scheme of the fabrication of the gelatin/WPI/glycerol films with microparticles, as well as the preparation of the WPI and calcium alginate microparticles, is presented in Figure 5. First, a solution of gelatin and WPI was prepared at concentrations of 4% and 2%, respectively, by mixing the ingredients on a magnetic stirrer for 1 h at room temperature. After this time, 2% (w/v) glycerol was added and stirring was continued at 80 °C for 30 min. Then, a 5.5% suspension of the microspheres was added to the obtained solutions. After analyzing the prepared microparticles, the M(WPI 4% + ALG 0.5%) type was selected for incorporation into the films due to its smallest size in the swollen state. The mixtures were cast onto Petri dishes and allowed to dry at room temperature for 7 days. This matrix was denoted as GEL/WPI + M(WPI 4% + ALG 0.5%). Films were also prepared from a 4% gelatin solution following the procedure described above (GEL + M(WPI 4% + ALG 0.5%)). For comparison, matrices without the addition of microparticles were prepared (GEL/WPI and GEL). The thickness of the obtained films was measured with a digital dial thickness gauge at a resolution of 0.001 mm (Sylvac, Yverdon-les-Bains, Switzerland).

Mechanical Tests
The mechanical properties of the prepared films with and without microspheres were studied using a mechanical testing machine equipped with tensile grips (EZ-Test SX Texture Analyzer, Shimadzu, Kyoto, Japan). Specimens with initial dimensions of 50 mm in length and 4.5 mm in width were prepared by cutting with a dumbbell-shaped cutter. Both the dry specimens and specimens soaked for 5 min in PBS buffer (pH = 5.7) were examined. The prepared specimens were inserted between the machine clamps and stretched to break. The elastic modulus (Young's modulus, E) was calculated from the slope of the stress-strain curve in the linear region. The tensile strength and the elongation at break of the films were also determined. The measurements were carried out at a velocity of 2 mm/min. The results were recorded using Trapezium X software (version 1.4.5, Shimadzu, Kyoto, Japan). Five measurements were made for each type of film.
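As a minimal illustration of the analysis described above, the sketch below computes Young's modulus from the slope of the initial linear region of a stress-strain curve, together with the maximum force and the elongation at break. The force-extension curve, the linear-elastic window, and the use of the full specimen length as the gauge length are all assumptions for illustration; real data would come from the Trapezium X export.

```python
import numpy as np

def tensile_metrics(force_N, extension_mm, width_mm=4.5, thickness_mm=0.16, gauge_mm=50.0):
    """Return Young's modulus (MPa), tensile strength (max force, N), elongation at break (%)."""
    area_mm2 = width_mm * thickness_mm            # specimen cross-section
    stress_MPa = force_N / area_mm2               # N/mm^2 == MPa
    strain = extension_mm / gauge_mm              # dimensionless
    linear = strain <= 0.01                       # assumed linear-elastic window
    E_MPa = np.polyfit(strain[linear], stress_MPa[linear], 1)[0]
    return E_MPa, force_N.max(), strain[-1] * 100.0

# Hypothetical curve: a linear ramp that saturates before failure
ext = np.linspace(0.0, 2.0, 200)                  # mm
force = np.clip(25.0 * ext, 0.0, 30.0)            # N
print(tensile_metrics(force, ext))
```

The tensile strength is reported here as the maximum force in N, matching the way Table 2 reports it.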
Evaluation of Swelling Capacity
The swelling ratio of the obtained films was tested by immersion in phosphate saline buffer (PBS) at pH 5.7 for 3 h. The dry samples were weighed (W1) and placed in the PBS solution. The measurements were conducted after 15 min, 30 min, 1 h, 2 h, and 3 h. After each interval, the samples were removed from the phosphate saline buffer and reweighed (W2) [67]. The swelling degree of the films was calculated using Equation (2).

Contact Angle Measurements
The contact angles (°) of two liquids, diiodomethane (an apolar liquid) and glycerol (a polar liquid), on the polymeric films were measured at a constant room temperature (22 °C) using a DSA G10 goniometer equipped with a drop shape analysis system (Krüss GmbH, Wolfsburg, Germany). To obtain the contact angle values, the average of five measurements was calculated. The surface free energy and its polar and dispersive components were calculated using the Owens-Wendt method [68].

Moisture Content
The moisture content of the films, with and without the addition of the microparticles based on WPI and ALG, was determined by measuring the weight loss after drying in an oven at 110 °C to a constant weight [69]. After removal from the oven, the samples were stored in a desiccator. The samples were analyzed in triplicate. The moisture content (MC, %) was defined from the initial weight (Wi) of each sample and the weight after drying (Wd) using Formula (3).
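The bodies of Equation (2) and Formula (3) are missing from the extracted text. Plausible reconstructions, assuming both quantities are expressed as percentages of the reference weights defined above, are:

```latex
% Hypothetical reconstructions, not reproduced from the source.
% Equation (2): swelling degree from dry weight W_1 and swollen weight W_2
SW\,(\%) = \frac{W_2 - W_1}{W_1} \times 100
% Formula (3): moisture content from initial weight W_i and dried weight W_d
MC\,(\%) = \frac{W_i - W_d}{W_i} \times 100
```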
Statistical Analysis
One-way ANOVA with Tukey's pairwise analysis was performed to statistically compare the results of the microparticle characterization (size, water absorption, and loading capacity) and the film characterization (mechanical properties and moisture content). GraphPad Prism 8 (GraphPad Software, San Diego, CA, USA) was used for all analyses. Data are shown as the mean ± S.D. for each experiment. p-values < 0.05 were considered significant. Statistically significant differences were marked with different superscript letters.

Conclusions
The focus of this study was to incorporate microparticles based on WPI and ALG containing Calendula officinalis flower extract into various WPI/gelatin-based films, mimicking a dermatological material for sustained, controlled delivery. Pot marigold extract was selected because of its beneficial antioxidant, anti-inflammatory, antimicrobial, and anti-viral properties. Microparticles consisting of 4% WPI and 0.5% ALG were incorporated into the films. The WPI/gelatin-based films displayed enhanced mechanical strength, reduced ductility, slightly higher polarity, and lower moisture content compared to the gelatin films. Furthermore, the microparticle-loaded samples demonstrated a higher capacity for water uptake and were less stiff than those without microparticles. The vital advantage of microparticles is the possibility to control the release rate of an active substance. The obtained results indicate the potential of GEL and GEL/WPI films modified with the addition of microspheres as materials for cosmetic or dermatological applications. To confirm the functional properties and effectiveness of the films, measurements of skin parameters with the participation of volunteers are planned in the near future.

Figure 2. In vitro release of Calendula officinalis flower extract from microparticles (M) based on whey protein isolate (WPI) and calcium alginate (ALG).

Figure 3. Swelling tests of the prepared films based on gelatin (GEL) and whey protein isolate (WPI) with and without microparticles (M) based on whey protein isolate (WPI) and calcium alginate (ALG). Different letters indicate a difference at p < 0.05. Therefore, the values labeled by one or more letters (a,b) indicate that variables in a column are statistically indistinguishable at p < 0.05 if they share at least one letter.

Figure 4. Moisture content (%) of the prepared polymer films based on gelatin (GEL) and whey protein isolate (WPI) with and without microparticles (M) based on whey protein isolate (WPI) and calcium alginate (ALG). Different letters indicate a difference at p < 0.05. Therefore, the values labeled by one or more letters (a-c) indicate that variables in a column are statistically indistinguishable at p < 0.05 if they share at least one letter.

Figure 5. The preparation scheme of WPI and calcium alginate microparticles (a) and production of gelatin/WPI/glycerol films with microparticles (b).

Table 1. Characterization of the prepared microparticles: their sizes, swelling ratio, and loading capacity of Calendula officinalis flower extract. The values with different superscript letters in a column are significantly different (p < 0.05).

Table 2. Young's modulus, tensile strength, and elongation at break of the dry and soaked polymer films with microparticles (WPI 4% + ALG 0.5%) and without them. Different superscript letters indicate a difference at p < 0.05.

Table 3. The contact angles of diiodomethane (D) and glycerol (G), the surface free energy (γs), polar (γs^p), and dispersive (γs^d) components for polymer films based on gelatin and whey protein isolate (calculated by the Owens-Wendt method). Different superscript letters indicate a difference at p < 0.05.
8,762.6
2024-05-01T00:00:00.000
[ "Materials Science" ]
Two-loop conformal invariance for Yang-Baxter deformed strings The so-called homogeneous Yang-Baxter (YB) deformations can be considered a non-abelian generalization of T-duality–shift–T-duality (TsT) transformations. TsT transformations are known to preserve conformal symmetry to all orders in α′. Here we argue that (unimodular) YB deformations of a bosonic string also preserve conformal symmetry, at least to two–loop order. We do this by showing that, starting from a background with no NSNS-flux, the deformed background solves the α′–corrected supergravity equations to second order in the deformation parameter. At the same time we determine the required α′–corrections of the deformed background, which take a relatively simple form. In examples that can be constructed using, possibly non-commuting sequences of, TsT transformations we show how to obtain the first α′–correction to all orders in the deformation parameter by making use of the α′-corrected T-duality rules. We demonstrate this on the specific example of YB deformations of a Bianchi type II background. Introduction and summary of results Yang-Baxter (YB) deformations were first introduced by Klimčik in [1]. It was later understood that they have the remarkable property of preserving integrability [2]. If one starts from an integrable sigma model and performs a YB deformation the resulting model is also integrable. This made people interested in applying them in string theory, which was done for the AdS 5 ×S 5 superstring in [3,4]. The YB deformation is based on an R-matrix for which there are two basic possibilities-R can solve either the classical Yang-Baxter equation (CYBE) or the modified classical Yang-Baxter equation (mCYBE). The former case is often referred to as homogeneous YB deformations and is the case we consider here. It was shown in [5] that these models typically have a Weyl-anomaly 1 unless the R-matrix is unimodular, i.e. its contraction with the structure constants of the isometry algebra of the original model vanishes R IJ f IJ K = 0. This is similar to the anomaly encountered in non-abelian T-duality (NATD) [8] on a non-unimodular group [9,10]. Indeed it was argued in [11] that homogeneous YB deformations should have a realization in terms of NATD and this was then proven in [12] (see also [13]). While the original YB deformations were defined only for sigma models of the symmetric space type, the realization of the homogeneous models using NATD meant that they could be defined for a general string Regarding the extension of these results to all orders in the deformation parameter η, it is natural to expect that one should just correct the undeformed metric and take Θ ij → Θ ij − α ′ R ijkl Θ kl in the expressions forG andB in (1.1) (and maybe Φ depending on the scheme). On 5 The unimodularity condition is sufficient but not necessary in general. Relaxing it one finds at order η, assuming B = 0, the necessary condition dK = 0 where K n = ∇mΘ mn . This is equivalent to ∇mk n I f I J K R J K = 0 which is in general weaker than the unimodularity condition k n I f I J K R J K = 0. The reason for this is that sometimes the anomalous terms generated by a non-unimodular R can be removed by a field redefinition [35] (see also [36]). Here we will take R to be unimodular for simplicity. 6 By a diffeomorphism it is possible to replace the last two terms in δGij by −∇iΘ mn ∇mΘnj − ∇j Θ mn ∇mΘni, see eq. (3.36). 
7 Incidentally, this just amounts to changing the value of the parameter q in the scheme of Hull and Townsend [37]. top of this we need to extend the last term in the transformation of the metric, or find a scheme which removes it. One possibility would be the following. Consider the symmetric polynomial defined by ∞ n=0 η 2n P 2n (Θ, Θ, . . .) = det(1 + ηΘ) . (1.6) One may then correct the metric by the following expression Possibly with different coefficients in front of P 2n , or perhaps with det(1 + ηΘ) replaced by its square-root. Two-loop conformal invariance conditions The conditions for two-loop conformal invariance of the bosonic string sigma model were worked out in [38,39,40]. Following Hull and Townsend (HT) the conditions in their scheme are [37] 8 and the two-loop corrections are where Here we have set to zero the parameter q of [37]. Expansion in the deformation parameter In this section we expand the conditions for two-loop conformal invariance in powers of the deformation parameter η, and we find the explicit α ′ corrections for the background such that the conditions hold to the quadratic order in η. Here will not need to impose the equation for the dilaton. It is known that when the equations for G and B are satisfied the dilaton equation is satisfied up to a constant [37]. Since we assume the undeformed background to solve all the two-loop equations and since there is no way to introduce a constant at higher orders in η, 9 the dilaton equation will not add anything. 8 To go from their conventions to ours one sends Φ → 2Φ and H → 1 2 H. 9 The parameter η is always accompanied by Θ and it is not possible to construct a constant from a general Θ. First order in the deformation parameter At order η 1 we see, by looking at (1.3), that the metric is not deformed while 10 (3.1) Using this in (2.4) we find where we have used the lowest order equations (2.2). Using the two derivative Killing identity (A.2) we have Using this together with the identity (A.9) we find Taking into account the α ′ -corrections to the classical background, α ′ δG and α ′ δΦ, and the B-field at order η 1 , α ′ (δB) (1) , we have (3.5) In the case where the metric and dilaton do not receive corrections, δG = δΦ = 0, the terms in the second line vanish, and the terms in the first line also vanish provided we take (δB) In the general case the assumption that the corrected original background solves the two-loop equations implies that where we used the expressions for the variation of the Ricci tensor and Christoffel symbols (3.10) and (3.13). Using this it is not hard to see, noting that δΦ must respect the isometries, that the δΦ-terms cancel without any further correction to B. With a little bit more work one can show, using the fact that L k δG ij = 0, i.e. that the correction to the undeformed metric does not break any isometries, that all terms cancel if one takes (δB) (3.8) The first term is simply the correction induced by the correction to the undeformed metric, i.e. δ(B (1) ) ij = δΘ ij , which comes from the fact that the indices on Θ ij were lowered with the metric (note that the Killing vectors k m I , with an upper index, are not corrected by assumption). Thus we have proven that a two-loop Weyl invariant sigma-model remains two-loop Weyl invariant under a YB deformation, at least to first order in the deformation parameter. We now consider what happens at second order. 
Second order in the deformation parameter It is easy to see that at order η 2 the B-field equation, F B(2) 1,ij = 0, is trivially satisfied. For the metric equation we find Note that we choose to define all tensors to have lower indices, e.g. R ijkl , and then raise indices with the undeformed metric G ij . The last two terms do not involve the Riemann tensor and the calculations can be simplified somewhat if we remove them by shifting the metric and dilaton. Under a shift of the metric we have and so that in particular The variation of the Ricci tensor becomes (symmetrization in ij understood) From this expression we see that the last two terms in (3.9) can be canceled by shifting the metric and dilaton as (3.14) The two-loop contribution then becomes (symmetrization in ij understood) Here we have used the Bianchi identity for H and the lowest order equations of motion, which in particular imply Note that terms with two derivatives of H (1) indeed give something involving the Riemann tensor since they involve three derivatives acting on a product of two Killing vectors giving at least two derivatives on one Killing vector. Expressing all terms in terms of the basis defined in appendix B we have (symmetrization in ij understood) While the order η α ′ -correction toB in (3.8) contributes the terms (for the moment we assume that the undeformed background is not corrected) (symmetrization in ij understood) For the two-loop correction we therefore get 1 8 times To this we have to add the terms arising from the α ′ -corrections toG andΦ. We will ignore the corrections to the undeformed background until the end of the section. Consider the following possible α ′ -corrections to the metric at order η 2 (symmetrization in ij understood) Note that we could write also the second one in terms of Θ as but the above expression is more convenient for the following calculation. Using (3.13) and (3.10) these variations give rise to the terms where we used the identity (B.50) in calculating the last variation. Taking the following correction to the metric and dilaton (δG) (3.28) and using appendix B we are left with 1 8 times the following order α ′ terms Next we use the Yang-Baxter equation which, in terms of Θ, reads Hitting this with R ipmn ∇ p we get the identity Adding −4 times the RHS to our expression we are left with 1 8 times 33) The first two terms vanish if the original background does not suffer α ′ -corrections, while the last term can be canceled by shifting the dilaton. To summarize we have found that with the following correction to the metric and dilaton in the HT scheme at order η 2 , taking into account also (3.14), (symmetrization in ij understood) (δG) the deformed model is Weyl invariant at two loops provided the undeformed model is. The shift in the metric does not look particularly natural but it can be brought to a nicer form by noting that (symmetrization in ij understood) The first term represents a diffeomorphism and dropping it (note that the dilaton does not transform, v i ∇ i Φ = 0, since it is isometric) we find instead (symmetrization in ij understood) 11 (δG) We will now consider what happens when the undeformed background receives α ′ -corrections. Taking into account the lowest order correction to the metric and dilaton as well as the first order correction toB (3.8) we have (symmetrization in ij understood) Using (3.7) and the variations in (3.13) and (3.10) this becomes, after a tedious calculation, (3.40) The first term vanishes by the Yang-Baxter equation. 
Using the fact that k l I ∇ l k n J − k l J ∇ l k n I = f IJ K k n K and the YB equation (i.e. R IJ R KL f JK M antisymmetrized in ILM vanishes) this further reduces to where we have used first the YB equation, then the Jacobi identity and finally the unimodularity This shows that the only additional corrections that arise are the ones coming from correcting the undeformed metric inG (2) and Φ (2) so that (δG) This completes the proof that, at least to second order in the deformation and when B = 0, unimodular YB deformations preserve conformality at two loops. deformed limit η → ∞ [12], see also [13,14]. The simplest class of Yang-Baxter deformations -the "abelian" one -is related to just abelian T-duality, and is equivalent to doing TsT transformations [41,42]. In general, a Yang-Baxter deformation generated by Θ = k 1 ∧ k 2 where k 1 = ∂ x 1 and k 2 = ∂ x 2 are commuting Killing vectors, is equivalent to doing first a T-duality x 1 →x 1 , then a shift x 2 → x 2 + ηx 1 , and then a T-duality backx 1 → x 1 . Some "non-abelian" deformations are non-commuting sequences of TsT's [5,43]. The non-abelian nature is related to the fact that the order in which the TsT transformations are performed is important, as certain T-dualities would break the isometries that are needed to perform the other T-dualities in the sequence. In this section we want to exploit the relation to TsT transformations and combine it with the knowledge of the first α ′ -corrections of the T-duality rules, to obtain twoloop corrections for all Yang-Baxter deformations that are obtainable by TsT transformations, or more generically by a non-commuting sequence of them. This strategy allows us to obtain backgrounds at two loops that are exact in the deformation parameter η. Moreover, these tools can be applied to any starting background with isometries, and it is not needed to restrict to B = 0 as we assume in most of this paper. Because at each step all that we are doing is (abelian) T-duality and coordinate transformations, we are bound to preserve conformal invariance on the worldsheet to the very end, and we can check explicitly that the solutions we generate do solve the two-loop equations. This argument can be repeated also to higher orders in the α ′ expansion, and it is enough to conclude that all Yang-Baxter deformations that are obtainable by a generically non-commuting sequence of TsT transformations, do not break the conformality of the original model to all order in α ′ . At leading order in α ′ the T-duality rules are given by the Buscher rules [44]. At higher loops these rules get corrected in α ′ . We will use the α ′ -corrections to the T-duality rules derived by Kaloper and Meissner in [31]. The rules were obtained by carefully analysing the two-loop effective action of the bosonic string, and identifying the terms that are symmetric or anti-symmetric under the Buscher rules. The α ′ -corrections of the T-duality rules were then fixed by requiring that they give a symmetry of the full two-loop effective action, compensating for the antisymmetry of those terms. 12 Already at leading order in α ′ , the T-duality rules are more easily presented in terms of fields of a dimensional reduction, where we reduce along the direction that we want to T-dualize. 
We follow [31] and we rewrite the metric, Kalb-Ramond field and dilaton of the D-dimensional spacetime in terms of the following (D − 1)-dimensional fields Here we are assuming that we have brought the solution in a form such that the isometry we want to dualize is simply implemented by a shift of a coordinate, that we denote by x. We use Greek indices for the (D − 1)-dimensional spacetime. 13 We have introduced a (D − 1)dimensional metric g µν , and antisymmetric b µν , vectors V µ and W µ , and scalars φ and σ. Above we also used form notation V = V µ dx µ , W = W µ dx µ . In components, the relations to identify the fields of the dimensional reduction are It is also useful to notice that is gauge invariant. In terms of these new fields the Buscher rules are simply All other fields remain unchanged under T-duality at leading order in α ′ . In [31] Kaloper and Meissner derived the corrections to the T-duality rules in a particular scheme introduced by Meissner in [45]. We will call it the Kaloper-Meissner (KM) scheme. In order to apply the T-duality rules of KM to our case, we will therefore first need to implement the field redefinitions to go from the scheme of HT to that of KM. We can do so by combining the formulas given in [37] (see their equations (61) and (64)) relating the HT scheme to the Metsaev-Tseytlin (MT) scheme of [39], and those given in [45] (see his equations (3.7), (4.1) and (4.7)) to go from MT to KM. 14 The field redefinitions that we will use are 15 (4.5) Once we are in the scheme of KM we can use their α ′ -corrected T-duality rules [31] σ Indices are always raised/lowered using the (D − 1)-dimensional metric g µν , and the transformations are written using also the following definitions (4.7) In general, at higher loops, not only σ, V and W will change under T-duality. In fact, at two loops in the scheme of KM also b µν gets modified. 16 It is important to remark that already 14 The field redefinitions given in [45] relate the KM and the MT schemes only on-shell, but this is enough for our purposes, since we just want to make sure that we can generate solutions of the two-loop equations. 15 These are the redefinitions needed when we set the parameter q of [37] to zero. Different values of q would affect the coefficient of H 2 that appears in the redefinition of the dilaton. Importantly, the coefficient in front of H 2 ij that appears in the redefinition of the metric has the opposite sign compared to what one would expect from formulas in [37] or [45]. We have checked in various examples, some not included in this paper, that we must have the sign that we use here, as this is fixed by requiring that we want to have a solution of the two-loop equations after doing T-duality in the KM scheme and going back to the HT scheme. 16 In [31] the rules were given in terms of transformations of hµνρ. Here we rewrote them in an equivalent way as a transformation of bµν . before doing T-duality the fields will in general have an explicit α ′ -dependence. In particular, σ, V and W that transform according to (4.6) may in general depend on α ′ , and this must be taken into account already when implementing the leading order T-duality rules (the Buscher rules). One could in principle combine the T-duality rules of KM in (4.6) with the field redefinitions in (4.5), to obtain the α ′ -corrections of the T-duality rules in the scheme of HT. 
We will not do so here, as the scheme of KM appears to be the minimal scheme for what concerns the complexity of the corrections to the T-duality rules. In other schemes, all other fields of the dimensional reduction will in general receive α ′ -corrections. Therefore, to obtain Yang-Baxter deformations in the scheme of HT we will follow this strategy: 1. Start from a solution of the two-loop equations in the HT scheme. In general that implies finding α ′ -corrections for this initial solution. 2. Go to the scheme of KM using (4.5). Do TsT or sequences of TsT transformations, using the α ′ -corrected T-duality rules in (4.6). Go back to the scheme of HT using (4.5). We have worked out examples to test this method and obtain explicit results for α ′ -corrections of Yang-Baxter deformed models. This also allows us to relate to the results of section 3 that are perturbative in η. We will provide an example in the next section. Examples In this section we consider two particularly simple examples. Solvable pp-wave We start with the pp-wave background considered in [46] where 0 < k < 1 4 is a constant, m is another constant and d is the number of transverse dimensions. This background is known not to receive α ′ -corrections. This follows from the fact that the only non-zero component of the Riemann tensor is R +m+n = δ mn k(x + ) −2 . Consider the following four Killing vectors where we have defined the parameter They form a Heisenberg algebra of isometries with the only non-trivial Lie bracket [k 1 , k 2 ] = k 3 . From the discussion of R-matrices in [5] we see that we can consider the non-abelian rank 4 deformation where we introduced the parameter s to keep track of the contribution from the second term. We will show below that in this case this deformation is equivalent to the abelian one obtained by setting s = 0. First we construct the matrix The deformed background takes the form With the B-field and dilaton given bỹ One sees from this thatH = 4ην(x + ) 2ν−1 dx 2 ∧ dx 1 ∧ dx + , (5.9) which is independent of the parameter s. The fact that also Φ is independent of s suggests that it might be possible to remove the s dependence also from the metric. Consider the change of coordinates x 2 → x 2 + f and x − → x − + gx 2 + h where f, g, h are functions only of x + . One finds that the choice removes the dependence on s completely and reduces the background to the one obtained by the TsT with Θ = k 1 ∧ k 4 . (5.11) Explicitly, the metric is (5.12) From (1.4) we find the only correction to the deformed background is given by which can be canceled by a diffeomorphism δG ++ = ∇ + v + . In fact the change of coordinates 2x + (1+η 2 c 2 ) , x 1,2 → 1 + η 2 c 2 x 1,2 brings the deformed metric to the form Therefore this background is exact at two loops, as is easily checked directly, and possibly to all loops. Note that this is consistent with the proposed all-order extension δG in equation (1.7), since P 2n (∇ i Θ, ∇ j Θ, Θ, . . .) reduces to δ + i δ + j f (x + ) where f is some function. Because this example is somewhat trivial, we now move to a more interesting one where the α ′ -corrections are non-trivial. Bianchi type II background Next we consider the Bianchi type II background [47,48] (the α ′ -corrections to Bianchi type I were considered in [49]) The solution has three Killing vectors which again satisfy a Heisenberg algebra [k 1 , k 2 ] = k 3 . From now on we will simplify things by taking a = 0 and b = c = 1. 
The two-loop equations are not automatically satisfied, and we need to find α ′ -corrections for this background. It is convenient to introduce a new coordinate system {v, x, y, z} where v = e τ , since the metric then has a rational dependence on v We assume that the correction to the metric δG ij respects the isometries of the background. We turn on the diagonal components δG ii and δG 12 = −zδG 11 . We also allow for a correction to the dilaton δΦ that, together with δG ii , is allowed to depend only on v. The two-loop equation for the B-field is already satisfied. First it is simpler to solve the two-loop equation for the dilaton, because there only the correction δΦ contributes. One finds a second order differential equation where c Φ is a constant. Looking at the two-loop equations for the metric, one can find a linear combination of those equations that gives an algebraic constraint imposing δG 11 = 0. To find δG 00 , δG 22 , δG 33 , we first identify linear combinations of the equations that give first order differential equations for δG 00 and δG 33 , and we solve them obtaining results written in terms of δG 22 . These are then used to get a third order differential equation for δG 22 only, that we also solve. The final result is where α, β are parameters and we have introduced an additional flat direction w so that we can have a fourth Killing vector k 4 = ∂ w . If both α and β are non-zero, they can be reabsorbed by redefining w and the deformation parameter η. For simplicity we set α = 0, β = 1 and analyze the abelian deformation given by Θ = k 2 ∧ k 3 . (5.23) The Yang-Baxter deformation to lowest order in α ′ yields the following deformed background 17 17 We remind that in this paper we use the convention B = 1 2 Bij dx i ∧ dx j . It can be directly checked that these corrections indeed promote the background (5.24) to a solution of the two-loop equations up to the quadratic order in η included. We can obtain the first α ′ -correction exactly in the deformation parameter η if we follow the strategy outlined in section 4. The deformation generated by Θ = k 2 ∧ k 3 is equivalent to doing first a T-duality along x, then shifting y → y − ηx wherex is the dual coordinate to x, and then T-dualisingx back. We first start from the background given by the metric (5.19) and the α ′ -corrections (5.21). This background solves the two-loop equations in the HT scheme, and we need to apply (4.5) in order to find a solution in the KM scheme. Obviously, since the corrections in (4.5) are multiplied by an explicit power of α ′ , it is enough to use the uncorrected background to derive them, which simplifies the calculation. Because B = 0, we can in principle get a non-trivial modification only for the metric from the Ricci tensor, and for the dilaton from the Ricci scalar. But the Bianchi II background is also Ricci-flat, therefore it is the same in the KM scheme and in the HT scheme. The next step is that of identifying the fields of the dimensional reduction as in (4.1). Because we want to do T-duality along x here, we are taking x = x. This is a straightforward exercise, and instead of writing down all fields of the dimensional reduction, we only write those that can potentially change under the corrected T-duality rules These particular fields of the dimensional reduction happen not to depend on α ′ in this particular example. 
We then implement the α ′ -corrected T-duality rules of KM as in (4.6) and obtain the fields of the dimensional reduction after T-duality After T-duality the scalar σ does depend explicitly on α ′ . The explicit form of the two-loop background after performing this first T-duality along x is (5.28) In the T-dual frame the metric is diagonal (even to two loops) at the cost of having a nonvanishing B-field. We can now do the shift y → y − ηx, that here will have only the effect of modifying the metric. To perform another T-duality alongx we have to first repeat the identification of the fields of the dimensional reduction. We find in particular (5.29) At this point we can use again the T-duality rules of KM (4.6). After doing that we obtain the following background This is a TsT of the initial Bianchi II that solves the two-loop equations in the KM scheme. To go to the HT scheme we use again (4.5). Because of the deformation, now the dictionary to go to the new scheme is non-trivial, and the background in the HT scheme reads where While this is exact in η, it is interesting to expand it at quadratic order to compare with the perturbative results collected in in (5.25). We find that the two backgrounds are not identical, but are of course related by a gauge transformation of the B-field (dropping terms with dv ∧ dz) and by a diffeomorphism (sending v → v − α ′ η 2 v 2 (1 + v 2 ) −1 in the perturbative background) up to the quadratic order in η. When we want to work out a deformation generated by Θ = k 1 ∧ k 4 following the strategy of section 4, we first need to find a coordinate system in which k 1 acts as a simple shift of a coordinate. We can redefine 33) so that in the new coordinate system k 1 = −∂ z ′ . As should be clear from the discussion at the beginning of this section, the isometry generated by k 1 is not broken by α ′ corrections, therefore the metric will not depend on z ′ also at two loops. The deformation generated by Θ = k 1 ∧ k 4 can be obtained by doing T-duality w →w, then the shift z ′ → z ′ − ηw, and then T-duality backw → w. We will omit the explicit results for this particular deformation, since they involve very long expressions, and we have already presented our method in the previous deformation generated by Θ = k 2 ∧ k 3 . The interesting point is that we can combine these two TsT transformations. We can first do a TsT involving x and y corresponding to Θ = k 2 ∧ k 3 . At the end of this result the background is still invariant under isometries generated by k 1 and k 4 , and we can do a second TsT transformation involving z ′ and w, equivalent to Θ = k 1 ∧ k 4 . The composition of the two deformations is equivalent to the deformation given by Θ = k 1 ∧ k 4 + k 2 ∧ k 3 , as explained in [14]. The non-abelian nature of the deformation is related to the fact that if we had started from Θ = k 1 ∧ k 4 instead, we would have broken the isometries that we would need to perform the deformation with Θ = k 2 ∧ k 3 . As follows from the results of [14], in the maximally deformed limit η → ∞ we recover the non-abelian T-dual of the original Bianchi II solution, where the isometries dualized are those corresponding to the Killing vectors k 1 , k 2 , k 3 forming a Heisenberg algebra, and k 4 . By this argument it follows that non-abelian T-dual models related to this class of Yang-Baxter deformations remain conformal on the worldsheet to two loops. 
Because T-duality remains a symmetry of the string at higher orders in an α ′ -expansion, we can argue that this is true to all loops. Conclusions We have argued that (homogeneous) YB deformed string σ-models that are conformal at one loop remain conformal at two loops, 18 i.e. including the first correction in α ′ . We showed this to second order in the deformation parameter η for a generic unimodular deformation of a background with vanishing B-field. We also argued that using the α ′ -corrected T-duality rules of [31] one can verify this to all orders in the deformation parameter for the cases that can be built from TsT transformations, and we explained that this strategy can be used also for the non-abelian YB deformations that are equivalent to a non-commuting sequence of TsT transformations 19 . We exemplified our results in the case of a deformation of a Bianchi type II background. Our findings suggest that one-loop conformal YB σ-models should in fact remain conformal to first order in α ′ , and likely all orders. Since these models can be thought of as a generalization of non-abelian T-duality [11,12,14] (which can be recovered in an appropriate η → ∞ limit) our findings suggest that the same should be true for NATD. This was also argued recently from a different perspective in [28,29], studying renormalizability of a different type of integrable deformation of σ-models. 20 To test this idea one should start from a model which is conformal to all orders in α ′ and then deform it. A good candidate is therefore the unimodular deformation of AdS 3 × S 3 constructed in [36]. The fact that the α ′ -corrections, at least to second order in η, take a relatively simple form suggests that in the right scheme the two-loop corrections might have a simple, all order in η, form. This simple form for the corrections is also interesting in the special case of TsT transformations, and has, to our knowledge, not been noted before. If this remains true to higher orders in α ′ it could even help in determining the structure of higher α ′ -corrections to the target space equations of motion. This approach could be said to be an example of using O(d, d) symmetry to determine/constrain higher α ′ -corrections. We plan to address some of these questions in the near future.
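As a small aside on the symmetric-polynomial expansion (1.6) discussed in the introduction and revisited in these conclusions, the following numerical check (ours, not part of the original paper) illustrates why only even powers of η appear in det(1 + ηΘ) when Θ is antisymmetric, so that the proposed all-order correction involves only the invariants P_2n:

```python
import numpy as np

# det(1 + eta*Theta) = det((1 + eta*Theta)^T) = det(1 - eta*Theta) for antisymmetric
# Theta, so the expansion in eta contains only even powers (the P_2n invariants).
rng = np.random.default_rng(0)
A = rng.normal(size=(6, 6))
Theta = A - A.T                                   # antisymmetric matrix
for eta in (0.1, 0.5, 1.3):
    d_plus = np.linalg.det(np.eye(6) + eta * Theta)
    d_minus = np.linalg.det(np.eye(6) - eta * Theta)
    print(eta, np.isclose(d_plus, d_minus))       # True: no odd powers of eta
```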
7,667.8
2019-10-04T00:00:00.000
[ "Mathematics" ]
Review and Simulation of Counter-UAS Sensors for Unmanned Traffic Management Noncollaborative surveillance of airborne UAS (Unmanned Aerial System) is a key enabler to the safe integration of UAS within a UTM (Unmanned Traffic Management) ecosystem. Thus, a wide variety of new sensors (known as Counter-UAS sensors) are being developed to provide real-time UAS tracking, ranging from radar, RF analysis and image-based detection to even sound-based sensors. This paper aims to discuss the current state-of-the art technology in this wide variety of sensors (both academically and commercially) and to propose a set of simulation models for them. Thus, the review is focused on identifying the key parameters and processes that allow modeling their performance and operation, which reflect the variety of measurement processes. The resulting simulation models are designed to help evaluate how sensors’ performances affect UTM systems, and specifically the implications in their tracking and tactical services (i.e., tactical conflicts with uncontrolled drones). The simulation models cover probabilistic detection (i.e., false alarms and probability of detection) and measurement errors, considering equipment installation (i.e., monostatic vs. multistatic configurations, passive sensing, etc.). The models were integrated in a UTM simulation platform and simulation results are included in the paper for active radars, passive radars, and acoustic sensors. Introduction The use of UAVs (Unmanned Aerial Vehicles), or as they are commonly known, drones, has increased in recent years. Initially, these aircraft were used as military technology, especially for security and monitoring purposes, but today, many companies and private users are using UAVs in their daily lives. These nonmilitary drones are used by citizens for recreational activities, such as video recording or taking high-resolution photos, and by companies for observation, transportation, field monitoring, traffic monitoring, fire protection and border patrol, among many other uses [1]. In addition to their widespread use for actions such as those described above, UAVs can be hacked and used to commit crimes, such as espionage, smuggling or even attacks. For all these reasons, drone detection is necessary to check their presence near critical areas or infrastructures, and if a drone's behavior is appropriate and compatible with other air operations (of manned aircraft or other drones). There are many different technologies enabling drone detection, localization, and tracking, including cooperative and noncooperative sensors. This paper focuses on this second type of sensors. Over the past five years, significant research efforts have been made to detect and counter UAVs, and the main physical operating principles of the different technologies being used are clearly described in [2]. Noncooperative sensors include active and passive radar detection techniques, detection through UAVs radio frequency signals, detection by acoustics signals, image detection and detection by merging these techniques or data fusion. In this contribution, we go a little further in the analysis of these technologies. In addition to describing some of the most interesting literature proposals and commercial products in the state of the art, we define a collection of simulation models, usable for some of those technologies and expandable to others, to be potentially usable for: (a) Comparative assessment of potential systems deployment in a given position. 
(b) Analysis of integrated sensing solutions/data fusion approaches for C-UAS. (c) Analysis through simulation of the potential integration of the measurements from those sensors in UTM tactical chains, specifically to test the associated implications in their tracking and tactical services. In any case, the paper focuses on modeling the sensing processes for the different technologies, which would be a prerequisite for any of the previously described analyses. Finally, there are plenty of models of radar, RF, vision, and acoustic sensors. Here, we try to select, parameterize, and summarize those of real application for the detection of small drones in civilian applications (for UTM). The paper is structured as follows: In the second section of this paper, we describe in detail some of these sensing technologies, covering both the academic literature in the area and the fast-evolving commercial scenario. Meanwhile, the third section is devoted to deriving the simulation models of some of these sensors. This simulation models are to be incorporated in the UTM simulator described in [3]. The fourth section summarizes simulation results for some of the previous sensors, enabling a comparison of their main sensing features and performances, and finally, Section 5 concludes the paper, providing some insights on future work. Review of the State-of-the-Art Technology In this section, we summarize the different detection technologies. Sections 2.1-2.6 describe solutions in the literature and some of the commercial solutions (if available). In the case of Section 2.6, it is important to note that it focuses on the use of fusion approaches making use of different sensing technologies. Therefore, quite often, a commercial solution will be described in several of the following sections, once per sensor type, and again when talking about integrated sensing and fusion C-UAS systems. Finally, Section 2.7 includes a comparative summary of technologies requirements, expected performance and limitations. Active Detection Radars Radars have several advantages in detecting aircraft compared with other sensors in terms of weather independency, day and night operation capability, technology development, and capacity to measure range and velocity simultaneously. A big challenge with UAVs is that they have very small radar cross sections (RCS), and they fly at lower altitudes and lower speeds compared to larger aircrafts [4]. Regular radar systems typically aim to detect air targets of medium and large size (RCS larger than 1 m 2 ). In addition, due to its low speed, Doppler processing (Moving Target Indication/Detection) is not so effective. In the literature [5], there are several types of radar used for detection, tracking and classification of drones, such as mmWave Radar or Ultrawide-Band Radar, which can be classified in two main categories: active detection and passive detection radars. In this section, we focus on active detection radars, while the next section describes passive radars. Conventionally, there are two possible ways to increase the distance and azimuth resolution of active radar detection systems in the case of UAVs operations: using higher frequency carriers or utilizing multiple input multiple output (MIMO) beamforming antennas. To use shorter a wavelength, K-band, X-band and W-band frequency modulated continuous wave (FMCW) radars are specifically designed for UAV detection. 
The selection of carrier frequency for UAV detection radar should be higher than 6 GHz (K-band), as in [6], where the ability of radars to detect small, slow, and low-flying targets is verified. There are two important factors to be considered for the use of radars to detect airborne threats: the target to be detected and the radar itself. Another commercial solution, provided by Indra [13], called ARMS, includes another FMCW radar. Its main characteristics are detailed next, in Table 2. Table 2. ARMS radar specifications: Ku-band FMCW radar; 360 degrees/second scan; sectorized RF blanking; Doppler and clutter map techniques; true track reports (position, course and speed); range greater than 2 km for the smallest target (RCS = 0.1 m²), once per second; X-band alternative for longer ranges. German company HENSOLDT has developed a drone detection system called Xpeller Counter UAV solution [14]. This solution can detect the potential threat through a radar system whose specifications can be seen in Table 3 (two different radar systems may be integrated). Another commercial option is the Counter-Drone Radar from Echodyne [15], which tracks detected targets while searching for additional targets in its coverage. Its specifications are detailed in Table 4. An alternative solution is the Ranger R8SS-3D from Flir [16], whose specifications can be seen in Table 5. RST enterprise has another radar solution to detect UAVs, called Doruk: UAV detection radar [17]. Its basic function is low-altitude moving target detection over land and sea. It provides detection, classification, azimuth and range measurements, RCS, radial velocity, heading and width of the Doppler frequency spectrum of targets. Its main specifications can be seen in Table 6. Passive Detection Radars Passive radars do not require a specially designed transmitter. There are two types of passive radar: single-station passive radar, which exploits only one illumination source, and distributed passive radar, which uses existing telecommunications infrastructures as illumination sources to enhance UAV detection. Typically, two different widespread signals are used: cellular systems and digital video broadcasting systems. Passive bistatic radars (PBR) face a challenging problem in the detection of UAVs due to their low RCS [18]. Range migration (RM) occurs within the coherent processing interval, which makes it difficult to increase the coherent integration gain and improve radar detection ability, although there are techniques to alleviate this problem. An example of single-station passive radar is the investigation presented in [19], where small UAVs are localized in 3D by exploiting a passive radar based on Wi-Fi transmissions. A demonstration of the capability of the radar to estimate the position of the target from the ground by exploiting multiple surveillance antennas is performed. In the case of distributed passive radar, a possible approach is the one proposed in [20], where the detection system uses reflected global system for mobile communications (GSM) signals to locate and track UAVs. Another example of distributed passive radar is presented in [21], where a fixed-wing micro-UAV is detected with a passive radar based on digital audio broadcasting signals up to a distance of 1.2 km. The experiment was performed at a low frequency of 189 MHz in the VHF band. The major disadvantage of passive radar is that a large amount of postprocessing effort or multiple receivers are required to obtain acceptable detection accuracy.
Detection through UAS Radio Frequency Signals UAVs usually have at least one RF communication data link to their remote controller to either receive control commands (typically at 2.4 GHz) or deliver aerial images. In this case, the spectral patterns of such transmissions are used for the detection and localization of UAVs. In most cases, software-defined radio receivers are employed to intercept the RF channels. To utilize the spectrum patterns of UAVs, three possible approaches are considered for drone detection in [22]; one of them, based on sniffing the communication between a drone and its controller, is a clear application of this approach. Another approach is the one explained in [23], where the frequency hopping spread spectrum signals from a UAV are extracted. According to these articles, it is possible to train a classifier for identifying unique RF transmission patterns from UAVs. Data traffic patterns are also an important feature to classify and identify UAVs. In [24], a UAV detection and identification system using two receiver units to record the received signal strength from the UAV was proposed. The system makes use of a novel machine learning-based approach for efficient identification and detection of UAVs. The system consists of four classifiers working in a hierarchical way. The first classifier checks whether the sample corresponds to a UAV, while the second classifier specifies the type of the detected UAV. The third and fourth classifiers handle specific vendors' drone types. The system detects UAVs flying within the area, and it can classify UAVs and the flight modes of the detected UAV with an accuracy of around 99%. Another UAV detection and identification approach is based on Wi-Fi signals and radio fingerprints, as presented in [25]. Firstly, the system detects the presence of a UAV; then, features are extracted from the RF signal using machine learning and Principal Component Analysis-derived techniques to obtain RF fingerprints. The extracted UAV fingerprints are stored and used as training data and test data. The results of this approach are above 95% in indoor scenarios and above 93% in outdoor scenarios. Real scenarios are not controlled, so it is not so easy to pick up the RF signals, as there is interference in the environment. The following two studies carried out their experiments with interference in the radio frequency band. The method proposed in [26] relies on machine learning-based RF recognition and considers that the bandwidths of the video signal and Wi-Fi are identical. The process consists of extracting 31 features from the Wi-Fi signal and the UAV video signal and then introducing them to the classifier. It is demonstrated that the proposed method can accurately recognize the UAV video signal in the presence of Wi-Fi interference. The proposed method has a recognition rate greater than 95% in the 2 km outdoor experiment. On the other hand, a radio frequency-based drone detection and identification system under wireless interference (Wi-Fi and Bluetooth), using machine learning algorithms and a pretrained convolutional neural network-based algorithm called SqueezeNet as classifiers, is explained in [27]. Different categories of wavelet transforms are used to extract features from the signals. From these extracted features, different models have been built. The experiment consisted of studying the performance of these models under different signal-to-noise ratio levels.
The results had a correct detection accuracy obtained of 98.9% at 10 dB signal-to-noise ratio level. Next, we detail some commercial RF detection systems. DJI has created a system to detect their own drones. AeroScope [28] can identify them by monitoring and analyzing their electronic signal to gain critical information such as flight status, paths, and other information in real time. There are two types of AeroScope systems: stationary (designed for continuous protection of large-scale sites, up to 50 km range) and portable (designed for temporary events and mobile deployments, up to 5 km range). Dedrone provides a complete airspace security system [29], including RF sensors, able to detect and localize drones by their RF signals. There are two types of these sensors: the DedroneSensor RF-160 forms the basis of the sensor network and is used in initial risk analysis, whereas the DedroneSensor RF-360 can locate and track drones. The main characteristics of these sensors can be seen in Table 7. Finally, DroneShield provides the DroneSentry-X product [30], which is a portable device that is compatible with vehicles. It provides 360 • awareness and protection using integrated sensors to detect and disrupt UAVs moving at any speed. It has a nominal UAV detection range greater than 2 km, and it detects UAV RF signals, operating on consumer and commercial industrial, scientific, and medical (ISM) frequencies. Detection by Acoustic Signals An array of acoustic sensors can be employed to capture the sound, detect, and estimate the direction of arrival of sounds from sources such as UAVs. These arrays are deployed around the restricted areas and record the audio signal periodically and deliver this signal to the ground stations. The ground stations extract the features of this audio signal to determine the direction of arrival of the UAV. Conventionally, once the audio signal of UAV is received, the power or frequency spectrum is analyzed to identify the UAV. An example implementation of this type of UAV detection is explained in [31]. This paper shows how to estimate and track the location of a target by triangularization with two or more microphone arrays, in addition to how the UAV model can be obtained by measuring the sound spectrum of the target. In this report, a small tetrahedral array of microphones was used. The results show that the detection algorithm performs best with a 99.5% probability of detection and a 3% false alarm rate. On the other hand, the tracking algorithm often misses trajectories when other trajectories are present, and the elevation tracking is poor. Another example of UAV detection using acoustic signals is shown in [32]. In this work, the data collection equipment is composed of two individual microphone arrays in 16-X and 4-L configurations where the microphones are placed on the ground and mounted on metal spikes, while the elevated sensors are placed on tripods. These microphones are covered by six-inch-thick foam shields to protect them and limit the effects of wind. Once the signals have been captured by the arrays, they must be processed and analyzed. The data processing developed, as well as the analysis of the acoustic sensor arrays, has been tested by being used to detect and track the trajectory of UAVs at low altitude and tactical distances. This process operates best under benign daytime conditions and is approximately five times better at detecting noisier, medium-sized, gasoline-powered UAVs than small, electric-powered UAVs. 
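To make the triangulation idea in [31] concrete, the sketch below intersects the bearings estimated by two microphone arrays to recover a 2D target position via least squares. It is a minimal illustration of the geometric principle only; the array positions, bearing convention and function names are assumptions, not details taken from the cited work.

```python
import numpy as np

def triangulate_from_bearings(array_positions, bearings_rad):
    """Least-squares 2D position from bearings measured by several arrays.

    Each bearing is the azimuth of the target seen from the corresponding
    array position (radians, measured from the x-axis). At least two
    non-parallel bearings are required.
    """
    a_rows, b_rows = [], []
    for (px, py), theta in zip(array_positions, bearings_rad):
        # The target lies on the line through (px, py) with direction
        # (cos(theta), sin(theta)); the line's normal gives one linear equation.
        nx, ny = -np.sin(theta), np.cos(theta)
        a_rows.append([nx, ny])
        b_rows.append(nx * px + ny * py)
    solution, *_ = np.linalg.lstsq(np.array(a_rows), np.array(b_rows), rcond=None)
    return solution

arrays = [(0.0, 0.0), (100.0, 0.0)]                     # two arrays 100 m apart (assumed)
true_target = np.array([60.0, 80.0])
bearings = [np.arctan2(true_target[1] - py, true_target[0] - px) for px, py in arrays]
print(triangulate_from_bearings(arrays, bearings))      # recovers ~[60, 80]
```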
In the literature, there are some machine learning (ML) approaches to classify the UAV from audio data. Support vector machine (SVM) is implemented to analyze the signal of an UAV engine and to build the signal fingerprint of UAV. The results show that the classifier can precisely distinguish the UAVs in some scenarios [33]. Another example of using deep learning methods to detect UAVs with acoustic signals is shown in [34]. In this paper, there is a comparison among Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs) and Convolutional Recurrent Neural Networks (CRNNs) using melspectrogram features. Here, the CNNs show the better performing results, achieving the highest average accuracy of 94.7%. In summary, machine learning presents an ability to recognize and locate the UAV. However, the nature of acoustic approaches limits the deployment and detection of UAV. In [35], a detailed study was conducted on how drone detection is performed by using acoustic signals, and it characterized how the microphone array in charge of capturing the sound signal should be organized. The geometry of the microphone array depends on the application to be carried out, although, when the desired signal can come from any angle, the best geometry is the circular array. The possible geometries studied were uniform linear array (ULA), uniform circular array (UCA) and uniform rectangular array (URA). In the array, it is important to know the number of microphones, which usually ranges from 4 to 16 microphones (in steps of two), and the distance between sensors, which usually ranges from 0.3 to 0.6 meters in increments of 0.05 meters. A commercial C-UAS solution from Dedrone enterprise is Dedrone DroneTracker [36], which is a multiple-sensor unit that may integrate an ultrasonic audio detector. Its specifications are shown in Table 8. Detection through Video/Images Vision-based UAV detection techniques mainly focus on image processing. Cameras and videos are used to capture the images of UAVs. Then, using artificial vision techniques, UAVs positions are estimated. A vision-based UAV detection approach is presented in [37]. This approach consists of an online recognition system for the identification of 3D objects. The system uses a black-and-white television camera to provide a 2D image on a digital computer. After obtaining the image on the computer, the next step is to remove the clutter from the image by means of a preprocess that provides a clean silhouette as well as its boundaries. At the time of the calculations, certain characteristics are obtained and are used to identify the objects, the position they occupy and their orientation in space by means of a recognition algorithm. A similar system is the one developed in [38], which makes use of classical vision algorithms. This system starts by taking the first image, which is used for initialization of the background estimation. Then a loop is started where the trajectories are predicted in the capture time for each new image taken by the cameras. All those pixels that are different from the background that was previously estimated are detected and form one or more blobs related to the current targets. These blobs are extracted using trajectory predictions, edge detectors and motion detectors. With blobs and an association process, one on more blobs are associated to each target, and in addition, the blobs within the association are used to initialize the tracks. 
Finally, each track is updated with its corresponding blobs, and the not-updated tracks are deleted. In contrast, nonconventional segmentation methods make use of neural networks to directly identify the appearance of UAVs. For example, in [39], the authors developed a system that is capable of automatically detecting, recognizing, and tracking a UAV using a single camera. For that purpose, a single Pan-Tilt-Zoom (PTZ) camera detects flying objects and obtains their tracks; once a track is identified as a UAV, it locks the PTZ control system to capture a detailed image of the target region. Afterward, the images can be classified into the UAV and interference classes (such as birds) by a convolutional neural network classifier trained with an image dataset. The identification accuracy of track and image reaches 99.50% and 99.89%, respectively. This system could be applied in a complex environment where many birds and UAVs appear simultaneously. It is possible to detect UAVs from the cameras of other UAVs. An approach for online detection of small UAVs and estimation of their positions and velocities in a 3D environment from a single moving (on-board) camera is presented in [40]. The methods used are computationally light, despite the complexity of computer vision algorithms, so they may be used on UAVs with limited payload. This approach incorporates fast object detection using an AdaBoost-based tracking algorithm. Real-time performance with accurate object detection and tracking is possible, enabling the tracker to extract the position and size of an aircraft from a video frame. The detections are given to a multitarget tracker to estimate the aircraft's position and velocity in 3D. The effectiveness of this method has been proven with an indoor experiment with three quadrotors. In [41], a general architecture for a highly accurate and computationally efficient UAV-to-UAV detection and tracking algorithm from a camera mounted on a moving UAV platform was developed. The system is composed of a moving target detector followed by a target tracker. The moving target detector accurately subtracts the background from subsequent frames by using a sparsely estimated global perspective transform. The target tracker consists of a Kalman tracker and was validated using public video data from multiple fixed-wing UAVs working in real time. Video surveillance has not yet been incorporated into our simulation models but is described here for completeness. Next, we describe two commercial PTZ cameras used for drone detection and tracking. On the one hand, there is the Axis Q6215-LE PTZ Network Camera from Axis Communications [42], which is a camera with normal range. Its specifications can be seen in Table 9. On the other hand, there is the Triton PT-Series HD Camera from FLIR Enterprise [43], which is a PTZ camera with very high range, whose specifications are detailed in Table 10. Indra also has a camera/optronic sensor to be integrated in its ARMS system. Some details on it are described next, in Table 11. Another company that markets this type of sensor is HGH USA, specifically with its product called the Spynel Series [44]. Spynel is based on thermal imaging technology with a 360° thermal sensor, which works day and night. Spynel can track targets over a long range and wide area. The specifications of each sensor model that exists in this product series can be seen in Table 12. Detection by Data Fusion Detection using a collection of these techniques is the ultimate way to detect UAVs.
Data fusion, which is the process of integrating multiple data sources to obtain more consistent, accurate and useful information than that provided by any of the individual techniques explained below, has the advantaged to gain more informative and synthetic fused data than the original inputs. In the case of UAV detection, data fusion could be used to improve the performance of the UAV detection system, by overcoming or alleviating the problems and disadvantages of the individual sensors. However, data fusion should be conducted with great caution. The key problems to be solved can be referred to as data association, positional estimation, and temporal synchronization. Data association is a general method of combining data from different sensors by correlating one sensor observation with the other observations. This process should ensure that only measurements that refer to the same drone are associated. There are different ways to perform this process: one of them is by spatial synchronization, i.e., seeing that a pair of measurements from different sensors have very similar position values. The coordinate's changes, bias estimation and correction are sources of errors to be considered in this process. Furthermore, before making any kind of association, it is necessary to make a time synchronization so that all the measures refer to the same instant of time. The last problem faced by data fusion systems is filtering and prediction, for which they usually use common techniques such as Kalman filtering and Bayesian methods. A low-cost, low-power methodology consisting of a fusion of technologies linking several sensors is presented in [45]. This technology includes a simple radar, an acoustic array of microphones and optical cameras that are used to detect, track, and discriminate potential airborne targets. The multimode sensor fusion algorithms employ the Kalman filter for target tracking, and an acoustic and visual recognition algorithm is implemented to classify targets. The first element of the multimode sensor network is the radar, which is responsible for detecting targets that are approaching the area of interest. The second component is the acoustic microphone array, whose main objectives are to provide target arrival direction and target identification and classification and to mitigate false alarms. The last sensor is the optical system composed of infrared detectors to improve the resolution of targets. Results show that this sensor fusion is useful for detecting, tracking, and discriminating small UAVs. Another set of heterogeneous sensors combined with a sensor data fusion is proposed in [46]. This system is composed of a Radio Frequency (RF) sensor to capture the uplink and downlink communications of the UAV, an acoustic sensor searching for the rotor noise, a passive radar system using the cellular network and a multihypothesis tracking (MHT) system for the fusion of sensor data. Finally, in the case explained in [47], the system is composed of different range acoustic, optical and radar sensors. There is a combination of sensors of long-and short-range detection, the passive RF receivers detect the UAV's telemetry signals, and the camera and microphone sensors are used to increase the detection accuracy in the near field. 
Specifically, the system is composed of a 120-node acoustic array that uses acoustic signal to locate and track the UAV; 16 high-resolution optical cameras, which are used to detect the UAV in the middle distance; and MIMO radar (with three bands) to achieve remote detection in the long distance. The developed combination overcomes the drawbacks of each of the sensor types in UAV detection and maximizes the advantages of the sensors. At the same time, the system reduces the cost of large-scale sensor deployment. In this paper, we focus on the simulation of individual sensors, so we do not simulate these integrated solutions, which remains an area for future research, especially for the cases in which some of the sensors are controlled by the outputs provided by others. Regarding commercial solutions, some of them are based on integrating some of the previously described sensors. For instance, a commercial solution provided by Indra [13], called ARMS (Anti-RPAS Multisensor System), is a multilayer system ready to support the full C-UAS cycle, combining multiple types of sensors and countermeasures, ready to be deployed in different formats (fixed, mobile, portable) and designed to interact with complementary systems in to provide defense against UAVs threats. It is composed of a radar (described in Section 2.1), a jammer (to interfere with drone control or GPS navigation) and optronics (described in Section 2.5). HENSOLDT Xpeller Counter UAV solution [14] combines various types of sensors and effectors for protection against small drones. The sensors used to detect and identify are radars, electro-optics, rangefinders, and direction finders. Its radars were described in Section 2.1), and it also identifies the potential threats via visual confirmation with a multispectral camera. Meanwhile, Dedrone provides a complete airspace security system [29]. Different types of sensors may be connected to the DedroneTracker software. The sensors provided by Dedrone are RF sensors, radars, and cameras. Depending on the application, Dedrone has different radars [48] with different performances in the Dedrone platform, such as the Counter-Drone Radar from Echodyne [15] and the Ranger R8SS-3D from Flir [16], whose specifications were analyzed in Section 2.1. The last sensors integrated by Dedrone are PTZ cameras [49]. DedroneTracker system software has a video analysis capability, able to detect and locate UAVs in real time. Depending on the application, Dedrone can integrate one or more PTZ camera models with different performance levels. On the one hand, there is Axis Q6215-LE PTZ Network Camera from Axis Communications [42], On the other hand, there is Triton PT-Series HD Camera from FLIR [43]. They were described in Section 2.5. Another company to have its drone detection solutions analyzed in this paper is DroneShield [50]. It has a range of stand-alone portable products and rapidly deployable fixed site solutions. One of the most remarkable ones is the DroneSentry product [51], which is an autonomous fixed C-UAS system that integrates DroneShield's suite of sensors and countermeasures into a unified responsive platform. This product has as its primary detection method the RadarZero product [52], which is a radar, and/or the RfOne RF detector [53]. It has secondary detection methods such as the WideAlert acoustic sensors and DroneOpt camera sensor [54]. The main specifications of DroneSentry can be seen in Table 13. 
Comparative Analysis of UAV Sensing Technologies Next, we summarize the main properties of the technologies described in the previous sections. The summary takes the form of Table 14. Counter-UAS Sensors Modeling Modeling and simulation tools are a useful alternative to test and assess the performance of complex systems in a cost-effective manner. Regarding the usage of such tools to evaluate UTM systems, authors have already proposed in [3] a simulation platform that aims to replicate drone operations and complex scenarios. The objective of the platform is to easily perform system-level evaluations of UTM. To do so, the platform simulates the required input information for UTM systems both in preflight (operation definition submission for authorization) and in-flight phases (telemetry messages from drones or tracks from surveillance networks). Thus, starting from a user-defined simulation scenario (which might include the occurrence of unexpected events or contingencies), the platform is able to replicate the behavior of the actors involved in a drone operation. Then, it forwards the required data streams to the UTM system under evaluation and can retrieve the resulting output information to carry out tests and generate evaluation metrics. This operation is schematically represented in Figure 1. The platform follows an agent modeling approach where the behavior of drones, ground control stations, surveillance networks and communication networks linking all agents is individually modeled. The complete behavior of the overall scenario arises from the autonomous interaction of these individually modeled agents. The environment in which drones operate is also simulated, including terrain, weather, or airspace constraints. With this approach, the platform can currently simulate drone trajectories or effects such as navigation errors, communication disturbances (i.e., latencies, package losses, etc.) and drone detection from sensors. A model-agnostic, extendable microservices-based architecture has been used to implement the platform, as depicted in Figure 2.
The architecture allows for defining multiple simulation models for each agent that can be easily implemented and simultaneously simulated. The simulation of each agent is isolated within a separate microservice so that modeling changes in each service do not affect the rest of the platform. It also provides utilities to define replicable simulation scenarios where the simulated agents' specifications can be defined together with the selected model to carry out their simulation. A set of simple simulation models for each agent was initially provided, as described in [3]. Particularly, a simplistic technology-agnostic model for noncooperative sensors was already provided. This model just considered a maximum range for each sensor following a pass/not-pass approach. It also included a constant additive Gaussian noise to model detection inaccuracies. The models proposed in this section for different technologies aim to improve that simplistic model by designing more accurate models that are based on the inner operation of each sensor type. Measurement simulation models are proposed in this paper for the following sensors: active radars, passive radars, and microphone sensors. By integrating these enhanced models into the existing platform (which can be easily done by modifying the preexisting surveillance network simulation service), it is possible to assess the performance of those sensors in realistic scenarios. Simulation scenarios defined for the platform not only consider the number of drones, their trajectories and the distribution of surveillance sensors; they also allow for simulating emergent effects from the interaction of sensors with other agents. For instance, the simulator is also able to simulate the network used by sensors to forward information to a UTM system and how it affects track reporting periodicity, latencies, etc.
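For reference, the sketch below illustrates the kind of technology-agnostic baseline model described above: a simple pass/not-pass maximum-range check plus constant additive Gaussian noise. The class name, parameter values and interface are illustrative assumptions and do not correspond to the actual implementation of the platform in [3].

```python
import numpy as np

class BaselineSensorModel:
    """Technology-agnostic sensor: pass/not-pass range check + Gaussian noise.

    `max_range` (m) and `sigma` (m, per axis) are illustrative parameters,
    not values taken from the platform described in [3].
    """

    def __init__(self, position, max_range=5000.0, sigma=15.0, rng=None):
        self.position = np.asarray(position, dtype=float)
        self.max_range = max_range
        self.sigma = sigma
        self.rng = rng or np.random.default_rng()

    def measure(self, drone_position):
        drone_position = np.asarray(drone_position, dtype=float)
        if np.linalg.norm(drone_position - self.position) > self.max_range:
            return None                      # drone outside coverage: no plot produced
        # Constant additive Gaussian noise on each Cartesian coordinate.
        return drone_position + self.rng.normal(0.0, self.sigma, size=3)

sensor = BaselineSensorModel(position=[0.0, 0.0, 0.0])
print(sensor.measure([1200.0, -300.0, 80.0]))
```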
To summarize, the models proposed in this section will enhance the capabilities of the preexisting simulation tool, but they will also benefit from the integration in such a platform for assessing the performance of surveillance sensors. Active Radar Two different types of active radars have been modeled: quasi-monostatic radars and MIMO radars. Quasi-Monostatic Radars This radar will be simulated using a power model derived from the radar equation. It will be assumed that the separation between transmitter and receiver is small compared to the distance to the target. In this first approximation, it is assumed that the radar can eliminate the clutter by Doppler filtering. It is also assumed that the predominant noise is thermal; its calculation then depends on the surrounding conditions and not only on the bandwidth. The main characteristics of the model are described next. The typical expression for the radar equation of a quasi-monostatic radar relates the following quantities: S/N, the signal-to-noise ratio at the detector; P_av, the average power of the system; G_T, the transmit antenna power gain; G_R, the receiver antenna power gain; L_s, the power losses of the radar system; and N_0, the system noise. To obtain the elevation gain, the elevation width is considered, and to obtain the antenna gain, the azimuth shaping is considered. In this case, and considering that the beamforming is conducted only in azimuth, the array gain is estimated approximately as 360° divided by the beamwidth. The system losses depend on many factors, such as the antenna feed or the construction of the processing; a loss factor of around 4 dB has been taken as a typical value. The cross section for these frequency bands depends on the target size, and it is modeled as constant for all angles (0.01-0.1 m²). The reception noise is predominantly thermal noise due to the frequencies being used; it is computed from k, the Boltzmann constant, T_0, the reference temperature (typically 290 K), and F_a, a noise factor with a typical value of 5 dB. Once the SNR has been calculated, the detection, false alarms and measurement position must be generated. The detector is assumed to be a CA-CFAR, so it is assumed that the target behaves as a Swerling I between scans and that the noise residual has a Gaussian distribution [56]. The CFAR threshold is obtained from α, the CFAR threshold factor, N, the number of CFAR cells, and PFA, the false alarm probability. The probability of detection (PD) is calculated according to the expression corresponding to a CA-CFAR and a Swerling I target in Gaussian noise. The decision of whether there is a detection or not is completed by generating a uniform random variable and comparing it with the probability of detection. On the other hand, several false alarms will be generated and output at each scan of the space. The average number of alarms per scan is calculated from the false alarm probability; a binomial random variable with mean N_alarms is generated for each scan, and the positions corresponding to the N false alarms are then generated uniformly in azimuth, elevation and distance. If there has been a detection, the measurement position is calculated assuming the quasi-monostatic configuration and adding to the true position of the aircraft errors in the radial direction and tangential to the direction of view from the receiver.
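The following sketch chains these steps for a single scan of the quasi-monostatic model. It uses the standard average-power form of the radar equation, the usual CA-CFAR threshold factor and the textbook Swerling I detection probability (see, e.g., [56]); since the original equations are not reproduced in the text above, the exact expressions used by the authors may differ, and all numeric defaults are illustrative.

```python
import numpy as np

def quasi_monostatic_scan(r, pav=500.0, gt_db=30.0, gr_db=30.0, freq=10e9,
                          rcs=0.05, t_int=0.06, ls_db=4.0, fa_db=5.0,
                          n_cfar=10, pfa=1e-4, n_cells=36000, rng=None):
    """One scan of a quasi-monostatic radar against a target at range r (m).

    Standard average-power radar equation + CA-CFAR with a Swerling I target.
    All numeric defaults are illustrative, not taken from the paper.
    """
    rng = rng or np.random.default_rng()
    lam = 3e8 / freq
    k, t0 = 1.380649e-23, 290.0
    n0 = k * t0 * 10 ** (fa_db / 10)                          # noise spectral density
    num = pav * t_int * 10 ** (gt_db / 10) * 10 ** (gr_db / 10) * lam ** 2 * rcs
    den = (4 * np.pi) ** 3 * r ** 4 * 10 ** (ls_db / 10) * n0
    snr = num / den                                           # single-scan SNR (linear)

    alpha = n_cfar * (pfa ** (-1.0 / n_cfar) - 1.0)           # CA-CFAR threshold factor
    pd = (1.0 + alpha / (n_cfar * (1.0 + snr))) ** (-n_cfar)  # Swerling I detection prob.

    detected = rng.uniform() < pd                             # Bernoulli detection draw
    n_false = rng.binomial(n_cells, pfa)                      # false alarms in this scan
    return snr, pd, detected, n_false

print(quasi_monostatic_scan(r=3000.0))
```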
It is assumed that the optimal distance, elevation, and azimuth estimators are being used; the expressions of their errors are given below. MIMO Radars This simulator models a MIMO radar with spatially separated antennas at high frequencies (X, Ku, K or Ka). Each radar unit will have three transmitting antennas and one receiving antenna placed with the central transmitter. Since the antennas are widely separated and will view the target from different angles, echo coherence is not expected. Therefore, incoherent integration processing is performed, since coherent integration provides no gain. The main characteristics of the model are described next. In this case, to obtain the target echo power at the receiver, the echo powers from each of the three transmitters are added. It is assumed that, in a MIMO radar, before the incoherent integration of the signals from all transmitters, the possible clutter is coherently eliminated, but in this model, clutter is not considered. Therefore, the power received from a target will be obtained as the sum of the power received from each transmitter, which involves: P_av_i, the average power of transmitter i; G_Txi, the power gain of transmitting antenna i; G_R, the receiver antenna power gain; and F_p_Txi_Rx, the propagation factor along the path from the ith transmitter to the target and back to the receiver. The average signal-to-noise ratio per echo is calculated by adding the target powers from each transmitter and dividing by the number of transmitters and by the noise power. Transmitting antennas are assumed to be uniformly patterned in coverage in the horizontal plane and to distribute their power to uniformly illuminate the scanned area from their respective positions. Transmitter gains are specified as a factor depending on the azimuth width of the scanned area. The propagation factor, being free space, is assumed to be 1. The system losses are assumed to be 2 dB due to Doppler filtering envelopes and CFAR detection losses. Assuming that the images of the three transmitters are integrated incoherently, it can be assumed that the CFAR reference is three times that of a single system. If a 10-cell reference is assumed for each radar, there will be 30 reference cells; for a PFA of 10⁻⁴, this means a loss of less than 1 dB [57]. The cross section for these radars depends on the target size, so it is specified as an equal constant for all angles (0.01-0.1 m²). These radars operate at high microwave frequencies, and consequently, the predominant noise is thermal noise. Therefore, the noise power is obtained from f_N_total, the receiver and antenna noise figure (a typical value of 5 dB has been taken). The bandwidth is not shown since it is assumed that the receiver uses a matched filter, and for the calculation of the S/N ratio, the integration time has already been included in the radar equation. The detection will be calculated after integrating the echoes from the three transmitters in an incoherent way, making the appropriate corrections according to the relative positions of the transmitters. After filtering, the square of the envelope is found, and the contributions coming from the three transmitters are integrated. The detection threshold is obtained according to the expression in [58], where P(Y_b, N) represents the incomplete gamma function of order N (the number of transmitters) and Y_b is the detection threshold for the specified PFA.
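The threshold inversion and the resulting detection probability can be computed numerically with SciPy, as sketched below. The sketch assumes that the noise-only statistic after noncoherent integration of the N transmitter echoes follows a gamma distribution of order N and that each echo fluctuates independently with mean (1 + SNR), a Swerling II-type assumption; the exact expression referenced in [58] may differ, so this is only an illustrative approximation.

```python
import numpy as np
from scipy.special import gammainc, gammaincinv

def mimo_noncoherent_detection(snr_avg, n_tx=3, pfa=1e-4):
    """Detection threshold and PD for noncoherent integration of n_tx echoes.

    snr_avg: average linear SNR per transmitter (from the MIMO radar equation).
    Noise-only statistic ~ Gamma(n_tx, scale=1); with a fluctuating target each
    echo is assumed exponential with mean (1 + snr_avg). Illustrative model only.
    """
    yb = gammaincinv(n_tx, 1.0 - pfa)              # threshold: P(Y > yb | noise) = pfa
    pd = 1.0 - gammainc(n_tx, yb / (1.0 + snr_avg))  # probability the sum exceeds yb
    return yb, pd

print(mimo_noncoherent_detection(snr_avg=10.0))    # ~10 dB average SNR per transmitter
```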
The threshold is calculated by inverting this expression using the inverse of the incomplete gamma function. The probability of detection (PD) is then calculated according to the expression corresponding to the integration of the echoes from all transmitters, where S/N is the average signal-to-noise ratio per transmitter, calculated with the radar equation. These expressions are for the integrator without CFAR; the effect of the CFAR has been included through a term in the power losses. The decision of whether there is a detection or not is made by generating a uniform random variable and comparing it with the probability of detection, as in the quasi-monostatic radar. On the other hand, a few false alarms per scan and their positions will be generated as in the quasi-monostatic radar. Finally, if there has been a detection, the measurement position is also calculated as in the quasi-monostatic radar. Passive Radar Passive radars will be simulated using a power model from the radar equation, which includes a multipath propagation model. These radars are of the multistatic type. The predominant noise will be the direct transmitter-receiver signal interference, which will be considered in the model. The main characteristics of the model, including a radar cross section dependent on drone size, are described next. The radar equation for a passive radar is implemented in several steps. The first is to calculate the received signal power of the target echo before the correlator for an opportunity transmitter, which involves: P_av, the average power of the transmitter; G_T, the transmitting antenna power gain; and G_R, the receiver antenna power gain. The ratio S/N is calculated at the correlator output, where the signal will have a gain equal to the square of the product of bandwidth and integration time. The interference powers (noise and correlator side lobes of the different signals, including the target) have a gain equal to the product of bandwidth and integration time. The S/N ratio to be used in the CFAR is computed from P_N_clutter, P_N_signal and P_N_elec, the clutter, direct signal and electrical noise powers at the correlator input, respectively (these are calculated in the corresponding sections), T, the integration interval, and B, the bandwidth (these parameters are specified for each opportunity signal). Transmitting antennas are assumed to have a uniform pattern in coverage in the horizontal plane. The transmit gain is specified as a parameter of the transmitter. This gain, if omnidirectional broadcasting is assumed, will be around that of a half-wave dipole (2.15 dBi). As the transmitter power is usually given as apparent radiated power, which considers the gain of the transmitting antenna over the half-wave dipole, 2.15 dBi is the gain over the isotropic antenna that we will assume for the transmitting antenna. For the receiving antenna, a circular array is assumed that generates a number N of beams covering 360°. The above gain expression will be used for signals within the main beam. For signals entering through the secondary lobes (in the case of passive radars, this applies to the direct signal entering through the secondary lobes of the antenna), it is considered that the signal suffers a constant gain equal to the level of the secondary lobes of the antenna. The propagation factor, being free space, is assumed to be 1.
System losses, for passive radars, are 2 dB due to the correlator windowing used to reduce the secondary lobes in distance and Doppler, and another 2 dB due to the clutter and direct signal elimination system. The cross section for these radars depends on the target size, so it is specified as an equal constant for all angles (0.01-0.1 m²). The noise of a passive radar is composed of three main components: the radio noise; the self-interference of the signal itself due to the secondary lobes of the cross-correlation function; and the residual of the secondary lobes of the direct signal autocovariance function. In this model, clutter power is assumed to be zero. The power of the direct signal arriving at the receiver is calculated using the propagation equation and applying the attenuation that a typical direct signal canceller can provide. After the correlator, the echo of the direct signal appears at zero distance and is eliminated. What remains is the residue of the secondary lobes of the ambiguity function spread over the entire Doppler-distance space. The reception noise in this type of band is the antenna noise (human noise + galactic noise + atmospheric noise). This noise predominates over the thermal noise. Finally, the noise figure associated with the radio noise can be obtained as

f_N_total = 10^(F_a/10) − 1 + L_l · L_a · f_Rx    (31)

where F_a is calculated as in quasi-monostatic radars, L_l represents the transmission line losses (typically 0.5 dB [57]), L_a represents the resistive losses in the antenna (typically 0.5 dB [57]) and f_Rx represents the receiver noise figure (typically 4 dB [57]). In this first approximation, it has been estimated that there is no clutter. Once the signal-to-noise ratio is obtained, detection and false alarms are generated. This generation is the same as that of the quasi-monostatic radars, so the explanation of its procedure can be seen in that section, except for elevation, since these passive radars do not calculate target height. The measured position is generated by adding to the actual position a random variable whose standard deviation is derived from the measurement noise. The measurement accuracy is calculated in local radar coordinates centered on the receiver (y-axis north, x-axis east). First, the covariance matrix of the measurement is calculated in local Cartesian coordinates as in [59], where σ_R and σ_θ are the standard deviations, which are calculated as in the quasi-monostatic radar; R is the bistatic distance (R_1 + R_2 − R_b) and R_b is the baseline distance (transmitter-receiver). Finally, the output positions are found by generating a 2D Gaussian random variable correlated according to the previous covariance matrix. Microphone Sensor and RF Sensor Microphone sensors and RF sensors are modeled following an azimuth model in which, from the power of the signals emitted by the drones, either acoustic or radio frequency, the signal-to-noise ratio at the input of the sensor is calculated and, from the detection, the azimuth and distance to the sensor are obtained. These sensors have a certain sensitivity and, depending on the signal-to-noise ratio, there will be a detection or not. The main characteristics of the model are surface propagation losses over land and parameters adapted to drone detection (integration times of the order of minutes). The first step is to calculate the received signal power of the target echo before the correlator, where P_average is the noise power emitted by the drone, D_C is the directivity correction, Att is the attenuation and L_p is the system power loss.
The ratio S/N is calculated at the correlator output, where the interference powers are those of the noise and the correlator side lobes of the different signals, including the target. The S/N ratio is computed with the following expression:

S/N = P_target / (P_N_clutter + P_N_signal + P_N_elec)    (38)

In this case, a multisource scenario is assumed, so the directivity correction factor is set at 3 dB. Once the source, the powers and their basic definitions have been characterized, we proceed to calculate the effects that produce attenuation; the total attenuation in real environments for the propagation of a wave is obtained by adding the individual attenuation terms described next. The waves emitted by a drone are those of an omnidirectional source, since it propagates in all possible directions, so the emitted waves are spherical waves whose power level coincides at the same distance from the source. As this distance increases, the wave energy is distributed over a larger and larger area, so that each time this distance is doubled, the power level theoretically decreases by 6 dB; the geometric divergence attenuation is therefore:

A_div (dB) = 20·log10(R) + 11    (40)

where R is the distance in meters between the drone and the sensor. Atmospheric absorption is the attenuation due to nitrogen, oxygen and carbon dioxide during wave propagation as it travels a specific distance to the receiver; it is proportional to α_dB/km, the atmospheric attenuation coefficient, which depends on the following parameters: the frequency of the wave, the ambient atmospheric temperature, the relative humidity of the air and the ambient pressure. Since there are already estimated tables from which these values can be obtained, and for the frequencies of these waves, this coefficient takes values of the order of 1 × 10⁻³ to 1 × 10⁻². Ground attenuation is mainly due to waves reflected by the ground surface interfering with the propagation of the main wave from the source to the receiver. This attenuation occurs when the source or receiver is close to the ground surface. This model uses an equation that allows the ground effect attenuation to be obtained in a simpler way because its operation is specified only for long distances and porous or mixed surfaces; as the source gets closer, this attenuation tends to disappear. It depends on h_m, the average height of the propagation path above ground in meters, and R, the distance from the drone to the receiver, also in meters. An object should be considered as a shielding obstacle (barrier) if it has a surface density of at least 10 kg/m², it has a closed surface with no large cracks or gaps, and the horizontal dimension of the object perpendicular to the line connecting transmitter and receiver is greater than the wavelength. As the simulator is going to operate in real spaces that are filled with objects, it is assumed that the barrier losses are 3 dB. Finally, there may be other types of attenuation, such as those due to foliage or housing; losses of 3 dB are assumed. The reception noise is assumed to be the microphone noise (human noise + atmospheric noise + natural interference). This noise predominates over the thermal noise. Thus, the power of the audio noise is assumed to be a constant (P_N_elec). Once the signal-to-noise ratio is obtained, detection is generated. Logistic regression was used to determine the probability of detection.
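A compact sketch of this acoustic sensing chain is given below: it accumulates the attenuation terms described above (geometric divergence, atmospheric absorption and fixed 3 dB allowances for ground and barrier/foliage effects) and feeds the resulting SNR into a logistic detection-probability curve. The logistic parameters, the noise floor and the fixed allowances are illustrative assumptions, since the fitted regression model is not reproduced in the text.

```python
import numpy as np

def acoustic_detection(source_level_db, r, alpha_db_km=5e-3, noise_db=15.0,
                       k=1.0, snr_mid_db=6.0, rng=None):
    """Received acoustic level, SNR and a detection draw for a drone at range r (m).

    Attenuation = geometric divergence + atmospheric absorption + 3 dB ground
    + 3 dB barrier/foliage allowances (fixed values, as assumed above).
    Detection probability follows an illustrative logistic curve in SNR (dB).
    """
    rng = rng or np.random.default_rng()
    a_div = 20.0 * np.log10(r) + 11.0            # spherical spreading, Eq. (40)
    a_atm = alpha_db_km * r / 1000.0             # atmospheric absorption
    a_other = 3.0 + 3.0                          # ground + barrier/foliage (assumed)
    received_db = source_level_db - (a_div + a_atm + a_other)

    snr_db = received_db - noise_db
    pd = 1.0 / (1.0 + np.exp(-k * (snr_db - snr_mid_db)))   # logistic PD model (assumed)
    detected = rng.uniform() < pd
    return received_db, snr_db, pd, detected

print(acoustic_detection(source_level_db=80.0, r=100.0))
```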
If the signal-to-noise ratio exceeds the sensitivity of the sensor there is detection, the model used is as follows: The generation of whether there is detection or not is confirmed by generating a uniform random variable and comparing it with the probability of detection: Later, false alarms generation and the generation of measurement positions is performed. These generations are the same as those of the quasi-monostatic radars, so the explanation of their procedures can be seen in that section. It should be noted only azimuth measurements are obtained (through the measurement of the angle of arrival), as range measurement from acoustic signals would demand the performance of multistatic/trilateration procedures. Counter-UAS Simulation Results The previous models have been implemented and integrated in the simulator described in [3]. This integration allows us to use the proposed models in realistic drone scenarios to assess and compare the performance of different sensors. Particularly, a scenario is proposed in this section where an area of interest (i.e., a critical facility) is to be surveilled with different C-UAS sensors. The simulated scenario is represented in Figure 3, where the area of interest to be protected is depicted in red. A surveillance solution using radars and eventual microphone sensors is proposed. This solution is complemented with the microphone sensor that works in shorter distances. The proposed sensors (depicted as markers in Figure 3) are: • Quasi-monostatic Radar (also named Active Radar in figures): A quasi-monostatic radar has been installed in the middle of the critical infrastructure with a quasimonostatic configuration. The values taken to model the sensor refer to some of the commercial radars detailed in the state-of-the-art section. The radar has an instrument range of 10 km, a minimum azimuth of −180 • and a maximum of 180 • , 32 receiving beams and 10 m resolution. The average power transmitted is 500 W. The minimum time between explorations is 0.06 sec (the dwell time). The minimum frequency is 8 GHz, and the maximum frequency is 12 GHz. Finally, the reception beamwidth in azimuth is 2 • , and the beamwidth in elevation is 6 • . • Passive Radar: A passive radar is also installed in the center of the protected facility. It works in conjunction with a hypothetical transmitter of opportunity (i.e.: DVB-T transmitter) located at around 20 km from the passive receiver. The transmitter has enough power to support an instrument range of around 10 km. The minimum azimuth is −30 • , and the maximum is 30 • . The resolution in distance is about 20 m, and the resolution in azimuth is 2 • . A scan time of 1 second has been assumed since it is a system with simultaneous space exploration without mechanical antenna movement. The antenna has a gain of 2 dBi, a secondary lobe level of 22 dB and a signal cancellation level of 60 dB. Finally, the carrier frequency is 600 MHz. • MIMO Radar: A MIMO radar is also installed in the critical infrastructure. It has the following instrumental coverage (minimum azimuth: −180 • , maximum azimuth: 180 • , maximum range: 10 km): It is a medium/short surveillance system with one receiver (located in the center) and three transmitters (located in the area perimeter), which receive simultaneously through 32 receiving beams. The power transmission of each transmitter is 2 kW. The maximum scan rate is 1 s. The resolution in distance is about 20 m, and the resolution in azimuth is 3 • . 
The antenna has a gain of 11.6 dBi, a secondary lobe level of 13 dB and a signal cancellation level of 40 dB. Finally, the carrier frequency is 15 GHz. • Microphone sensor: A microphone sensor is also simulated in this scenario, located in the center of the critical area. It is an array composed of eight microphones separated 0.5 m from each other. The sensitivity of the array is 32 dB, and it has an instrumental range of 1 km. The minimum azimuth of the sensor is −180°, and the maximum azimuth is 180°. Although not frequent, violent and hostile acts against critical infrastructures have already occurred and are documented. The scenario tries to assess the alert distance in case of an aerial attack and the positioning accuracy provided by each of the sensors in the described surveillance system. The simulated attack is to be conducted by a terrorist group that intends to infiltrate by air using an off-the-shelf, affordable, and small drone such as a DJI Phantom 4 (estimated RCS of 0.01 m² and a noise level of around 80 dB). The departure place is located around 7 km away from the critical infrastructure. From there, the drone will try to make a direct approach at maximum speed following the direction depicted as a black arrow in Figure 3. This scenario (i.e., drone trajectory, sensors' location) has been easily represented and executed in real time using the simulation platform. After running it, the plots generated by each of the sensors were retrieved for analysis. These plots are shown in Figure 4, where the plots corresponding to actual drones and the false alarms are represented. To represent the plots from the microphone sensor (where only angular information is available), the actual distance of the drone is used. False alarms are filtered out in Figures 5 and 6 for each of the four sensors to facilitate the analysis. There, the alert distance provided by each of the sensors can be compared. As expected, radar-based sensors have a greater range than the microphone-based one, which only detects the drone in very close proximity.
False alarms are filtered out in Figures 5 and 6 for each of the four sensors to facilitate the analysis. There, the alert distance provided by each of the sensors can be compared. As expected, the radar-based sensors have a greater range than the microphone-based one, which only detects the drone in very close proximity. Within the radar-based sensors, the active radar provides consistent detections from the beginning of the trajectory, whereas the passive and MIMO sensors provide consistent results from ranges of 3 and 4 km, respectively (as can be derived from the plot density in Figure 7). It can also be checked that the detection probability (related to the number of detections) increases as the distance to the sensor decreases. This is the expected result, as the detection probability increases with the SNR, which in turn increases as the distance to the sensor is reduced.

The positioning errors are depicted in Figure 7 for the angle error and in Figure 8 for the distance error (the microphone sensor is not included here). It can be observed that the magnitude of both types of error decreases as the drone approaches the sensor receiving location. This can be explained, once again, by the dependency between the computed error and the SNR.

More detailed performance measures could be obtained, both in terms of detection and accuracy (i.e., PD vs. range, or angle/range error standard deviations vs. range). In the case of monostatic radars/RF sensors and acoustic sensors, it is possible to derive this relation by considering the PD/accuracy dependency on SNR, which, in turn, depends directly on range. However, for multistatic sensors, passive radars and distributed sensors in general, the relations are much more nonlinear, and the results are very scenario dependent. In these cases, the relative locations of the emitters, receivers, etc. have an important impact on the results.
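For the monostatic case, the relation just mentioned can be written down directly: the radar equation gives an SNR that falls off as 1/R⁴, the detection probability grows with SNR, and the angular error standard deviation shrinks roughly as the inverse square root of the SNR (beamwidth-limited accuracy). The short sketch below tabulates these dependencies; the reference SNR, detection curve and accuracy constant are illustrative assumptions rather than the simulator's calibrated models.

```python
import numpy as np

def snr_db_monostatic(range_m, ref_snr_db=20.0, ref_range_m=7000.0):
    """Monostatic radar-equation scaling: SNR decreases as 1/R^4.
    ref_snr_db is an assumed SNR at the scenario's initial range."""
    return ref_snr_db - 40.0 * np.log10(range_m / ref_range_m)

def pd_from_snr(snr_db, threshold_db=13.0):
    """Illustrative smooth PD-vs-SNR curve (a stand-in for Swerling models)."""
    return 1.0 / (1.0 + np.exp(-(snr_db - threshold_db)))

def sigma_az_deg(snr_db, beamwidth_deg=2.0):
    """Thermal-noise angle accuracy: roughly beamwidth / sqrt(2 * SNR)."""
    snr_lin = 10.0 ** (snr_db / 10.0)
    return beamwidth_deg / np.sqrt(2.0 * snr_lin)

for r in (7000.0, 5000.0, 3000.0, 1000.0, 500.0):
    s = snr_db_monostatic(r)
    print(f"R={r:6.0f} m  SNR={s:6.1f} dB  PD={pd_from_snr(s):.2f}  "
          f"sigma_az={sigma_az_deg(s):7.4f} deg")
```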
Conclusions and Future Work

This paper reviews some of the current technologies used for the noncollaborative detection and tracking of UAVs and proposes a collection of simulation models, built by integrating preexisting models of radar and acoustic sensing and by adapting them to
14,806.4
2021-12-28T00:00:00.000
[ "Engineering", "Environmental Science", "Computer Science" ]
Metabolic Dysregulation in Hepacivirus Infection of Common Marmosets (Callithrix jacchus) Chronic hepatitis C has been associated with metabolic syndrome that includes insulin resistance, hepatic steatosis and obesity. These metabolic aberrations are risk factors for disease severity and treatment outcome in infected patients. Experimental infection of marmosets with GBV-B serves as a tangible, small animal model for human HCV infection, and while the virology and pathology are well described, a full investigation of clinical disease and the metabolic milieu is lacking. In this study, six marmosets were infected intravenously with GBV-B and changes in hematologic, serum biochemical and plasma metabolic measures were investigated over the duration of infection. Infected animals exhibited signs of lymphocytopenia, but platelet and RBC counts were generally stable or even increased. Although most animals showed a transient decline in blood glucose, infection resulted in several-fold increases in plasma insulin, glucagon and glucagon-like peptide 1 (GLP-1). All infected animals experienced transient weight loss within the first 28 days of infection, but also became hypertriglyceridemic and had up to 10-fold increases in adipocytokines such as resistin and plasminogen activator inhibitor 1 (PAI-1). In the liver, moderate to severe cytoplasmic changes associated with steatosis were observed microscopically at 168 days post infection. Collectively, these results suggest that GBV-B infection is accompanied by hematologic, biochemical and metabolic abnormalities that could lead to obesity, diabetes, thrombosis and atherosclerosis, even after the virus has been cleared. Our findings mirror those found in HCV patients, suggesting that metabolic syndrome could be conserved among hepaciviruses, and both mechanistic and interventional studies for treating HCV-induced metabolic complications could be evaluated in this animal model. Introduction Hepatitis C virus (HCV) causes chronic hepatitis leading to fibrosis, cirrhosis and hepatocellular carcinoma in 80% of infected individuals [1]. About 2.8% of the world's population is infected with HCV, with associated mortality approximating 500,000 deaths per year [2-4]. In addition to liver disease-related mortality, HCV-infected patients are prone to type 2 diabetes and cardiovascular disease [5,6]. A myriad of metabolic aberrations including elevated triglycerides, elevated fasting glucose and abdominal obesity can exacerbate the development of metabolic syndrome, which in turn leads to cardiovascular disease and type 2 diabetes mellitus [7]. Numerous studies have reported the association of HCV with insulin resistance, hepatic steatosis, atherosclerosis and other metabolic aberrations, which have been specifically described as HCV-associated dysmetabolic syndrome (HCADS) [8-10]. These metabolic aberrations, especially steatosis, were identified in the early 2000s as predictors of poor treatment outcome for interferon-based therapy in chronic HCV infection [11-14]. In the current era of direct-acting antivirals, the impact of metabolic disorders on treatment outcome has not been well studied. A better understanding of the dysmetabolic milieu in HCV-infected patients will be helpful in attaining improved sustained virological response rates and successful HCV eradication. HCV also has a direct role in inducing metabolic dysfunctions.
HCV core protein interferes with insulin signaling pathways, thus inducing insulin resistance in infected patients [15,16]. The expression of HCV non-structural protein 5A (NS5A) in human hepatoma cells leads to upregulated gluconeogenic and lipogenic gene expression, which in turn favors the development of insulin resistance and metabolic syndrome [16]. In infected hepatocytes, internalized HCV disrupts host lipid metabolism for its own replication and assembly, leading to hepatic steatosis and non-alcoholic fatty liver disease (NAFLD)/non-alcoholic steatohepatitis (NASH) [9]. Several pathways have been reported to describe HCV-mediated lipid dysregulation in a genotype-specific manner. These include hepatic fat accumulation by activation of SREBP-1 and 2, impairment of peroxisome proliferator-activated receptor expression, inhibition of MTP activity and promotion of de novo lipid synthesis [17-19]. Insulin resistance predates steatosis development and in turn aggravates steatosis, leading to an inflammatory liver microenvironment. This results in activation of cell stress pathways, inflammasome formation and further hepatocellular injury. Along with the liver and pancreas, adipose tissue, acting as an endocrine organ, also regulates lipid and glucose metabolism. Dysfunctional adipose tissue is associated with imbalanced production of pro-inflammatory adipokines including adiponectin, monocyte chemoattractant protein-1 (MCP-1), visfatin and others, all contributing to local and systemic metabolic dysregulation [20-24]. A state of chronic, low-level inflammation is associated with the metabolic syndrome, either underlying or exacerbating it, predisposing chronically infected patients to the risk of developing hepatocellular carcinoma and cardiovascular complications such as atherosclerosis [25-28]. Animal models play an important role in understanding the pathogenesis and immunology of infectious agents, and chimpanzees were formerly the primary model for HCV, playing a critical role in elucidating the natural history of the disease [29-31]. However, limitations due to ethical and cost reasons have led to a generalized reduction in the use of chimpanzees in biomedical research. Marmosets are a promising surrogate nonhuman primate model for HCV due to their high degree of immunological homology with humans [32]. Most importantly, GB virus-B (GBV-B), belonging to the same family and genus as HCV, causes a disease analogous to HCV in New World monkeys, including marmosets [33-37]. In addition to their immunological similarity, marmosets show body composition and alterations in glucose and lipid metabolism similar to those observed in humans and other nonhuman primates [38-41]. They are more prone to developing insulin resistance, diabetes mellitus, NAFLD and obesity and are used as models for these conditions [38-40]. Collectively, the increasing knowledge of marmoset immunology and metabolic pathways, their limited size and cost, and the availability of cross-reactive reagents make marmosets an attractive animal model. Ethics statement and animals Six common marmosets (Callithrix jacchus) were used for this study and were housed in BSL2 biocontainment facilities at the New England Primate Research Center in accordance with the guidelines of the local institutional animal care and use committee and the Department of Health and Human Services (DHHS) Guide for the Care and Use of Laboratory Animals.
The Harvard University IACUC approved all procedures prior to study. All animals were socially housed and enrolled in the NEPRC environmental enrichment program designed to provide mental and sensory stimulation and promote development of behavioral and logical skills using varied stimuli (i.e., foraging devices). Blood draws consisted of no more than 1% of body weight and not more than 3 ml. Post-blood draw analgesics were administered at the discretion of the veterinarian. Animals were fed a commercial new world nonhuman primate diet, which was supplemented with fruits, vegetables, eggs and nuts. Water was available ad libitum. Additional information on the animals used in this study is found in Table 1. Animals were weighed at weekly intervals for the first 4 weeks followed by monthly weight measurements until day 168 post-infection (pi). Sequential blood draws were performed pre-and post-GBV-B inoculation during morning hours for every time point. For all procedures animals were sedated with ketamine (40-50 mg/kg). Animals were sacrificed at 168 days pi with IV administration of an overdose (>>50 mg/kg) of pentobarbital verified by auscultation with a stethoscope. Evident post mortem pathology was recorded by the attending veterinarian and pathologist. Virus inoculum All animals were inoculated IV with 1.0 x 10 3 to 4.0 x 10 3 virus copy equivalents of uncloned GBV-B virus stock as described [42]. Blood chemistry Whole blood was also collected at indicated time points and sera were collected at monthly intervals. At pre-infection and monthly time points 0.5 to 1ml (based on body weight of the animal) of whole blood and 1 ml of clotted blood was collected; 0.5ml of blood was collected at day 7, 14 and 21 pi. Blood cell counts were performed by automated analysis (Hemavet HV 1700FS instrument), and biochemical analyses were performed by standard veterinary diagnostics (IDEXX, Grafton MA). The medians and ranges of serum enzymes and analytes of animals at baseline (day 0) are shown in Table 2. Statistics Statistical significance of difference was determined by non-parametric Kruskal-Wallis test followed by Dunn's multiple comparison post-test or paired Students t test using GraphPad Prism 6.0 software. Differences between the mean ranks of different time points compared to the mean rank of day 0 were considered significant when the p value was less than 0.05. Correlations between metabolic and biochemical factors were analyzed by Spearman correlation test using GraphPad Prism 6.0 software. Clinical presentation of GBV-B infection Six marmosets were infected with GBV-B and the clinical course of the disease was studied longitudinally over 168 days pi. Viral loads were analyzed in the plasma of infected animals by real time PCR [42]. Post-infection, there was a reduction in percent body weight coinciding approximately with peak mean viremia (Fig 1). Up to 5 to 10% loss in body weight was observed in all animals when compared to baseline values (median = 0.43 kg; range = 0.303-0.496 kg). A return to normal body weight and expected weight gain only occurred after viral clearance. Hematological changes in GBV-B infection Reduction in the total white blood cell count in all animals was observed at day 28 and at later time points (Fig 2A). Similarly significant loss in lymphocyte count was observed in all animals except 282-06 at days 28 and 56 pi ( Fig 2B). Interestingly, platelet counts increased significantly at days 28, 140 and 168 pi ( Fig 2C). 
RBC numbers were reduced, although not significantly, coinciding with the leukocyte counts at day 28 but not at later time points (Fig 2D). No significant differences were observed in eosinophils, basophils and monocytes (data not shown). Biochemical changes indicative of tissue inflammation Sera collected at baseline (day 0, Table 2) and at time points pi were analyzed for several biochemical parameters. As observed in other reports [44-46], serum enzymes such as alanine aminotransferase (ALT), aspartate aminotransferase (AST) and alkaline phosphatase (ALP), indicative of hepatitis, were elevated in infected animals, although they were not correlated with viral loads [42]. Gamma-glutamyl transferase (GGT), another serum enzyme associated with liver damage in marmosets [39], was elevated at least 1.5- to 2-fold in all infected animals when compared to day 0 values (Fig 3A). Decreases in blood urea nitrogen (BUN) were observed at all time points (Fig 3B). Creatine kinase (CK) was elevated more than 2-fold above baseline values in all animals at different time points (Fig 3C), suggesting, in addition to liver damage, more generalized activation or cardiovascular damage. No significant changes were observed in levels of albumin, globulin, total proteins and electrolytes such as calcium, phosphorus, chloride, potassium and sodium (data not shown). Dysregulated glucose metabolism in GBV-B infection Serum glucose and plasma hormones involved in glucose metabolism were monitored at monthly time points post infection. A transient but consistent decline in blood glucose was observed by day 28 pi, returning to normal levels by viral clearance (Fig 4A). Based on sample availability, plasma from only three animals (228-07, 16-07 and 27-09) was analyzed for hormones involved in glucose metabolism by Luminex assay. The major pancreatic hormones, insulin and glucagon, were increased at multiple time points in all three animals (Fig 4B and 4C). GIP and GLP-1 are incretin hormones produced by the gut. While GLP-1 was elevated at significant levels at days 28, 56 and 168 (Fig 4D) in the three animals, GIP was increased in 228-07 and 16-07 but not in 27-09 (Fig 4E). Lipid dysfunctions in infected animals Altered lipid profiles that drive steatosis and insulin resistance have been previously associated with HCV infection [47-49]. In GBV-B-infected marmosets, serum cholesterol levels were generally not variable (Fig 5A), whereas elevated triglycerides exceeding 400 mg/dL were observed at later time points in animals 228-07, 16-07 and 27-09 (Fig 5B). Triglyceride concentrations of more than 400 mg/dL are considered hypertriglyceridemic in marmosets [38]. Interestingly, animal 228-07 had high triglyceride levels even at baseline, which further increased 2- to 3-fold at later time points. Adipocytokines are cytokines secreted by adipose tissues and are associated with obesity and insulin resistance. PAI-1, visfatin, resistin and leptin are some of the adipocytokines that were analyzed in the plasma of 228-07, 16-07 and 27-09. Increases in PAI-1 as high as 10-fold or more were observed in all three animals at most time points of the study (Fig 5C). Resistin was elevated in animals 228-07 and 16-07 and reached significant levels at day 168 (Fig 5D). Interestingly, resistin levels correlated with triglyceride levels (r = 0.759, p < 0.001) in the plasma of infected animals. Visfatin was increased in 27-09 at day 28 pi, and in 16-07 at later time points (Fig 5E).
No major changes were observed in leptin levels ( Fig 5F). Pathological changes towards hepatic steatosis in infected animals At day 168 pi all infected animals were necropsied and gross pathology was noted. Table 1 lists the various anomalies detected in internal organs of infected marmosets. The most common observations included enlarged liver, spleen and lymph nodes and mottled kidneys. Previous observations from our lab confirmed the presence of hepatitis and liver fibrosis in GBV-B-infected animals by histopathological evaluation [42]. Further examination for steatosis revealed cytoplasmic vacuoles with peripheral nuclei indicating macrovesicular steatosis in the liver of infected animals at day 168 pi compared to normal animals (Fig 6). Quantitative evaluation of liver histopathology identified varying levels of steatotic changes in all infected animals-ranging from mild steatosis in 123-10 to severe steatosis in 27-09 (Table 1). Discussion Previous studies including our own observations have shown that GBV-B induces acute hepatitis in marmosets and tamarins with elevated serum enzymes such as ALT, isocitrate dehydrogenase and glutamate dehydrogenase [42,44-46]. However, the hematological, biochemical and metabolic changes over the course of GBV-B infection have not been described in detail. In this study, marmosets infected with GBV-B were assessed for changes in both hematological and serum metabolic parameters from onset of infection until after viral clearance. Many flaviviruses infect hematopoietic cells which, as a general feature of this family of viruses, can lead to neutropenia, bone marrow hypocellularity and abnormal megakaryocyte formation [50]. Thrombocytopenia in combination with leukopenia and hemolysis secondary to HCV infection has been reported [51]. Interestingly, although virus was cleared in most animals by days 28 or 56, recovery of lymphocyte counts was observed much later. However, leukopenia and decreased RBC levels observed in most animals at day 28 could be the result of repeated phlebotomy within the first month of infection. Although thrombocytopenia has been specifically associated with chronic HCV [52-55], platelet counts were not reduced in GBV-B infection. A probable reason for the difference in platelet count between the two hepaciviruses could be the generally acute nature of GBV-B infection which might be insufficient in duration compared to chronic HCV to induce thrombocytopenia. Dysfunctions of glucose metabolism and insulin resistance are common features in chronic hepatitis due to the major role(s) the liver plays in glucose metabolism. Indeed HCV directly interrupts signaling in the insulin receptor substrate-1 pathway through its core protein [56]. Most infected animals showed evidence of an acute hypoglycemia, but also exhibited elevated levels of insulin, glucagon and GLP-1 (Fig 4). Hyperinsulinemia has been demonstrated in chronic HCV infection [57,58], and both hyperinsulinemia and hyperglucagonemia can be found in cirrhotic patients [59]. GIP is elevated in type 2 diabetes mellitus and in impaired glucose tolerant patients [60,61] while GLP-1 improves insulin sensitivity in mice and humans [62]. However, the changes of glucagon, GLP-1 and GIP hormones in viral hepatitis induced insulin resistance are not completely clear [63,64]. Other than insulin resistance, the metabolic complications most commonly associated with chronic HCV infection include hepatic steatosis and dyslipidemia [65][66][67][68]. 
Patients with hepatic steatosis had significantly higher serum triglycerides than HCV-infected patients without hepatic steatosis [69], and the severity of liver injury has been correlated with low cholesterol levels [70-72]. Marmosets infected with an HCV/GBV-B chimera demonstrated pathological changes in the liver including lymphocyte infiltration, hepatic edema, cholestasis and ultrastructural changes such as lipid droplets indicative of fatty liver degeneration [73]. In this study, infected animals showed accumulation of lipid in hepatocytes, as evidenced by steatosis in the liver and, additionally, hepatomegaly (Table 1), thus recapitulating NAFLD in humans. Further, necrotic hepatocellular structures, inflammatory cell infiltration and mild levels of fibrosis in the liver were observed in the same cohort of animals [42], indicating progression of liver injury towards NASH. These data are in accordance with the marmoset model of NAFLD and NASH, in which elevated serum GGT and triglycerides were suggested as useful biochemical markers of liver dysfunction [39,74]. Obesity and insulin resistance are independent negative predictors of sustained virological response (SVR) in chronic hepatitis C patients undergoing combination therapy [75]. Adipose tissue-derived cytokines, or adipocytokines, are reported to play an important role in the development of obesity-derived insulin resistance [76,77]. Resistin, an adipocytokine, was higher in NAFLD patients with moderate/severe liver fibrosis than in patients with mild fibrosis [78]. Hyperresistinemia has been reported in chronic HCV patients [48,79], and IL-8 and resistin levels predicted severe/moderate fibrosis in HCV-infected patients [79]. At day 168 pi, significant hyperresistinemia was observed in infected animals. In addition, the association between resistin and triglyceride levels indicates that viral hepatitis could drive insulin resistance and lipid dysregulation. PAI-1, another adipocytokine, was elevated in infected animals from 2-fold up to 30-fold. PAI-1 levels were associated with triglyceride levels in chronic HCV patients [49]. Elevated PAI-1 levels have been associated with diabetic nephropathy [80,81]. Renal complications such as albuminuria, cryoglobulinemia-induced glomerulonephritis and chronic kidney disease have all been associated with HCV infection [82-84]. Abnormal changes in BUN were observed in some animals and could indicate renal disease. Acute experimental liver damage in cirrhotic rats also induced renal dysfunction with increases in serum creatinine, bilirubin and BUN levels [85]. Collectively, these data suggest that GBV-B, like many other viral infections, could induce kidney dysfunction, but it is also important to point out that marmosets have a propensity to develop spontaneous benign glomerulonephropathy [86-88]. Thus, further studies will be necessary to determine if kidney disease is a true complication of GBV-B infection. Further, a general state of chronic inflammation is associated with these metabolic aberrations [25-27]. In all GBV-B-infected animals, CK was elevated up to 18-fold above baseline. Multiple studies have found elevated CK and myositis associated with HCV infection [89-92]. This is further supported by the gross pathological observations of hepatomegaly, splenomegaly, kidney lesions and peripheral lymphadenopathy, all indicative of an underlying state of inflammation caused by GBV-B infection, similar to HCV disease.
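For readers who want to reproduce this kind of analysis outside GraphPad Prism, a minimal Python sketch of the two nonparametric procedures used above (a Spearman rank correlation, e.g. resistin versus triglycerides, and a Kruskal-Wallis comparison across time points) is given below. The numeric values are placeholders with the same shape as the study's measurements, not the actual data.

```python
from scipy.stats import spearmanr, kruskal

# Placeholder paired measurements (one value per animal/time point);
# these are NOT the study data, only the shape of the analysis.
resistin      = [3.1, 4.0, 2.8, 5.6, 6.2, 7.9, 4.4, 5.1, 8.3]   # ng/mL
triglycerides = [180, 220, 150, 390, 410, 460, 240, 300, 520]   # mg/dL

rho, p = spearmanr(resistin, triglycerides)
print(f"Spearman rho = {rho:.3f}, p = {p:.4f}")

# Comparison across time points (Kruskal-Wallis, as in the Methods);
# Dunn's multiple comparison post-test would follow in dedicated software.
day0, day28, day168 = [3.1, 2.8, 4.4], [4.0, 5.6, 5.1], [6.2, 7.9, 8.3]
h_stat, p_kw = kruskal(day0, day28, day168)
print(f"Kruskal-Wallis H = {h_stat:.2f}, p = {p_kw:.4f}")
```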
In summary, despite the relatively low number of animals evaluated in this study, our data suggest that GBV-B infection in marmosets could induce significant clinical and metabolic dysregulation, as evidenced by hypertriglyceridemia and potential pre-diabetic manifestations similar to the disease found in HCV infection of humans. The most notable similarities were signs of systemic inflammation, liver damage and dysfunctional lipid and glucose metabolism, all of which could influence progression of disease and/or response to treatment. Although the measurement of non-fasting plasma glucose is a caveat of this study, several pre-diabetic parameters, such as adipocytokine dysregulation in combination with lipid dysregulation, indicate a risk of diabetic manifestation. In addition, steatotic changes in the liver confirmed triglyceride accumulation leading to fatty liver degeneration. Further, these metabolic aberrations were seen in animals even after the virus was completely cleared, which could be indicative of permanent liver damage resulting in progressive dysregulation of metabolic pathways irrespective of virus replication. Therefore, GBV-B infection, regardless of duration, might induce hematologic and metabolic dysfunctions in marmosets similar to those observed in HCV infection, but the full mechanism of hepacivirus-induced metabolic disease will require further study in additional animal cohorts. Nonetheless, the similarity of disease between the marmoset model and humans could help to better understand the role(s) metabolic dysfunction plays in disease pathogenesis, as well as to evaluate these disease complications when considering treatment.
4,529
2017-01-13T00:00:00.000
[ "Biology", "Environmental Science", "Medicine" ]
ARMC10 regulates mitochondrial dynamics and affects mitochondrial function via the Wnt/β-catenin signalling pathway involved in ischaemic stroke Abstract Mitochondrial dynamics has emerged as an important target for neuronal protection after cerebral ischaemia/reperfusion. Therefore, the aim of this study was to investigate the mechanism by which ARMC10 regulation of mitochondrial dynamics affects mitochondrial function in ischaemic stroke (IS). Mitochondrial morphology was detected by laser scanning confocal microscopy (LSCM), and mitochondrial ultrastructural alterations were detected by electron microscopy. The expression of the mitochondrial dynamics-related genes Drp1, Mfn1, Mfn2, Fis1, OPA1 and ARMC10 and the downstream target genes c-Myc, CyclinD1 and AXIN2 was detected by RT-qPCR. Western blot was used to detect the protein expression of β-catenin, GSK-3β, p-GSK-3β, Bcl-2 and Bax. A DCFH-DA fluorescent probe was used to detect the effect of ARMC10 on mitochondrial ROS levels, an Annexin V-FITC fluorescent probe was used to detect the effect of ARMC10 on apoptosis, and an ATP assay kit was used to detect the effect of ARMC10 on ATP production. Mitochondrial dynamics was dysregulated in clinical IS samples and in the OGD/R cell model, and the relative expression of the ARMC10 gene was significantly decreased in the IS group (p < 0.05). Knockdown and overexpression of ARMC10 could affect mitochondrial dynamics, mitochondrial function and neuronal apoptosis. A pathway agonist and inhibitor affected mitochondrial function and neuronal apoptosis by targeting the Wnt/β-catenin signalling pathway. In the OGD/R model, ARMC10 affected mitochondrial function and neuronal apoptosis through regulation of the Wnt/β-catenin signalling pathway. ARMC10 regulates mitochondrial dynamics and protects mitochondrial function by activating the Wnt/β-catenin signalling pathway, thereby exerting neuroprotective effects. INTRODUCTION The fission and fusion of mitochondria within the cell are known as mitochondrial dynamics and are controlled by a series of proteins [11,12]. Several genes controlling mitochondrial fission/fusion dynamics have been identified, such as fission protein 1 (Fis1), mitochondrial fission factor (Mff), dynamin-related protein 1 (Drp1) and mitofusin 1/2 (Mfn1/2). Generally, Drp1 and Fis1 control mitochondrial fission, while Mfn1/2 and optic atrophy 1 (OPA1) control mitochondrial fusion [15]. In ischaemic stroke, inadequate oxygen and nutrient supply disrupts energy homeostasis and adenosine triphosphate (ATP) synthesis, ultimately leading to mitochondrial dysfunction and permanent brain damage. Many studies have found that mitochondrial dynamics play an important role in ischaemic stroke. For example, Liu et al. noted that hyperglycaemia exacerbates cerebral ischaemia/reperfusion-induced neuronal injury by activating cellular autophagy and mitochondrial division via ERK1/2 [16]. Another study found that ligustilide induces Drp1-mediated mitochondrial fragmentation in vivo and in vitro through activation of the AMPK signalling pathway, protecting against neural injury and preventing ischaemic stroke [17]. A further study reported exacerbated neuronal mitochondrial fragmentation and injury participating in neuronal ischaemia-reperfusion injury in a GTPase-dependent manner [18].
The armadillo repeat containing 10 (ARMC10) gene, also known as splicing variant involved in hepatocarcinogenesis (SVH), is located on chromosome 7 and encodes the ARMC10 (SVH) protein. Different splice variants can form four isoforms, SVH-A, B, C and D, which can be expressed in a variety of tissues and are particularly highly expressed in human brain and kidney [19]. Serrat et al. found that ARMC10 has unique features in the functional regulation of mitochondrial dynamics [20]. Jan Kriska noted that the Wnt/β-catenin signalling pathway promotes neurogenesis in neural stem/progenitor cells and activates neuronal differentiation in the subventricular zone [21]. However, there are few reports on the involvement of ARMC10 in the pathological process and mechanism of ischaemic stroke. Therefore, the aim of the present study was to investigate the role of ARMC10 in altering mitochondrial dynamics during the pathological process of ischaemic stroke, thereby affecting neuronal function. In the present study, we first investigated the correlation between mitochondrial dynamics and IS, and then constructed ARMC10 knockdown and overexpression cell models to confirm that ARMC10 regulates mitochondrial dynamics and thus affects mitochondrial function and neuronal apoptosis. Next, we detected the expression levels of key molecules and downstream target genes of the Wnt/β-catenin signalling pathway to explore the mechanism by which ARMC10 regulation of mitochondrial function is involved in IS neuronal injury. Finally, an OGD/R model was constructed in SH-SY5Y cells to verify the hypothesis. Cell transfection The ARMC10 overexpression plasmid vector was constructed by Hippobio (Zhejiang, China). SH-SY5Y cells were cultured in six-well plates at a density of 6 × 10⁵ cells per well and transfected with siRNA and the ARMC10 overexpression plasmid; Ribo FECT™ CP (RIBBIO, China) and Lipofectamine™ 3000 Transfection Reagent (ThermoFisher, America) were used in accordance with the manufacturer's protocol. Follow-up experiments were carried out after 48 h of culture. Oxygen-glucose deprivation/re-oxygenation (OGD/R) treatment SH-SY5Y cells were seeded in six-well plates, and OGD/R treatment was performed when the density reached 80%. The culture medium was discarded, the cells were washed gently with PBS, sugar-free (serum-free) DMEM medium (BI, Israel) was added, and the plates were placed in a 37°C anaerobic incubator for 24 h. Subsequently, the medium was changed to normal medium and the cells were cultured in a CO₂ incubator for 8 h. A CCK-8 kit (Solarbio, China) was used to detect cell viability at different time points, and OGD 24 h/R 8 h was selected to establish the OGD/R model. Each group was provided with 3 wells. RT-qPCR The RT-qPCR reaction system and reaction conditions were prepared by referring to the Hieff qPCR SYBR Green Master Mix manual (YEASEN, China). The primers used for RT-qPCR were synthesized by Tsingke Biotechnology Company (China). Relative RNA levels were detected by RT-qPCR using the SYBR Green method. The relative expression of RNAs was normalized to β-actin/β-catenin mRNA and calculated using the comparative CT method. Primer sequences are shown in Table 1. For western blot, the concentration of total protein was determined with a protein concentration determination kit (Solarbio, China); samples were boiled for 10 min and immediately placed on ice.
According to the molecular weight of the target protein, prepared PAGE Gel (Epizyme Biotech, China).Fully separate protein through electrophoresis, then transferred onto PVDF membranes and blocked with 5% skim milk powder.The following protein level were detected by Image information was collected with ultra-sensitive exposure system. | ATP assay Mitochondrial ATP content was detected using ATP Assay Kit (Beyotime Biotechnology, China).Remove culture medium and add 200 μL lysis buffer, vortex to lyse fully, centrifuged with cryogenic centrifuge at 12,000×g for 5 min.Take the supernatant for future experiment.Add 100 μL ATP detection working fluid to the detection hole and place 5 min at room temperature.Add 50 μL standard solution or sample in per hole then luminescence was detected with CentroLB960 Micro-orifice plate luminescence detector. | Statistical analysis The images of western blot were processed by ImageJ software, and the experimental data were statistically analysed by GraphPad Prism 8.0.1.All the experimental data were tested for normality and homogeneity of variance, and then, independent sample t-test or one-way analysis of variance (one-way ANOVA) was performed.The statistical results were expressed in the form of mean ± standard deviation (mean ± SD), α = 0.05, p < 0.05 indicated that the difference was statistically significant. | Changes of mitochondrial dynamics and function in clinical samples and OGD models To explore the correlation between ischaemic stroke and mitochondrial dynamics, the peripheral blood of three IS patients and three healthy controls were collected to extract mononuclear cells, and then, the morphological characteristics of mitochondria were observed with LSCM.The results showed significant fragmentation of mitochondria in IS group.The mitochondrial aspect ratio (AR) value was significantly lower than that in control, while the mitochondrial fragmentation count (MFC) and sphericity results were opposite to the AR trend.(Figure 1A). The in vitro experimental results also showed that OGD/R treatment had an impact on mitochondrial morphology.The value of form factor (FF) in OGD/R group was significantly lower, while the MFC value of the OGD/R group was significantly higher than control (Figure 1B).After OGD/R, the ultrastructure of mitochondria underwent significant changes, with a significant increase in the proportion of swollen mitochondria and a significant decrease in the proportion of mitochondria with more than 5 cristae (Figure 1C). Mitochondrial dysfunction occurs after OGD/R treatment, characterized by an increase in mitochondrial ROS levels and a decrease in ATP production.In addition, the apoptosis rate of SHSY-5Y cells in the OGD/R group significantly increased, indicating that OGD/R treatment increased neuronal apoptosis (Figure 1D). 
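As a brief aside on quantification: the relative expression values reported in the following sections come from the comparative CT method mentioned in the RT-qPCR description above. A minimal sketch of that 2^-ΔΔCt calculation is shown here; the Ct values are made up for illustration and are not the study's measurements.

```python
def relative_expression(ct_target, ct_reference, ct_target_ctrl, ct_reference_ctrl):
    """Comparative CT (2^-ddCt) method: expression of a target gene relative to a
    reference gene (e.g. beta-actin), normalized to the control condition."""
    d_ct_sample  = ct_target - ct_reference              # dCt, treated sample
    d_ct_control = ct_target_ctrl - ct_reference_ctrl    # dCt, control sample
    dd_ct = d_ct_sample - d_ct_control
    return 2.0 ** (-dd_ct)

# Made-up Ct values: ARMC10 vs. beta-actin in OGD/R-treated vs. control cells.
fold = relative_expression(ct_target=27.8, ct_reference=16.2,
                           ct_target_ctrl=26.1, ct_reference_ctrl=16.0)
print(f"ARMC10 relative expression (OGD/R vs. control): {fold:.2f}")
```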
| Exploring genes that regulate mitochondrial dynamics changes In order to further explore the genes that regulate mitochondrial dynamics changes, RT-qPCR was used to detect the expression of mitochondrial dynamics-related genes in the IS group and the control group.The results showed that the relative expression level of ARMC10 in IS group was significantly lower than that in normal control (p < 0.01), but there was no significant difference in the expression level of Drp1, Mfn1, Mfn2, Fis1 and OPA1 between the two groups (p > 0.05) (Figure S1A).Consistent with the clinical samples, the results of RT-qPCR and western blot showed that ARMC10 expression is significantly reduced after ischaemia-reperfusion (I/R) injury (Figure 2B,C; Figure S1B). | Verify the effect of ARMC10 on mitochondrial dynamics Mitochondrial morphology analysis showed that knocking down ARMC10 resulted in significant fragmentation of mitochondrial morphology.The AR value significantly decreased, while the sphericity and MFC value significantly increased.While overexpression of ARMC10 could reduce mitochondrial disruption caused by OGD/R (Figure 3A).In addition, the ultrastructure | ARMC10 affects mitochondrial function and Neuronal apoptosis Then, we validated that the change of ARMC10 expression level affected mitochondrial function and neuronal apoptosis.Transfection of si-ARMC10 significantly increased ROS production and decreased ATP production, while overexpression of ARMC10 significantly decreased ROS content and increased ATP production (Figure 4A).In addition, silencing ARMC10 could up-regulate the expression of proapoptotic protein Bax and reduce the level of anti-apoptotic protein Bcl-2, while overexpression of ARMC10 could lead to the opposite result (Figure 4B).The results of flow cytometry were consistent with western blot, indicating a significant rise in cell apoptosis in si-ARMC10 group.The cell apoptosis rate of OGD/R + Empty plasmid group was significantly higher, and overexpression of ARMC10 could rescue OGD/R induced cell apoptosis (Figure 4C). | ARMC10 activates Wnt/β -Catenin signal pathway and initiates the expression of downstream target genes Next, we explored whether ARMC10 activate Wnt/β-Catenin signalling pathway by which ARMC10 affects mitochondrial function and leads to neuronal apoptosis.In SH-SY5Y cells, the expression levels of ARMC10 were altered by transfection of si-ARMC10, ARMC10 overexpression plasmids and pCDH empty plasmids, respectively. We hypothesized that knockdown of ARMC10 affected key proteins of Wnt/β-catenin signal pathway, including β-catenin, GSK-3β and p-GSK-3β, which have been reported. 22,23Western blot and RT-qPCR confirmed that the expression of β-catenin and p-GSK-3β decreased significantly, while the expression level of GSK-3β increased significantly.In addition, the results of western blot and RT-qPCR showed that the overexpression of ARMC10 up-regulated the level of βcatenin and p-GSK-3β, and the expression of GSK-3β was reduced at the same time (Figure 5A; Figure S2A).decreased significantly, while after overexpression of ARMC10, the decrease of β-catenin transformation was rescued (Figure 5D). After Wnt/ β-catenin signalling pathway was activated, β-catenin transferred from cytoplasm to nucleus, initiating transcription and translation of downstream target genes. 
24,25Knocking down ARMC10 significantly up-regulated the expression of AXIN2 at both mRNA and protein level, while the expression of AXIN2 decreased significantly after overexpression of ARMC10 (Figure 5B).The expression of c-Myc and CyclinD1 significantly decreased in si-ARMC10 group and up-regulated the expression of c-Myc and CyclinD1 after overexpression of ARMC10 (Figure 5C).and the up-regulation of GSK-3β induced by OGD/R (Figure 7A; | Agonist and inhibitor affect mitochondrial function and neuronal apoptosis by targeting Wnt/β -catenin signal pathway Figure S2C).The results of RT-qPCR and western blot showed that β-catenin nuclear translocation was reduced and AXIN2 expression was increased, while cMyc and CyclinD1 expression was decreased after OGD/R treatment, but the above effects were reversed when ARMC10 was overexpressed (Figure S3). Compared with the control group, the OGD/R group showed a significantly increasing in ROS generation and a significantly decreasing in the level of ATP generation.Relative to the OGD/R + Empty group, OGD/R + OE group showed that ROS production was significantly decreased and ATP production level was significantly increased; OGD/R + LiCl group showed a significantly inhibited in ROS production and a significantly rise in ATP production level compared with OGD/R group (Figure 7B). The expression of apoptotic protein Bax protein was significantly up-regulated and the expression of anti-apoptotic protein Bcl-2 protein was significantly down-regulated in the OGD/R group compared with the control group; the expression of Bax protein was significantly down-regulated and the expression of Bcl-2 protein was significantly up-regulated in the OGD/R + ARMC10-OE group relative to the OGD/R + Empty group.The results are shown in Figure 7C. | DISCUSS ION Mitochondrial dynamics, a key mechanism for mitochondrial quality control, has emerged as an important target for neuronal protection after cerebral ischaemia/reperfusion. 26We found that mitochon- on mitochondrial dynamics in an adult male rat model of stroke and found that OPA1 was up-regulated and cerebral oedema was reduced after cerebral ischaemia. 27This was consistent with our findings.Ischaemic stroke induces an increase in mitochondrial fission, which contributes to increased mitochondrial energy production. However, excessive fission impairs normal mitochondrial function.function. 30,31ATP synthesis and ROS production are often used as important indicators for evaluating mitochondrial function. 32,33r results showed that transfection of si-ARMC10 significantly increased ROS production and decreased ATP production, whereas ischaemia/reperfusion injury. 34Mirra S et al found that ARMC10 is expressed in chicken spinal cord nerve and participates in the regulation of chicken spinal cord nerve development by regulating Wnt/β-catenin signal pathway. 35Therefore, we hypothesized that ARMC10 may be involved in cerebral ischaemia-reperfusion injury by regulating the Wnt/β-catenin signalling pathway. We examined the expression levels of key molecules (β-catenin, GSK-3β, p-GSK-3β) and downstream genes (c-Myc, CyclinD1, AXIN2) of the Wnt/β-catenin signalling pathway, and our results showed that overexpression of ARMC10 activated the Wnt/β-catenin signalling pathway, while knockdown of ARMC10 could inhibit the activity of Wnt/β-catenin signalling pathway.The transfer level of β-catenin from cytoplasm to nucleus was crucial step that Wnt/β-catenin signalling pathway plays an important role in biological functions. 
We found that knockdown of ARMC10 significantly reduced the amount of β-catenin protein in the nucleus, whereas overexpression of ARMC10 significantly increased the amount of β-catenin protein in the nucleus, which further confirmed that ARMC10 could activate the Wnt/β-catenin signalling pathway.In addition, we further explored the relationship between the Wnt/β-catenin signalling pathway and mitochondrial function and neuronal apoptosis using | CON CLUS ION In conclusion, our study demonstrates that ARMC10 regulates mitochondrial dynamics and affects mitochondrial function through the Wnt/β-catenin signalling pathway involved in cerebral Li et al. expounded that OMA1-mediated OPA1 cleavage (S1-OPA1) and then S1-OPA1 Fifty-three patients 2 . 2 | with first-ever IS who attended the First People's Hospital of Zhengzhou City from October 2018 to February 2019 were included in this study, while 53 individuals who underwent health examination at the First Affiliated Hospital of Henan University of Traditional Chinese Medicine were randomly selected as controls.The selected patients had to be admitted to the hospital within 2 days after the occurrence of stroke.IS was diagnosed according to the diagnostic criteria revised by the Fourth National Academic Conference on Cerebrovascular Disease.Those with family history of diabetes mellitus, cardiovascular disease and hypertension were excluded.All study subjects were Han Chinese and not related to each other by blood.The study protocol was approved by the Ethical Committees of Zhengzhou University.All study participants signed an informed consent form.Cell transfection and oxygen-glucose deprivation/re-oxygenation (OGD/R) treatments 2.2.1 | Cell culture SH-SY5Y, a neuroblastoma cell line, Cells were purchased from National Collection of Authenticated Cell Cultures.SH-SY5Y was cultured with MEM/F-12 (1:1, Hyclone, China) containing 10% foetal bovine serum (FBS, BI, Israel) and 1% Streptomycin mixture (Solarbio, China).When the cell density reaches more than 80%, it can be subcultured.Washed cell with PBS twice and digested with 1 mL trypsin-EDTA for 2 min.Collected cells after termination of digestion, centrifuge at 1000 rpm for 5 min.Resuspension cell and placed in a 25 T culture flask bottle, culture it in a constant temperature incubator.Culture conditions: 5% CO 2 , 37°C, humidity 95%. 2. 3 | Extraction and purification of DNA and RNA, RNA reverse transcription 2.3.1 | Extraction of peripheral blood PBMCs Transfer blood from EDTA-Na 2 tube to 15 mL centrifuge tube, adding equal volume of physiological saline.Mixed with 3 mL of human peripheral blood mononuclear cell isolation solution, centrifuge at 2000 rpm for 25 min.Suck out the leukocytes and transfer to new 15 mL centrifuge tube, five-fold volume of PBS solution suspend cells, centrifuge at 1000 rpm for 5 min, abandon supernatant and add 2 mL of 1640 complete medium for Mitotracker Red staining. 2. 3 . 2 | RNA extraction and RT-qPCR Cells were collected and mixed with 1 mL Trizol and left on ice 5 min.Add 0.2 mL chloroform per 1 mL Trizol and place 3 min on ice.Removed upper water phase into new tube after 4°C 12,000×g centrifugal 15 min, mixed with 500 μL isopropyl alcohol and set on ice 10 min and centrifuged again.Used 75% ethanol wash subside twice, dry it and add RNase-free water.Detect RNA by NanoDrop 2000 spectrophotometer.First-strand cDNA was synthesized using Reverse transcriptase kit (YEASEN, China). 2. 4 . 
1 | Nuclear/cytoplasmic separation protein extraction Cytoplasmic protein extraction reagents A and B were added, respectively, vortex to fully disperse the cell precipitation, ice bath.Centrifugation at 12,000-16,000×g for 5 min at 4°C, the supernatant obtained was cytoplasmic protein, transferred to a new EP tube and temporarily placed on ice.Cytoplasmic protein extraction reagent was added into precipitate, vortex 15 s per 15 min, centrifuged again after 40 min.Remove supernatant into new tube, that is, the nuclear protein.Total protein was extracted with RIPA buffer(high) (Solarbio, China) and PMSF (Solarbio, China) using BCA protein concentration TA B L E 1 Primer sequences of RT-qPCR. Intracellular ROS was measured by means of DCFH-DA fluorescent probe (Reactive Oxygen Species Assay Kit, Beyotime Biotechnology, China).Cells were collected and washed with PBS and added 1 mL diluted DCFH-DA; positive control group was added with 1 mL diluted group.After incubate cells in 37°C incubator 20 min, wash cells with PBS three times and detect DCF on the flow cytometry (BD, America).Group was added to the positive control wells.2.5.3 | Apoptosis assaysApoptosis was evaluated using the Annexin V-FITC/PI Apoptosis Detection kit(Beyotime Biotechnology, China).Collected cells and washed with PBS, and resuspended in 500 μL binding buffer.10 × 10 4 suspended cells were centrifuged by 1300 rpm for 5 min, the supernatant was discarded and mixed with 195 μL Annexin V-FITC binding solution.Add 5 μL Annexin V-FITC and 10 μL propidium iodide (PI) staining solution in turn and mix gently.Incubate 20 min without light at room temperature and shake every other 5 min during the period to improve dyeing efficiency.Cold storage away from light, flow detection was completed within 1 h. 1 Changes of mitochondrial dynamics and function in clinical samples and OGD models (*p < 0.05, **p < 0.01, ***p < 0.001).(A) The mitochondrial morphology of PBMCs was compared between IS group and control group with LSCM (Mitotracker Red was used for mitochondrial staining).(B)Comparison of mitochondrial morphology between control group and OGD/R group (Mitotracker Red was used for mitochondrial staining).(C) The changes of mitochondrial ultrastructure after OGD/R treatment were observed by electron microscopy.(D) After OGD/R, flow cytometry was used to detect the level of ROS and apoptosis, ATP detection with Microplate Luminometer.ofmitochondria also changed significantly after knockdown of ARMC10, showing that the proportion of swollen mitochondria increased significantly and the number of crista decreased significantly.Compared with OGD/R + ARMC10 Empty, the proportion of mitochondrial swelling was significantly reduced, and the number of crista was significantly increased after overexpression of ARMC10 (Figure3B). 
The transfer level of β-catenin from cytoplasm to nucleus is crucial step that Wnt/β-catenin signalling pathway plays an important role in biological functions.Therefore, SH-SY5Y cells were collected for nucleocytoplasmic separation and extraction of nuclear protein, and then, the transfer level of β-catenin protein from cytoplasm to nucleus was detected by western blot.After knocking down ARMC10, the amount of β-catenin transferred from cytoplasm to nucleus F I G U R E 2 Exploring genes that regulate mitochondrial dynamics changes.(*p < 0.05, **p < 0.01, ***p < 0.001).(A) RT-qPCR detect the expression of ARMC10 in peripheral blood of the control group and the IS group.(B) RT-qPCR detect ARMC10 expression levels of SH-SY5Y in control group and OGD/R group.(C) ARMC10 expression levels of SH-SY5Y in ODG/R group and control group.F I G U R E 3 Verify the effect of ARMC10 on mitochondrial dynamics (*p < 0.05, **p < 0.01, ***p < 0.001).(A) Effect of ARMC10 expression on mitochondrial morphology was observed by LSCM (Mitotracker Red was used for mitochondrial staining).(B) The effect of ARMC10 expression on mitochondrial ultrastructure was observed by electron microscopy. LiCl and XAV-939 are commonly used agonist and inhibitor targeting the Wnt/β-catenin signal pathway.Therefore, LiCl and XAV-939 were used to detect their effects on mitochondrial function and neuronal apoptosis.Protein expression levels of βcatenin, GSK-3β and p-GSK-3β were detected by western blot, and the results showed that LiCl treatment significantly up-regulated the protein expression of β-catenin and p-GSK-3β and inhibited the expression of GSK-3β; compared with DMSO, the protein expression of β-catenin and p-GSK-3β was down-regulated and the expression of GSK-3β was significantly increased in XAV-939 group (Figure6A; FigureS2B).Expression level of CyclinD1 and c-Myc increased significantly by treatment with LiCl, XAV-939 could down-regulate the level of CyclinD1 and c-Myc (Figure6B).The application of LiCl attenuated the increasing of ROS production and decreasing of ATP production induced by ARMC10 silencing.The use of pathway inhibitors showed the same effect (Figure6C).After LiCl treatment, the expression level of pro-apoptotic protein Bax was significantly down-regulated, and the expression level of anti-apoptotic protein Bcl-2 was significantly up-regulated, and opposite results after the treatment of XAV-939 were observed (Figure6D). F I G U R E 4 3 . 
7 | ARMC10 affects mitochondrial function and Neuronal apoptosis (*p < 0.05, **p < 0.01, ***p < 0.001).(A) Flow cytometry detect the effect of ARMC10 expression on mitochondrial function.(B) Western blot detect the effect of ARMC10 expression on apoptosis proteins.(C) Flow cytometry was used to detect the effect of ARMC10 expression on cell apoptosis.In OGD/R model, ARMC10 regulates Wnt/β -catenin signalling pathway affecting mitochondrial function and neuronal apoptosis Finally, we validated that ARMC10 activates Wnt/β-catenin pathway and affects mitochondrial function and neuronal apoptosis in OGD/R cell model.Under the condition of OGD/R, the expression of β-catenin and p-GSK-3β decreased significantly, while the expression of GSK-3β increased significantly, and the overexpression of ARCM10 rescued the down-regulation of β-catenin and p-GSK-3β drial dynamics were dysregulated in clinical IS samples and in the OGD/R cell model, mitochondrial morphology of PBMCs from IS patients exhibited significant fragmentation changes.Similarly, we observed mitochondrial fragmentation, mitochondrial swelling and F I G U R E 5 ARMC10 activates Wnt/β-Catenin signal pathway and initiates the expression of downstream target genes (*p < 0.05, **p < 0.01, ***p < 0.001).(A) RT-qPCR and western blot detected protein expression levels of key molecules in Wnt/β-catenin signalling pathway.(B) RT-qPCR and western blot detected expression of AXIN2.(C) RT-qPCR and western blot to determine the changes of downstream target genes of Wnt/β-catenin signalling pathway under different conditions.(D) The level of β-catenin protein in the nucleus was detected by nucleoplasm separation.a reduction in cristae in the OGD/R cell model.The ultrastructure of mitochondria changed markedly after OGD/R: the proportion of swollen mitochondria increased markedly, and the proportion of mitochondria with cristae more than five decreased markedly.Zhang et al. investigated the effects of 2-week exercise preconditioning Dysfunctional mitochondrial dynamics can impair mitochondrial distribution and transport in neurons, thereby shortening energy supply and inducing apoptosis.Restoring the balance between mitochondrial fusion and fission is beneficial for recovery from ischaemic stroke.Mitochondrial dynamics are recognized as a potential therapeutic target for ischaemic stroke.So, which genes regulate mitochondrial dynamics and influence mitochondrial function during the pathology of ischaemic stroke?Chen et al. found that in the transient global cerebral ischaemia model of rat, hypoglycaemia and hypoxia can depress the phosphorylation of Drp1 at Ser616 in the CA1 region of the hippocampus, which leads to increased mitochondrial division.28Li et al. constructed a Glucose-Oxygen Deprivation/Reperfusion (OGD/R) model in the PC-12 cell line and found that the expression of the mitochondrial fusion gene Mfn2 was reduced, and the mitochondrial morphology was fragmented. 
29Therefore, several genes which control mitochondrial fission/fusion dynamics such as Drp1, Mfn1, Mfn2, F I G U R E 6 Agonist and inhibitor affect mitochondrial function and neuronal apoptosis by targeting Wnt/β-Catenin signal pathway (*p < 0.05, **p < 0.01, ***p < 0.001).(A) Western blot detected protein expression levels of key molecules in Wnt/β-catenin signalling pathway.(B) Western blot was used to detect the expression of downstream target genes of Wnt/β-catenin signalling pathway.(C) FCM was used to detect effects of LiCl (20 mM) and XAV-939 (10 μM) on mitochondrial function.(D) Western blot was used to detect the effect of ARMC10 expression on apoptosis proteins.Fis1, OPA1 and ARMC10 were selected.The results showed that the relative expression level of ARMC10 in IS patients was significantly lower than that in normal control (p < 0.01), but there was no significant difference in the expression levels of Drp1, Mfn1, Mfn2, Fis1 and OPA1 between the two groups (p > 0.05).Based on literature review, there was little relevant study on the relationship between ARMC10 and ischaemic stroke currently.ARMC10, as a gene associated with mitochondrial dynamics that may be involved in the pathologic process of IS, has attracted our attention and interest.First of all, we explored whether ARMC10 affects mitochondria dynamic.Morphology analysis showed that knockdown of ARMC10 resulted in significant fragmentation of mitochondrial morphology, the AR value was significantly reduced, while the sphericity and MFC value were significantly increased.In addition, overexpression of ARMC10 reduced mitochondrial disruption.In addition, the ultrastructure of mitochondria also changed significantly after knockdown of ARMC10, which was manifested as a significant increase in the proportion of swollen mitochondria and a significant decrease in the number of crista.Overexpression of ARMC10 resulted in a significant decrease in the proportion of swollen mitochondria and a significant increase in the number of cristae.In conclusion, knockdown of ARMC10 in SH-SY5Y cells significantly impaired mitochondrial dynamics, while overexpression of ARMC10 rescued the impairment of mitochondrial dynamics.Mitochondrial dynamics is an important mechanism for maintaining mitochondrial homeostasis and is closely related to mitochondrial F I G U R E 7 In OGD/R model, ARMC10 regulates Wnt/β-Catenin signalling pathway affecting mitochondrial function and neuronal apoptosis (*p < 0.05, **p < 0.01, ***p < 0.001).(A) The key molecules of Wnt/β-catenin signalling pathway were detected by western blot.(B) FCM was used to detect the function of different groups of mitochondria.(C) Expression of apoptosis-related proteins under different conditions.(A and C shares the same batch of GAPDH bands in their WB results) overexpression of ARMC10 significantly decreased ROS content and ATP production.Mitochondrial fusion/fission dynamics broadly affect neuronal function, including neuronal survival and plasticity.Mitochondrial dysfunction can cause apoptosis through the release of Cytochrome-c (Cyt-C) or pro-apoptotic proteins.Our results showed that silencing ARMC10 could up-regulate the expression of pro-apoptotic protein Bax and reduce the level of anti-apoptotic protein Bcl-2, while overexpression of ARMC10 can lead to the opposite result.The cell apoptosis rate of OGD/R + Empty plasmid group was significantly higher, and overexpression of ARMC10 could rescue OGD/R induced cell apoptosis.Serrat et al. 
Serrat et al. found that the ARMC10 protein is localized in the outer mitochondrial membrane, using immunofluorescence staining and molecular labelling techniques. In addition, an Alzheimer's disease (AD) model was constructed in mouse neuronal cells, and it was found that overexpression of ARMC10 reduced the number of mitochondria in neuronal cells in the exercise state and promoted mitochondrial aggregation; at the same time, overexpression of ARMC10 rescued mitochondrial fragmentation, reduced β-amyloid (Aβ)-induced neuronal apoptosis and exerted neuroprotective effects [20]. This is consistent with our findings. From this, we concluded that the ARMC10 protein is localized in the outer mitochondrial membrane, and that overexpression of ARMC10 can promote mitochondrial fusion and rescue OGD/R injury-induced mitochondrial fragmentation and destruction of the mitochondrial ultrastructure, thereby protecting mitochondrial function and attenuating neuronal apoptosis. The Wnt/β-catenin signal pathway participates in the regulation of developmental processes and CNS maturation, including the proliferation, differentiation and migration of neuronal cells, axon growth and synaptogenesis, and plays an important role in inhibiting cell apoptosis and promoting cell survival. Liu et al. showed that enhancement of the Wnt/β-catenin signalling pathway could protect hepatic mitochondrial function, reduce mitochondria-mediated endogenous hepatocyte apoptosis and ultimately attenuate hepatic injury. We used inhibitors and agonists of the Wnt/β-catenin signalling pathway, and the results showed that activation of the Wnt/β-catenin signalling pathway could protect mitochondrial function and reduce neuronal apoptosis. Inhibition of the Wnt/β-catenin signalling pathway can lead to mitochondrial dysfunction and promote neuronal apoptosis. Finally, we constructed an OGD/R cell model using the SH-SY5Y cell line to further validate the role of the ARMC10-dependent Wnt/β-catenin signalling pathway in brain I/R injury. We examined the expression levels of key molecules and downstream genes of the Wnt/β-catenin signalling pathway, as well as mitochondrial function and neuronal apoptosis. The results showed that OGD/R treatment inhibited the activation of the Wnt/β-catenin signalling pathway and decreased the expression of downstream target genes, while overexpression of ARMC10 had a rescue effect. In addition, we found that OGD/R treatment induced a decrease in ATP production and an increase in ROS production, aggravating neuronal apoptosis. Overexpression of ARMC10 rescued the mitochondrial dysfunction induced by OGD/R injury, reduced neuronal apoptosis and exerted neuroprotective effects after IS. Interestingly, Wnt/β-catenin signalling pathway agonists had mitochondrial protective effects similar to those of ARMC10 overexpression. Recent studies have shown that activation of the Wnt/β-catenin signalling pathway improves neurological function in Parkinson's disease rats by inhibiting mitochondrial oxidative stress, increasing mitochondrial membrane potential, inhibiting apoptosis and promoting mitochondrial biogenesis [36]. Consistent with previous studies, we also found that the protective effect of ARMC10 against brain I/R injury is regulated by the Wnt/β-catenin signalling pathway. Our results suggest that the Wnt/β-catenin signalling pathway may serve as a promising candidate for neuroprotective therapy. There are some limitations to this study. First, the clinical sample size is small and there may be some confounding factors, which may cause bias in the statistical results.
It is necessary to further expand the sample size and investigate the changes in ARMC10 expression in order to provide data for clinical diagnosis and prognosis. Secondly, this study investigated the mechanism by which ARMC10 affects mitochondrial dynamics, mitochondrial function and neuronal apoptosis in ischaemic stroke only in a cellular model and lacks corroboration from in vivo studies. The results of the cellular experiments need to be further validated by constructing a mouse MCAO model.
Posttranscriptional Regulation of the Human ABCG2 Multidrug Transporter Protein by Artificial Mirtrons ABCG2 is a membrane transporter protein that has been associated with the multidrug resistance phenotype and tumor development. Additionally, it is expressed in various stem cells, providing cellular protection against endobiotics and xenobiotics. In this study, we designed artificial mirtrons to regulate ABCG2 expression posttranscriptionally. Applying EGFP as a host gene, we could achieve efficient silencing not only in luciferase reporter systems but also at the ABCG2 protein level. Moreover, we observed important new sequential-functional features of the designed mirtrons. A mismatch at the first position of the mirtron-derived small RNA resulted in better silencing than full complementarity, while the investigated middle and 3′ mismatches did not enhance silencing. These latter small RNAs most probably operated via non-seed-specific translational inhibition in luciferase assays. Additionally, we found that a mismatch in the first position did not abolish target mRNA decay, whereas an additional mismatch in the third position did. Moreover, a one-nucleotide mismatch in the seed region did not impair efficient silencing at the protein level, providing the possibility to silence targets carrying single nucleotide polymorphisms or mutations. Taken together, we believe that apart from establishing an efficient ABCG2 silencing system, our design pipeline and results on sequential-functional features are beneficial for developing artificial mirtrons for other targets. Introduction The human ABCG2 protein is one of the 48 known members of the human ATP-binding cassette (ABC) protein family. It was originally cloned from the placenta and cells selected for multidrug resistance [1][2][3], but according to our present knowledge, ABCG2 is also expressed in various differentiated tissues, including ovary, kidney, liver, breast epithelial cells, intestinal epithelia, and the blood-brain barrier [4]. This multidrug transporter protein provides resistance against various endo- and xenobiotics and is hypothesized to play a physiological role in the chemoimmunity defense system [5]. The ABCG2 protein has also been identified in many types of tissue-derived stem cells and in human embryonic stem cell lines (hESC), and its role is presumably the protection against different toxins and stress [6,7]. Moreover, its expression was shown to be a reliable marker of the "side-population phenotype" [8]; therefore, investigating its role and function in various stem cells is still an important issue. There are several model systems where ABCG2 is overexpressed or knocked out [9][10][11]; however, a model where the function of ABCG2 is turned off in a carefully controlled and reversible manner is lacking. MicroRNA (miRNA)-based regulation could present a versatile platform for such purposes, providing posttranscriptional fine-tuning of gene expression and thereby allowing careful study of protein function. The majority of miRNAs are processed via the canonical miRNA biogenesis pathway. The ~20-24 nucleotide (nt) long, single-stranded mature RNA derives from an imperfect RNA hairpin structure, which is usually transcribed from the genome by a Pol II polymerase [12,13]. This primary transcript (pri-miRNA) is then cleaved by the nuclear RNase III-like enzyme Drosha (assisted by its partner protein, DGCR8), releasing a ~60-70 nt long hairpin (called pre-miRNA; [14][15][16][17]).
The pre-miRNA is then transported from the nucleus to the cytoplasm by the Exportin-5 shuttle system [18][19][20][21]. In the cytoplasm, Dicer, another RNase III-like enzyme, cleaves the pre-miRNA, liberating the double-stranded miRNA:miRNA* molecule [22,23]. Dicer acts as a molecular ruler, and the cleavage site can be measured either from the 5 -or the 3 -end of the pre-miRNA, depending on the stability of the 5 -end [24][25][26][27]. During further processing, one strand (called guide strand) of the liberated small RNA duplex is incorporated into an Argonaute (AGO) protein-containing complex, forming a mature RISC (RNA induced silencing complex) and guiding it to the target transcript. Up to our present knowledge, strand selection is mainly determined by thermodynamic characteristics (strands with low thermodynamic stability at their 5 -end are favorable) and the 5 nucleotide identity (A and U are favorable [28]). The regulatory effect of miRNAs is usually manifested by the destabilization/degradation and/or translational inhibition of the target mRNA molecule via the partial base pairing of the miRNA and the 3 -untranslated region (3 -UTR) of the mRNA [29,30]. Non-canonical miRNA biogenesis pathways could bypass certain steps of the canonical process, typically one or even both of the two cleavage steps [31][32][33][34]. Mirtrons, which are generated in a Drosha-independent pathway, represent the most prominent group of the alternatively processed miRNAs. They reside in short introns, which are essentially equivalent to the precursor form (pre-miRNA) of the given miRNA. Thus, the first step of the mirtronic miRNA processing is different from the canonical one: the pre-miRNA is liberated from the primary transcript by the splicing machinery instead of the Drosha/DGCR8 complex. The mirtron pathway was first described in Drosophila melanogaster and Caenorhabditis elegans [35,36], and later, it was experimentally demonstrated to be operational also in mammals [37][38][39]. Mirtrons, owing to their special features, are promising genetic tools for the regulation of genes of interest. They could be expressed by Pol II promoters; therefore, their expression can be spatiotemporally regulated, while their maturation does not interfere in the nucleus with the endogenous canonical miRNA maturation pathway [37,40]. There are several articles investigating the potential of artificially designed mirtrons as silencers and showing additional advantages, such as embedding multiple artificial mirtrons in a gene for delivery and investigating various therapeutic potentials [40][41][42][43]. In this study, we present the design of artificial mirtrons for silencing the ABCG2 multidrug transporter protein. Testing several potential candidates, we could successfully silence targets in luciferase reporter assays. Moreover, we could also effectively reduce the protein level of ABCG2. In addition, we observed important sequential-functional features of the designed mirtrons. Changing the complementarity to the target in various positions revealed the importance of the middle and 3 region in more efficient repression, while one mismatch in the first position or the seed region did not abolish efficient silencing. The various changes also influenced the balance between translation inhibition and mRNA destabilization. As an important aspect, we also point out to consider the presence of nucleotide polymorphisms when designing mirtrons against a particular gene of interest. 
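As a brief aside on the strand-selection rules recalled above (lower thermodynamic stability of the 5′ end and a 5′ A/U are favoured for the guide strand), the following toy scoring function is given for illustration only; it is not part of the authors' pipeline, and the GC-content proxy for end stability is a deliberate simplification.

```python
# Toy illustration (not the authors' tool): score which strand of a miRNA/miRNA*
# duplex is more likely to be loaded into RISC, using the two rules mentioned
# above: (i) lower thermodynamic stability at the 5' end, approximated here by
# the GC content of the first few bases, and (ii) a 5' A or U is favoured.

def five_prime_score(strand, n_end=4):
    end = strand[:n_end].upper().replace("T", "U")
    gc_penalty = sum(base in "GC" for base in end)   # more GC -> more stable -> less favoured
    au_bonus = 1 if end[:1] in ("A", "U") else 0     # favoured 5' nucleotide identity
    return au_bonus - gc_penalty

def predicted_guide(arm_5p, arm_3p):
    """Return the strand predicted to act as guide (higher score wins)."""
    return arm_5p if five_prime_score(arm_5p) >= five_prime_score(arm_3p) else arm_3p
```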
Bioinformatics, Statistical Analysis During mirtron design, we used several prediction programs to select promising artificial mirtron sequences for further experimental investigations. After designing the different mirtron sequences, we predicted their splicing from the EGFPm coding context. We used the SpliceDB, Softberry program for splicing donor and acceptor site predictions [44,45] and the Human Splice finder for branch point analysis [46]. For structural and delta G (Gibbs free energy) predictions, we used the mFold program [47]. We selected four artificial mirtrons for subsequent investigations, with variable structure and prediction parameters (Supplementary Figures S1-S4). Regarding experimental studies, experiments with three parallels were repeated at least twice. For statistical analysis, a two-sided Student's t-test was performed. Plasmid Constructs For the expression of artificial mirtrons, oligonucleotides corresponding to the sense and antisense sequence of the specific mirtrons were hybridized to form a double-stranded DNA, then inserted as an artificial intron into the PvuII site of EGFPm by blunt-end ligation [39,48]. As a control, we used the third intron of the mouse IgCε gene [49] or a canonical intronic miRNA (mir-33b), as we described earlier [39]. For luciferase constructs, hybridized double-stranded oligonucleotides for the wild type and the seed region mutated target sequences of corresponding mirtrons were ligated between the XhoI/NotI restriction sites of Renilla luciferase 3 -UTR in the psiCHECK2 vector (Promega, Madison, WI, USA). For ABCG2 experiments, a previously established pCDNA3.1_ABCG2 plasmid was used [50]. All plasmid constructs were verified by Sanger sequencing. Cell Cultures and Manipulation HeLa cell lines were maintained in Dulbecco's modified Eagle's medium (DMEM, cat. #31966047) supplemented with 10% of fetal bovine serum (cat. #10500064), 1% of L-glutamine (cat. #25030081), and 1% of penicillin/streptomycin (cat. # 15070063, all from Thermofisher Scientific) using standard cell culture methodology. Cells were transfected with FuGENE ® HD reagent (Roche Applied Science, Penzberg, Germany) in a 6-well or 24-well plate, according to the manufacturer's instruction. To stain the cell nuclei, 10 µM of the Hoechst 33342 dye was used according to the standard protocol. EGFP and Hoechst fluorescence was detected by an IX51 fluorescence microscope (Olympus, Shinjuku City, Tokyo, Japan). To establish cell lines stably expressing EGFPm-mirtron constructs, we applied the Sleeping Beauty transposon-based gene delivery technology as described earlier [51]. Following transfection, cells were sorted for EGFP positivity at day 8 and subsequently at day 15 using a FACS Aria High Speed Cell Sorter (Beckton-Dickinson, Franklin Lake, NJ, USA) to obtain homogenously expressing cell populations. Stable expression was also checked by subsequent FACS analyses. The established cell lines were further used for mRNA level and western blot experiments. RNA Analysis Total RNA was isolated from cultured cells using Trizol reagent (Invitrogen, Waltham, MA, USA). To remove genomic DNA contaminations, RNA samples were treated with DNaseI (New England Biolabs, Ipswich, MA, USA) at 37 • C for 1 h. For cDNA preparations, 1 µg of total RNA was reverse transcribed with random primers using High Capacity cDNA Reverse Transcription Kit (Thermofisher Scientific, Waltham, MA, USA). 
For splicing experiments, a polymerase chain reaction was performed on cDNA prepared from transiently transfected cells (1000 ng of artificial mirtron expressing plasmids were transfected into cells in a 6-well plate), using the following primers: 5 -TTCTTCAAGTCCGCCATGCC (forward) and 5 -ACTTGTACAGCTCGTCCATGCCG (reverse). To carry out real-time quantitative PCR (qPCR), we used specific TaqMan ® assays and reagents. Reactions were performed on StepOne™ or StepOnePlus™ platforms, according to the manufacturer's instructions (Thermofisher Scientific). For relative quantitation, the ∆∆Ct method was applied, and we used the RPLP0 mRNA (catalog number: Hs9999902_m1) as endogenous control. For Renilla mRNA level experiments, cells were transfected with 500 ng of respective sensor/mutant sensor expressing plasmid in a 6-well plate. For Renilla luciferase mRNA detection, a custom-made TaqMan assay was used, containing the following primers: 5 -CGAGTGGCCTGACATCGA (forward), 5 -ACGAAGAAGTTATTCTCAAGCACCAT (reverse) and 5 -CAGGGCGATATCCTC (probe, with 5 -FAM and 3 -MGB labeled). For firefly luciferase mRNA detection, the following custom-made assay was used: 5 -GCTTCGAGG-AGGAGCTGTTC (forward), 5 -CCAGCAGGGCAGACTGAATTT (reverse) and 5 -CAGCC-TGCAAGACTAC (probe, with 5 -FAM and 3 -MGB labeled). For ABCG2 mRNA level measurements, 1000 ng of ABCG2 expressing and 500 ng of psiCHECK2 plasmids were co-transfected into cells, seeded in 6-well plates. For qPCR analysis of ABCG2 mRNA, a pre-developed assay was used (catalog number: Hs01053790_m1). Luciferase Assay In each experiment, 300 ng of the mirtron/control expressing plasmids were cotransfected with 15 ng of sensor or mutant sensor luciferase plasmids into cells, seeded on a 24-well plate. Sensors containing two copies of the respective target site were cloned downstream of Renilla luciferase in the psiCHECK2 vector. Mutant sensors differ in 3 mismatched nucleotides in the predicted miRNA seed region. Luciferase activity was measured at 48 h posttransfection by a 2030 Multilabel Reader luminometer (PerkinElmer, Waltham, MA, USA) using the Dual-Luciferase Reporter Assay System (Promega). Signal specific for firefly luciferase expressed from the same psiCHECK2 plasmid was used to normalize for transfection efficiency. To fully exclude any non-specific effects, luciferase activities of the sensors were also measured in the presence of an unrelated miRNA (hsamir-33b) as non-cognate control. Western Blot (Immunoblot) Artificial mirtron and control expressing stable HeLa cell lines were transfected with 500 ng ABCG2 expressing plasmid in a 6-well plate. Cells were lysed and collected 48 h after transfection. After briefly sonicated, samples were run on 7.5% acrylamide gel, then electroblotted onto PVDF membrane (BioRad, Hercules, CA, USA). Membranes were blocked by 5% milk/TBS-Tween and incubated with mouse monoclonal BXP-21 antibody (kindly provided by Dr. George Scheffer) overnight at 4 • C for ABCG2 detection. Next, membranes were incubated in HRP-conjugated Anti-Mouse IgG secondary antibody solution (Jackson's, cat # 715-035-151) for 1 h at room temperature. For signal detection, an ECL reagent (Thermofisher Scientific) was used, and the membranes were exposed to Agfa films. Monoclonal Anti-β-Actin-Peroxidase antibody (Sigma, cat. #A3854) was used for β-actin detection as a control. Experiments were repeated at least three times, and one representative experiment is shown in the figures. 
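The relative quantitation described above (ΔΔCt with RPLP0 as endogenous control) amounts to the following arithmetic. This is a minimal sketch with illustrative Ct values, assuming the Ct values have already been exported from the qPCR software; it is not the authors' analysis script.

```python
# Minimal sketch of the 2^(-ddCt) relative quantitation described above.
# Ct values below are illustrative; RPLP0 serves as the endogenous control.

def fold_change(ct_target_sample, ct_rplp0_sample, ct_target_control, ct_rplp0_control):
    d_ct_sample = ct_target_sample - ct_rplp0_sample
    d_ct_control = ct_target_control - ct_rplp0_control
    dd_ct = d_ct_sample - d_ct_control
    return 2 ** (-dd_ct)

# Example: ABCG2 mRNA in an artmir-expressing line vs. the intron control line.
print(fold_change(ct_target_sample=26.8, ct_rplp0_sample=19.1,
                  ct_target_control=25.9, ct_rplp0_control=19.0))
# ~0.57, i.e. roughly a 43% reduction relative to the control
```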
Expression levels were quantified by densitometry of the scanned images using the ImageJ software. Artificial Mirtron Design Although there are numerous advantages of mirtrons as silencers, for the process of artificial mirtron design, some criteria should be considered. First, regarding splicing, a GU 5 -end as 5 splicing donor site and a (C)AG 3 -end as 3 acceptor site is advantageous. Then, a functional polypyrimidine tract and a branch point should be placed somewhere in the mirtron sequence ( Figure 1A). Additionally, as was mentioned above, there are some features to be considered for proper processing by Dicer and for the loading of the appropriate strand of the small RNA duplex into a functional RISC. Besides, there are some other concerns regarding efficient silencing, such as the complementarity of the small RNA to its target. Theoretically, the guide strand can be placed either in the 5 -or in the 3 -arm of a mirtron. However, in all cases, we chose the 5 -arm because, in this case, the most important part of the potential guide RNA, the 5 -end and therefore the seed region is well defined by splicing, avoiding potential heterogeneous ends, resulted by Dicer processing. Hence, it is easier to plan target specificity and influence strand selection. Concerning the branch point, we analyzed several mammalian mirtrons and found that some of them have their potential branch point in the loop region, while some have it in the 3 -arm region. We decided to position it in the loop region while the polypyrimidine tract was placed in the 3 -arm. We used the mmu-mir-1224 mirtron loop sequence as the loop of our artificial mirtrons since it had the best scores for branch point and splicing prediction analysis during the design process ( Figure 1B). Regarding target site selection, we selected target sequences from the coding region of ABCG2 ( Figure 1C) since it was previously shown to be applicable [40] and because our earlier effort to target 3 UTR did not result in efficient silencing (data not shown). During the design process, we selected AC dinucleotides in the ABCG2 cDNA beside pyrimidine-rich sequences in the 5 neighborhood to be the potential target of a 5 -arm derived mirtronic small RNA. Thus, the 5 -arm of the artificial mirtron (artmir) is complementary to the target site, the loop region contains the branch point, and the 3 -arm has the polypyrimidine tract ( Figure 1B). We designed several sequence variants to test complementarity/silencing ability correlations and chose candidates for experimental investigations by bioinformatic analysis. Here we show four artificial mirtron variants targeting two constitutive exons as potential target sites: art1 and art2 for target I (residing in exon 12), and art3 and art4 for target II (residing in exon 13; Figure 1C). Investigating Splicing Ability of the Selected Artificial Mirtrons For the expression of artificial mirtrons, we used our earlier established expression system [39]. A modified EGFP sequence (EGFPm) was used, of which the coding region was separated into two exons. The artificial mirtrons were cloned as introns between the two exons. Therefore, EGFP fluorescence indicates accurate splicing, and artificial mirtron expression can be easily monitored (Figure 2A). In the case of all four artmirs, we observed quite strong EGFP expression in the transfected cells, suggesting proper splicing ( Figure 2B and Supplementary Figure S5). Investigation of splicing by RT-PCR indeed revealed successful, very efficient splicing. 
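Returning for a moment to the target-selection step described above (an AC dinucleotide with a pyrimidine-rich stretch in its 5′ neighbourhood, the 5′ arm of the mirtron being made complementary to the site), the scan can be written compactly. The sketch below is illustrative only: the window size and pyrimidine threshold are arbitrary choices, not values taken from the paper, and the example sequence is not the ABCG2 cDNA.

```python
# Rough sketch of the target-site scan described above (thresholds are arbitrary).
# Sites ending in 'AC' give a candidate guide (reverse complement of the target)
# that begins with GT in DNA, i.e. GU in the spliced RNA, as the splice donor requires.

COMPLEMENT = str.maketrans("ACGT", "TGCA")

def revcomp(seq):
    return seq.translate(COMPLEMENT)[::-1]

def candidate_sites(cdna, guide_len=22, window=10, min_pyrimidine_frac=0.6):
    cdna = cdna.upper()
    hits = []
    for i in range(guide_len, len(cdna) - 1):
        if cdna[i:i + 2] != "AC":
            continue
        target = cdna[i + 2 - guide_len:i + 2]        # target site ends with ...AC
        upstream = cdna[i - window:i]                 # 5' neighbourhood of the AC dinucleotide
        pyr_frac = sum(b in "CT" for b in upstream) / window
        if pyr_frac >= min_pyrimidine_frac:
            hits.append((i + 2 - guide_len, target, revcomp(target)))
    return hits

# Example (illustrative sequence, not the actual ABCG2 cDNA):
# for pos, target, guide in candidate_sites("TTCTTCCTTACGTACCTTTCACGGA" * 3):
#     print(pos, target, guide)
```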
We detected a small amount of unspliced mRNA form only in the case of art1 ( Figure 2C). Splicing accuracy was confirmed by sequencing of the gel-purified PCR products (Supplementary Figure S6). Our experimental results were consistent with our splicing predictions of the design phase. We chose sequences with very high values of splicing donor, acceptor and branch point predictions, and among them, art1 had the lowest values (see Supplementary Figures S1-S4). Since the expression cassette in our plasmid was located inside a Sleeping Beauty transposon (Figure 2A), co-transfection with a transposase expressing plasmid allowed us to make stable cell lines by sorting the cells based on the EGFP signal. We successfully established all four artificial mirtron-expressing stable cell lines for further experiments. Functional Testing of Artificial Mirtrons by Luciferase Reporter Assay Since all of the examined artmirs could be effectively spliced out from the host gene, we tested their ability to silence gene expression. We used luciferase sensor assays, for which two copies of the particular target were cloned downstream of the Renilla luciferase coding region. Besides the fully complementary seed region containing sensor, we also used a mutant sensor bearing 3 mismatches in the seed region to check seed region specificity ( Figure 3A). Investigating Splicing Ability of the Selected Artificial Mirtrons For the expression of artificial mirtrons, we used our earlier established expression system [39]. A modified EGFP sequence (EGFPm) was used, of which the coding region was separated into two exons. The artificial mirtrons were cloned as introns between the two exons. Therefore, EGFP fluorescence indicates accurate splicing, and artificial mirtron expression can be easily monitored (Figure 2A). For target I., we detected downregulation of both sensor types in the case of both artmirs compared to a non-cognate control (Figure 3B left). Mutant sensors were silenced at a similar extent, by~30%. Regarding the sensor (having a fully complementary seed region), both artmirs could achieve repression, but art2 had a much higher silencing capacity on it. The extent of the repression was~43% for art1, whereas~87% for art2. However, if we compare the downregulation of the sensor to the mutant sensor, we see a significant difference only in the case of art2 (Figure 3C left). The knockdown efficiency, in this case, was also high, more than 80%. For target II., we detected downregulation of both sensor types in the case of the corresponding artmirs compared to a non-cognate control ( Figure 3B right). Silencing efficiencies were similar (~30%) among art3 and art4 in the case of both sensor types. However, a comparison of the sensor repression to the mutant sensor repression indicated no differences between artmirs and the non-cognate control ( Figure 3C right). As mentioned above, we designed artmirs to have different sequence complementarity to the target ( Figure 3A), and we wanted to test whether there is a difference in their mechanism of silencing: is the observed luciferase repression realized by the cleavage/destabilization of the target mRNA or via translational repression? For this, we measured the mRNA level of Renilla luciferase by quantitative PCR. In the case of target I., no significant decrease could be detected in the mutant sensor containing Renilla luciferase mRNA. 
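The sensor-versus-mutant-sensor comparison used above reduces to simple arithmetic on the normalized luciferase readings. Below is a minimal sketch with made-up triplicate values (the two-sided Student's t-test follows the statistics section); the numbers and group names are illustrative, not measured data.

```python
# Minimal sketch of the luciferase normalization described above, with made-up numbers.
import numpy as np
from scipy import stats

def renilla_over_firefly(renilla, firefly):
    return np.asarray(renilla, dtype=float) / np.asarray(firefly, dtype=float)

# Illustrative triplicates (arbitrary units): artmir vs. non-cognate control,
# measured on the sensor and on the seed-mutated sensor.
sensor_artmir  = renilla_over_firefly([0.21, 0.19, 0.22], [1.05, 0.98, 1.10])
sensor_control = renilla_over_firefly([1.00, 1.07, 0.95], [1.02, 1.00, 0.99])
mutant_artmir  = renilla_over_firefly([0.70, 0.73, 0.68], [1.01, 1.03, 0.97])
mutant_control = renilla_over_firefly([1.02, 0.96, 1.01], [1.00, 1.04, 0.98])

# 'Seed-specific' silencing: sensor repression further normalized to the mutant sensor.
seed_specific = (sensor_artmir.mean() / sensor_control.mean()) / \
                (mutant_artmir.mean() / mutant_control.mean())
t_stat, p_value = stats.ttest_ind(sensor_artmir, sensor_control)  # two-sided Student's t-test
print(seed_specific, p_value)
```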
Regarding the sensor-containing Renilla luciferase mRNA, we detected a significant decrease only in the case of art2, where a ~30% reduction was observed compared to the control (Figure 3D left). Concerning target II, there was no change in either the sensor- or the mutant sensor-containing Renilla luciferase mRNA level when art4 was expressed. However, in the case of art3, we observed a slight, ~15% reduction of both sensor types (Figure 3D right). Targeting ABCG2 Expression by Artificial Mirtrons Next, we investigated the ability of the designed mirtrons to silence the expression of the human ABCG2 gene. First, we examined their impact on ABCG2 mRNA expression. Compared to the control, we observed a significant decrease, of ~45%, only in the case of art3 (Figure 4A). Based on our previous luciferase experiments, we expected a reduction in the mRNA level also for art2 (see Figure 3D). However, the human ABCG2 gene has various polymorphisms compared to the reference sequence, and sequencing of target site I in our expression construct revealed the presence of one silent polymorphism (CCC > CCA, Pro480Pro). This extra mismatch in the 3rd nucleotide position of the seed region of art1 and art2 compared to the ABCG2 mRNA could well explain the results (Figure 4B). Finally, we tested whether the designed artmirs can influence ABCG2 expression at the protein level. For this, we carried out western blot experiments. We could detect a significant reduction in ABCG2 protein expression by art2 and a much less prominent decrease by art3. However, in the case of art1 and art4, carrying extra mismatches in the 3′ and the middle region of the miRNAs, we observed no significant changes compared to the control (Figure 4C,D).

Figure 3 (caption, continued): ... p < 0.001 relative to the respective control. (C) To examine 'seed-specific' silencing, luciferase activity (Renilla/firefly) of sensor-containing experiments is normalized to the respective mutant sensor values (set to 1 for each mirtron); *: p < 0.001, relative to the respective normalizing control. (D) Luciferase mRNA level measurements by qPCR. Error bars represent standard deviations; *: p < 0.05, **: p < 0.01, relative to the respective control.

Discussion In this study, we aimed to design artificial mirtrons to silence ABCG2 expression and investigate some sequential features that could influence efficient silencing. As was mentioned above, mirtrons can serve as useful tools for gene silencing, and they can be exploited in particular when genome editing is not feasible or silencing should be reversible. As artificial introns, they could be placed in various reporter genes, or, for therapeutic applications, they may be combined with other genes of interest, achieving more than one genetic effect simultaneously with one expression cassette. Here, we present an artificial mirtron-based approach by which a significant silencing effect can be achieved on the ABCG2 multidrug transporter protein using the EGFP host protein. By further developing it and combining it with appropriate Pol II promoters, it can serve as a useful tool for the investigation of ABCG2 function in various stem cells, including human embryonic stem cells and cells exhibiting the so-called "side-population phenotype" [6,8,52]. To date, there are only a few studies addressing the development of artificial conventional mirtrons to silence gene expression and their potential use in therapeutic applications. In those studies, the silencing effect of artmirs was investigated mostly in luciferase reporter assays and at the target mRNA level [40,41,43].
In a subsequent article, the potential application of 3 -tailed artificial mirtrons was studied, where in addition to the mRNA level, an efficient decrease could also be detected on the indirectly measured protein level of VEGFA [42]. Our data further strengthen the applicability of artificial mirtrons as gene silencers since our careful design could result in mirtrons efficiently reducing the ABCG2 expression when the target protein level was measured directly. When examining the designed artmirs, interesting sequential features could be observed. We designed artmirs complementary to their targets or having mismatches at various positions. Using mutant sensors in luciferase assays revealed that 3 nucleotide mismatches in the seed region did not abolish silencing at the protein level since all four artmirs could have a silencing effect on the target, compared to the non-cognate control ( Figure 3B). The extent of silencing was comparable to that measured on the sensor for art1, art3 and art4, indicating 'non-seed-specific' repression. However, in the case of art2, a strong 'seed-specific' silencing effect was observed (~80% reduction). Regarding mRNA levels, we detected a significant reduction in the Renilla mRNA level only in the case of art2 and art3. Art2 reduced its sensor mRNA level by~30%, while art3 had a smaller effect but surprisingly on both sensor types ( Figure 3D). Worth noting, that while art3 is fully complementary to its target, art2 has one mismatch outside the seed region, at the first position, due to the mirtron design rule (having G at the 5 -end). However, despite this mismatch, art2 decreased its sensor mRNA level and more extensively than art3. Nevertheless, when the target is located in the original genomic context, the ability of art2 to reduce the ABCG2 mRNA level was abolished by an additional mismatch positioned in the seed region (3rd position, Figure 4B). In summary, data of the luciferase experiments suggest that art1 and art4 silenced their targets via translational repression, while art2 and art3 could accelerate the degradation of their target mRNA to some extent, even if having mismatches to the target (1st nucleotide of art2 in its sensor, or seed mismatches in mutant sensor of art3). Concerning ABCG2, only art3 repressed its mRNA level (~45%), but it only resulted in a slight reduction of the amount of protein. Conversely, art2 had no impact on mRNA level but exhibited a quite strong repression at the protein level. In contrast to these, art1 and art4 had no effect on either ABCG2 mRNA or protein level. Considering the sequence environment, ABCG2 mRNA has one, while Renilla mRNA has two target sites; however, art3 can regulate the former one more efficiently (~45% versus 15%). Nevertheless, it is worth noting, that in ABCG2 mRNA, the target sequence resides in the cDNA region instead of the 3 UTR. The results indicate that flanking sequences could strongly influence the miRNA effect. In the natural mRNA context, art3 achieves a slight decrease in ABCG2 protein level, most probably by degrading its mRNA, while art2 operates only by translational repression. Regarding our data, we noticed that in the case of the 'miRNA-mimic' artmirs (art1 and art4) containing mismatches in the 3 and the middle region, losing base pairing at the investigated positions did not accelerate silencing either at luciferase or at the ABCG2 protein level, compared to their respective counterparts (art2 and art3). 
In addition, we also noticed that the presence of rare polymorphisms in the target region should also be considered since they could influence base pairing and thereby efficient silencing of the designed mirtrons ( Figure 4B). On the other hand, some other sequential features can be very useful during artificial mirtron design: for example, our data support the possibility of adding a non-complementary G to the 5 -end of a mirtron without decreasing silencing ability, which is very important and useful since it is a strong mirtron criterion [35,36]. Taken together, some of our results using artificial mirtrons are in line with earlier data, such as the reduction of the target mRNA level in the case of full complementarity between the target and the small RNA [29,53]. However, we observed some additional, not expected features, e.g., a reduction in the target mRNA level when the first nucleotide of the small RNA is not complementary. Further experiments are needed to reveal whether these findings are common phenomena or a consequence of the given target sequence and/or its context, which may have different accessibility by the RISC. Another explanation could be an altered RISC assembly when the various small RNA guides are preferentially associated with different Argonaute proteins. Notably, the most successful silencer artmir of LRRK2 was associated with the greatest amount to AGO4 [41]. Conclusions In summary, using our artificial mirtron design and testing scheme, we could successfully establish an efficient silencing system for the ABCG2 multidrug transporter. In addition, we observed important new sequential-functional features of the designed mirtrons. Our silencing system could be directly applied to study the function of this membrane protein in several in vitro or in vivo models. Moreover, combining the artmirs with host proteins other than EGFP, this system would also be suitable for versatile, functional studies in stem cells, where ABCG2 plays an important yet not fully understood role. However, apart from the concrete established model system, we believe that our mirtron design pipeline could also be efficiently applied to target other genes in future studies.
Phases of $\mathbf{U(N_c)}$ QCD$_3$ from Type 0 Strings and Seiberg Duality We propose an embedding of $U(N_c)$ QCD$_3$ with a Chern-Simons term in string theory. The UV gauge theory lives on the worldvolume of a Hanany-Witten brane configuration in type 0B string theory in the presence of Sagnotti's O$'3$ orientifold. We use the brane configuration to propose a magnetic Seiberg dual. We identify various phases of the magnetic theory with conjectured phases of QCD$_3$. In particular, the symmetry breaking and bosonization phases are both associated with condensation of the dual squark field. We also discuss the abelian theory without a Chern-Simons term and argue that flavour symmetry is not broken. Finally, we also predict novel type 0B string dynamics from QCD dynamics. Introduction String theory has long been a source of insight for investigations of strong-coupling dynamics of quantum field theory. In particular, dualities in field theories often follow from properties of the corresponding brane configuration in string theory. Having independent evidence from field theory and string theory is a step in verifying dualities. Most of the effort so far has been focused on supersymmetric theories in various dimensions, owing to the fact that non-perturbative phenomena in both string theory and field theory are better understood in that setting. One may naturally ponder the ubiquity of dualities in generic QFTs and their relationship to string theory. Indeed, recent years have seen progress made on the field theory front for non-supersymmetric gauge theories in three dimensions. There has been significant progress in the understanding of the phase diagram of QCD$_3$ with a Chern-Simons term. Consider a $U(N_c)$ theory with $N_f$ massless Dirac fermions and a level $K$ Chern-Simons term. It was argued [1,2,3,4] (see also [5,6]) that for $N_f/2 \le K$ the theory admits a dual description in terms of a gauge theory coupled to scalars, eq. (1). However, one may wonder whether something changes for $N_f/2 > K$. In the case of $SU(N_c)$ gauge symmetry, it was conjectured in [7] that when $N_\star > N_f/2 > K$ the theory admits a flavour symmetry breaking phase, eq. (2). A similar picture was developed in [7] also for $SO(N)$ and $Sp(N)$ gauge theories. For $N_f \ge N_\star$ the theory is expected to flow to a CFT. (In the 't Hooft limit, when $N_c \to \infty$ and $K, N_f$ are kept fixed, the theory exhibits rich vacua [8]; the discussion of this limit is beyond the scope of this paper.) Following [9], which concerned the symplectic gauge group, we propose that the infrared phase diagram of $U(N_c)$ QCD$_3$ can be understood in terms of a non-SUSY Seiberg duality. Our proposal involves a modification of the UV theory, i.e. we start with a UV theory, which we refer to as the electric theory, whose Lagrangian is more complicated than QCD$_3$. This theory flows in the IR to QCD$_3$. The electric theory also admits a Seiberg dual description, which we refer to as the magnetic theory. The various IR phases of the electric theory (and so of QCD$_3$) can then be identified with the phases of the magnetic dual. In particular, both the bosonized phase and the symmetry breaking phase, which will be our main focus, can be understood in terms of the condensation of a scalar field, namely the dual "squark", in the magnetic theory. Our proposal of Seiberg duality is motivated by string theory. (Other approaches to obtaining 3d dualities in relation to string theory are given in [10,11], while the possibility of relating these dualities to supersymmetric dualities was explored in [12,13].) In order to realise $U(N_c)$ QCD$_3$ we embed the gauge theory in a Hanany-Witten brane configuration of type 0B string theory. The brane configuration consists of $N_c$ D3 branes suspended between an NS5 brane and a $(1, k)$ fivebrane. In addition, there exist $N_f$ flavour branes and an O$'3$ orientifold plane.
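For orientation, the flavour symmetry breaking phase referred to above (eq. (2)) is usually quoted in the following form; this is our reminder of the conjecture of [7] in commonly used conventions, and the paper's own equation may be normalised differently.

```latex
% Commonly quoted form of the symmetry-breaking conjecture of [7]
% (conventions may differ from the paper's eq. (2)):
U(N_f) \;\longrightarrow\; U\!\Big(\tfrac{N_f}{2}+K\Big)\times U\!\Big(\tfrac{N_f}{2}-K\Big),
\qquad
\mathcal{M}\;\simeq\;\frac{U(N_f)}{U\big(\tfrac{N_f}{2}+K\big)\times U\big(\tfrac{N_f}{2}-K\big)},
\qquad
N_\star \;>\; \tfrac{N_f}{2} \;>\; K .
```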
This brane configuration is similar to the corresponding supersymmetric brane configuration of Giveon and Kutasov in type IIB [14]. By swapping the fivebranes we obtain the brane configuration that realises the magnetic Seiberg dual. The relation between field theory and string theory phenomena teaches us about non-supersymmetric brane dynamics. The aforementioned squark condensation translates into a reconnection of colour and flavour branes. Our Seiberg duality proposal passes several non-trivial checks: as in the symplectic case [9], it satisfies global anomaly matching and RG flows after mass deformations. It is also supported by planar equivalence [15,16]: when $N_c$, $N_f$, $k$ are taken to infinity, the electric and magnetic theories each become equivalent to a supersymmetric theory, and these two supersymmetric theories form an $\mathcal{N}=2$ Giveon-Kutasov dual pair. Therefore, there exists a limit in which our non-supersymmetric dual pair becomes a known supersymmetric dual pair. Another method of obtaining Seiberg duality in string theory is by using non-critical strings [17]. This method relies on the embedding of SQCD in non-critical string theory, pioneered in [18]. Instead of swapping the fivebranes, the duality is obtained by flipping the sign of the coefficient in front of the Liouville term in the string worldsheet action, $\mu \to -\mu$. The advantage of this method is that the non-critical type 0 string does not contain a closed string tachyon in the bulk [19,20]. The field theory that lives on the branes is the same in both the critical and the non-critical approaches. In the following we will always denote the bare CS level by $k$, with $k \ge 0$; in addition, we define a frequently occurring combination of these parameters. The paper is organised as follows: in section 2 we review the essential properties of type 0B string theory and its brane configurations. In section 3 we consider a certain brane configuration and propose a Seiberg duality. In section 4 we show how the phase diagram of the electric theory manifests itself in the magnetic theory, and in section 5 we focus on QED$_3$. Section 6 is devoted to conclusions. Overview of type 0B In this section we review aspects of D3 branes and O$'3$ planes in type 0 string theory. For the relevant background we refer the reader to [21]. Type 0B string theory can be obtained as a $\mathbb{Z}_2$ orbifold of type IIB, with the $\mathbb{Z}_2$ action generated by $(-1)^{F_s}$, the mod 2 spacetime fermion number operator. The untwisted sector is therefore identical to the bosonic sector of the parent type IIB theory. The twisted sector is composed of a tachyon in the NS-NS sector as well as a new full set of R-R fields. The tachyon will eventually be projected out by the orientifold action. The doubled set of R-R fields leads in effect to a doubling of the D-brane spectrum. In particular there are now two types of threebranes, which we denote by D3 and D3$'$ respectively. The worldvolume theory on a stack of $n$ D3 and $m$ D3$'$ branes was worked out in [22,23]. It is a $U(n) \times U(m)$ gauge theory with three complex scalars in the adjoint representation and a pair of bifundamental Weyl fermions.
In order to project out the closed string tachyon we make use of the Ω(−1) f R projection [24,25]. Here, Ω is worldsheet parity and (−1) f R is the operator that counts the number of right moving worldsheet fermions mod 2. Combining this with reflection in 6 spatial directions I 6 we get an O ′ 3 ± orientifold, the (3+1) dimensional fixed hyperplane with respect to the Ω(−1) f R I 6 action. The existence of two types of orientifold planes follows from the fact that the NS-NS two form can have a non-trivial Wilson line exp i B and the signs are chosen to reflect the R-R charge of the orientifold plane. Note that unlike the O3-planes of type IIB we do not have the additional possibilities associated with the R-R discrete torsion. Under the action of Ω, D3 turns into D3 ′ , thus requiring an equal number of each type of brane. In fact Ω projects out half of the doubled set of R-R fields in the closed string sector. We are interested in stacks of N D3 branes (together with their image N D3 ′ s) on top of O ′ 3 ± , with the worldvolume directions of D3 and D3 ′ parallel to that of the O ′ 3 ± -plane (see table 2). The worldvolume theory of such a configuration was worked out in [23]. In both cases one has a U(N) gauge field and 6 adjoint scalars parameterising the directions transverse to the worldvolume. There are also a pair of Weyl fermions which transform in the 2-index symmetric or antisymmetric representation of U(N) in the configuration with O ′ 3 + and O ′ 3 − respectively. We will denote these theories by Y + ( ), Y − ( ) respectively, highlighting the orientifold type on which they live as well as the representation of the worldvolume fermions (the two features relevant for our purposes). We summarise this in where (−1) F is the mod 2 fermion number operator and J is the symplectic form The choice of gauge group for the N = 4 theory descends to the choice of fermion representation (figure 1); starting from the parent theory with SO(2N) gauge group one lands on Y − ( ), and the supersymmetric Sp(N) theory leads to Y + ( ) [26]. The Möbius amplitude for a single D3 and its image D3 ′ separated by a distance 2|X ± | where q = e −πt and the f i (q) are defined as in [27]. We would like to extract the charge of the orientifold plane as well as the brane-orientifold potential. We note that the integrand in (6) is, up to a sign, identical to the case analysed in [28]. We will state the relevant results in the following. For large separation X ± , the leading order term as t − → 0 is given by where G 6 (X 2 ± ) = (4π 3 ) −1 |X ± | −4 Γ(2) is the 6d scalar propagator. We see that the long range potential between the branes and O ′ 3 − (O ′ 3 + ) is attractive (repulsive). For small X ± , (7) is no longer a valid approximation, instead one can expand the exponential in (6) around where the coefficients Λ, M are both positive, with the explicit form given in [28]. From this, it follows that there is a short range attractive (repulsive) force between the branes The nature of the interaction at short and long distances from the orientifold is similar. Therefore, the theory with fermions in the antisymmetric (symmetric) representation is perturbatively stable (unstable). Note that instabilities of non-perturbative nature may still arise, but are less straightforward to detect in string theory. Instead, we may rely on the field theory analysis and try to revert some lessons back to the brane setup (as in section 4.2.2). 
Notice that the (in)stability of the brane configuration translates in the worldvolume field theory to statements about the vev of the scalars X ± . This is obvious from the second term in (8), where the sign of the mass term for the scalars is positive (negative) for the theory with anti-symmetric (symmetric) fermions. In the Field theory, this is encoded in the 1-loop Coleman-Weinberg potential, which gets unequal contributions from the bosons and fermions in each theory. As observed in [29], the threebranes in type 0 carry the following charge and tension It is then a matter of comparing (7) with 4V 4 G 6 (X 2 ± )T O ′ 3 ± T D3 κ 2 0 to see that the orientifold charge and tension are This is clearly different from the situation in type II theories where an Op ± plane carries ±2 p−5 units of Dp brane charge. The charges (10) of the O ′ 3 ± relative to the D3 will be crucial in constructing seiberg dual pairs in the next section. A pseudo-moduli space The discussion in the previous section shows that the Y + ( ) theory is unstable, namely the D3s are repelled away from the orientifold. But the analysis tells us nothing about where the stable vacuum of the theory lies. In a non-SUSY setup, the scalar vevs, or correspondingly the coordinates of the branes are not to be viewed as moduli but are rather dictated by the dynamics of the theory. Generically one expects a scalar potential V (X + ) to be induced via loop corrections. It is however useful to have a completely kinematical discussion of the possible pseudo-moduli of the brane system before imposing the dynamical constraints. We will examine the situation both in string theory and field theory. Using the U(N) matrices, the most generic vev for the scalars X + takes the diagonal form From a field theoretic point of view, depending on the specific values of the eigenvalues a i we encounter 3 possibilities: (i) The a i are all distinct. In this case the gauge group is broken to its U(1) N maximal torus and the worldvolume fermions all become massive. There are also adjoint (charge 0) scalars for each U(1) factor in U(1) N (ii) When n of the N eigenvalues become exactly degenerate there is an enhanced U(n) symmetry. The breaking pattern in this case takes the form All worldvolume fermions are massive but there are scalars in the adjoint of the unbroken gauge group. A special case of this type is when all the eigenvalues coincide and the entire gauge symmetry is unbroken. (iii) There is a more exotic possibility. Consider the situation where n eigenvalues take the opposite sign of an exactly degenerate set of m eigenvalues, i.e. The unbroken gauge symmetry is now U(n) × U(m) × U(1) N −(n+m) . As in the cases From the string theory perspective, case (i) corresponds to a configuration where all branes are at distinct points away from the orientifold, that is, none of the D3s coincide. Case (ii) corresponds to n D3 branes coinciding in the bulk (away from the orientifold). Case (iii) is more interesting. Suppose that v > 0, then in the brane picture v denotes the coordinates of n D3 branes in the transverse space. On the other hand giving negative vevs to m of the scalars corresponds to separating m D3s from the orientifold in the negative direction. But only the quotient space, i.e. the positive direction is physical. When we send m D3s to a negative point in the transverse space, their image D3's are given positive coordinates and appear in the physical space. 
So we see that case (iii) corresponds to n D3s and m D3's coinciding at coordinate v in the bulk. The worldvolume theory of this configuration beautifully matches what one would expect from field theory discussed in (iii). 1,9 . All objects extend along the shared x 0,1,2 directions as well as those indicated below. We are interested in Hanany-Witten setups to study 3d theories, which requires the introduction of NS5 branes. Our construction is the non-SUSY analogue of the 3d N = 2 setup in type IIB (see e.g. [30]). In particular, we have NS5 branes which are non-parallel in two of their spatial coordinates as in table 2, we distinguish them by referring to one as an NS5 ′ . The orientifold charge is switched from O ′ 3 + to O ′ 3 − and vice versa on either side of an NS5 or NS5 ′ which intersects the orientifold. We will only consider configurations where the orientifold is asymptotically O ′ 3 + and label only the asymptotic charge of the orientifold plane in our diagrams (see figure 2). Seiberg duality has a standard string theory derivation [31] which follows from a rearrangement of non-parallel NS5 branes in the Hanany-Witten setup. In constructions without an orientifold, it is possible to achieve this rearrangement without the need for the NS5 branes to intersect. This is done by using the freedom to separate them in a direction mutually transverse to the NS5 and NS5 ′ . In the presence of an orientifold, the NS5s are bound to the orientifold plane and this is no longer possible. The NS5 branes will inevitably intersect as we try to move them past one another [32]. The result of moving non-parallel fivebranes through one another in the presence of an orientifold is well understood. This is the so called Hanany-Witten transition [33]. In type IIB constructions with an orientifold this amounts to the creation/annihilation of a D3 between the NS5 and NS5 ′ depending on the orientifold type, a fact that follows from imposing the conservation of linking number. In the absence of D5 branes the linking number of an NS5 is proportional to the difference of the net D3 brane charges ending on it from the left and right respectively. Following the discussion around (10) it is easy to see that for the type 0 configuration of figure 2 the linking number of the NS5 and NS5 ′ are conserved provided a pair of D3s are created in between them as we go from (a) to (b). This is twice the corresponding situation in type IIB as one would expect from the fact that the charge of O ′ 3 ± relative to the type 0 D3 is a factor of two greater than the type IIB analogue. Hanany-Witten setup In the next section we discuss the Hanany-Witten setup that leads to the non-SUSY gauge theories of interest with and without flavours. 3d dualities from non-supersymmetric brane configurations In this section we consider Hanany-Witten setups that lead to three-dimensional CS theories. See figure 3 and 4. The construction is analogous to [30]. The difference here, besides being in type 0B, is the presence of the O ′ 3 orientifold discussed previously. In section 3.1 we consider the setup of figure 3. The low-energy theory of such a configuration is that of non-SUSY analogue of N = 2 CS theories without flavours of (s)quarks. Such a setup turns out to be meaningful for the discussion of 3d dualities without matter. These dualities are also known in the literature as level-rank dualities. In section 3.2 we consider the addition of N f flavour D5-branes, see figure 4. 
The lowenergy theory emerging from such a brane configuration includes quarks and squarks in the fundamental representation of the gauge group. Level-rank duality We begin by discussing how level-rank duality is realised in our setup. The discussion follows that of [34], and we provide a more refined account. In particular, we will be more careful about the CS level of the U(1) factor of the gauge group. The starting point is the brane configuration (a) of figure 3 with N c D3 branes stretched between an NS5 brane and a (1, k) 5-brane. We will refer to this as the electric theory. The worldvolume theory is the dimensional reduction of the Y − ( ) subject to suitable boundary conditions. There is a U(N c ) gauge field A µ with a YM term and level k CS interactions, as well as a real scalar σ in the adjoint of U(N c ) and two antisymmetric (complex) Dirac fermions in the and the of U(N c ), respectively. The Lagrangian takes the following form 6 L (E) Here F µν is the gauge field strength and D µ ≡ ∂ µ − iA µ is the covariant derivative. The covariant derivative is understood to act on the various fields in the representations of U(N c ) they belong to. D is the auxiliary field of the vector multiplet borrowed from the supersymmetric parent theory. It belongs to the adjoint representation of the gauge group just like the gauge field and scalar gaugino. It is straightforward to obtain the Seiberg dual of this theory following e.g. [32,30] with a slight modification that takes into account the effect discussed in figure 2. After reshuffling the NS5 and (1, k) fivebrane we arrive at the configuration (b) in figure 3, where the number of colour D3s is now κ ≡ k − N c + 2. We refer to this as the magnetic theory. 6 Such a Lagrangian is understood as descending from its parent N = 2 counterpart. In the large N limit we expect to recover a supersymmetric YM-CS theory. The following rule is expected to hold: ⊕ → Adj. The worldvolume theory is now that of a gauge field a µ with YM term and level −k CS interactions as well as a real adjoint scalar s and antisymmetric Dirac fermions l andl. The Lagrangian is We are interested in the IR dynamics of these theories. In the absence of supersymmetry, the scalars on the two sides are expected to acquire a 1-loop mass of the order of the cutoff [34] m 2 σ ∼ g 2 e Λ, m 2 s ∼ g 2 m Λ . As in the discussion following (8) this translates to an attractive force between the branes and the orientifolds, signalling perturbative stability of the configuration. At energies well below the cutoff scales, the scalars are decoupled and do not play a role. Note that the scalars also have tree level CS masses, but we expect them to be subleading due to the stringy nature of the masses in (16). After integrating out the scalars we are left with gauge fields and antisymmetric fermions, both of which have tree-level CS masses M CS = ±g 2 k where the sign of the mass follows from the sign of the bare CS levels in (14) and (15). Due to the lack of supersymmetry, also the gauginos (the antisymmetric fermions) get a mass at one-loop and can be integrated out. Integrating out the antisymmetric fermions shift the levels of the U(1) and SU(N c ) (resp. SU(κ)) factors of the gauge group by disproportionate amounts. 
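As a back-of-the-envelope check of the shift just described (our paraphrase, using the standard one-loop result that integrating out a Dirac fermion in representation $R$ shifts the level by $\mathrm{sgn}(m)\,T(R)$ with $T(\text{fund})=1/2$, and taking the sign of the one-loop mass to be the one that reproduces the result quoted below): the two Dirac fermions in the two-index antisymmetric representation and its conjugate each carry the same index, so the $SU(N_c)$ level shifts as

```latex
% Schematic one-loop level shift from integrating out the antisymmetric fermions:
T\big(\text{antisym of } SU(N_c)\big) = \frac{N_c-2}{2}
\;\;\Longrightarrow\;\;
\delta k_{SU(N_c)} = -\,2\cdot\frac{N_c-2}{2} = -(N_c-2),
\qquad
k \;\to\; k - N_c + 2 \,\equiv\, \kappa .
```

The $U(1)$ factor is shifted by a different amount, which is why the IR TQFT below carries two levels and is written as $U(N_c)_{K_1,K_2}$; an analogous shift in the magnetic theory produces the $U(\kappa)$ levels.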
As a result the IR of the electric theory is a U(N c ) K 1 ,K 2 CS TQFT where NS5 While the IR of the magnetic theory is described by a U(κ) L 1 ,L 2 CS TQFT with Putting everything together we end up with the TQFTs U(N c ) κ,κ−Nc and U(κ) −Nc,−Nc+κ , In fact, these theories are dual to each other. Therefore, in the IR, we recover the following level-rank duality U(N c ) κ,κ−Nc ←→ U(κ) −Nc,−Nc+κ . Including flavours We can include flavours in the discussion by adding D5 branes to the setup, the worldvolume directions spanned by the flavour D5 branes are as in table 2. The IR phases of the electric theory turn out to be richer than the cases studied above and are nicely encoded in terms of the dual magnetic theory. We begin by analysing each theory separately semi-classically before mapping out the phase diagram. Electric theory The flavoured electric theory is realised on the brane configuration (a) of The tree level Lagrangian is given by where L (E) N f =0 is, as before, given by (14). The additional flavour terms are described by Here a, b = 1, · · · , N c are colour indices and i, j = 1, · · · , N f are flavour indices. The interactions with the gauginos fix the representations of the (s)quark fields to be as in table 4. The fate of the scalar σ of the gauge multiplet of the electric theory is similar to the flavourless case. The one-loop corrections to the scalar propagator get positive contributions from its coupling to itself and to the gauge field and negative contributions from its coupling to the gaugino λ. Since there are more bosonic than fermionic degrees of freedom, the vacuum σ = 0 is stable; σ does not play a role in the IR dynamics of the theory and can be integrated out. A similar story pans out for the squark Φ. Indeed, the squark couples to the gauge field A µ , the scalar σ and the gaugino λ. Since there are more bosonic than fermionic degrees of freedom, one expects the squark to acquire a positive mass M 2 Φ > 0 and decouple from the IR physics. For a non-zero level k = 0, the gauge field and the gaugino acquire a Chern-Simons mass M CS = g 2 k. We therefore expect the IR physics to be dominated by the topological CS theory coupled to N f fundamental quarks, i.e. QCD 3 with N f quark flavours. 7 The IR levels of the electric theory are shifted by the gaugino as in (17), as well as the fundamental quarks. In summary, using the dictionary (3) we have electric IR: which is nothing but the left hand side of (1). On the other hand, when k = 0, the IR theory is that of YM theory coupled to the gaugino and the fundamental quarks. It is less straightforward to say anything concrete about the IR dynamics of this theory. Magnetic theory The flavoured magnetic theory lives on the configuration (b) of figure 4. It is obtained from the flavoured electric theory by the standard Giveon-Kutasov move [30,32] modified so as to account for the brane creation described in figure 2. One can easily verify that the resulting number of colour branes between the NS5 and the (1, k) fivebrane is The magnetic field content is given in table 4. This can be obtained in a similar fashion to the electric theory, i.e. by subjecting the theory on the D3 branes in table 1 to the appropriate boundary conditions. We have a gauge multiplet identical to the magnetic theory of the N f = 0 case. The matter multiplet consists of a complex scalar φ and a Dirac fermion ψ. Their representations with respect to the gauge and flavour groups are given in table 4. 
There are in addition new degrees of freedom, which have no analogue on the electric side, corresponding to the motion of the flavour D3 branes along the x 8,9 directions. These give rise to two gauge singlets; the meson M which is an SU(N f ) adjoint and its fermionic partners, the "mesinos" χ transforming as of SU(N f ) andχ transforming as of SU(N f ). The tree level Lagrangian for this theory is where L (M ) N f =0 is as in (15). The matter Lagrangian is Note that in addition to the magnetic gauge coupling g m , we now have another coupling constant y which controls interactions between the (s)quarks and the meson multiplet. The scalar s of the magnetic gauge multiplet gets a positive mass and decouples, just as it did in the flavourless case. This signals the stability of the colour branes near the orientifold. The squark φ couples to the gauge multiplet as well as the meson multiplet. There are more bosonic than fermionic degrees of freedom in the gauge multiplet, and more fermionic than bosonic degrees of freedom in the meson multiplet. Therefore, the squark aquires a 1-loop mass of the form The two effects compete and the squark may become massive or tachyonic. Since at large k the gauge field becomes heavy and decouples we operate under the assumption that in this limit the squark is tachyonic. The matter Lagrangian (25) for the magnetic theory includes a coupling between the meson field and the scalar quarks If the meson acquires a vev of the form M j i M k j = u 2 δ k i the squark φ becomes massive. If the squark acquires a vev φ i a = vδ i a , and flavour symmetry is unbroken, the mesons become massive. Therefore, the most likely scenario is that in all phases [9] In the following we will always work with this assumption in mind. This will be crucial in obtaining the phase diagram of QCD 3 . Phase diagram As we saw in (22), the IR theory on the electric brane configuration is precisely QCD 3 . In this section we argue that the conjectured phase diagram of QCD 3 can be understood in terms of the dual magnetic description. Many of the features are similar to the symplectic case analysed in [9]. For this reason we will be somewhat brief and focus only on the details which are new to the unitary theory. Region I: Bosonization We start with the region of the parameter space where κ ≡ k + 2 − N c ≥ N f . This corresponds to region I in the phase diagram of figure 5. In this region the rank of the magnetic gauge groupÑ c = N f + κ is automatically positive. Following the discussion around (26), the N f squarks are assumed to be tachyonic throughout this region. This is reasonable as one can go to arbitrarily large values of k while keeping N f fixed. In this regime the gauge sector becomes heavy and decouples from the dynamics. The main contribution to the mass of the squark (φ) comes from the meson multiplet, which is indeed negative. Thus, our main assumption is that this remains true as we move to finite k. Let us then assume that the magnetic squarks condense. In the brane configuration, this corresponds to Higgsing N f colour D3 branes via reconnection to N f flavour D3 branes. This is the Higgs mechanism in the string theory language. The world-volume of the N f Higgsed D3 branes no longer supports a gauge multiplet as they end on D5s from one side and end on the NS5 brane from the other. However, we still have κ colour D3 branes which support a U(κ) −k gauge theory with massive gauge field and massive gauginos. 
The CS mass is still proportional to k, and we can integrate out the gauge field and gauginos at energies below g 2 k. The reconnection preserves the original U(N f ) global symmetry. We will shortly argue, from the field theory side, that there are N f scalars in the fundamental after the Higgsing. In the brane set-up these can only come from open strings stretched between the colour branes and N f Higgsed D3 branes. Let us try to understand the phenomenon described in the last paragraph in terms of the field theory description of the magnetic theory. Indeed, the Higgsing corresponds to giving a colour-flavour locking vev to the magnetic squark without breaking the global U(N f ). The gauge symmetry breaking pattern is given by leaving the gauginos in the and of the Higgsed gauge group as well as N f fundamental squarks. The N f magnetic quarks become massive due to Yukawa terms. In addition, the meson and the mesino all become massive due to interactions like (27) and can be integrated out. The IR levels get shifted after integrating out the gaugino according to (18) so that, using the dictionary (3), the IR of the magnetic theory in this region of the parameter space is described by Such a bosonic dual is described in the IR by a Lagrangian that contains, in addition to a CS term with appropriate levels and coupling between the scalars and gauge field, also self-interactions for the squarks. These correspond to mass terms of the formφ a i φ i a as well as quartic interaction of the form (single-trace) (φ a i φ j a )(φ b j φ k b ) and (double-trace) (φ a i φ i a ) 2 . These terms can be generated, if not already present, by the RG flow consistently with global symmetries. As a final step, tuning the mass terms both in the electric IR theory in (22) and in the magnetic IR theory in (30), we recover a well-established duality. This is nothing but the duality (1). Symmetry breaking When N ⋆ > N f > κ, which corresponds to region II and II ′ in the phase diagram of figure 5, we expect rather different dynamics for the system and we anticipate breaking of the flavour symmetry. As we shall see, the physics in these regions is still captured by a tachyonic squark, colour-flavour locking and brane reconnection, but the implications and the resulting physics will be different with respect to region I. Note that the electric theory we discuss is a U(N c ) gauge theory, while the result in ref. [7] is for SU(N c ). Region II ′ Let us begin with region II' in the phase diagram of figure 5. In this region κ < 0. Therefore, on the magnetic side, there are less colour D3 branes than flavour D3 branes: N c = N f + κ < N f . We will assume that the squarks condense also in this case. Nonetheless, squark condensation leads in this case to a fully Higgsed gauge group. Once again this is realised in string theory by reconnecting N f + κ colour and flavour D3 branes (we stress that κ < 0 here). After the Higgsing, we are left with |κ| flavour D3 branes stretched between the D5 brane and the (1, k) fivebrane, as well as the N f + κ connected D3 branes. The latter no longer support a gauge multiplet and therefore gauge symmetry is fully broken. The global symmetry now consists of a U(N f + κ) factor corresponding to the symmetry on the N f + κ reconnected branes as well as a U(κ) factor from the remaining flavour D3 branes. Using the dictionary (3) we have that in this region the global symmetry breaking pattern is This symmetry breaking pattern is the one anticipated in [7]. 
As a consequence, the IR physics of this phase is described in terms of the Grassmannian corresponding to the symmetry breaking pattern given in (31). Such a Grassmannian will be essentially parametrised by 8 massless Nambu-Goldstone bosons. We identify the Nambu-Goldstone bosons as the massless modes of open strings stretched between the two stacks of flavour branes. Region II , after reconnection the theory in the IR is Naively, we seem to have a puzzle: instead of obtaining a theory of massless Nambu-Goldstone bosons we obtain bosonization. The NG theory we are seeking is nothing but the effective description of (34) for large negative masses of the squarks φ. According to the field theory analysis of Komargodski and Seiberg [7] upon condensation of the squarks we land on the symmetry breaking phase. Indeed, after reconnection, the scalars in the bosonic dual (34) correspond to scalar modes of the open strings in the brane configuration. Therefore our proposal is that these scalars are tachyonic and are to be stabilised via open string tachyon condensation. We do not know whether a nice geometric picture emerges after this condensation. Regardless, in the field theory limit one eventually lands on the Grassmannian M(N f , κ). This picture is consistent with the mass deformations of the brane setup, already discussed in [9]. Comments about QED 3 The discussion of the phase diagram in the preceding sections holds for a general number of colours N c . However, "accidents" happen when N c = 1, 2 that modify parts of the discussion. In the case of N c = 2 the electric gaugino is a singlet of the SU(2) factor of the gauge group, but it carries charge 2 under the abelian factor. Because of this, some intermediate steps taken to arrive at the general phase diagram in figure 5 are slightly modified, the end result is however unaffected and the phase diagram of figure 5 is the correct picture for N c ≥ 2. On the other hand, we start to see deviations from the general picture of figure 5 for N c = 1 i.e. QED 3 . In particular, as we shall see momentarily, when k = 0 there is no symmetry breaking phase. This in turn suggests that no symmetry breaking can occur for non-zero k since the window for which a Grassmannian phase exists in the IR is maximised for k = 0 [7]. QED 3 with vanishing CS-term When the electric gauge group is U(1), there is no electric gaugino. Therefore, the IR of the electric theory is U(1) 0 theory coupled to N f fermions. The magnetic dual has a gauge group U(N f + 1) with vanishing CS level at tree-level. Previously, squark condensation lead to masses being generated for the quarks, meson and the mesino, due to the presence of Yukawa interactions. However, in this case after reconnection we have a U(1) gauge theory with no CS term and N f massless Dirac fermions. The reason that in this specific case the fermions do not acquire a mass is that there is no gluino when the gauge group is U (1) and no Yukawa term. In the absence of supersymmetry and without fine-tuning the squarks acquire a mass. So we end up with a magnetic theory that admits the same matter content as the electric theory, namely a dual U(1) theory with N f dual quarks. The brane setup is such that the flavour branes coincide and hence flavour symmetry remains unbroken. Thus, our magnetic theory predicts no spontaneous breaking of U(N f ). This is consistent with existing conjectures about the IR behaviour of QED 3 [35]. 
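As a quick consistency check, the abelian case just discussed can be summarised compactly (a sketch in our own notation, using only the relations quoted elsewhere in the text; nothing here goes beyond what is stated above):

\[
N_c = 1:\qquad \kappa = k - N_c + 2 = k+1,\qquad \tilde N_c = N_f + \kappa = N_f + k + 1 ,
\]
\[
k = 0:\qquad U(1)_0\ \text{with}\ N_f\ \text{fermions}\quad\longleftrightarrow\quad U(N_f+1)\ \text{with vanishing tree-level CS level} .
\]

After colour-flavour Higgsing on the magnetic side one is left with a U(1) theory with N_f flavours and level k' = -k, which for k = 0 gives back a U(1) theory with N_f quarks, matching the electric matter content as described above.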
Conclusions In this manuscript we discussed QCD 3 based on a unitary group and its embedding in string theory. The UV field theory on the brane configuration consists of fields that acquire a mass and decouple as the theory flows to the IR. The advantage of having such a UV theory is that it admits a Seiberg duality. The magnetic Seiberg dual leads to new insights about QCD 3 . In particular the bosonized theory admits a simple realisation as a magnetic dual of the electric fermionic theory. While in the electric side scalar quarks acquire a mass and decouple, in the magnetic side the fermionic quarks acquire a mass due to Yukawa coupling and decouple. The Seiberg dual also enables us to gain a better understanding of the symmetry breaking phase. Triggered by condensation of the dual squark the magnetic gauge theory is completely Higgsed and flavour symmetry gets broken. In addition, we learned about the abelian theory, with or without a Chern-Simons term. The level k (with k ≥ 0) U(1) theory with N f flavours admits a magnetic dual that upon Higgsing flows to another U(1) theory with k ′ = −k and N f flavours. Flavour symmetry is not broken, as expected from field theory analysis. For k = 0 the theory looks self-dual. While for N f = 2 the self duality is well understood [7], for N f = 2 the naive self-duality deserves further investigation. We haven't discussed the regime of N f > N ⋆ . This regime is hard to analyse both in field theory and in string theory. As in the symplectic case [9] we anticipate that it is described by meson condensation.
9,224.6
2019-08-12T00:00:00.000
[ "Physics" ]
INPUT-OUTPUT ANALYSIS: A CASE STUDY OF THE TRANSPORTATION SECTOR IN INDONESIA
This study aimed to analyze the linkages and multiplier effects of each transport subsector when the budget of the transport sector changes in the Indonesian economy. The analysis uses the 2010 Indonesian input-output model with 185 sectors. The input-output model is used to analyze the backward and forward linkages of the transport sector within the Indonesian economy and its multiplier effects on the economy as a whole. The results show that transport has a high total backward linkage, while its total forward linkage is relatively low. This indicates that transportation is effective in pulling along and developing the upstream sectors, but plays a smaller role in developing the downstream sectors. The output multiplier associated with a decline in the transport sector budget is high, whereas the income and labor multipliers associated with such a decline are low. A decrease in the transport sector budget therefore reduces the production output of the Indonesian economy, but has a smaller effect on income and employment.
INTRODUCTION Indonesia is the largest archipelagic country in the world, consisting of 13,466 islands, and the fourth most populous country, with a total population of approximately 258 million people, or about 3.5% of the world population. The larger the population, the more activities are undertaken by the community and the greater the need for transportation to serve them. Indonesia's national Gross Domestic Product (GDP) grew cumulatively by 5.06% in 2014. Among all sectors, the three largest contributors to this growth were transport and communications at 9.31%, construction at 6.58%, and financial services, leasing, and corporate services at 5.96%. This shows that the transport and communications sector is vital to national economic growth. Growth in the transport sector directly mirrors economic growth, so transport plays an important and strategic role at both the macro and micro levels. At the macro level, the contribution of the transport sector can be seen in its value added to GDP, in the multiplier effects it generates for the growth of other sectors, and in its ability to reduce inflation through the smooth distribution of goods and services throughout the country. Because transportation activities play an important role in distributing goods and services across the country and between countries, transport is a strategic component of equity and economic growth: the movement of people and goods, the flow of information, and the flow of finance all need to be managed quickly and accurately to meet demands for punctuality. Transportation is also a means of prosperity, political development, social security, and cultural defense. Transport acts as a 'bridge' that facilitates all economic activities and national logistics and provides social and economic value added (Silondae, 2016). Smooth economic activity, especially the distribution of a wide range of outputs supported by the various transport modes, can extend the economic reach of a region.
This will indirectly create new jobs, increasing labor absorption and reducing the number of unemployed in Indonesia. A larger workforce and faster distribution of output will in turn raise incomes at the household and regional levels. This indicates that the relationship between the transport sector and economic activity is very strong, because almost all economic activities require the support of the transport sector for the smooth running of production, distribution, and consumption. Building transportation infrastructure opens up accessibility, which increases society's production and, in turn, the purchasing power of the community. Within transportation economics, transport is essential to meet transportation needs that increase continuously in line with population and economic growth; this requires the development of roads, terminals, and ports, and of regulations and means to support a transportation system that is efficient, safe, smooth, and environmentally sound. An efficient transport system serves as a reference for better understanding the economic considerations behind investment in transportation infrastructure. Thus, this study aimed to analyze how the land, sea, and air transport subsectors affect the Indonesian economy, including the forward and backward linkages of the transportation sector with other sectors, and to calculate the output, income, and labor multiplier effects when the budget of the transport sector in Indonesia changes, using the input-output (IO) approach.
THEORETICAL BASIS Transportation. Transport is defined as the transfer of goods and people from an origin point to a destination point. The transport process is a movement from the place of origin, where the transport activity begins, to the place of destination, where it ends. Transportation gives goods a higher value at the destination than at the place of origin. The value or usefulness provided by transport includes place utility and time utility (Nasution, 2004). Transport is part of economic activity, associated with increasing one's satisfaction through a change in the geographic location of goods or people. This can mean moving raw materials to a place where the material can be manufactured more easily, or moving finished goods to a place where they are useful to consumers. Moreover, transportation can also bring consumers to a place where they can enjoy the services provided (Benson, 1975). Transportation is a service activity. Transportation services are needed to assist other sectors (agriculture, industry, mining, trade, construction, finance, government, transmigration, defense and security, and others) in moving goods and people involved in each sector's activities. On this basis, transportation services are said to be a derived demand, meaning that increased demand for transportation services is needed to serve the increasing range of economic and development activities. Increased demand for transportation services is derived from increases in the activities of other sectors (Siregar, 1995; in Adisasmita, 2010).
Kamaluddin (2003) argues that transport is closely tied to the economic and socioeconomic state of society because it affects the availability of goods in a region, the stability and uniformity of prices, falling prices, rising land values, specialization between regions, the development of large-scale enterprises, and urbanization and population concentration. The presence of cheap transport benefits people who cannot produce certain goods, or for whom those goods are scarce, because the goods can be supplied from producing areas to meet the needs of the communities concerned. The transport sector generates multiplier effects for other economic sectors such as trade, industry, agriculture, tourism, and others. These sectors grow and contribute substantially to Indonesia's economic growth, creating a trickle-down effect from upstream to downstream. The availability of transportation services is positively correlated with economic activity and development in the community (Setyowati, 2015).
Theory of Production. Nicholson (2007) explains that the production function is a mathematical function showing the relationship between the inputs used and a given level of output; schematically it can be written as Q = f(K, L, R, T), relating output to a set of inputs: capital, labor, resources, and technology. Technically, the production function describes a firm producing as efficiently as possible, using whatever combination of inputs is most effective; it allows the factor inputs to be combined in a variety of ways, so the same output can be produced in different ways. Production theory distinguishes two approaches, as follows: a) Production with one variable input (labor). Sukirno (2013) describes the simple theory of production as the relationship between the level of production of a good and the amount of labor used to produce different levels of that good. In this analysis, the other production factors, namely capital, land, and technology, are assumed fixed in quantity; the only factor of production that can be varied is the amount of labor. b) Production with two variable inputs (labor and capital). An isoquant curve shows combinations of two different inputs that produce the same output. Isoquant curves have several characteristics: they are negatively sloped, isoquants further to the right indicate higher output, isoquants never intersect one another, and isoquants are convex to the origin (Munir, 2008). Pindyck (2001) describes a firm using two variable inputs, labor and capital, in the production process; an isoquant is a curve showing the possible combinations of inputs that produce the same output. Nazara (2005) and Muryani (2017) describe input-output analysis as a general equilibrium tool. Equilibrium in input-output analysis is based on the flow of transactions between economic agents, and the main emphasis is on the production side. The production technology used by the economy plays an important role because of its relation to the use of intermediate inputs. To a certain extent, primary inputs are treated as exogenous variables, and the final demand side is likewise often taken as exogenous.
Input-Output Model. According to BPS (2015), the data presented in an IO table provide detailed information about sectoral inputs and outputs and can describe the linkages between sectors in economic activity. In accordance with the basic assumptions used in its construction, the input-output model is static and open. The IO table presents information on the goods and services transactions that occur between economic sectors in matrix form. The entries along the rows of the IO table show the allocation of the output generated by each sector to meet intermediate and final demand; the table also records the sectoral composition of value added. The entries along the columns show the structure of the inputs used by each sector in its production process. The usefulness of the input-output table is, first, to examine the composition of the supply and use of goods and services, especially in analyzing the need for and possibility of import substitution; second, to determine which sectors have a more dominant influence on economic growth and which sectors are sensitive to national or regional economic growth; third, to estimate the effect of final demand on output, value added, imports, tax revenues, and employment in the various production sectors; and fourth, to prepare projections and evaluations of macroeconomic variables (BPS, 2015).
RESEARCH METHODS The study uses a quantitative approach based on input-output model analysis. The input-output model is used to determine the transport sector's linkages with other sectors in the national economy and to quantify the impact of changes in the transportation budget on output, income, and employment in Indonesia. In 2016 the total budget of the Ministry of Transportation was Rp 48.46 trillion, while in 2017 it was Rp 45.58 trillion, a decrease of Rp 2.88 trillion. The scenario developed here is therefore a decline of Rp 2.88 trillion in the transport sector budget. The proportion of intermediate input coming from other sectors (sector i) in the total input of the transport sector (sector j) is called the intermediate input coefficient, obtained by the formula a_ij = x_ij / X_j, where a_ij is the input coefficient of the transport sector (subsector j) with respect to other sectors (subsector i), x_ij is the input used by the transport sector (subsector j) from other sectors (subsector i), and X_j is the output of the transport sector (subsector j). a. Backward linkage. The size of the backward linkage of the transport sector (subsector j) can be seen from the sum of its intermediate input coefficients, i.e. the sum of the elements in column j of the matrix A. The backward linkage index is obtained from this sum, where IKBL_j is the backward linkage index of the transport sector (subsector j), a_ij is the intermediate input coefficient of the transport sector (subsector j) with respect to other sectors (subsector i), and n is the number of sectors. b. Forward linkage. The level of forward linkage of a sector (subsector i) can be seen from the sum of its intermediate input coefficients along the row, i.e. the sum of the elements in row i of the matrix A.
The forward linkage index of sector i is obtained analogously from the row sums, where IKBL_i is the forward linkage index of the sector (subsector i), a_ij is the intermediate input coefficient linking the transport sector (subsector j) with other sectors (subsector i), and n is the number of sectors.
Analysis of Output Multiplier. To analyze the impact of changes in the transportation budget on output, the input-output model is used with a supply-side approach, in which primary inputs are treated as exogenous factors; that is, changes in primary inputs affect both sectoral and total economic growth. The model starts from the balance equation X = AX + V, which is solved as X = (I - A)^-1 V, where X is the output vector, (I - A)^-1 is the Leontief inverse matrix, and V is the vector of final demand. If the budget change is denoted Δa, the resulting change in output is ΔX = (I - A)^-1 Δa.
Analysis of Income Multiplier. The household income figures for the transport sector show the change in income received by households that results from a change in the transport sector budget. The first step is to construct the income coefficient matrix, where the income coefficient is n1 = W_i / X_j, W_i is total sectoral household income, and X_j is total sectoral output. Once the income coefficient is known, the change in income can be calculated as ΔW_i = n1 · ΔX_j, where ΔW_i is the additional income, n1 is the income coefficient, and ΔX_j is the additional sectoral output.
Analysis of Labor Multiplier. Changes in inputs caused by budget changes will, directly or indirectly, change total inputs; the resulting change in total output then leads to changes in final demand. To trace how budget-induced changes in output lead to changes in employment, the first step is to construct the labor coefficient matrix. The labor coefficient expresses the relationship between labor and output, i.e. the amount of labor required to produce one unit of output, written mathematically as l = L_i / X_j, where l is the labor coefficient, L_i is the amount of sectoral labor, and X_j is total sectoral output. Once the labor coefficient is known, the change in employment can be calculated as ΔL_i = l · ΔX_j, where ΔL_i is the additional employment, l is the labor coefficient, and ΔX_j is the additional sectoral output.
Backward and Forward Linkages. The linkage index consists of the forward linkage index, calculated from the degree of sensitivity along the rows (the output mechanism), and the backward linkage index, also called the power of dispersion, calculated along the columns (the input mechanism). There are two kinds of linkages: direct linkages and total linkages, where total linkage is the sum of direct and indirect linkages. If a sector's forward linkage index is greater than 1, the sector has the power to push other sectors: an increase in its output will strongly affect the production processes of other sectors. If the index is less than 1, the sector is less able to provide such a push to the production processes of other sectors. A sector with a high backward linkage index, by contrast, has the ability to develop other sectors as providers of inputs for its own production activities.
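As an illustration of the computations just described (input coefficients, linkage indices from column and row sums, the Leontief inverse, and the income and labor coefficients), the following sketch uses a made-up three-sector transactions table. All numbers, the sector assignment of the budget cut, and the particular normalisation used for the linkage indices are illustrative assumptions, not the study's actual data or exact formulas.

```python
import numpy as np

# Hypothetical 3-sector transactions table (intermediate flows x_ij, from sector i to sector j)
Z = np.array([[20., 30., 10.],
              [15., 25., 30.],
              [10., 20., 15.]])
X = np.array([100., 150., 120.])   # total output X_j of each sector

A = Z / X                           # input coefficients a_ij = x_ij / X_j (column-wise division)
n = A.shape[0]

# Direct backward linkage: column sums; direct forward linkage: row sums.
backward = A.sum(axis=0)
forward = A.sum(axis=1)

# Normalised linkage indices (one common convention; an index > 1 flags a strong linkage).
backward_index = n * backward / A.sum()
forward_index = n * forward / A.sum()

# Leontief inverse and output change for a change in final demand / budget (delta_a).
L = np.linalg.inv(np.eye(n) - A)    # (I - A)^-1
delta_a = np.array([-2.88, 0.0, 0.0])   # e.g. a budget cut assigned entirely to sector 0
delta_X = L @ delta_a

# Income and labour changes via sectoral coefficients (hypothetical W_i and L_i).
W = np.array([30., 40., 25.])       # sectoral household income
Lab = np.array([5., 8., 6.])        # sectoral employment
delta_income = (W / X) * delta_X
delta_jobs = (Lab / X) * delta_X

print(backward_index, forward_index)
print(delta_X, delta_income, delta_jobs)
```

In this toy setup, replacing Z, X, W, and Lab with the 185-sector 2010 table would reproduce the kind of linkage and multiplier figures reported in the results below.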
Table 2 shows that, of the six transportation subsectors in Indonesia, five have a high backward linkage index, i.e. greater than one: rail transport; land transport; sea transport; river, lake, and crossing transport; and warehousing and transportation support services. This indicates that these subsectors are able to develop other sectors as providers of inputs for their own production activities. The air transport subsector, by contrast, does not show this ability to develop other sectors as input providers for its production activities.
Table 2. Backward linkage index of Indonesian transportation subsectors, 2010. Source: BPS 2010 (processed).
The subsector with the largest total backward linkage is warehousing and transportation support services, at 1.7256. This value means that if output in the warehousing and transportation support services subsector increases by one unit, the subsector requires additional inputs, from itself and from the other economic sectors used in its production process, totalling 1.7256 units. These figures suggest that increasing the output of the warehousing and transportation support services subsector will increase the output of the upstream sectors. The forward linkage index shown in Table 3 describes how strongly the output produced by the transport subsectors feeds into the inputs of other economic sectors. The linkage figure of a subsector means that any additional output in that subsector increases the overall output of the downstream sectors by the value of the linkage. This happens because the additional output produced by the subsector increases the distribution of output used as inputs by other sectors, so overall output in the downstream sectors rises. Table 3 shows that the highest total forward linkage index in the transport sector belongs to the land transport subsector, at 0.7181. This figure means that a one-unit increase in the output of this subsector provides additional output of 0.7181 to other sectors; in other words, 0.7181 units of additional output are distributed to other production sectors in the economy, including the land transport subsector itself, to be used in production. This shows that the output of the land transport subsector is widely consumed and used as an input for production activities in other sectors. The lowest total forward linkage index in the transport sector belongs to the river, lake, and crossing transport subsector, at 0.5237. None of the six transport subsectors has a total forward linkage index greater than one, which shows that the transport sector is less able to push the downstream sectors. Meanwhile, the dispersion index of the six transport subsectors is high, i.e. greater than one. The dispersion index is quite important for seeing how evenly the output of the land transport; sea transport; air transport; rail transport; river, lake, and crossing transport; and warehousing and transportation support services subsectors influences the downstream sectors.
Analysis of Multiplier Impacts. Multiplier analysis is the main analysis that can be performed with input-output tables to determine how a change in primary inputs affects the economy's output (output multiplier), household income (income multiplier), and employment (employment multiplier) in the transport sector. Based on the analysis in Table 4, the impacts of the budget changes in the transport sector, amounting respectively to Rp 0.27 trillion in the land transport subsector, Rp 3 trillion in the rail transport subsector, Rp -3.52 trillion in the sea transport subsector, and Rp -1.8 trillion in the air transport subsector, would reduce economic output by Rp … trillion. This shows that the overall reduction in the transport sector budget of Rp 2.88 trillion will reduce economic output by Rp … trillion. In detail, the impacts of the budget decrease are as follows: output in the rail transport subsector increases by Rp 3.00786 trillion, the land transport subsector increases by Rp 0.34449 trillion, the sea transport subsector decreases by Rp 3.54119 trillion, the river, lake, and crossing transport subsector increases by Rp 0.00433 trillion, the air transport subsector decreases by Rp 1.84557 trillion, and the warehousing and transportation support services subsector decreases by Rp 0.52930 trillion. The income multiplier results in Table 4 show that the income multiplier effect of the land transport subsector is 0.05551, indicating that a change in the land transport subsector budget of Rp 0.27 trillion produces a change in Indonesian household income of Rp 0.05551 trillion. In the rail transport subsector, a budget change of Rp 3 trillion produces a change in household income of Rp 0.87920 trillion. In the sea transport subsector, a budget change of Rp -3.52 trillion produces an income change of Rp -0.33099 trillion. In the air transport subsector, a budget change of Rp -1.8 trillion produces an income change of Rp -0.24841 trillion. Budget decreases in the transport sector lead to changes in output, and changes in output in turn change the size of the workforce. The labor coefficient indicates the number of workers in each sector: the higher a sector's labor coefficient, the more labor is required to produce its output, while sectors with a low labor coefficient have a lower capacity to absorb labor. The labor multiplier figures in Table 4 show that the labor multiplier of the land transport subsector is 0.00458, indicating that a change in the land transport subsector budget of Rp 0.27 trillion changes employment in the Indonesian economy by 0.00458. In the rail transport subsector, a budget change of Rp 3 trillion changes employment by 0.08480. In the sea transport subsector, a budget change of Rp -3.52 trillion changes employment by -0.04068. In the air transport subsector, a budget change of Rp -1.8 trillion changes employment by -0.00210.
The sea transport subsector shows the lowest change in employment, because its budget decreased the most compared with the other subsectors.
Conclusion. Based on the linkage analysis, five of the six transport subsectors have a high total backward linkage index (greater than one), namely rail transport; land transport; sea transport; river, lake, and crossing transport; and warehousing and transportation support services. This shows that the transport sector is able to pull along and develop the upstream sectors. In terms of total forward linkage, all six transport subsectors have a low total forward linkage index (less than one), which shows that the transport sector is less able to push the output of the downstream sectors. This may be because the construction of transport infrastructure has been focused on the islands of Java and Sumatra alone. Three transport subsectors show a large change in output, indicating that a budget decrease in the transport sector lowers output in the Indonesian economy. The income and employment multipliers of all the transport subsectors are low; these low values show that a budget decrease in any transportation subsector has a relatively small impact on household income and employment.
Suggestion. Based on these conclusions, cutting the transport sector budget has a large impact in terms of declining output. The government is therefore advised not to lower the transport sector budget in subsequent years, or at least to keep it stable, so that output in the Indonesian economy is not affected. Future studies on transportation in Indonesia should use input-output tables newer than the 2010 Indonesian table, so that the estimated impact of the transport sector reflects its current state. Further, deeper analysis is needed of the output, income, and employment multiplier impacts generated by factors other than budget changes, in order to identify the factors that influence changes in output, income, and employment in the Indonesian economy.
5,836.6
2018-12-20T00:00:00.000
[ "Economics" ]
ADIPOR1 deficiency-induced suppression of retinal ELOVL2 and docosahexaenoic acid levels during photoreceptor degeneration and visual loss Lipid metabolism-related gene mutations can cause retinitis pigmentosa, a currently untreatable blinding disease resulting from progressive neurodegeneration of the retina. Here, we demonstrated the influence of adiponectin receptor 1 (ADIPOR1) deficiency in retinal neurodegeneration using Adipor1 knockout (KO) mice. Adipor1 mRNA was observed to be expressed in photoreceptors, predominately within the photoreceptor inner segment (PIS), and increased after birth during the development of the photoreceptor outer segments (POSs) where photons are received by the visual pigment, rhodopsin. At 3 weeks of age, visual function impairment, specifically photoreceptor dysfunction, as recorded by electroretinography (ERG), was evident in homozygous, but not heterozygous, Adipor1 KO mice. However, although photoreceptor loss was evident at 3 weeks of age and progressed until 10 weeks, the level of visual dysfunction was already substantial by 3 weeks, after which it was retained until 10 weeks of age. The rhodopsin mRNA levels had already decreased at 3 weeks, suggesting that reduced rhodopsin may have contributed to early visual loss. Moreover, inflammation and oxidative stress were induced in homozygous KO retinas. Prior to observation of photoreceptor loss via optical microscopy, electron microscopy revealed that POSs were present; however, they were misaligned and their lipid composition, including docosahexaenoic acid (DHA), which is critical in forming POSs, was impaired in the retina. Importantly, the expression of Elovl2, an elongase of very long chain fatty acids expressed in the PIS, was significantly reduced, and lipogenic genes, which are induced under conditions of reduced endogenous DHA synthesis, were increased in homozygous KO mice. The causal relationship between ADIPOR1 deficiency and Elovl2 repression, together with upregulation of lipogenic genes, was confirmed in vitro. Therefore, ADIPOR1 in the retina appears to be indispensable for ELOVL2 induction, which is likely required to supply sufficient DHA for appropriate photoreceptor function and survival. Introduction Recent progress in health research has revealed the significant impact of abnormal lipid metabolism in the pathogenesis of various diseases. Specifically, metabolic syndrome is associated with excessive intake of dietary lipids 1 ; neural degenerative diseases, such as Alzheimer's and Parkinson's diseases, are related to oxidative stressinduced lipid peroxidation 2 and retinal degenerative and blinding diseases, age-related macular degeneration 3 , as well as retinitis pigmentosa 4,5 , are reportedly caused by abnormal lipid accumulation. Here, we focused on abnormal lipid metabolism in the retina, which is associated with adiponectin receptor 1 (ADIPOR1) deficiency. ADIPOR1 was initially described as an adiponectin receptor that affects systemic lipid and glucose metabolism [6][7][8] . However, more recently, it has also been described as a receptor for C1q tumor necrosis factor-related protein 9 (CTRP9), a newly discovered adipokine, which binds to SIRT1, a longevity factor 9 . Moreover, in humans, ADIPOR1 mutation causes retinitis pigmentosa with or without systemic disorders, such as developmental and speech delays 10,11 . Lipids serve as a major component of cellular membranes, while 30-60% of the total brain weight comprises lipids 2 . 
Similar to the brain, the retina is also a neural tissue that contains abundant lipid bilayers forming photoreceptor outer segment (POS) discs where light stimuli are received. Moreover, the function of the visual pigment, rhodopsin, a membrane-bound protein present in POS discs, is sensitive to the proportion of docosahexaenoic acid (DHA)-containing phospholipids present in the membrane 12,13 . DHA is transported from the liver to the choroidal vessels, located beneath the retinal pigment epithelium (RPE) 14 , and through the RPE, subsequently transferred to photoreceptors, where it is used for the biogenesis of POS disc membranes as a major acyl chain 15 . ADIPOR1 deficiency reportedly reduces DHA uptake from the circulation to the photoreceptors 16 , leading to a reduction in DHA in the outer retina 17 . However, DHA (C22:6n-3) can also be locally produced by elongating the n-3 fatty acid precursors 18 using very-long-chain fatty acid elongase (ELOVL) enzymes. These enzymes contribute to the maintenance of DHA levels in the brain, and impaired chain elongation causes neuroinflammation and demyelination 19 . Thus, a similar pathway for local DHA production could exist within the retinal tissue. Here, we sought to characterize the mechanism associated with visual loss in an Adipor1 KO mouse model by analyzing the molecular changes that occur during the development of the disorder. We ultimately propose that ADIPOR1 deficiency affects the expression of Elovl2, which contributes to DHA synthesis 18 , and leads to insufficient levels of DHA in the retina, which likely contributes to retinal neural degeneration in Adipor1 KO mice. The current study will help elucidate the roles of ADIPOR1 in lipid metabolism and may contribute to the future exploration of a new therapeutic approach for retinal neurodegeneration. Animals Adipor1 KO mice (Adipor1 tm1Dgen ) were originally produced by Deltagen (San Mateo, CA, USA) and delivered through Mutant Mouse Regional Resource Centers and the Jackson Laboratory (#005775). The Adipor1 KO mice were provided as the 129P2/OlaHsd × C57BL/6 background and backcrossed with C57BL/6 mice more than 10 times before the experiments. BALB/c mice were purchased from CLEA Japan (Tokyo, Japan). All animal experiments were performed using male mice, except for photopic ERG for which both male and female mice 20 . The samples were collected by sacrificing the minimum number of animals needed to ensure that the derived data would be constant and significant. Data collection and analyses were performed under genotype blinded conditions. Electroretinography ERG recording was performed as previously described [20][21][22] . Briefly, mice were dark-adapted for 12 h and placed under dim-red illumination until they were anesthetized. The ground and reference electrodes were placed on the tail and in the mouth, respectively, while the active gold wire electrodes were placed on the cornea. Full-field scotopic ERGs were recorded in response to a flash stimulus at intensities ranging from −2.1 to 2.9 log cd s/m 2 , and photopic ERGs were recorded after 10 min of light adaptation in response to flash stimuli ranging from 0.6 to 1.6 log cd s/m 2 with a background of 30 cd s/m 2 (Ganzfeld System SG-2002; LKC Technologies, Inc., Gaithersburg, MD, USA), using PowerLab System 2/25 (AD Instruments, New South Wales, Australia) after pupil dilation with a mixture of 0.5% tropicamide and 0.5% phenylephrine (Mydrin-P, Santen Pharmaceutical, Osaka, Japan). 
The responses were differentially amplified and filtered through a digital bandpass filter ranging from 0.3 to 450 Hz. The a-wave amplitude was measured from baseline to trough, whereas the b-wave amplitude was measured from the a-wave trough to the b-wave peak. The implicit times were measured from stimulus onset to the peak of each wave. Peak points were automatically indicated by the system and confirmed by the examiner. Real-time reverse-transcription PCR Total RNA was isolated with TRIzol reagent (Life Technologies, Carlsbad, CA, USA) and reverse transcribed using the SuperScript VILO master mix (Life Technologies). Real-time PCR was performed using the StepOnePlus™ PCR system (Applied Biosystems, Foster City, CA, USA), and gene expression was quantified using the 2 -ΔΔCT method and normalized to Gapdh 20,21 . Primers are listed in Table 1. Electron microscopy Mice were euthanized, and their eyes were immediately fixed with 2.5% glutaraldehyde, post-fixed in 2% osmium tetroxide, dehydrated in a series of ethanol and propylene oxide solutions, and embedded in epoxy resin. Sections were stained with uranyl acetate and lead citrate and examined and photographed using an electron microscope (model 1200 EXII; JEOL, Tokyo, Japan). Lipidomics Lipidomics were performed as previously described 23,24 . Briefly, mouse retinas were frozen immediately after Table1 Primer list for real time PCR. Bisulfite PCR and efficiency of bisulfite modification Twenty-four hours after siRNA transfection, DNA was extracted from bEnd.3 cells with NucleoSpin Tissue kit (Macherey-Nagel, Germany). Bisulfite conversion was performed using EZ DNA Methylation-Gold Kit (Zymo Research, Orange, CA). CpG island prediction and methylation primer design were performed using Meth-Primer 26 . PCR was carried out in a final volume of 50 μl containing 100 ng bisulfite-treated DNA, 2.5 mM MgCl 2 , 400 nmol each primer, 0.3 mM each dNTP, 0.25 μl TaKaRa EpiTaq ™ HS (for bisulfite-treated DNA) (1.25 U/ 50 μl). PCR cycling conditions were as follows: 40 cycles of 10 s at 98°C, 30 s at 56°C for, 90 s at 72°C and finally ended with 1 min at 72°C for an extension. PCR products were separated on 1.0% agarose gel, purified using QIAquick Gel Extraction Kit (QIAGEN, Hilden, Germany), and then cloned into the pMD20-T vector using Mighty TA-cloning Kit (Takara, Japan). At least ten positive recombinant colonies of each product were sequenced by the Sanger method (Eurofins Genomics, Japan). The analysis of 65 bisulfite sequences was carried out with the QUMA (QUantification tool for Methylation Analysis) program (http://quma.cdb.riken.jp). Statistical analysis Data are expressed as means ± standard deviations. Statistical analyses were performed using one-way analysis of variance with Tukey's post hoc tests for comparisons among three or more groups or two-tailed Student's t tests or Fisher's exact test for comparisons between two groups using SPSS Statistics 26 (IBM, Armonk, NY, USA). Differences were considered statistically significant at P < 0.05. Adipor1 expression in the retina Adipor1 mRNA was prominently expressed in the photoreceptor inner segments (PIS), and in the inner layers of the wildtype (WT) retina at 4 weeks of age (Fig. 1A). No specific signals were detected in Adipor1 KO retinas with the antisense probe (Fig. 1B) or WT retinas hybridized with the sense probe (Fig. 1C). 
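As a side note on the quantification used in the real-time PCR experiments above, the 2^-ΔΔCT calculation normalised to Gapdh can be sketched as follows. This is an illustrative Python snippet only: the Ct values are invented, and the gene and sample names are hypothetical placeholders rather than data from the study.

```python
# Minimal sketch of the 2^(-ΔΔCt) relative-expression calculation described above.
# All Ct values below are invented for illustration only.

def fold_change(ct_target_sample, ct_ref_sample, ct_target_control, ct_ref_control):
    """Relative expression of a target gene (e.g. a gene of interest such as Elovl2)
    normalised to a reference gene (e.g. Gapdh), using a control group (e.g. WT) as calibrator."""
    delta_ct_sample = ct_target_sample - ct_ref_sample      # ΔCt in the sample (e.g. KO retina)
    delta_ct_control = ct_target_control - ct_ref_control   # ΔCt in the calibrator (e.g. WT retina)
    delta_delta_ct = delta_ct_sample - delta_ct_control     # ΔΔCt
    return 2 ** (-delta_delta_ct)                            # fold change relative to control

# Hypothetical example: the target amplifies two cycles later in the sample than expected,
# giving an approximately four-fold repression (fold change = 0.25).
print(fold_change(ct_target_sample=26.0, ct_ref_sample=18.0,
                  ct_target_control=24.0, ct_ref_control=18.0))
```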
As pigments that interfere with mRNA detection in the RPE are present in C57B/6 WT mice, albino BALB/c mice were analyzed to demonstrate Adipor1 mRNA expression in the RPE (Fig. 1D). Adipor1 mRNA was detected in the neural retina by postnatal day 3 (P3), and the expression increased by P10, reaching a plateau at 3 weeks, which was maintained thereafter (Fig. 1E). In the RPE, the levels had upregulated by 3 weeks, peaking at 5 weeks, and were maintained thereafter (Fig. 1F). Adipor1 KO mice had visual function impairment Scotopic ERG revealed a marked decrease in the a-wave amplitude representing photoreceptor function in the homozygous Adipor1 KO mice at 3 weeks of age ( Fig. 2A, B), although no changes were observed in heterozygous mice. The amplitudes of the a-wave and of the b-wave in KO mice at 10 weeks were comparable to those at 3 weeks; however, the implicit time of the b-wave, which reflects subsequent neuronal network function to the photoreceptors, was increased at 10 weeks (Fig. 2C, D). Thus, visual function, particularly, rod photoreceptor function, was substantially impaired in the Adipor1 KO mice at 3 weeks of age and gradually progressed thereafter. The b-wave amplitudes of photopic ERGs, which show cone photoreceptor function, were comparable between homozygote Adipor1 KO and WT and heterozygotes at 3 weeks, while they were decreased in the homozygotes at 28 weeks ( Supplementary Fig. 1), indicating that cone dysfunction became evident after rod photoreceptor dysfunction progressed. Photoreceptor cell death in Adipor1 KO mice Histological changes in photoreceptors were analyzed using hematoxylin and eosin staining. The thickness of the photoreceptor layer, the ONL, was compared between WT and homozygous Adipor1 KO mice. At 2 weeks of age, no significant differences were observed; however, ONL thickness was significantly reduced by 3 weeks, and the change was further evident at 10 weeks because of progressive loss of photoreceptors in KO mice (Fig. 3A, B). TUNEL staining revealed more apoptotic cells only in the ONL of the homozygous KO mice at 3 weeks (Fig. 3C, D). Moreover, among WT, heterozygous, and homozygous KO mice (Fig. 3E), the abundance of rhodopsin was already remarkably reduced in the retina of homozygous KO mice at 3 weeks (Fig. 3F, G), while changes were not observed in heterozygotes. Further, Rhodopsin (Rho) mRNA levels (Fig. 3H), those of its upstream transcription factors, Crx (Fig. 3I), Nrl (Fig. 3J), and rod photoreceptor markers, Gnat1 (Supplementary Fig. 2) and Pde6b (Supplementary Fig. 2) were repressed in homozygotes. In contrast, the levels of cone photoreceptor markers, Arr3 (Supplementary Fig. 2) and Pde6c (Supplementary Fig. 2) did not change. In homozygous Adi-por1 KO mice, GFAP expression, which represents reactive glia, was increased in Müller glial cells expressing GS, suggesting that Müller glial cells were also affected directly or indirectly by ADIPOR1 deficiency (Fig. 3K). Similarly, F4/80, a macrophage/microglia marker (Fig. 3L), and Ho-1, an oxidative stress marker (Fig. 3M), mRNA expression was increased, suggesting that microenvironmental stress was upregulated in the absence of retinal ADIPOR1. DHA reduction in the Adipor1 KO retina Further analyses were performed in 2-week-old mice to explore the pathogenesis during the development of the above-described phenotypes. 
The POSs, where folded plasma membranes are regularly aligned and retain rhodopsin as discs, were already misaligned and damaged in Representative waveform from individual mice at each stimulus intensity (A, C). The a-wave amplitudes decreased in homozygous KO mice compared to WT and heterozygous mice at 3 and 10 weeks. The reduction in amplitudes and increased implicit time of the b-wave became evident at 10 weeks. No differences were observed between WT and heterozygotes. Data are shown as means ± standard deviations. n = 5, **P < 0.01 vs. WT, one-way ANOVA. the retina of Adipor1 KO mice (Fig. 4A). Pigmented melanosomes were increased in the RPE, where POSs are phagocytosed and digested to regenerate the visual pigment 12 . Next, to determine whether disorganized POSs are accompanied by changes in lipid bilayer composition, retinal phospholipids were quantified using (LC-MS). In the retinas of homozygous KO mice, phospholipids containing DHA as FFAs, such as phosphatidylglycerol, phosphatidylinositol, phosphatidylserine, phosphatidylcholine, and phosphatidylethanolamine, were substantially reduced (Fig. 4B), while those containing oleic acid were not affected (Fig. 4C). Because DHA can be synthesized by a series of enzymatic reactions, the mRNA expression of enzymes that participate in fatty acid elongation was quantified. Elovl2 mRNA was downregulated (Fig. 4D), while those of other enzymes, such as of Elovl4, were not. This change was observed only in homozygotes, not heterozygotes. Moreover, Elovl2 mRNA was predominantly expressed in the PISs and in the inner layers of the retina (Fig. 4E). Consistent with the real-time PCR results (Fig. 4D), Elovl2 was suppressed in the PIS of homozygous Adipor1 KO mice (Fig. 4E). The expression of genes that contribute to lipogenesis in the retina was further analyzed. In the homozygous KO mouse retina, mRNA expressions of the transcription factors Srebf1 and Srebf2, which regulate lipid synthesis, as well as their downstream genes, were increased (Fig. 4F). However, no change was observed in Mfsd2a expression, the deficiency of which causes retinal degeneration via decreased DHA trafficking 27 . Adipor1 KD represses Elovl2 in vitro Adipor1 KD decreased Elovl2 mRNA levels in bEnd.3 cells, indicating that Elovl2 transcription was, at least in part, regulated by ADIPOR1 (Fig. 5A). Similarly, Elovl5, which also elongates polyunsaturated acyl-CoA, and Elovl2 28 were also downregulated, whereas Elovl6, which acts on saturated acyl-CoA, and 3-hydroxy acyl-CoA dehydrogenase 1 and 4 29 were upregulated. In addition, Srebf2 and its downstream genes related to lipogenesis were increased (Fig. 5B) similar to the in vivo observations. CpG island of the upstream of Elovl2 gene promoter region was analyzed using bisulfite PCR to evaluate the changes of DNA methylation status with or without Adipor1 KD. There were several points where methylation status was significantly different after Adipor1 KD (Supplementary Fig. 3). Discussion Adipor1 mRNA was found to be expressed throughout the retina and prominently in the PIS. Its level increased in the retina after birth until 3 weeks of age. Visual function was impaired in homozygous Adipor1 KO mice as early as at 3 weeks, at which point rhodopsin expression was substantially reduced, while apoptosis-induced photoreceptor loss was starting. Electron microscopy images revealed that misalignment of the POS, the location where rhodopsin is retained, was already present at 2 weeks of age. 
Moreover, DHA-containing phospholipids were reduced and Elovl2 was suppressed by 2 weeks. The reduced expression of Elovl2 by ADIPOR1 deficiency was also confirmed using an in vitro KD system. Adipor1 mRNA expression gradually increased from immediately after birth to postnatal 3 weeks, which corresponded to photoreceptor development and maturation. The photoreceptor connecting cilium, through which all POS proteins and membrane components must be conveyed, appears between the PIS and POS at P3 and is developed by 2 weeks 30 . A previous study reported the presence of ADIPOR1 protein in the POS 31 , and the current study showed that the increase in Adipor1 expression was parallel to POS development. Considering that Adipor1 mRNA was concentrated in PISs, where organelles such as ribosomes and the Golgi apparatus are accumulated 32 , the ADIPOR1 protein may be formed in (see figure on previous page) Fig. 3 Photoreceptor cell death in homozygous Adipor1 KO mice. Hematoxylin-eosin (H&E) staining (A) of retinal sections, and ONL thickness (B) from 2-, 3-, and 10-week-old WT and homozygous Adipor1 KO mice (A). ONL thickness was not different between WT and KO mice at the age of 2 weeks; however, it significantly decreased at 3 weeks, and the difference was further evident at 10 weeks. n = 5-9. C, D TUNEL assay (magenta) with immunohistochemistry staining for rhodopsin (green) in the retina of 3-week-old mice. The number of TUNEL-positive cells increased (D), and rhodopsin expression in POS decreased in the homozygous KO retina. n = 5. E Real-time PCR. mRNA levels of Adipor1 in the retina of WT, heterozygous, and homozygous KO mice were confirmed. n = 5. F, G Immunoblot analysis. Rhodopsin protein levels decreased in the homozygous KO mice retina at 3 weeks of age. n = 3. H-J Real-time PCR. mRNA levels of Rhodopsin (Rho) (H), Crx (I), and Nrl (J) in the retinas decreased in the homozygous KO mice at 3 weeks of age. (F-J) There were no differences between WT and heterozygotes, n = 4. K Immunohistochemistry for GFAP (green), a marker for reactive glia, and GS, a marker of Müller glial cells (magenta). GFAP was colocalized with GS and was upregulated in the homozygous KO mice retina at 3 weeks of age. n = 5. L, M Real-time PCR. mRNA levels of F4/80 (L) and Ho-1 (M) increased in the retina of homozygous KO mice at 3 weeks of age. There were no differences between WT and heterozygotes, n = 4. GFAP glial fibrillary acidic protein, GS glutamine synthetase, Het heterozygotes, POS photoreceptor outer segment. Data are shown as means ± standard deviations. **P < 0.01 WT, twotailed Student's t test in (B, D), vs. WT, one-way ANOVA in (E, G, H, I, J, L, M). Scale bar, 50 μm. PISs and transferred through the connecting cilium to POSs during their development. The a-wave amplitude in ERG was substantially decreased in Adipor1 KO mice compared to WT mice at 3 weeks of age, consistent with the finding of previous reports analyzing Adipor1 KO mice from another lineage (AdipoR1 gt ) 16 , indicating impaired photoreceptor function in the absence of ADIPOR1. However, while ONL thinning was mild at this time point, and it progressed rapidly thereafter, the photoreceptor function was substantially reduced by 3 weeks and did not remarkably progress afterward. 
Considering that rhodopsin mRNA and protein levels, together with those of its upstream transcription factors, Crx and Nrl, were substantially decreased by 3 weeks, reduced rhodopsin expression in the POSs of live photoreceptors may have contributed more significantly to impaired photoreceptor responses than apoptosis-induced photoreceptor loss at this time point. Rhodopsin suppression was consistent with the findings of a previous study reporting that rhodopsin protein abundance was reduced in the retina of systemic Adipor1 KO mice from another lineage (Adipor1<tm1.2Lex>) 31 and after Adipor1 KD by adenovirus 31; there was a possibility that rhodopsin reduction was due to protein degradation related to POS degeneration caused by a reduction in DHA, one of the major components of POS discs 12,13. However, the current study extended these results to describe transcriptional repression, which was not previously defined; thus, rhodopsin was reduced as a result of impaired production. The influence on rhodopsin in Adipor1 KO mice also contrasted with ciliopathy, a congenital disease in which no POSs develop and which is associated with ectopic rhodopsin expression due to impaired rhodopsin trafficking 33; Adipor1 KO mice, which also showed POS deficiency, did not exhibit ectopic rhodopsin expression and simply showed rhodopsin repression. Similar to that observed in Adipor1 KO mice in the current study, photoreceptor apoptosis has also been previously reported to start at 3 weeks of age in Rhodopsin (Rho) KO mice 34. Given that the neuronal survival pathway, insulin-phosphoinositide 3-kinase signaling, can become activated by rhodopsin-mediated visual signals 35, rhodopsin suppression caused by ADIPOR1 deficiency may have accelerated photoreceptor apoptosis, although this requires further investigation. In addition, Müller glial cells, which preserve homeostasis of the tissue microenvironment 36,37, became reactivated to increase GFAP expression, and macrophage and oxidative stress markers in the retina were increased in the Adipor1 KO mice. These results suggest that the apoptotic and degenerative changes in the photoreceptors may have caused inflammation and oxidative stress 38, which subsequently affected the entire retina, although a direct effect of ADIPOR1 deficiency on these cells could not be excluded.
(see figure on previous page) Fig. 4 DHA is reduced in Adipor1 KO retinas. A Electron microscopy images of the outer layers of WT and homozygous Adipor1 KO retinas at 2 weeks of age. Low (left) and high (center and right) magnifications of the images. Photoreceptor outer segment discs, composed of plasma membranes containing phospholipids, were misaligned and melanosomes in the RPE were increased in the Adipor1 KO retina. (B, C) LC-MS assay of the WT and Adipor1 KO retinal samples at 2 weeks of age. Relative levels of DHA-containing phospholipids (B) and oleic acid-containing phospholipids (C). DHA-containing phospholipids were reduced while oleic acid-containing phospholipids were preserved in the retina of KO mice compared with that of WT mice. D Real-time PCR. Relative mRNA levels of proteins associated with the fatty acid elongation pathway in the retina at 2 weeks of age. Elovl2 was repressed in the retina of homozygous KO mice compared with that of WT and heterozygous mice. E In situ hybridization. Elovl2 mRNA was prominently expressed in the PIS and weakly in the inner layers of the WT mouse retina at 2 weeks of age. However, the expression was reduced in the homozygous KO retina. F Real-time PCR.
Relative mRNA levels of molecules in the lipogenesis pathway in the retina of homozygous KO mice compared to those of WT and heterozygous mice at 2 weeks of age. D, F There were no differences between WT and heterozygotes. PIS photoreceptor inner segment, RPE retinal pigment epithelium. Data are shown as means ± standard deviations. n = 4, *P < 0.05, **P < 0.01 vs. control, two-tailed Student's t test in (B); vs. WT, one-way ANOVA in (D, F). Scale bar, 5 μm (A), 50 μm (E).
However, the whole retinal reaction related to photoreceptor degeneration is consistent with the results of previous reports using retinal inflammation 39,40 and light exposure-induced oxidative stress 41 models. Thus, inflammation and oxidative stress may have accelerated neurodegeneration in the retina of Adipor1 KO mice. Cone system dysfunction was not observed, and the expression of cone markers did not change at the age of 3 weeks, while the dysfunction became evident by 28 weeks in the Adipor1 KO mice, suggesting that the cone system abnormality progressed gradually and became evident later than rod photoreceptor degeneration. It has been reported that rod-derived cone viability factor (RdCVF), a soluble factor released from rod photoreceptors that regulates glucose metabolism 42 and oxidative stress 43 in cone photoreceptors, is indispensable for cone photoreceptor function 44. Delayed cone photoreceptor dysfunction may therefore have been, at least partly, related to rod photoreceptor degeneration in the current study. A previous report described reduced uptake of labeled DHA in the retina of Adipor1 KO mice at 3 weeks of age, at which point the photoreceptors were already reduced only in the homozygotes 16. However, the labeled DHA taken up by the eye-cup tissue was also substantially reduced in the heterozygous tissue, while total DHA was restored in heterozygote retinas, and the amount of DHA taken up was low compared to the total DHA levels in the retina (approximately 1:10,000 according to the previous report) 16. The amount of DHA delivered via the circulation and taken up by the RPE to subsequently supply photoreceptors was reduced by ADIPOR1 deficiency. However, an alternative system associated with ADIPOR1 signaling, potentially in photoreceptors, is likely to be present that serves to regulate total DHA levels within the retinal tissue and may act independently of the circulating DHA system. In the current study, we found that the fatty acid elongation enzyme ELOVL2, which elongates C20-C24 PUFAs such as arachidonic acid (20:4n-6), eicosapentaenoic acid (20:5n-3), docosatetraenoic acid (22:4n-6), and docosapentaenoic acid (22:5n-3) 18,45, was suppressed in the retina of homozygous, but not heterozygous, Adipor1 KO mice at the age of 2 weeks, before the time point when photoreceptor loss was evident. Moreover, ELOVL2 was downregulated by Adipor1 KD in vitro. In addition, Elovl2 mRNA was predominantly expressed in PISs, similar to that observed for Adipor1. Taken together, these results suggest that ADIPOR1 may affect Elovl2 transcription, thereby interfering with DHA production via elongation of the carbon chains of substrates, such as γ-linolenic acid, in the retina. Elovl2 expression was not reduced in the retinas of heterozygotes, consistent with the total DHA levels, which were not affected in the heterozygous retina. Moreover, the absence of Elovl2 repression in heterozygotes was consistent with the absence of photoreceptor dysfunction in Adipor1 heterozygotes.
In fact, Elovl2 KO caused reduced DHA levels in the liver and serum, indicating that DHA can be provided endogenously through ELOVL2 action and is not only derived from the diet 45. In Elovl2 KO mice, high-fat diets do not induce hepatic steatosis 45, and impaired spermatogenesis related to DHA insufficiency is not rescued by DHA supplementation 46, suggesting that endogenously synthesized DHA is essential for lipid homeostasis in specific tissues and organs 28,45. Moreover, Elovl2 mutant mice, in which ELOVL2 enzymatic activity was reduced, showed decreased DHA content in the retina and visual dysfunction of rod photoreceptor cells at 6 months of age 47. Thus, Elovl2-deficient mice show retinal phenotypes similar to those of Adipor1 KO mice as they age. Deficiency in endogenously synthesized DHA activates hepatic SREBP-1c, which is translated from Srebf1 mRNA, and stimulates transcription of lipogenic genes 45. Therefore, the upregulation of lipogenic genes in the retina of Adipor1 KO mice, as well as in vitro, was consistent with the condition described above for ELOVL2 deficiency 45. One known regulatory mechanism of Elovl2 transcription is a change in DNA methylation [47][48][49][50]. In fact, several parts of the CpG island showed different levels of DNA methylation after Adipor1 KD in vitro in the current study. A future study analyzing whether these changes promote the repression of Elovl2 is warranted. Alternatively, ADIPOR1 is required for the expression of desaturase, which acts upstream of ELOVL2 during DHA synthesis 51; ADIPOR1 deficiency may have decreased desaturase expression, thereby reducing both the substrate of ELOVL2 and Elovl2 expression. The mechanisms underlying the connection between Adipor1 and Elovl2 would be an area of interest for future research. Finally, the expression of a DHA transporter, Mfsd2a, was not altered in the retina of Adipor1 KO mice, although pigmented melanosomes were increased in the RPE of Adipor1 KO mice, similar to that in Mfsd2a KO mice 52. However, Mfsd2a KO mice do not exhibit photoreceptor degeneration 52, supporting the notion that photoreceptor degeneration is not induced by a reduction in DHA delivery, but rather by reduced endogenous DHA production via the fatty acid elongation system. In conclusion, ADIPOR1 deficiency induced reduced expression of Elovl2, a fatty acid elongation enzyme that produces DHA in local tissues. Decreased DHA in the retinas of Adipor1 KO mice likely involved reduced DHA production in the photoreceptors through the ELOVL2 enzymatic reaction, which subsequently caused photoreceptor damage and visual impairment. This finding will help explore a new therapeutic approach for treating retinal degeneration induced by DHA depletion due to ADIPOR1 deficiency in the future.
Availability of data and materials
The datasets generated or analyzed during the current study are available from the corresponding author on reasonable request. The data have also been uploaded as Supplementary Information.
6,620.6
2021-05-01T00:00:00.000
[ "Medicine", "Biology" ]
Effect of illuminating wavelength on the contrast of meibography images Evaluation of the Meibomian glands morphology is becoming a popular assessment for dry eye. This evaluation is usually done by imaging the glands on the everted lids while they are illuminated with infrared light. Nowadays techniques to determine gland condition and dropout are based on grading scales with which meibography images are subjectively evaluated. In this work, we have measured the contrast of Meibomian gland images from ten subjects and for a range of wavelengths of the monochromatic illuminating light. We have used a xenon lamp and a monochromator as a light source, and a semiautomatic image processing technique for measuring the image contrast from 600 nm to 1050 nm. Contrast values inside glands are from 0.025 to 0.015 and between glands from 0.06 to 0.04. The greater values of contrast are obtained when Meibomian glands are illuminated with a wavelength close to 600 nm. © 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement Tapie firstly used the trans-illumination technique in 1977 with a red light filter illumination probe inserted behind the everted eyelid and a slit lamp to observe Meibomian glands [12], afterward an IR light source was used to do the visualization [17,24]. This procedure showed disadvantages like heating discomfort and pain reports from patients, the difficulty of capturing images from the entire eyelid due to the small trans-illumination area and a process of capturing IR images limited to IR cameras [17]. Arita et al. introduced the non-contact meibography technique in 2008 [20]. This system combines a slit lamp biomicroscope with an IR filter (830 nm), and an IR charge-coupled device (CCD) video camera to image the Meibomian glands from the everted eyelid [20]. This technique does not need an illumination probe, and that increases the comfort of the patients during the exam. Pult et al. developed another non-contact device in 2011 [25], consisting of a modified IR security camera (802CHA CCD; Shenzhen LYD Technology Co. Ltd, Shenzhan, China). This IR CCD video-camera incorporates an IR light source which is aimed to the everted eyelid which is then imaged and captured to analyze it. Srinivasan et al. reported the use of the topographer Keratograph 4 (OCULUS, Wetzlar, Germany and JenVis Research -Jena, Germany-) to perform meibography in 2012 [26]. The illuminating wavelength of Keratograph 4 is 880 nm. Since then, other instruments based on IR light have been used for this purpose. Recently, Napoli et al. [27] propose the usage of an Optical Coherent Tomography (OCT) technique to evaluate Meibomian glands showing its advantages against conventional imaging techniques. In this work, Napoli et al. used image processing technique to enhance the contrast and brightness of OCT images (taken at a wavelength of 840 nm) so the microscopic structures of the glands could be highlighted. The main drawback of this technique is the availability of such an expensive equipment as an OCT scanner to an average practitioner. The visualization of the glands essentially permits obtaining information about their number, their partial or total loss, their thickness and tortuosity and even the bent angle they show along the eyelid. There are several grading scales to score all these aspects but there are no agreed and established standards in the classification of Meibomian glands [19,20,25,[28][29][30][31][32]. 
In the last years, automatic techniques for detecting the inner structure of the glands [33,34] and assessing the glands [35] from gland images taken with Keratograph 5 (OCULUS) with an illuminating wavelength of 840 nm have appeared in the literature. In Table 1 we have summarized the illuminating wavelength used in some of the studies mentioned above. From this table we can infer that non-contact meibography is usually performed with an illuminating wavelength close to 840 nm. It seems clear that meibography is a powerful and simple technique to assess the condition of the Meibomian glands but the images usually lack good contrast and may be difficult to analyze. Because of this, optimization of the imaging system should be sought. It is also important to determine whether it is possible to use red light instead of infrared one, as this would make possible the usage of cheaper and simpler setups to take the images and grading the Meibomian glands, either manual or automatically. The present study shows the relationship between the contrast of the Meibomian gland image and the visible and infrared illuminating wavelengths employed in the imaging process. It also gives some clues of which wavelengths gives higher contrast of the Meibomian gland images. A hypothesis to be tested in subsequent works is whether this contrast can provide the practitioner with an extra objective parameter to improve over the current graduated scales in the evaluation of the pathology of the glands. Material and methods The present study was conducted in accordance with the Helsinki Declaration guidelines and was approved by the Independent Research Ethics Committee of the Clinic Hospital San Carlos of Madrid (favorable judgment C.P.-C.I. 15/518-E.). The study was carried out with ten subjects who were provided with informed consent that they accepted. Only the right eye of each subject was measured. In order to avoid experimental complexity, we chose subjects with healthy eyes without any problems on the ocular surface. They were not contact lens users. The experimental set up is shown in Fig. 1. We have imaged Meibomian glands of the everted lower right lid of each subject using a digital CCD camera iNET-GmbH model NS1130BU fitted with a Navitar lens NMV-6WA with focal length 25 mm, and maximum aperture F/1.4. Distance from the everted lid to the first rim of the lens was 10 cm, and, at this object distance, magnification yielded 74.0 ± 0.7 pixels/mm. The measurements were made in a refraction room with general lightning turned off, and illuminating the eye with the monochromatic light coming from a double grating monochromator PTI SID101 to which white light from a 75 Watts xenon lamp PTI A-1010B was injected. Monochromator slits were adjusted so that the FWHM (full width a half height) of the output light was 2 nm, as measured with a fiber spectrometer AvaSpec ULS2048-USB2-VA-50, (Avantes, Apledoorn, The Netherlands.) The same spectrometer was used to calibrate the manual dial of the monochromator. Maximum wavelength error in the measured spectral region was smaller than 0.3 nm. The light from the monochromator output slit was collimated with a cylindrical lens and, by means of a plane mirror, directed to the subject's eye at an angle of 20° with respect to the axis of the imaging optics. Images of the everted lid were taken in steps of 25 nm from 600 to 1050 nm so that images at 19 different wavelengths were taken. 
To prevent ambient light from modifying the contrast of the images, all of them were taken in a totally darkened laboratory. The images were recorded as 8-bit TIFF files and processed with the Matlab package. Figure 2 shows the whole image as captured by the imaging system for λ = 625 nm. In this image, we can see the field of view and the illuminated region. In each recorded image, we selected a rectangular region of interest (ROI) with size 300 × 90 pixels for further processing (see Fig. 2). The region contained about 4 Meibomian glands with well-resolved acini (the typical linear size of an acinus being 20 to 40 pixels). The same ROI was used in all the images. The location of the ROI was manually selected by the operator, who centered the ROI in the region where the Meibomian glands presented the greatest contrast. Although it is possible to implement an automatic segmentation algorithm for selecting the zone with Meibomian glands, the extension of this zone varies from eye to eye. This is why we chose to manually select an ROI with the same extension for all the subjects. The subject was allowed to blink from image to image, and for each acquisition, the experimenter had to tune the pressure on the everted lid to avoid specular reflections from wet areas inside the ROI. Once the experimental setup was tested and ready, and all the calibrations made, the whole set of measurements (around 18 measurements, corresponding to the different wavelengths employed) was completed in a single session lasting around 60 min, so the measuring time for a single wavelength was around 1-2 minutes. In Fig. 3 we show the ROIs obtained for the eye of the same subject (P0) illuminated with different wavelengths. Notice the differences in contrast and detail between these images. In Fig. 3(a) we have indicated the different anatomical features present: the gland border, the inter-gland area, and one acinus.
All the gland images were processed in order to obtain the intra-gland, or in-gland, contrast, and the inter-gland contrast. The in-gland contrast is the average of the Michelson contrast measured along paths that join different acini belonging to the same gland. The Michelson contrast is defined as
C = (Imax − Imin) / (Imax + Imin),     (1)
where Imax and Imin are, respectively, the maximum and minimum intensities measured along a given path. Similarly, the inter-gland contrast is the average of the Michelson contrast corresponding to pairs formed by the maximum intensity in the acini and the minimum intensity at the regions between different glands. In all cases, the input image of the algorithm was the unprocessed ROI, like the ones shown in Fig. 3. We will now describe the main steps of the image processing algorithm employed to determine the in-gland and inter-gland contrasts.
1. The first step is the computation of the background by filtering the ROI with a 60 × 60 median filter. The median filter acts as a low-pass filter by removing the fine details of the image depending on the filter's window size; the window size determines the cutoff frequency of the equivalent low-pass filter.
2. The next step is gland detection and labeling by thresholding the input image. The threshold criterion is that a point belongs to a gland if its intensity is greater than that of the background image obtained in the preceding step. The resulting binary image, or gland mask, is labeled to identify each gland separately.
In Fig. 4(b) we show the image of the glands determined for subject P0 and wavelength 625 nm as detected by the algorithm. To assess the accuracy of the gland detection, we computed the total area (in square pixels) of the glands detected by the algorithm and the area of the glands segmented manually, shown in Fig. 4(a). For the automatically detected glands the total area was 16028 square pixels, while the manually segmented gland mask occupied an area of 17486 square pixels, a difference of around 8% between them. The contours of the glands detected automatically are superposed on the ones determined manually in Fig. 4(c). The process of gland detection can be represented as
M(x, y) = 1 if I(x, y) > B(x, y), and M(x, y) = 0 otherwise,
where I(x, y) is the intensity of the unprocessed ROI, B(x, y) is the background image, and M(x, y) is the binary gland mask. By applying the labeling operator to the gland mask we get the label matrix as
L(x, y) = L[M(x, y)],
where L[·] is the labeling operator. This operator assigns an integer value (label) to each isolated region of the gland mask that presents 8-pixel connectivity. As a result of the labeling operator, we get a matrix of integers, known as the label matrix, L(x, y), which indicates to which gland, if any, the pixel (x, y) belongs.
3. For each labeled gland, we locate the local maxima of intensity, which correspond to the acini.
4. Next, we use imregionalmin to obtain the coordinates of the local minima for each gland. These points will usually be located at the border between two acini.
5. Then, we compute the in-gland contrast. To do so, we work with the unprocessed image and we compute the maximum and minimum intensity corresponding to each acinus and each inter-acini point located in steps 3 and 4. Then we form pairs of acini/inter-acini points with the following rules: (1) the two points should belong to the same gland and (2) the distance between them should be lower than the gland width. The gland width is calculated for each detected gland using the Matlab function regionprops, which allows computing the property 'MinorAxisLength'; for an elongated object like the glands, this corresponds to the width of the ellipse that has the same size and orientation as the object (see further details in reference [36]). Then, we compute the Michelson contrast for each valid pair, formed by the j-th local maximum and the k-th local minimum, and we average to get the final in-gland contrast for the whole image. Similarly, we get the standard deviation of the in-gland contrast (see Fig. 6).
6. Once we get the in-gland contrast, we locate the local minima of intensity in the regions between glands using the high-contrast images. Then, we form pairs with these points and the acini (local intensity maxima of the glands) in order to compute the inter-gland contrast. As in the previous step, we validate a pair of points only if the distance between them is smaller than the average width of all the glands detected.
7. Finally, we get the intensities corresponding to the validated acini/inter-gland point pairs, and we compute the Michelson contrast for each pair using Eq. (1). A single inter-gland contrast figure is obtained by averaging.
In Fig. 5(a) the valid acini/inter-acini pairs (according to the criteria given at step 5) are joined by a blue line; the in-gland contrast has been computed from these points. In Fig. 5(b) we display the acini and inter-gland points detected for the same ROI, also showing the valid pairs from which the inter-gland contrast has been obtained.
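Steps 1-7 can be condensed into a short script. The following is a minimal sketch in Python (the paper's processing was done in Matlab); the SciPy calls, the eigenvalue-based stand-in for regionprops' 'MinorAxisLength', and the synthetic ROI are illustrative assumptions, not the authors' code.

```python
# Minimal sketch of the in-gland contrast pipeline (steps 1-7), assuming an 8-bit grayscale ROI.
import numpy as np
from scipy import ndimage

def michelson(i_max, i_min):
    den = float(i_max) + float(i_min)
    return 0.0 if den == 0 else (float(i_max) - float(i_min)) / den

def in_gland_contrast(roi, window=60):
    roi = roi.astype(float)
    background = ndimage.median_filter(roi, size=window)        # step 1: background estimation
    mask = roi > background                                      # step 2: gland mask (above background)
    labels, n = ndimage.label(mask, structure=np.ones((3, 3)))   # 8-connectivity labeling
    contrasts = []
    for g in range(1, n + 1):
        gland = labels == g
        if gland.sum() < 4:
            continue
        # rough stand-in for regionprops 'MinorAxisLength': minor axis of the equivalent ellipse
        ys, xs = np.nonzero(gland)
        cov = np.cov(np.stack([ys, xs]).astype(float))
        width = 4.0 * np.sqrt(max(np.min(np.linalg.eigvalsh(cov)), 0.0))
        # steps 3-4: local maxima (acini) and minima (inter-acini points) inside the gland
        maxima = (roi == ndimage.maximum_filter(roi, size=3)) & gland
        minima = (roi == ndimage.minimum_filter(roi, size=3)) & gland
        # step 5: pair points of the same gland that are closer than the gland width
        for a in np.argwhere(maxima):
            for b in np.argwhere(minima):
                if np.linalg.norm(a - b) < width:
                    contrasts.append(michelson(roi[tuple(a)], roi[tuple(b)]))
    return np.mean(contrasts), np.std(contrasts)

roi = (np.random.default_rng(0).random((90, 300)) * 255).astype(np.uint8)  # stand-in for a real ROI
print(in_gland_contrast(roi))
```

The inter-gland contrast of steps 6-7 follows the same pattern, pairing acini with local minima found between different labeled glands instead of within one gland.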
Results and discussion
We will first present the detailed results obtained for subject P0 and then the averaged values of contrast for the whole sample. In Fig. 6 we have represented the values of in-gland and inter-gland contrast obtained for three different wavelengths as a box plot. In this representation, for a given set of contrast values, the red horizontal line stands for the median of the data, the upper and lower sides of the blue box are the values of contrast which correspond to the 75th and 25th percentiles, respectively, the dashed segment is the range of contrast, and, finally, the red crosses are data classified as outliers.
Fig. 6. Box plot representing the median (red line), the 25th and 75th percentiles (bottom and top of the blue box), the range, the range of data considered as valid (dotted black interval) and the outliers (red crosses) of both in-gland contrast (left) and inter-gland contrast (right). These plots have been obtained for all the valid pairs detected from the images of the lid of subject P0 shown in Fig. 3, corresponding to the wavelengths of a) 625, b) 700 and c) 900 nm.
Although the plots of Fig. 6 show a high variability for both contrasts, due to the relatively wide range intervals, we can see that the box plot for the inter-gland contrast is always higher than the box plot for the in-gland contrast, regardless of the wavelength, indicating a difference between these contrasts. This agrees with the subjective impression that we got when observing the images of the Meibomian glands from the same subject at the same wavelengths (see Fig. 3), as the contrast between glands is always higher than the contrast within a gland.
Fig. 7. (a) In-gland (blue) and inter-gland (red) contrast as a function of the wavelength. The length of the error bars is twice the value of the standard deviation. The data correspond to subject P0. (b) Average in-gland (blue) and inter-gland (red) contrast against the wavelength for the whole population of our study. The error bar length represents twice the standard deviation obtained among individuals.
Subjective evaluation of the ROIs in Fig. 3 suggests a larger contrast for wavelength λ = 900 nm. The numerical results confirm this point, as the average values of the inter-gland contrast are 0.039 for 625 nm, 0.036 for 700 nm, and 0.048 for 900 nm. On the other hand, the average in-gland contrasts for these same wavelengths are 0.014, 0.015, and 0.017, respectively. However, it is worth noticing that the differences between wavelengths are quite small. Figure 7(a) shows the averaged inter-gland and in-gland contrasts for subject P0 as a function of the wavelength of the incident light. The difference between in-gland and inter-gland contrasts observed for the three wavelengths presented in Fig. 7(a) remains approximately the same for all the wavelengths, with values around 0.05 for the inter-gland contrast and around 0.02 for the in-gland contrast. In Fig. 7(b) we present a plot of the in-gland and inter-gland contrast, averaged over all 10 subjects, as a function of wavelength. Similar values of inter-gland and in-gland contrast are observed in the subject average, so the difference between them is still about 0.03.
Considering that all of the subjects of our study presented healthy eyes and were not users of contact lenses, it seems that the basal values of the inter-gland and in-gland contrast for a healthy individual are 0.05 and 0.02, respectively. Apparently, from Fig. 7(b), there is some correlation between the positions of the maxima and minima of the in-gland and inter-gland contrast curves. According to these curves, there are three optimal wavelengths for Meibomian gland observation: the edges of the explored spectral range (600 and 950 nm) and a small peak centered at the wavelength of 725 nm. Notice that we have restricted the wavelength interval from 600 to 950 nm, as we were not able to obtain good images (images with enough contrast to distinguish the Meibomian glands) for some subjects at wavelengths longer than 950 nm. In order to test whether there is a dependence of the average in- and inter-gland contrasts on the wavelength, we fitted the data to different functions. However, due to the high variability of the data, we did not find a reliable fit. Therefore, we have used hypothesis tests in order to determine whether the measured contrast is higher for the extreme wavelengths of the spectral region studied, particularly for the red end. This is a relevant question because the usage of a red wavelength permits simpler and cheaper experimental setups, as there is no need for IR sources. In particular, it would eventually allow the usage of mobile devices such as tablets, cell phones, etc. to take the images. Therefore, we have performed a hypothesis test in order to check whether the mean value of the contrast measured for a low wavelength (600 nm) is greater than the mean value for an intermediate wavelength (775 nm). We have used Welch's unequal variances t-test, which compares two normally distributed populations with different means and variances. Using this test, we got a positive result for the inter-gland contrast, as the null hypothesis (that the mean values of contrast for the wavelengths of 600 and 775 nm are equal) was rejected at a significance level of 0.05. For the in-gland contrast, Welch's t-test was not applicable, as the previous normality check of the samples (using the Lilliefors test with a significance level of 0.01) failed for the two wavelengths studied. Therefore, we performed a non-parametric Wilcoxon-Mann-Whitney test of the null hypothesis that the median values of the in-gland contrast for the wavelengths of 600 and 775 nm are equal. In this case the null hypothesis was also rejected at a significance level of 5%. The significance of these results is that it is possible to get more contrast when illuminating the eyelid with a wavelength close to 600 nm.
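For illustration, the two comparisons can be run with standard library routines; the per-subject contrast values below are hypothetical stand-ins, not the study's data.

```python
# Minimal sketch of the two statistical comparisons described above, on hypothetical
# per-subject contrast values at 600 nm and 775 nm (not the study's measurements).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
inter_600 = rng.normal(0.06, 0.01, size=10)   # hypothetical inter-gland contrasts, 10 subjects
inter_775 = rng.normal(0.04, 0.01, size=10)
in_600 = rng.normal(0.025, 0.008, size=10)    # hypothetical in-gland contrasts
in_775 = rng.normal(0.015, 0.008, size=10)

# Welch's unequal-variances t-test (used when both samples pass a normality check)
t_stat, p_welch = stats.ttest_ind(inter_600, inter_775, equal_var=False)
print(f"Welch t-test, inter-gland 600 vs 775 nm: p = {p_welch:.4f}")

# Non-parametric Wilcoxon-Mann-Whitney test (used when normality cannot be assumed)
u_stat, p_mw = stats.mannwhitneyu(in_600, in_775, alternative="two-sided")
print(f"Mann-Whitney U test, in-gland 600 vs 775 nm: p = {p_mw:.4f}")
```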
Conclusions
In this work, we have carried out an experiment to determine the contrast of Meibomian glands from images of the inner side of the eyelids captured under illumination at different wavelengths. To do so, we have employed an image processing algorithm based on the detection of local maxima and minima of intensity within the glands and within the inter-gland regions. We have measured ten subjects with healthy eyes in order to determine the average change of the contrast with the wavelength. The local contrast presents high variability for a given wavelength and a given individual. However, the algorithm employed evaluates the contrast at a large number of locations within the ROI, yielding an average contrast value, albeit still with a considerable variance.
Regardless of the wavelength, there is always a significant difference between the inter- and in-gland contrasts, the former being systematically greater than the latter. The variability of the averaged contrasts across individuals is smaller than the variability of the local contrasts for any given individual. As future work, an important effort should be made to improve the quality of the data obtained, with particular emphasis on the control and reduction of variance. Finally, regarding the dependence of contrast on wavelength, the edges of the scanned spectral range are preferable for Meibomian gland inspection, as they deliver 30 to 40% more contrast than the remaining red and infrared wavelengths. Although we have not been able to find a model that properly fits the average contrast against wavelength, we have found, with statistical significance, that the average inter-gland and in-gland contrasts at a low wavelength (600 nm) are greater than those found at an intermediate wavelength (775 nm). It seems that the contrast curves could keep growing below 600 nm and above 950 nm, but going beyond these thresholds is not advisable. For visible light, the contrast of Meibomian glands rapidly deteriorates for wavelengths shorter than 600 nm; besides, other structures, such as capillaries, will appear, disrupting the images of the glands. For wavelengths longer than 950 nm the contrast seems to keep growing, but the efficiency of standard CCD or CMOS sensors becomes too low, with the subsequent decrease of the signal-to-noise ratio. The contrast of the images of Meibomian glands associated with healthy eyes is precisely quantified here for the first time. The results obtained open the possibility of using the contrast of Meibomian gland images as an indicator, or even a quantifier, of Meibomian gland dysfunction, although this will require measuring a large number of subjects with and without Meibomian gland dysfunction, which is outside the scope of the present work.
5,261.2
2018-11-13T00:00:00.000
[ "Physics" ]
Wavelength-Selective Phase-Shifting Digital Holography : Color Three-Dimensional Imaging Ability in Relation to Bit Depth of Wavelength-Multiplexed Holograms † The quality of reconstructed images in relation to the bit depth of holograms formed by wavelength-selective phase-shifting digital holography was investigated. Wavelength-selective phase-shifting digital holography is a technique to obtain multiwavelength three-dimensional (3D) images with a full space-bandwidth product of an image sensor from wavelength-multiplexed phase-shifted holograms and has been proposed since 2013. The bit resolution required to obtain a multiwavelength holographic image was quantitatively and experimentally evaluated, and the relationship between wavelength resolution and dynamic range of an image sensor was numerically simulated. The results indicate that two-bit resolution per wavelength is required to conduct color 3D imaging. Introduction Holography [1,2] is a technique utilizing interference of light to record a complex amplitude distribution of an object wave.The recorded information is called a "hologram".A three-dimensional (3D) image is reconstructed from the hologram by utilizing the diffraction of light.Holography can be used to record and reconstruct a 3D image of an object or a phase distribution of a wave without having to use multiple cameras or an array of lenses.Furthermore, 3D motion-picture images of any ultrafast physical phenomenon (such as light pulse propagation in 3D space) can be recorded and reconstructed with a single-shot exposure [3,4].Digital holography (DH) [5][6][7][8][9] is used to record a digital hologram that contains an object wave and reconstructs both the 3D and quantitative phase images of an object by using a computer.DH can potentially be applied to the fields of not only ultrafast optical 3D imaging [10] but also microscopy [6,11,12], particles and flow measurements [13], quantitative phase imaging [14], lensless 3D imaging with incoherent light [15], multidimensional bio-imaging [16], multiwavelength 3D imaging [17], depth-resolved 3D imaging [18], simultaneous recording of multiple 3D images [19], and encryption [20]. 
Since a full space-bandwidth product of an image sensor is available, phase-shifting DH [21][22][23] is one way to capture an object wave.Using phase-shifting DH with multiple wavelengths, which is termed color/multiwavelength phase-shifting DH, 3D surface-shape measurements with multiwavelength phase unwrapping [24] and lensless color 3D image sensing [25,26] have been reported.DH using red-, green-, and blue-wavelengths is usually called "RGB digital holography", "three-wavelength DH", and "multiwavelength DH".In this paper, we call DH with multiple wavelengths "two/three-wavelength DH" or "multiwavelength DH" because DH using two-greenand one-blue-wavelengths was simulated.In regard to multiwavelength phase-shifting DH, two types of representative implementations have been reported: temporal division [24] and space-division multiplexing [25,26] of multiple wavelengths.In the case of temporal division, wavelength information is sequentially recorded by changing the wavelengths of light to form a hologram.Mechanical shutters or operations to turn the light sources on and off are required for selecting the recorded wavelength.Three phase-shifted holograms are required at a wavelength [27] and nine holograms are needed for three-wavelength DH.Therefore, temporal division requires much time for multiwavelength 3D imaging.In the case of space-division multiplexing, red-, green-, and blue-wavelengths are simultaneously recorded by using a color image sensor with a Bayer color-filter array.Three exposures are required to obtain a multicolor holographic image.However, both recordable wavelength bandwidth and space-bandwidth product are determined by the array and therefore spatial information and wavelength selectivity is partially sacrificed.In the case of space-division multiplexing, crosstalk between multiwavelength object waves occurs when the wavelength selectivity of the array is insufficient [28].The field of view (FOV) and spatial resolution of the DH system are decreased by the array due to the sacrifice of the space-bandwidth product of a hologram at each wavelength.The FOV is decreased by 75% compared to phase-shifting DH with a single wavelength. 
In the case of color/multiwavelength digital holography, not only temporal-division [24] and space-division multiplexing [25,26], which are generally adopted for multiwavelength imaging in an imaging system, but also spatial division [27][28][29], temporal frequency-division multiplexing [30][31][32][33], and spatial frequency-division multiplexing [34][35][36] can be merged to record multiple wavelengths.In the case of general imaging systems, wavelength information is temporally or spatially separated.Spatial division is being actively researched because neither temporal nor spatial resolutions are sacrificed.However, in the case of space division, alignment of multiple image sensors is a problem.Numerical correction is reported to solve this problem effectively [29].On the other hand, holographic multiplexing makes it possible to record multiwavelength/color information by using a monochrome image sensor and to reconstruct it from wavelength-multiplexed image(s).In the 1960s, Lohmann presented the concept of recording a multidimensional image by holographic multiplexing [37], which is based on spatial frequency-division multiplexing [34][35][36].This multiplexing enables single-shot multidimensional holographic sensing and imaging; however, it sacrifices the spatial bandwidth available for recording each object wave at each wavelength as the number of wavelengths is increased.As another means of holographic multiplexing, temporal frequency-division multiplexing has been researched, and it provides a wide spatial bandwidth regardless of the number of wavelengths [29][30][31].As for temporal frequency-division multiplexing technique, Fourier and inverse Fourier transforms are calculated for each pixel to separate wavelength information.To obtain a color 3D image, however, many wavelength-multiplexed images and an image sensor with a high frame rate are needed. Since 2013, we have been proposing an interferometric technique which selectively extracts wavelength information by using wavelength-multiplexed phase-shifted interferograms to measure multiwavelength object waves without using a color-filter array [9,[38][39][40][41][42][43].As for the proposed interferometry, multiwavelength information is multiplexed both on the space and in the spatial-frequency domain, and it is then separated in the polar coordinate plane by using wavelength-dependent phase shifts. 
Hereafter, multiwavelength DH based on the proposed interferometry is termed wavelength-selective phase-shifting DH (WSPS-DH).Applying the WSPS-DH provides a full space-bandwidth product of an image sensor at each wavelength regardless of the number of wavelengths measured.Moreover, operations to change the wavelengths of light to form a hologram are not required.When the number of wavelengths is N, only 2N+1 wavelength-multiplexed images are required for multiwavelength 3D image sensing [9,[38][39][40], while 3N holograms are recorded in the temporal division.Recordable wavelength bandwidth is determined by the spectral sensitivity of the monochrome image sensor.Therefore, both wavelength and spatial bandwidth of the WSPS-DH are greater than those of the space-division multiplexing with a color image sensor.After the initially reported WSPS-interferometry was reported, WSPS-DH utilizing 2N wavelength-multiplexed images was proposed [41].In the primitive scheme [38][39][40][41], phase ambiguity of 2π was utilized to selectively extract multiwavelength object waves, and then a technique employing arbitrary phase shifts for rigorously retrieving object waves at multiple wavelengths by using 2N+1 holograms was proposed [9,42,43].Although Doppler phase-shifting color DH [31,32] has also been proposed as another holographic multiplexing technique with a full space-bandwidth product, it requires the recording of a large number of images.WSPH-DH requires only 2N holograms at least [41] by employing two-step phase-shifting interferometry [44][45][46][47][48][49], while 512 holograms are required for Doppler three-wavelength phase-shifting DH [32].Therefore, WSPH-DH accelerates measurement speed by more than 80 times when recording three wavelengths [39,41].The proposed DH has the potential to obtain a multiwavelength holographic 3D image with a small number of recordings without any color absorption.It thus enables multimodal cell imaging with low light intensity when applied to biological microscopy.Moreover, in principle, it alleviates light damage to living cells during multidimensional holographic imaging. However, it is necessary to consider the influence of bit depth of the recorded holograms on the quality of the reconstructed image.This is because the proposed DH multiplexes holograms at multiple wavelengths on a monochrome image sensor, and available bit depth per wavelength is sacrificed.Furthermore, in the case of the proposed DH, it is worth evaluating whether the dynamic range of holograms is related to wavelength resolution because wavelength information is selectively extracted by using the wavelength dependency of the intensity changes induced by the phase shifts of holograms. In this paper, we investigate image quality in relation to the dynamic range of holograms formed by wavelength-selective phase-shifting DH.Image quality and wavelength resolution in relation to dynamic range are analyzed with numerically and experimentally obtained holograms. 
Wavelength-Selective Phase-Shifting Digital Holography (WSPS-DH)
A schematic of WSPS-DH is shown in Figure 1. WSPS-DH is enabled by the wavelength dependency of the intensity change induced by wavelength-dependent phase shifts of interference light. By introducing wavelength-dependent phase shifts to the interference light, wavelength information is separated in the polar coordinate plane. A phase shifter such as a mirror with a piezo actuator, a liquid crystal, a birefringent material, a spatial light modulator, an acousto-optic modulator, or an electro-optic modulator is used to generate the phase shifts. When multiwavelength information is recorded, light at the respective wavelengths is not absorbed by a filter, and wavelength-multiplexed phase-shifted holograms are sequentially obtained by changing the phases of the interference fringes. Object waves at multiple wavelengths are separately obtained from the recorded wavelength-multiplexed images when phase-shifting interferometry selectively extracts the wavelength information [9,[38][39][40][41][42][43]. Diffraction integrals are applied to the extracted object waves, and a multiwavelength holographic 3D image is then reconstructed. Since no light at any wavelength is absorbed by a filter, WSPS-DH is expected to achieve high light-use efficiency. When the WSPS technique is compared to temporal frequency-division multiplexing, the number of recordings can be reduced, and the measurement speed can be increased. Up to now, 2π ambiguity of phase [38][39][40][41] or arbitrary symmetric phase shifts [9,42,43] have been utilized to extract each object wave rigorously by solving systems of equations. Combining 2π phase ambiguity and arbitrary symmetric phase shifts enables multiwavelength holographic 3D imaging with only 2N wavelength-multiplexed holograms and, in total, less than 1,000 nm of movement of a mirror with a piezo actuator [50,51].
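As a rough illustration of why 2N+1 wavelength-multiplexed recordings suffice, the sketch below simulates the per-pixel intensity model for N = 3 wavelengths and recovers the object waves by solving a linear system. The mirror displacements, reference amplitude, and least-squares inversion are illustrative assumptions chosen to keep the system well conditioned; they are not the phase-shift schedule or the extraction algorithm of Refs. [9,38-43].

```python
# Schematic illustration of the WSPS-DH measurement model, not the authors' algorithm.
# Per pixel, each recording satisfies
#   I_m = bias + sum_k 2|R| [ Re(O_k) cos(alpha_km) + Im(O_k) sin(alpha_km) ],
# so at least 2N+1 recordings with wavelength-dependent shifts alpha_km allow the
# object waves O_k to be recovered. A piezo-mounted mirror is assumed to add
# alpha = 4*pi*d/lambda of phase for a displacement d.
import numpy as np

wavelengths = np.array([640e-9, 532e-9, 488e-9])     # assumed laser wavelengths (m)
displacements = np.arange(9) * 100e-9                 # illustrative mirror positions (m), >= 2N+1
N, M = len(wavelengths), len(displacements)
assert M >= 2 * N + 1

rng = np.random.default_rng(1)
n_pix = 1000
obj = rng.normal(size=(N, n_pix)) + 1j * rng.normal(size=(N, n_pix))  # complex object waves O_k
ref_amp = 2.0                                                          # plane reference wave amplitude

# Forward model: simulate the wavelength-multiplexed phase-shifted intensities
alpha = 4 * np.pi * displacements[:, None] / wavelengths[None, :]      # (M, N) phase shifts
bias = (np.abs(obj) ** 2).sum(axis=0) + N * ref_amp ** 2               # shift-independent term
intensity = bias + 2 * ref_amp * np.real(np.exp(-1j * alpha) @ obj)    # (M, n_pix) holograms

# Inversion: per-pixel unknowns x = [bias, Re O_1, Im O_1, ..., Re O_N, Im O_N]
A = np.ones((M, 1 + 2 * N))
A[:, 1::2] = 2 * ref_amp * np.cos(alpha)                               # multiplies Re(O_k)
A[:, 2::2] = 2 * ref_amp * np.sin(alpha)                               # multiplies Im(O_k)
x, *_ = np.linalg.lstsq(A, intensity, rcond=None)
recovered = x[1::2] + 1j * x[2::2]

print("max reconstruction error:", np.abs(recovered - obj).max())
```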
Experimental Results
The quality of the reconstructed image in relation to the bit depth of wavelength-multiplexed holograms was investigated by using experimentally obtained holograms [40]. The constructed optical system, the method of generating phase shifts, the phase shift α, and the specifications of the lasers used are described in Reference [40]. Two lasers with oscillation wavelengths of 640 nm and 473 nm, respectively, were set to record five two-wavelength-multiplexed holograms. A monochrome complementary metal-oxide semiconductor (CMOS) image sensor was used to record the holograms. The sensor has 12 bits, 2592 × 1944 pixels, and a pixel pitch of 2.2 μm. Two overhead projector (OHP) transparency sheets were set as a color 3D object. The logo of the International Year of Light and the characters "2015" were drawn on the sheets, and blue- and red-color films were attached to the logo and characters, respectively. A red "2015" sheet and a blue logo sheet were set at different depths. Five wavelength-multiplexed holograms were obtained by utilizing the 2π ambiguity of the phase, and a color 3D image was reconstructed with the algorithm described in Reference [40]. Holograms with less than 8-bit resolution were generated numerically from the recorded holograms. Object images were reconstructed by using compressed holograms in which the bit depth was changed from 1 to 7 bits. Then, the images obtained from holograms without compression were regarded as the true values, and the cross-correlation coefficient (CC) and root-mean-square error (RMSE) of the reconstructed images were calculated.
Color object images obtained from the compressed holograms are shown in Figure 2. The images were reconstructed by using two-wavelength-multiplexed holograms with a resolution of more than 2 bits. As the bit resolution was decreased, the reconstructed images degraded gradually. However, a clear color object image was reconstructed even when the number of bits was 4.
Furthermore, a two-wavelength object image was reconstructed from holograms with 3-bit depth resolution. To investigate the quality of the reconstructed images quantitatively, the CC and RMSE of the intensity images at the respective wavelengths were calculated. Graphs of CC and RMSE are plotted in Figure 3. The maximum and minimum intensity values of the images were set as 255 and 0, respectively. A CC of nearly 0.8 and an RMSE of 1/10 of the maximum value were obtained when the bit depth was 5. From the quantitative evaluations and Figure 2, it can be concluded that quite similar images are reconstructed even when the image sensor has a resolution of less than 8 bits.
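The bit-depth reduction and the two quality metrics can be illustrated with a few lines of code; the array below is a synthetic stand-in for a recorded hologram or reconstructed intensity image, not the experimental data.

```python
# Minimal sketch of the bit-depth reduction and the CC/RMSE quality metrics used above,
# applied to a synthetic array rather than the recorded holograms.
import numpy as np

def quantize(image, bits):
    """Requantize a float array in [0, 1] to 2**bits levels."""
    levels = 2 ** bits - 1
    return np.round(image * levels) / levels

def cross_correlation_coefficient(a, b):
    a, b = a - a.mean(), b - b.mean()
    return float((a * b).sum() / np.sqrt((a ** 2).sum() * (b ** 2).sum()))

def rmse(a, b):
    return float(np.sqrt(np.mean((a - b) ** 2)))

rng = np.random.default_rng(0)
reference = rng.random((256, 256))               # stand-in for a 12-bit image scaled to [0, 1]
for bits in (2, 4, 6, 8):
    degraded = quantize(reference, bits)
    print(bits, "bits: CC =", round(cross_correlation_coefficient(reference, degraded), 4),
          " RMSE =", round(rmse(reference, degraded), 4))
```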
Numerical Simulations To investigate image quality and validate the experimental evaluations, in the case that bit depth of the image sensor was from 1 bit to 16 bits, image quality of reconstructed images was numerically simulated for three-wavelength WSPS-DH.A random pattern was set as the phase distribution of the object wave because scattering object waves were assumed.For color-intensity images, a photographic image of a flower and grass was prepared.For wavelengths of red-, green-, and blue-color light sources, 640 nm, 532 nm, and 488 nm were assumed.In the simulation, the distance between the object and image sensor was set to 150 mm, pixel pitch to 2.2 µm, and the number of pixels of the image sensor to 512 × 512.It was assumed that phase shifts were generated by a mirror with a piezo actuator and the mirror was moved 0 nm, 61 nm, ±244 nm, and ±488 nm sequentially.The intensity ratio between object and reference waves was 1:4 at each wavelength.Resolution of the image sensor was changed from 1 to 16 bits.Six wavelength-multiplexed phase-shifted holograms were obtained numerically and three-wavelength object waves were reconstructed by WSPS-DH [50,51].Reconstructed images in the numerical simulation are shown in Figure 4.In the same manner as revealed by the experimental results, as bit resolution was decreased, the reconstructed images degraded gradually.However, a clear multicolor object image was reconstructed even when the number of bits was 6.As shown in Figure 5, CCs of the reconstructed amplitude images were more than 0.8 when bit depth of the image sensor was decreased to 6 bits.Furthermore, although the color of the reconstructed image differed from that of the object, a three-wavelength object image was reconstructed even when the image sensor had a 4-bit depth resolution.On the other hand, it was found that RMSE and CC of the reconstructed phase distribution were worse than those of the amplitude images.For phase measurement and 3D shape measurement with multiwavelength phase unwrapping, an image sensor with high dynamic range is required.Using an image sensor with resolution of more than 9 bits will result in performance of less than RMSE of λ/20 [rad] in phase.Analysis for smooth phase distribution is a future work. 
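For reference, the wavelength dependence of the phase shifts assumed in this simulation can be computed directly. A reflective piezo-mounted mirror is assumed here to add 4πd/λ of phase for a displacement d; this doubling of the path on reflection is our assumption about the geometry, not a detail stated above.

```python
# Phase shifts produced by the listed mirror displacements at each simulation wavelength,
# assuming a reflective piezo-mounted mirror adds 4*pi*d/lambda of phase per displacement d.
import numpy as np

wavelengths_nm = [640, 532, 488]
displacements_nm = [0, 61, 244, -244, 488, -488]

for lam in wavelengths_nm:
    shifts = [4 * np.pi * d / lam for d in displacements_nm]
    print(f"{lam} nm:", ", ".join(f"{s / np.pi:+.2f} pi" for s in shifts))
```

The same displacement produces a different phase shift at each wavelength, which is the wavelength dependency that WSPS-DH exploits to separate the multiplexed object waves.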
The experimental and numerical results indicate that a resolution of at least 2 bits per wavelength in each hologram is required to obtain a multiwavelength 3D-object intensity image, and that a color 3D image with a small color shift can be reconstructed when the sensor has more than 2-bit resolution per wavelength. Measurement error is reduced as the bit depth is increased, in the same manner as in an ordinary imaging system; however, a faithful object intensity image can be reconstructed at a resolution of much less than 8 bits. The numerical results also show that using a low-bit image sensor causes a large error in phase measurement; therefore, an image sensor with more than 9 bits is desirable in the case of 3D shape measurement with phase information at multiple wavelengths.
Numerical Analysis of the Wavelength Resolution Against Dynamic Range of Holograms The relation of wavelength resolution to the bit resolution of holograms was numerically investigated. The optical setup was assumed to be based on three-wavelength phase-shifting DH using a monochrome image sensor and a mirror with a piezo actuator under the following conditions. Three-wavelength WSPS-DH with six wavelength-multiplexed holograms [50,51] was used, and the mirror was moved 0 nm, 61 nm, ±488 nm, and ±732 nm sequentially. A color image and a rough surface were set as the amplitude and phase distributions in 3D space, respectively. To investigate image quality quantitatively, the CC and RMSE of the reconstructed images were calculated. It was initially assumed that the three wavelengths of the light sources were λ1 = 640 nm, λ2 = 532 nm, and λ3 = 488 nm. After that, wavelength λ1 was set to 607, 589, 561, 556, 552, 546, 540, 534, 533, or 532.5 nm to investigate the wavelength resolution of WSPS-DH. The wavelengths were determined from commercially available continuous-wave (CW) lasers with long coherence lengths. The pixel pitch was 2.2 μm, and the number of pixels was 512 × 512. The wavelength resolution was investigated under three conditions: an image sensor having 8-, 12-, or 16-bit resolution. To investigate the wavelength resolution of WSPS-DH under ideal conditions, no random noise such as incoherent stray light or dark-current noise was added to the holograms. Reconstructed images obtained by this numerical simulation are shown in Figure 6, and graphs of the calculated RMSE and CC of the amplitude and phase images at λ2 are plotted in Figure 7. A high CC means that faithful images were obtained, and a low RMSE indicates that multiwavelength 3D image measurement was highly precise. The numerical results clarify that a high CC was obtained even when the wavelength difference was less than 10 nm and an 8-bit image sensor was used. However, with an 8-bit image sensor it was difficult to observe an object image clearly when the wavelength difference was within 2 nm. The difference between the phase shifts added at neighboring wavelengths was small, so the wavelength dependency of the intensity change induced by the wavelength-dependent phase shifts also became small; an 8-bit image sensor could not detect this weak wavelength dependency because of quantization. In contrast, when an image sensor with 12- or 16-bit resolution was used, the object waves were successfully reconstructed because the image sensor captured the weak wavelength dependency of the intensity changes caused by the phase shifts. An image sensor with 16 bits can record smaller intensity changes; therefore, a higher CC and lower RMSE were obtained. These results indicate that wavelength resolution can be improved by increasing the bit depth of an image sensor; notably, this feature is characteristic of WSPS-DH. Thus, a guideline for selecting an appropriate image sensor was confirmed.
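The wavelength dependency that this analysis relies on can be illustrated numerically. The short sketch below is not from the paper: it assumes a normal-incidence reflection geometry in which a mirror displacement d changes the optical path by 2d, so that the phase shift at wavelength λ is 4πd/λ, and it uses the displacement values quoted above. It prints the wavelength-dependent phase shifts and shows how the phase-shift difference between two wavelengths shrinks as the wavelengths approach each other, which is the quantity a low-bit sensor eventually fails to resolve.

```python
import numpy as np

# Mirror displacements for the six wavelength-multiplexed holograms
# (values quoted in the text; signs give the +/- positions of the sequence).
displacements_nm = np.array([0.0, 61.0, -488.0, 488.0, -732.0, 732.0])
wavelengths_nm = np.array([640.0, 532.0, 488.0])   # red, green, blue sources

# Assumption: normal-incidence reflection, so a displacement d changes the
# optical path by 2*d and the phase shift at wavelength lam is 4*pi*d / lam.
phase_shifts = 4.0 * np.pi * displacements_nm[:, None] / wavelengths_nm[None, :]

for d, row in zip(displacements_nm, phase_shifts):
    print(f"d = {d:7.1f} nm -> phase shifts (rad) at 640/532/488 nm:",
          np.round(row, 3))

# The closer two wavelengths are, the smaller the difference between their
# phase shifts for the same displacement, i.e. the smaller the
# wavelength-dependent intensity change the sensor must resolve.
d = 61.0
for lam1, lam2 in [(532.0, 534.0), (532.0, 540.0), (532.0, 561.0)]:
    delta = 4.0 * np.pi * d * abs(1.0 / lam1 - 1.0 / lam2)
    print(f"|dphi| between {lam1} nm and {lam2} nm for d = {d} nm: {delta:.4f} rad")
```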
Discussion The reason for the color shift in the numerical simulation is discussed here. In comparison with the experimental results, the results of the numerical simulation show that the color of the reconstructed images shifts remarkably when the bit depth of the wavelength-multiplexed holograms is low. Here, the value of the wavelength difference is focused on, and λ2 is set to 561 nm instead of 532 nm to adjust the wavelength differences among the three wavelengths in the numerical simulation described in Section 3.2. The numerical results obtained when setting the wavelengths to 488, 561, and 640 nm are shown in Figure 8. The images indicate that at least 2-bit resolution per hologram at each wavelength is required, and that a color 3D image with a small color shift can be obtained by using an image sensor that has more than 2-bit resolution per wavelength. However, the color shift clearly decreased when the difference between the neighboring wavelengths λ2 and λ3 was increased. This trend can be explained by the fact that, as described in Section 2, WSPS-DH is enabled by the wavelength dependency of the intensity change induced by wavelength-dependent phase shifts of the interference light. When the difference between λ2 and λ3 was small, the wavelength-dependent intensity change also became small. Selective extraction of the object waves at λ2 and λ3 became difficult as the bit resolution was decreased, because the wavelength-dependent intensity change was small and an image sensor with low bit resolution was not able to detect the change. As a result, the CC at λ1 was relatively high and the RMSE relatively low, so the object-intensity image at λ1 was clearly reconstructed in comparison to those at λ2 and λ3. The quantitative evaluations shown in Figure 5 support this finding, because the CC was higher and the RMSE lower at λ1. In contrast, in the simulation shown in Figure 8, the wavelength-dependent intensity change became large because the wavelength-dependent phase shifts increased, and therefore each of the three object waves was reconstructed from holograms with 4-bit resolution. As a result, the color was improved. From the experimental results presented in Section 3.1 and the numerical results presented in this section, it is clear that color 3D-image sensing can be achieved when using an image sensor with more than 2-bit resolution per wavelength.
Conclusions The quality of reconstructed images in relation to the dynamic range of holograms generated by WSPS-DH was investigated. Quantitative experimental and numerical results clarified the bit resolution required to obtain a multiwavelength holographic image and the relationship between the wavelength resolution and the dynamic range of an image sensor. The experimental and numerical results indicate that at least 2-bit resolution per hologram at each wavelength is required to obtain a multiwavelength 3D-object intensity image, and that a color 3D image with a smaller color shift can be reconstructed when the sensor has more than 2-bit resolution per wavelength. More than 3 bits per wavelength is sufficient for high-quality multiwavelength 3D imaging. Wavelength resolution can be improved by increasing the bit depth of an image sensor, and this finding is characteristic of WSPS-DH. WSPS-DH will perform multiwavelength 3D imaging at high speed for low-light-intensity events. Accordingly, it will contribute to multispectral 3D imaging with high light-use efficiency and high wavelength resolution by using a monochrome image sensor with high dynamic range (such as an electron-multiplying charge-coupled device (EM-CCD) camera) and an array of photomultipliers.
Figure 3. Quantitative evaluations of experimental results: (a) cross-correlation coefficient (CC) and (b) root-mean-square error (RMSE) of the reconstructed amplitude images.
Figure 5. Quantitative evaluations for numerical results: CC of reconstructed (a) amplitude and (b) phase images and RMSE of reconstructed (c) amplitude and (d) phase images.
Figure 6. Numerical results concerning wavelength resolution in relation to the bit depth of an image sensor. Reconstructed images when (a-d) 8-bit, (e-h) 12-bit, and (i-l) 16-bit image sensors were used. Wavelength differences are (a,e,i) 0.5 nm, (b,f,j) 1 nm, (c,g,k) 2 nm, and (d,h,l) 8 nm.
Figure 7. Quantitative evaluations of reconstructed images at λ2: CC of reconstructed (a) amplitude and (b) phase images, and RMSE of reconstructed (c) amplitude and (d) phase images.
7,840
2018-11-28T00:00:00.000
[ "Physics", "Engineering" ]
Dynamic Ocular Surface and Lacrimal Gland Changes Induced in Experimental Murine Dry Eye Dry eye disease can be a consequence of lacrimal gland insufficiency in Sjögren's Syndrome or increased tear film evaporation despite normal lacrimal gland function. To determine if there is a correlation between severity effects in these models and underlying pathophysiological responses, we compared the time-dependent changes in each of these parameters that occur during a 6-week period. Dry eye was induced in 6-week-old female C57BL/6 mice by exposing them to an Intelligently Controlled Environmental System (ICES). Sixty mice were housed in ICES for 1, 2, 4 and 6 weeks, respectively. Twelve were raised in a normal environment and received subcutaneous injections of scopolamine hydrobromide (SCOP) 3 times daily for 5 days. Another sixty mice were housed in a normal environment and received no treatment. Corneal fluorescein staining along with corneal MMP-9 and caspase-3 level measurements were performed in parallel with the TUNEL assay. Interleukin-17 (IL-17), IL-23, IL-6, IL-1, TNF-α, IFN-γ and TGF-β2 levels were estimated by real-time PCR measurements of conjunctival and lacrimal gland samples (LGs). Immunohistochemistry of excised LGs along with flow cytometry in cervical lymph nodes evaluated immune cell infiltration. Light and transmission electron microscopy studies evaluated LG cytoarchitectural changes. ICES-induced corneal epithelial destruction and apoptosis peaked at 2 weeks and remained stable over the following 4 weeks. In the ICES group, lacrimal gland proinflammatory cytokine level increases were much lower than those in the SCOP group. In accord with the lower proinflammatory cytokine levels, lacrimal gland cytosolic vesicular density and size in the ICES group exceeded those in the SCOP group. ICES- and SCOP-induced murine dry eye effects became progressively more severe over a two-week period. Subsequently, the disease process stabilized for the next four weeks. ICES induced local effects in the ocular surface, but failed to elicit lacrimal gland inflammation and cytoarchitectural changes, which accounts for the lower dry eye severity in the ICES model compared with the SCOP model. Introduction Dry eye (DE) disease therapeutic management can be limited to providing palliative relief since its underlying mechanisms are not fully understood [1][2][3]. Topical ophthalmic cyclosporine can be an effective treatment as it targets immunopathological mechanisms. In recent years, the identification of an immune component to this disease sparked efforts to delineate how infiltrating immune cells give rise to this condition [4]. There are two major types of dry eye, aqueous-deficient dry eye (ADDE) and evaporative dry eye (EDE), and models of both are used for this purpose [5]. ADDE is characterized by a lack of tear production and secretion by the lacrimal glands [6], while EDE is caused by excessive tear evaporation, which leads to tear film instability with normal tear production. Conditions that underlie the development of the EDE model are becoming more prevalent in the human environment. They include increased exposure to environmental stresses such as excessive air-conditioner-mediated temperature lowering and emerging dependence on visual display terminal (VDT) usage for work and recreation [7]. In addition, more individuals are seeking symptomatic relief from this syndrome because they are becoming more aware of the potential hazards to ocular health of leaving this disease untreated.
The EDE model mimics anterior ocular surface dryness caused by excessive tear film evaporation, produced by housing animals in a stable low-humidity environment with high airflow and a constant ambient temperature of about 22°C, whereas the scopolamine hydrobromide (SCOP) model is a tear-deficient model that mimics declines in lacrimal gland secretory activity resulting from immune cell infiltration in Sjögren's Syndrome. The latter model is established by repeated injections of scopolamine, which, by blocking acetylcholine-induced parasympathetic lacrimal gland secretory activity, elicits functional and pathologic changes in the ocular surface [8][9][10][11]. The pathogenic events underlying the immune components leading to chronic inflammation are unique for each of the two models. Unraveling the immune cell response dynamics provides promise for identifying potential novel drug targets to better control this disease. Dry eye (DE) is frequently characterized by variable amounts of ocular surface inflammation in the SCOP animal model [12]. This response is associated with enhanced proinflammatory cytokine expression (e.g., IL-1, IL-6, IL-8, TNF-alpha) along with compromise of ocular surface epithelial integrity and tear film secretion and content deficiencies [13,14]. Even though there is widespread and extensive inflammatory cell infiltration in the lacrimal gland of Sjögren's patients, there are no reports describing inflammatory infiltration in the EDE model [15][16][17]. We compare here, in the SCOP and EDE dry eye disease mouse models, the time-dependent changes over up to 6 weeks in proinflammatory IL, TNF-α and IFN-γ and anti-inflammatory TGF-β2 conjunctival gene expression. Along with profiling these changes, we evaluated the associated effects of these model-induced stresses on corneal epithelial barrier function and integrity, apoptotic activity, and lacrimal gland cytoarchitecture. Even though in the EDE model the increases in lacrimal gland proinflammatory gene expression were smaller than those in conjunctival tissues of the SCOP model, secretory vesicle retention in the lacrimal glands was more evident in the EDE than in the SCOP model. Taken together, proinflammatory increases in gene expression of Th1- and Th17-associated cytokines underlie much of the immunological responses in these two different models of dry eye disease. Stabilization of the increases in proinflammatory cytokine expression after two weeks suggests that concomitant rises in anti-inflammatory lymphocytes prevent any further increases from occurring during the subsequent month of study. Methods Animals All procedures were approved by the Animal Care and Ethics Committee of Wenzhou Medical College, Zhejiang, China. The animals were humanely killed with an overdose of a mixture of ketamine and xylazine. All procedures were performed in accordance with the Association for Research in Vision and Ophthalmology (ARVO) Statement for the Use of Animals in Ophthalmic and Vision Research. A total of 132 female C57BL/6 mice (age range, 4-6 weeks) were supplied by the Animal Breeding Unit of Wenzhou Medical College. ICES-induced murine dry eye model ICES was established to induce dry eye as previously described [8][9][10]. This system was characterized by a humidity of 13.1±3.5%, airflow of 2.2±0.2 m/s, and temperature of 22±2°C. An alternating 12-hour light-dark cycle (8 AM to 8 PM) was employed. Water and food were made available ad libitum.
Grouping Sixty mice were housed in ICES for 1, 2, 4 and 6 weeks, respectively, and served as the experimental group (E). Twelve were maintained in a normal laboratory environment and received subcutaneous injections of 0.1 mL of 5 mg/mL scopolamine hydrobromide (SCOP, Sigma-Aldrich Corp., St Louis, MO) 3 times daily for 5 days [18], and served as the SCOP group (SCOP). Another sixty mice were also housed in the normal laboratory environment (room temperature of 23±2°C, relative humidity of 60%±10%) but received no treatment and were designated as the normal control group (N). Corneal fluorescein staining Fluorescein staining was performed on each group by instilling 0.5 μl of 5% fluorescein solution into the inferior conjunctival sac using a micropipette. The stained area was assessed and graded using the 2007 Dry Eye Work Shop (DEWS) recommended grading system by a masked observer. The corneas were rated from 0 to 4 with the corneal surface divided into five regions (0 dots, Grade 0; 1-5 dots, Grade 1; 6-15 dots, Grade 2; 16-30 dots, Grade 3; and >30 dots, Grade 4). The total score from the five regions was recorded. TUNEL assay DNA fragmentation detected by the TUNEL assay was evaluated by laser scanning confocal microscopy using frozen corneal tissue sections. Mouse eyes from each group were excised. Corneal section slides were fixed with 4% paraformaldehyde in PBS at room temperature for 10 minutes. After fixation, they were permeabilized with Triton-X (0.1% in PBS, Sigma, St Louis, USA) for 10 minutes, and then 50 μl of TUNEL reaction mixture (5 μl Enzyme solution in 45 μl Label solution; In Situ Cell Death Detection Kit, Roche, Mannheim, Germany) was applied and incubated for 1 hour at 37°C in a humidified atmosphere. Counterstaining with DAPI (1:1000 dilution) followed for 30 minutes. Sections were covered with antifade mounting medium and sealed with a cover slip for microscopic observation. RNA isolation and real-time PCR Total RNA from conjunctivas and lacrimal glands was extracted (RNeasy mini kit (50x), Qiagen, Crawley, U.K.) according to the manufacturer's instructions. Samples within each group were pooled. The RNA concentration was measured based on its optical density at 260 nm, and the RNA was stored at −80°C before use. cDNA was synthesized from 1 μg of total RNA using random primers and Moloney Murine Leukemia Virus reverse transcriptase. Quantitative real-time polymerase chain reaction (qRT-PCR) analysis was performed using the Power SYBR Green PCR Master Mix (Applied Biosystems, Paisley, UK) and the Applied Biosystems 7500 Real-Time PCR System (Applied Biosystems). The primers are provided in Table 1. Assays were performed in duplicate and repeated three times using different samples from different experiments. The RT-PCR results were analyzed using the comparative threshold cycle (CT) method and normalized with glyceraldehyde 3-phosphate dehydrogenase (GAPDH) as an endogenous reference. Histological Analysis Each entire lacrimal gland was fixed in 10% formalin. After dehydration, the specimens were embedded in paraffin, cross-sectioned, and stained with hematoxylin-eosin reagent and viewed under a microscope (Imager.Z1; Carl Zeiss Meditec, Oberkochen, Germany). To prevent experimental bias, all of the photographs were taken at random and assessed by two independent researchers in a blind manner using Photoshop CS4 (Adobe Systems Inc, Tokyo, Japan) and ImageJ 1.46r (National Institutes of Health).
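The comparative threshold cycle analysis mentioned above can be written out explicitly. The following is a minimal sketch of the standard 2^-ΔΔCt calculation with GAPDH as the endogenous reference; the Ct values in the example are hypothetical and the function name is illustrative, not part of the study's analysis code.

```python
# Minimal sketch of the comparative threshold cycle (2^-ddCt) method used
# to analyze the qRT-PCR data, normalized to GAPDH.
def relative_expression(ct_target_sample, ct_gapdh_sample,
                        ct_target_control, ct_gapdh_control):
    """Fold change of a target gene in a treated sample relative to a
    control sample, normalized to GAPDH."""
    delta_ct_sample = ct_target_sample - ct_gapdh_sample
    delta_ct_control = ct_target_control - ct_gapdh_control
    delta_delta_ct = delta_ct_sample - delta_ct_control
    return 2.0 ** (-delta_delta_ct)

# Hypothetical example: IL-17 in an ICES conjunctival sample vs. a normal control.
fold = relative_expression(ct_target_sample=24.1, ct_gapdh_sample=17.9,
                           ct_target_control=26.8, ct_gapdh_control=18.0)
print(f"IL-17 fold change vs. control: {fold:.2f}")
```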
Transmission electron microscopy (TEM) LG tissue was fixed with 2.5% glutaraldehyde in 0.1 M phosphate buffer (pH 7.4) for 1 hour. Samples were then post-fixed in 1% osmium tetroxide in 0.1 M phosphate buffer at 4°C for one hour. (Table 1: Primers used for quantitative RT-PCR; gene and accession number.) The LG was dehydrated in a graded ethyl alcohol series and embedded in Epoc 812. An ultrathin section was cut using an RT-7000 (RMC, USA), stained with uranyl acetate and lead citrate, and then examined by transmission electron microscopy (H-7500; HITACHI, Japan). Isolation of Cervical Lymph Nodes Superficial cervical lymph nodes (CLNs) from each group were surgically excised, compressed between two sterile frosted glass slides, and made into a single-cell suspension. Cell populations were individually collected, centrifuged at 1000 rpm for 5 minutes, filtered, and resuspended. Single cells were processed for flow cytometry as described below. Statistical Analysis Statistical analyses were performed using SPSS 13.0 software. One-way ANOVA with Bonferroni correction was used for comparisons among groups with Gaussian-distributed values. The Mann-Whitney U test was used to compare non-normally distributed values between groups. p<0.05 was considered statistically significant. ICES Induces Corneal Epithelial Disruption Fluorescein staining assessed changes in corneal epithelial integrity. In the ICES group, after 1 week there was a slight increase in the staining score, which peaked at 2 weeks (P < 0.001) and was invariant for the next 4 weeks (Fig. 1). To evaluate whether losses of tight junctional barrier function and epithelial integrity in the ICES group were accompanied by increases in MMP-9 expression, its staining pattern was also evaluated. In parallel with the development of corneal fluorescein staining, MMP-9 staining became most intense after 2 weeks of ICES and was unchanged during the following 4 and 6 weeks (Fig. 2). We also examined whether there were differences in the development of MMP-9 expression between the ICES and SCOP groups. In the SCOP group, MMP-9 expression was greater at all times than in the ICES group. ICES Induces Corneal Epithelium Apoptosis Caspase-3 immunofluorescence and TUNEL analyses were performed to evaluate the effect of ICES exposure on corneal epithelial apoptosis. Fig. 3A shows that caspase-3 expression increased over the first 2 weeks. It peaked at 2 weeks and remained invariant at the 4-week and 6-week time points. Similarly, TUNEL results also demonstrated that apoptotic cells increased in the ICES group but remained unchanged at 2, 4 and 6 weeks (cf. Fig. 3B, 3C). On the other hand, corneal epithelial apoptosis in the SCOP group appeared more pronounced at all of the same time points as those in the ICES group, except for a lack of a difference after 6 weeks. ICES Stimulates Inflammatory Cytokine Production in the Conjunctiva and Lacrimal Gland Levels of conjunctival IL-17, IL-23, IL-6, IL-1β and TNF-α mRNA transcripts peaked after 2 weeks in the ICES group without any further change during the subsequent 4 weeks. In contrast, conjunctival IFN-γ and TGF-β2 levels increased in the ICES group and peaked at 6 weeks. However, the gene transcript levels of all of these cytokines in the SCOP group were higher than those in the ICES group at all the time points (Fig. 4A). The pattern changes in the transcript levels of all of these cytokines in the lacrimal gland of the ICES group are comparable to those in the conjunctiva.
However, in the SCOP group at all the time points their levels were much higher compared to those in the ICES group (Fig. 4B). Lacrimal Gland inflammatory cell infiltration In order to further characterize differences in LG inflammation between the ICES and SCOP groups, we determined whether increases in the numbers of different inflammatory immune cells correspond with rises in proinflammatory cytokine gene expression. Accordingly, we used immunohistochemistry to compare, in the ICES and N groups, the numbers of cells positive for CD4, CD8α (predominantly expressed on the surface of cytotoxic T cells), CD11b (a marker of the monocyte/macrophage lineage), CD103 (a marker of intraepithelial lymphocytes) and CD45 (a marker of all leukocytes and largely naive T lymphocytes) (Fig. 5). CD8α cells in the ICES group were fewer than in the normal group (N). On the other hand, CD103 increased significantly after 1 week in the ICES group and remained elevated at the same level after 2, 4 and 6 weeks, while CD4 levels in the ICES group remained at the baseline level at all times. In the ICES group, CD11b cells were the only ones that increased after 2 weeks, and CD45 cell levels rose after 2 and 4 weeks. However, the numbers of infiltrated CD4, CD11b, CD103 and CD45 cells in the SCOP group were much larger than those in the ICES group at all times (Fig. 5). Lacrimal Gland structural changes induced by ICES H&E staining showed that the acini of the lacrimal gland in the ICES group became slightly larger than those in the normal group after 2 weeks, without any further enlargement at either 4 or 6 weeks. These changes were in sharp contrast with those occurring in the SCOP group, in which the acini were largely atrophic, replaced by fibrotic tissue and lymphocytic infiltrates (Fig. 6). We then performed TEM to further resolve differences in lacrimal gland ultrastructural morphology between the ICES and N groups. In the ICES group at 1 week, there were more secretory vesicles (SVs) in the lacrimal gland epithelial cell cytoplasm compared with the N group. They remained abundant in the ICES group at 2, 4 and 6 weeks. In contrast, in the SCOP group they became atrophic (Fig. 7). ICES Does Not Cause Inflammation in Adjacent Lymph Nodes Flow cytometry analysis of CLN cells stained for CD4 and CD8 was performed. Similar percentages of CD4+ lymphocytes were observed in the N, ICES and SCOP groups at all the different times. CD8+ lymphocyte percentages in the ICES group at 2, 4 and 6 weeks were in all cases invariant at the baseline level, while in the SCOP group the level was even lower than in the N group (Fig. 8). Discussion This study was performed to further characterize and validate the usefulness of two different murine dry eye models of human DE disease. The ICES model is believed to model evaporative dry eye disease, whereas the scopolamine model mimics Sjögren's syndrome-mediated lacrimal gland fibrosis and autoimmune rejection resulting in aqueous deficiency [8,9,17]. Our evaluation entailed contrasting and comparing the increases in proinflammatory cytokine gene expression, MMP-9 immunostaining, apoptosis and immune cell lacrimal gland infiltration, as well as the changes in lacrimal gland morphology at the light and electron microscopic levels, that occur for up to 6 weeks after imposing either ICES or SCOP treatment conditions.
ICES elicited alterations in inflammation and apoptosis in the conjunctival epithelium, which mimic some of the changes occurring in human patients suffering from DE disease [19][20][21][22]. ICES also caused some changes in LG structure and inflammation that differed from those in the SCOP model. On the other hand, the SCOP model mimics in many ways the Sjögren's syndrome condition, in which the lacrimal gland undergoes immune rejection and atrophy as a consequence of larger increases in immune cell infiltration followed by rises in proinflammatory gene expression levels. This is associated with a more profound inflammatory response by the conjunctival epithelial cells along with losses in corneal epithelial integrity and rises in apoptosis. Our studies substantiate earlier indications that monitoring declines in ocular surface health induced by ICES for up to 2 weeks is sufficient to characterize DE disease development, since during the subsequent 4 weeks of observation the DE indications almost stabilized. Nevertheless, our study provides a broader base for delineating the immunopathogenic changes resulting in the development of dry eye disease in two different relevant murine models. Our cataloging of the events underlying the plateauing of proinflammatory cytokine expression and immune cell infiltration between 2 and 6 weeks suggests that this stasis may be due to increases in anti-inflammatory cytokine expression, which counterbalance the initial surge in proinflammatory cytokine expression. Inflammation, corneal epithelial destruction and apoptosis can be induced during DE development [23][24][25][26][27][28][29][30]. We found that ICES induced losses in corneal epithelial integrity and apoptosis in a time-dependent manner, which increased in the first 2 weeks and then remained invariant in the following 4 weeks. The peak level of ICES-induced declines in corneal epithelial integrity and increases in apoptosis occurred at 2 weeks, which were comparable to those caused by scopolamine injection at 5 days. Maintenance of a healthy ocular immune microenvironment is dependent on a delicate balance between the factors eliciting proinflammatory and anti-inflammatory events. This entails preventing proinflammatory lymphocytes (Th1 and Th17 types) from infiltrating into the eye and eliciting increases in proinflammatory cytokine expression that overwhelm the ability of anti-inflammatory lymphocytes (Th2 types and Tregs) to counter inflammation through rises in the release of suppressive interleukins (e.g., IL-4 and IL-10) and TGF-β2 [31][32][33][34][35][36][37][38][39][40]. In accordance with the ocular surface symptoms, the transcriptional levels of conjunctival pro-inflammatory cytokines, including the Th17 cell-associated cytokines (IL-6, IL-23, and IL-17), IL-1β and TNF-α, rose and peaked at 2 weeks and then remained invariant for up to 6 weeks, whereas the Th1 cell-associated cytokine (IFN-γ) and the Treg (regulatory T cell)-related cytokine (TGF-β2) displayed a different trend, continuously increasing up to 6 weeks. It is possible that Treg cell activation counteracted the elevated Th17 cell responses during the later 4 weeks, resulting in the 4-week plateau period of the ICES-induced dry eye model. The immune suppressive functions of TGF-β2 and Treg cells have been extensively studied [41,42].
Earlier studies found that TGF-β2 could suppress T-cell proliferation by inhibiting the production of IL-2, a lymphokine known to potently activate T cells, NK cells, and other types of cells of the immune system [43]. Recently, TGF-β2 was identified as critical for the induction of IL-17-producing cells under inflammatory conditions [44][45][46][47]. Such evidence suggests that a functional balance between Tregs and effector T cells is vital to maintain the efficient immune responses needed for preserving ocular surface health. We speculate that the plateau period from 2 weeks to 6 weeks of ICES was induced by a balanced status between Tregs and effector T cells. De Paiva CS et al. found significantly higher levels of IL-23 after 5 days of exposure to a desiccation stress. IL-6 and IL-17 (both at 5 days and 10 days) and IFN-γ (at 10 days) transcripts were higher in the conjunctiva of DE mice than in the N group. TGF-β1 levels in conjunctival lysates increased significantly at 10 days, whereas TGF-β2 did not change [22]. In another study, higher levels of IL-17A, TGF-β1, TGF-β2, IL-6, IL-23, and IL-1 mRNA transcripts were observed in the corneal epithelium and conjunctiva of dry eye mice [48]. These results are consistent for the most part with ours, except for somewhat larger increases in TGF-β2 levels in the aforementioned study. Pitcher et al. reported that elevated levels of IL-17A, IL-17R, IFN-γ, IL-6, IL-1β, and TNF-α transcripts were noted in SCOP2D mice and IFN-γ, TGF-β1, and IL-18R transcripts in SCOP5D mice; MMP-9 and TGF-β2 did not change significantly in the SCOP model at any time point from 2 to 5 days [17]. In the lacrimal gland, the increases in proinflammatory cytokine gene expression levels exhibited trends similar to those occurring in the conjunctiva. However, the levels were significantly lower than those of the SCOP-treated mice. Consistently, the levels of the CD4, CD11b and CD103 biomarkers of infiltrating inflammatory cells, including CD45 cells, were also much higher in the SCOP group. In the SCOP model, an influx of CD4 T cells occurred into the parenchyma and periductal regions of the lacrimal gland, which is possibly associated with declines in acinar cell secretory activity. This pattern of changes is similar to that seen in SS patients. Such declines enhance exposure of lacrimal autoantigens to resident antigen-presenting cells and initiate an autoimmune reaction. On the other hand, ICES-induced local effects are restricted to the ocular surface, rather than mediating lacrimal gland inflammation and disruption of its cytoarchitecture. These differences may account for why the pathology in the SCOP model is so much more severe than that in the ICES model. The SCOP model may be relevant to the condition in which cholinergic blockade induced by M3R autoantibodies in SS contributes to lacrimal gland inflammation. Because these autoantibodies appear capable of inhibiting cholinergic signaling, as do anticholinergic agents such as scopolamine, it is possible that prolonged autoantibody-mediated cholinergic blockade could also promote lacrimal gland inflammation and secretory dysfunction. Ultrastructural morphology analysis of the lacrimal gland showed that ICES caused increases in the number of secretory vesicles (SVs) in the cytoplasm of the epithelial cells, whereas those in the SCOP group were largely atrophic. Excessive accumulation of SVs may be attributable to element and fluid entrapment.
One possibility is that the decline in tear fluid secretion is essentially due to a decline in fluid secretion rather than to fluid absorption into the gland. In contrast, SCOP-induced dry eye results from both impaired tear production and impaired secretion caused by impaired cholinergic support of lacrimal gland function. Previous studies suggest that excessive SV accumulation in the lacrimal gland may contribute to the reduced tear secretion in some VDT users presenting with DE symptomatology [49]. So it is possible that the ICES-induced dry eye model, which mimics VDT dry eye patients, may cause tear secretion to decline through suppression of SV content release. Taken together, ICES-induced murine dry eye develops from an initial surge in proinflammatory cytokine expression and immune cell infiltration that reaches a plateau after 2 weeks. It is sufficient to limit studies to this duration for the purpose of gaining additional insight into the pathogenic mechanisms that underlie DE disease development. Furthermore, such an undertaking may lead to the identification of novel drug targets whose modulation will provide better control of the immune responses underlying this disease. On the other hand, to more clearly delineate the development of anti-inflammatory mediator expression in these models, it may be more effective to extend the measuring period beyond two weeks. Such an extension may make it easier to better characterize their involvement in countering rises in proinflammatory cytokine expression and stabilizing DE disease progression.
5,500.6
2015-01-15T00:00:00.000
[ "Biology", "Medicine" ]
Fintech credit, credit information sharing and bank stability: some international evidence Abstract This study relies on an aggregate dataset of 73 countries from 2013 to 2018 to investigate the nexus between fintech credit, credit information sharing and bank stability. We document several significant findings. First, our evidence implies that fintech credit tends to improve bank stability. This suggests that as fintech credit grows, it certainly competes with banks, but it also strengthens banks' stability. Second, credit information sharing increases bank stability. Third, it is found that the impact of fintech credit on bank stability may depend on credit information sharing. Specifically, the presence of credit information sharing institutions may facilitate the positive effect of fintech credit on bank stability. This result is robust to an alternative regression specification as well as an alternative dependent variable. Finally, policy implications are discussed based on the findings of the research. Introduction Although traditional lenders such as banks and other financial intermediaries remain the primary source of funds for borrowers in most markets, new financial institutions have emerged and gained traction recently, including fintech lending models that have evolved in many economies (Cornelli et al., 2020). The establishment and development of fintech have significantly impacted banking systems (Petralia et al., 2019). Fintech has now become widespread in many financial areas such as credit, deposits, capital-raising, payments, and investment. Fintech firms have been competing with traditional financial firms, thus impacting the latter's performance and risk-taking behaviors and stimulating their innovations (An & Rau, 2019; Cheng & Qu, 2020; Guo & Shen, 2016; Qiao et al., 2018; Phan et al., 2020; R. Wang et al., 2020). At the same time, the growth of fintech credit volume has been impressive recently. From around USD 9.9 billion in 2013, the volume has grown to over USD 298 billion in 2018 (Cornelli et al., 2020), a growth rate of over 97% per annum. Indeed, traditional banks have lost their market share in main markets such as residential mortgages to these new competitors (Buchak et al., 2018). While still small overall, fintech credit is now a global phenomenon, and central banks and public authorities have begun to use information on fintech credit volume to observe economic and financial conditions, to guide monetary policy decisions, and to set macroprudential policies, such as the countercyclical capital buffer (Cornelli et al., 2020). In spite of the development of fintech credit and its perceived significant function towards the banking system, the influences of fintech credit on financial systems are little understood (Li This study contributes to the current literature in several ways. First, we examine the link between fintech credit and bank stability, using the volume of credit provided by fintech firms, compiled by Cornelli et al. (2020), as a measure of fintech credit. This considerably expands the literature examining the competition between fintech lenders and financial intermediaries, since previous related studies use more general proxies for fintech (R. Wang et al., 2020; Y. Wang et al., 2021; Phan et al., 2020; Lee, 2015; Cheng & Qu, 2020). As a consequence, the particular influence of fintech lenders on bank stability cannot be uncovered by those studies.
Secondly, fintech credit firms use their models and algorithms to extract information from various sources, and this information is considered quite useful in assessing the creditworthiness of customers (Berg et al., 2020; Frost et al., 2019). Meanwhile, traditional banks could be more dependent on information sharing bureaus to reduce information issues in the credit market. Furthermore, Kowalewski and Pisany (2021) argue that traditional credit data from information sharing bureaus should be considered cautiously, as they have a potential impact on the relationships between banks and fintech credit firms. Therefore, this implies there should be some moderating effect of credit information sharing on the relationship between fintech credit and bank stability, which has not been investigated. This study provides insights into the joint effect of credit information sharing on the relationship between fintech credit and bank stability to fill this gap. Through this, we are able to establish whether banks and fintech rivals cooperate or compete and whether this affects the stability of banks. Finally, we provide a range of approaches to ensure the robustness of the research findings and discuss some implications to improve the stability of banks in the context of the co-existence of banks and fintech lenders. The remainder of our study is structured as follows. Section 2 discusses theories and relevant studies on the activities of fintech and its impact on banking systems. Section 3 outlines the research methodology, where we propose testable hypotheses, estimation strategies, empirical models, and variable definitions. Sections 4 and 5 present the estimation results of the models. Section 6 concludes the paper with policy implications and suggestions for future research directions. Credit information sharing and bank stability Adverse selection and moral hazard resulting from information asymmetry negatively affect the banking sector by reducing the efficiency of credit provision and causing nonperforming loans (Freixas & Rochet, 1997; Jappelli & Pagano, 2002; Stiglitz & Weiss, 1987). Therefore, information sharing bureaus can be essential tools to reduce information-related issues in credit markets (Triki & Gajigo, 2014). Consistently, credit information sharing agencies have been shown to play a vital role in the development of banking systems (Barth et al., 2009). Previous studies suggest that information sharing might positively affect banking soundness by addressing moral hazard, adverse selection and the risk of over-indebtedness (Doblas-Madrid & Minetti, 2013; Guérineau & Léon, 2019). Regarding the first channel, information sharing institutions can lessen borrowers' moral hazard and boost borrowers' incentives to repay their loans because information sharing motivates debtors to behave (Jappelli & Pagano, 2002). According to Pagano and Jappelli (1993), the second channel is that information sharing among banks assists in reducing risks and the lending interest rate, as well as adverse selection. Finally, information sharing can lower the risk of over-indebtedness, which is the third channel. Previous studies find that information sharing is conducive to the soundness of the banking sector. Credit information sharing decreases credit risk (Jappelli & Pagano, 2002; Kusi et al., 2017), default rates (Houston et al., 2010; Padilla & Pagano, 2000; Vercammen, 1995), and banking system fragility (Guérineau & Léon, 2019).
For instance, Jappelli and Pagano (2002) argue that when banks share information about borrowers, credit risk is lower and the level of bank credit is higher. Kusi et al. (2017) provide evidence that private and public credit bureaus decrease the credit risk of banks in African countries. Houston et al. (2010) suggest that in markets with information sharing among creditors, bank profitability improves and default rates are lower. Other findings show that information sharing bureaus lessen default rates in developing countries. Padilla and Pagano (2000) and Vercammen (1995) report that information sharing can curtail borrower hold-up issues and boost borrower discipline, therefore decreasing the default rate of borrowers. In addition, Guérineau and Léon (2019) provide evidence that credit information sharing bureaus help tackle financial instability in both developed and developing countries. Therefore, our testable hypothesis is as follows: H1: Credit information sharing is positively associated with bank stability. Literature review on fintech and bank stability On the one hand, many studies have praised fintech for its potential to enhance financial services through improving service quality and business structures, rendering transactions more affordable, more secure, and more convenient (Begenau et al., 2018; Chen et al., 2019; Chiu & Koeppl, 2019; Fuster et al., 2019; Li et al., 2017; Vasiljeva & Lukanova, 2016; Zhu, 2019). Furthermore, fintech can support commercial banks regarding diversification strategies (Yao & Song, 2021). Li et al. (2017) argue that there exists a positive association between the growth of fintech activities and the stock returns of banks. Furthermore, it appears that fintech lenders do not aim to substitute for financial institutions entirely, as the former's market share is larger in jurisdictions characterized by higher bank credit denial rates and lower consumer credit scores (De Roure et al., 2019). De Roure et al. (2019) also show that P2P lending platforms target risky and less profitable customers, so they can help improve the stability of banks. On the other hand, following the consumer hypothesis and the disruptive innovation hypothesis, the development of fintech could negatively affect the banking sector. The former hypothesis suggests that, by responding to similar consumer demands, fintech-provided services can replace the incumbent services offered by existing financial institutions (Aaker & Keller, 1990). According to the "disruptive innovation hypothesis", market entrants applying innovative technologies to provide more affordable and accessible services are highly competitive in the market (Christensen, 1997). Some studies have opined that the rise of information technology could pose challenges to commercial banks because banks are slower in adopting new technologies (Brandl & Hornuf, 2017; Laven & Bruggink, 2016). Traditional institutions have lost market share to fintech credit, as the latter is more leniently regulated and enjoys greater technological advantages (Buchak et al., 2018). Fintechs process lending applications faster without enhanced credit risks, compared to traditional credit institutions (Fuster et al., 2019). Further, fintech credit also responds more elastically to shocks on the demand side and has a higher ability to refinance (Y. Wang et al., 2021). Regarding payment settlement, fintech allows mobile payments with much lower costs, reducing the long-term and unique advantages of commercial banks (Berger et al., 1999).
Moreover, cloud computing can store and handle customer data efficiently and support payments better (Y. Wang et al., 2021). Phan et al. (2020) investigate fintech in Indonesia and show that fintech negatively impacts bank performance. R. Wang et al. (2020) find evidence that fintech intensifies the risk-taking of Chinese banks. However, the above nexus is heterogeneous depending on different bank characteristics, e.g., efficiency and size. Against these backgrounds, it is clear that, in general, fintech firms can affect bank stability in either direction. Researchers have rarely examined the link between a specific fintech activity, fintech credit, and banking systems. Buchak et al. (2018) is the first and only study to find that fintech activity in residential mortgages filled the gap left by the declining activity of traditional banks when they encountered more regulatory burdens. Nonetheless, this study did not investigate the impact of fintech lenders on banks' stability. Therefore, we anticipate that fintech firms cater to unserved customers or those that are of lower quality to the banks. We expect a less negative impact of fintech credit on bank stability. To summarize, our hypothesis, therefore, is as follows: H2: Fintech credit has an impact on bank stability. Both credit information sharing and fintech can affect bank operations, but there are key differences between traditional banks and fintech credit firms. Fintech credit firms are platforms that solve problems of asymmetric information through their screening practices by collecting non-traditional data (digital data) such as e-commerce data, payment data and data from social media. Previous studies show that digital data are at least as useful as traditional credit information from information sharing bureaus (Berg et al., 2020; Frost et al., 2019; Gambacorta et al., 2019). Meanwhile, information sharing bureaus are the tools that could be used by traditional banks to reduce information-related issues in the credit market, such as moral hazard and adverse selection. Huang et al. (2020) show that information provided by fintech firms can effectively substitute for credit registry information in risk screening. In contrast, Berg et al. (2020) show that digital data used by fintech lenders complement rather than substitute for traditional credit information from information sharing companies, suggesting that lenders (fintech firms or banks) can make superior lending decisions when using information from both sources (credit bureau and digital data). Kowalewski and Pisany (2021) also suggest that there is large room for cooperation between fintechs and banks, where fintechs would provide technological solutions for banks. With banks' more privileged access to credit data from credit information sharing bureaus, banks should be more prone to leverage the support of fintech firms to reap the highest benefit possible. To summarize, the points discussed above suggest there should be some moderating effect of credit information sharing on the relationship between fintech credit and bank stability. We provide insights into the joint effect of credit information sharing on the relationship between fintech credit and bank stability to fill this gap. Through this, we are able to establish whether banks and fintech rivals cooperate or compete and whether this affects the stability of banks. All things considered, our third hypothesis is as follows: H3: The impact of fintech credit on bank stability depends on credit information sharing. Data collection and processing We collect data from a number of sources.
The aggregate data on fintech credit are provided by Cornelli et al. (2020), covering 73 countries between 2013 and 2018. The aggregate banking system data and macroeconomic variables are obtained from the Financial Development and Structure Dataset (FDSD) and the World Development Indicators dataset (World Bank, 2019). Our choice of the period under investigation is driven by data availability. Empirical models To verify the impact of fintech credit and credit information sharing upon bank stability, our empirical research model is specified as equation (1). We modify equation (1) by adding an interaction term between fintech credit and information sharing, yielding equation (2), to examine the joint effect of these two factors on bank stability. The variables in equations (1) and (2) are defined below. The Z-score is used to assess bank stability, as indicated by the literature (Lepetit et al., 2008; Stiroh & Rumble, 2006). Higher Z-scores indicate more financial stability and lower overall bank risk. The Z-score is computed as Zscore_it = (ROA_it + EQTA_it) / SDROA_ip, where ROA is return on total assets, SDROA_ip is the standard deviation of return on total assets over the examined period (Köhler, 2015; Stiroh, 2004), and EQTA_it is the ratio of equity to total assets. Independent variables Fintech_it, the ratio of fintech credit to GDP of country i in year t, is calculated from the dataset obtained from Cornelli et al. (2020). This is a standardized measure that controls for the effect of the size of the economy in providing fintech credit. CIS is the measure of credit information sharing. Following Barth et al. (2009), Triki and Gajigo (2014) and others, we resort to the depth of credit information index, private credit bureaus and public credit registries (CI_index, PCB and PCR, respectively) in order to gauge the level of credit information sharing. Bank characteristics CIR (the cost-to-income ratio) is a measure of bank efficiency. Studies on the influence of bank efficiency on bank stability tend to offer mixed evidence at best. The skimping hypothesis argues that banks that skimp on monitoring and screening costs appear cost-efficient in the short run but are prone to a decrease in bank stability later (Berger & DeYoung, 1997). On the other hand, the "bad management" hypothesis argues that cost inefficiency is likely to lead to higher levels of bank instability. LIQ is calculated as the ratio of bank liquid reserves to total assets to measure bank liquidity. Previous studies find that banks with higher liquidity levels are more likely to have better stability (Tran et al., 2020). External factors Bank stability is also subject to external macroeconomic factors such as economic growth, inflation, banking system development, banking system concentration, and corruption, discussed further below. GDP is the annual real GDP growth rate. This variable is included to control for the economic cycle effect. Previous literature shows that economic growth is positively related to bank stability (Baselga-Pascual et al., 2015; Köhler, 2015). Inflation (INF) is the inflation rate. Prior literature shows that inflation is negatively related to bank stability (Baselga-Pascual et al., 2015; Köhler, 2015). BSD (banking system development) measures financial development (Demirgüç-Kunt & Huizinga, 2000). It is calculated as the ratio of bank credit to GDP. The evidence on the influence of financial development on bank stability tends to be mixed. Espenlaub et al.
(2012) and Williams and Nguyen (2005) show that financial development can reduce bank risk; in contrast, Vithessonthi (2014) highlighted the positive effect of financial development on bank risk. Concentration (the ratio of assets of the five largest banks to total assets of commercial banks) is included to account for industry concentration. Banks with high market power can engage in riskier activities, according to the concentration-fragility hypothesis (Boyd & de Nicoló, 2005). CCI (the control of corruption index) is added to control for the effect of corruption. CCI takes values from −2.5 to 2.5; higher values of CCI denote less corruption. Several empirical studies suggest that corruption imposes a negative impact on bank stability (Bougatef, 2015; Tran et al., 2020). As for the estimation strategy, in line with Claessens et al. (2018), Rau (2020), and Cornelli et al. (2020), we use pooled ordinary least squares (OLS) and further control for heteroskedasticity. To ascertain the robustness of the research findings, we further examine the impact of fintech credit and credit information sharing on bank stability by constructing a model where the period of the proxies of bank stability is one period behind that of the independent variables. In line with previous research (Kowalewski & Pisany, 2021), this approach is an effort to address the potential endogeneity that comes from the two-way relationship between the explained and explanatory variables. Finally, we use an alternative dependent variable proxy (the non-performing loan ratio) to ensure the robustness of the findings. This is also an effort to address the concern raised in Lapteacru (2016) that the Zscore is not a perfect proxy for bank stability/risk due to its unrealistic assumption about returns on assets. Table 1 describes the variables in the model. For the dependent variable, the mean of Zscore is 3.64. For the whole sample, the ratio of fintech credit to GDP is about 0.04% on average. This implies a modest size compared to the much larger scale of credit provided by traditional financial lenders. Table 2 gives the pair-wise correlation coefficients of the variables. Fintech credit and credit information sharing have positive associations with Zscore. Also, the low coefficients between pairs of variables suggest that the problem of multicollinearity is not a concern for the sample. Nevertheless, these correlations do not constitute a valid basis for statistical inference; as a consequence, we continue by estimating models to empirically examine the hypotheses. Table 3 provides empirical results on the impact of fintech credit and credit information sharing on bank stability. We find that fintech credit has a positive impact on the Z-score. Thanks to its ability to deploy technology to exploit big data and tackle information asymmetry, fintech credit can now better reach unserved populations or those that have little chance of being catered to by banks due to poor credit history. Therefore, if the banking system cannot absorb these low-quality borrowers (e.g., those who lack collateral), fintech credit or other types of shadow banks, represented by P2P lending, can be a substitute (Buchak et al., 2018), and this may spur financial inclusion. Despite the fact that fintech credit would take some market share away from banks, it will not be able to completely replace bank lending in the near future (Thakor, 2020). Firstly, as documented in Thakor (2020), P2P lenders are more likely to benefit from more risky borrowers and those unserved by banks.
Table 3 provides empirical results on the impact of fintech credit and credit information sharing on bank stability. We find that fintech credit has a positive impact on the Z-score. Thanks to their ability to deploy technology to exploit big data and tackle information asymmetry, fintech lenders can better reach unserved populations or borrowers who have little chance of being catered to by banks because of a poor credit history. Therefore, if the banking system cannot absorb these low-quality borrowers (e.g., those who lack collateral), fintech credit or other types of shadow banking, represented by P2P lending, can act as a substitute (Buchak et al., 2018), and this may spur financial inclusion. Although fintech credit may take some market share away from banks, it will not be able to completely replace bank lending in the near future (Thakor, 2020). Firstly, as documented in Thakor (2020), P2P lenders are more likely to benefit from serving riskier borrowers and those unserved by banks. Therefore, they could take away some market share and profits, but not all. Also, if the risky borrowers are served by fintech lenders, banks could become safer. In the long term, banks would respond to fintech lenders either by building their own online lending platforms or by partnering with fintech firms. So, overall, fintech development mitigates risk more than it reduces bank returns (Thakor, 2020). Furthermore, from the borrowers' perspective, fierce competition between banks and other lenders makes loans cheaper, which lowers borrowing costs and reduces borrowers' incentive to engage in risk-shifting. Therefore, default risk could be reduced and financial stability improved (Thakor, 2020). Unlike studies that build a general fintech index through factor analysis (so that what they construct represents fintech firms in general), our study uses a proxy of fintech credit to GDP, inherited from Cornelli et al. (2020), which relates more directly to the activities of fintech lenders. This is also a significant extension to the current literature on the competition between fintech firms and banks.

Fintech credit, credit information sharing and bank stability
When credit information sharing is proxied by the depth of credit information index, we find that credit information sharing increases bank stability. This result is in line with the findings of Guérineau and Léon (2019) and Kusi et al. (2017). Column (2) in Table 3 provides evidence on the impact of credit information sharing on bank stability through private credit bureaus and public credit registries (PCB and PCR). The result shows that credit information sharing through private credit bureaus (PCB) has a positive and significant influence on bank stability, in line with the findings of Kusi et al. (2017), whereas credit information sharing through public credit registries (PCR) is positively but insignificantly related to bank stability. These results suggest that PCB may play a more significant role than PCR. Peria and Singh (2014) also suggest that credit bureau reforms are more efficient than credit registry reforms in providing the necessary credit information to the market. For bank characteristics, the cost-to-income ratio (CIR) is not significantly related to bank stability. LIQ has a positive coefficient on the Zscore, indicating that higher liquid assets increase banking system stability. For macroeconomic factors, GDP and INF are found to exert negative impacts on bank stability: stronger economic growth and inflation lower the Zscore of the banking system, which disagrees with the "cyclical nature of bank risk" view. These findings are in line with Wang et al. (2020) and with the literature arguing that instability accumulated during economic expansions leads to lower bank stability during recessions (Jiménez et al., 2006). Industry concentration (Concentration) has a negative impact on bank stability. This agrees with the concentration-fragility hypothesis, which claims that banks with high market power can engage in riskier activities (Boyd & de Nicoló, 2005).

Interaction between fintech credit and credit information sharing on bank stability
Next, we investigate the joint impact of fintech credit and credit information sharing on bank stability by examining the interaction term between these two factors. Table 4 reports the estimation results of equation (2). Overall, the effects of the control variables are significant and consistent with the estimation results of equation (1). Consistent with Table 3, Table 4 reports that the depth of credit information index and credit information sharing through private credit bureaus (PCB) are positively related to the Zscore, implying that credit information sharing tends to enhance bank stability. Regarding the interaction between fintech credit and credit information sharing (through the depth of credit information index and private credit bureaus), we find that the interaction terms are significantly related to bank stability. This result suggests that the presence of credit information sharing institutions could enhance the positive effect of fintech credit on bank stability.
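One convenient way to read the interaction in equation (2) is through the marginal effect of fintech credit, which equals the direct coefficient plus the interaction coefficient multiplied by the level of credit information sharing. The sketch below uses hypothetical coefficient values purely for illustration; they are not the estimates reported in Table 4.

```python
# Hypothetical coefficients (NOT the paper's estimates) used only to show
# how the interaction term in equation (2) is interpreted.
beta_fintech = 0.8       # direct effect of fintech credit on the Z-score
beta_interaction = 0.1   # coefficient on fintech x credit information sharing

def marginal_effect_of_fintech(cis_level: float) -> float:
    """d(Zscore)/d(Fintech) evaluated at a given credit-information level."""
    return beta_fintech + beta_interaction * cis_level

# The depth of credit information index ranges from 0 (no sharing) to 8
for cis in (0, 4, 8):
    print(f"CIS = {cis}: marginal effect = {marginal_effect_of_fintech(cis):.2f}")
```

A positive interaction coefficient therefore means the stabilizing effect of fintech credit grows with the depth of credit information sharing, which is how the Table 4 results are read.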
Robustness checks
To ascertain the robustness of the research findings, we further examine the impact of fintech credit and credit information sharing on bank stability by: (1) constructing a model in which the bank-stability proxies are measured one period after the independent variables, as an effort to address the potential endogeneity arising from the two-way relationship between the explained and explanatory variables (Tables 5 and 6); and (2) using non-performing loans in place of the Zscore as the bank-stability variable (see Davis et al., 2020; Tables 7 and 8). The results from the first robustness check are in line with those reported earlier, as shown in Tables 5 and 6. Fintech credit has a positive impact on the Zscore. The positive coefficient of the interaction term supports the argument that the presence of efficient credit information sharing institutions could enhance the positive effect of fintech credit on bank soundness. Finally, the coefficients of all other control variables are consistent with those estimated earlier. When the dependent variable is constructed from non-performing loans (Tables 7 and 8), we find that fintech credit is negatively related to non-performing loans, suggesting that fintech credit enhances bank stability. The interaction between fintech credit and credit information sharing (through the depth of credit information index and private credit bureaus) is negatively and significantly related to non-performing loans, again suggesting that the presence of efficient credit information sharing institutions could enhance the positive effect of fintech credit on bank soundness. Meanwhile, all other control variables behave similarly to the prior setting. In general, using an alternative regression specification and an alternative dependent-variable proxy does not change the main results of the paper.
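The two robustness exercises can be sketched as follows: lagging the explanatory variables by one period within each country, and re-estimating with the non-performing loan ratio as the dependent variable. Again, the data and column names are synthetic placeholders rather than the study's dataset.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic country-year panel (placeholder for the actual data)
rng = np.random.default_rng(1)
years, countries = range(2013, 2019), [f"c{i}" for i in range(40)]
df = pd.DataFrame([(c, y) for c in countries for y in years],
                  columns=["country", "year"])
for col in ("zscore", "npl", "fintech", "ci_index", "gdp", "inflation"):
    df[col] = rng.normal(size=len(df))

# (1) One-period-lagged regressors to mitigate reverse causality
df = df.sort_values(["country", "year"])
for col in ("fintech", "ci_index", "gdp", "inflation"):
    df[f"{col}_lag"] = df.groupby("country")[col].shift(1)

m_lag = smf.ols("zscore ~ fintech_lag * ci_index_lag + gdp_lag + inflation_lag",
                data=df.dropna()).fit(cov_type="HC1")

# (2) Non-performing loan ratio as an alternative stability proxy
m_npl = smf.ols("npl ~ fintech * ci_index + gdp + inflation",
                data=df).fit(cov_type="HC1")

print(m_lag.params)
print(m_npl.params)
```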
Conclusion
Using an aggregate dataset of 73 countries from 2013 to 2018, this study investigates whether fintech credit exerts an impact on bank stability. We document several significant findings. First, there is a positive link between fintech credit and bank stability. This suggests that, as fintech grows, it competes with banks but also benefits them in terms of stability. Second, we argue that the effect of fintech credit on bank stability may depend on credit information sharing: fintech credit imposes a more positive influence on bank stability in the presence of efficient credit information sharing institutions. These results are robust to regression models with alternative dependent variables.

Despite the rise of fintech credit and its perceived effect on the banking system, the effects of fintech credit on the financial system are not well understood (Li et al., 2017; Phan et al., 2020). In particular, assessments of the link between fintech credit and bank stability are scarce. We also provide insights into the joint effect of credit information sharing and fintech credit on bank stability. Therefore, this research provides a more comprehensive and generalizable result on the influence of fintech credit on the banking system. Our findings make clear that the impact of fintech credit on bank stability is moderated by credit information sharing. This implies that banks could leverage the technological solutions of fintechs to extract more data from different sources. As pointed out in previous studies, combining data from digital footprints with data from credit information sharing bureaus could significantly improve the ability to predict defaults. As a result, in the presence of fintech lenders, credit information sharing entities still play a favourable role in enhancing bank stability and should not be ignored. To extend this research, future studies may examine the impact of big-tech credit and other forms of fintech firms on bank stability when the relevant data become more available; this will help to clarify whether different types of fintech firms affect bank stability differently. Also, given the shortcomings of any single proxy of bank stability, it would be prudent to test the relationship using other proxies for bank stability, even though we have used two proxies in this study.
6,202
2022-09-07T00:00:00.000
[ "Economics", "Business" ]
Non-binary codes approach on the performance of short-packet full-duplex transmissions

INTRODUCTION
In the age of advanced wireless communications such as 5G and beyond, in conjunction with internet of things (IoT) applications, one of the main goals is to facilitate communication among massive connections of new devices and to empower them to make decisions autonomously. This is achieved through the utilization of a range of technologies and the interconnection of a vast number of devices [1]-[3]. The primary focus encompasses two key services: ultra-reliable low latency communication (uRLLC) and massive machine-type communication (mMTC) [4]. Moreover, the fast development of wireless and mobile communication leads to requirements for spectrum efficiency and high data rates [5], [6]. Consequently, an efficient spectrum-sharing technique called full-duplex (FD) transmission has been proposed [5], [7], which uses the same time-frequency resource simultaneously for transmission and reception. Owing to this efficient usage of resources and its outstanding performance compared with traditional methods, FD transmission has many applications in modern transmission networks, not only for data transmission but also for security maintenance [8]-[11]. However, the influence of self-interference (SI) should be suppressed as carefully and as fully as possible to achieve the best performance of FD transmission, especially in short-packet FD transmission and IoT applications.

Since 2018, the third generation partnership project (3GPP) has adopted quasi-cyclic low density parity check (QC-LDPC) codes as the standard codes of 5G new radio (NR) [12]-[14] because of their high error-correction performance and powerful decoding. However, traditional binary LDPC codes have a drawback at high orders of modulation such as 16-QAM or 64-QAM [15]. Therefore, an extended version of LDPC codes that works over a higher-order Galois field GF(q), with q > 2, has been proposed, called non-binary low density parity check (NB-LDPC) codes [16]. Many studies have shown that NB-LDPC codes outperform their binary counterparts for short code lengths and higher orders of modulation [17], [18]. In the present era, NB-LDPC codes are increasingly seen as a promising coding method for mMTC devices within 5G networks or for sensors used in IoT applications that handle limited data transmission [19]. However, it is important to address and mitigate the excessive complexity and latency of the decoding process. Therefore, in this article, to reduce the impact of the SI component and of high modulation orders, as well as the complexity and latency of the decoding process, an NB-LDPC blind feedback process combining channel estimation and decoding is proposed and implemented. In particular, this algorithm uses an iterative process to simultaneously suppress the SI component of the FD transmission, estimate the intended channel, and decode the intended messages. This paper's contributions can be summarized as follows. First, we demonstrate that the proposed technique provides a better solution than traditional NB-LDPC decoding without a feedback algorithm. Second, we show that the proposed technique is less sensitive to increases in modulation order than binary LDPC blind and semi-blind feedback methods. Last but not least, we emphasize that the suggested algorithm also shows robustness in reliability and power consumption for
both short-packet FD transmissions and high-order modulation communications. The remainder of this article is organized as follows. Section 2 briefly describes the system models of the traditional NB-LDPC-coded FD transmission without feedback and of the proposed NB-LDPC blind feedback. Section 3 presents numerical results and discussion. Finally, conclusions are drawn in Section 4.

SYSTEM MODEL AND PROPOSED METHOD
2.1. Traditional NB-LDPC codes FD transmission without feedback model
Let us consider a short-packet FD transmission between Alice and Bob, as shown in Figure 1; we refer to them as A and B, respectively. Two antennas are equipped at the transceiver of B for simultaneously transmitting and receiving messages in FD operation. We assume that NB-LDPC codes [12] are used at the transceivers of A and B. We further assume that the intended channel gain between A and B is h_AB and the SI channel gain at B is h_SI. Both channels are independent and identically distributed complex random variables following CN(0, 1). Normally, both line-of-sight (LoS) and non-line-of-sight (NLoS) components are present in the SI channel of an FD transmission. By implementing passive and analog suppression techniques, it is possible to substantially reduce the impact of the LoS component while the influence of reflections remains, as demonstrated in [20], [21]. Consequently, this justifies modeling the SI channel as Rayleigh fading in the digital domain. Moreover, we assume that the NB-LDPC encoding processes at the transmitters of A and B are identical (symmetric). The binary input message over GF(2) is first converted into symbols over GF(q), as in [17], where q is matched to the order of the M-PSK modulator, and then encoded by NB-LDPC codes over GF(q). After that, the codeword is modulated by the M-PSK modulator to form the discrete-time transmitted signal, which passes through the digital-to-analog conversion (DAC) process to obtain the continuous-time signal. Without loss of generality, neglecting the distances between A and B and between the pair of antennas at B, the received signal at B can be written as the sum of the intended signal from A, the self-interference from B's own transmission, and noise:

y_B(t) = h_AB x_A(t) + h_SI x_B(t) + n_B(t),

where n_B(t) is the complex Gaussian background noise at B with distribution CN(0, σ²). Furthermore, the self-interference-to-noise ratio given by the SI channel at B is defined as P_B/σ², and the signal-to-noise ratio (SNR) at B as P_A/σ², where P_A and P_B are the transmit powers of A and B, respectively.
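The baseband model above can be simulated directly. The sketch below draws a flat Rayleigh intended channel and residual SI channel, QPSK symbols for both nodes, and complex Gaussian noise; the packet length, SNR, and SI-to-noise values are illustrative, and the single-tap channels are a simplification of the multi-tap channels used in the paper's simulations.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 132                        # packet length in symbols (short packet)
snr_db, si_nr_db = 20.0, 30.0  # illustrative SNR and SI-to-noise ratio
noise_var = 1.0
p_a = noise_var * 10 ** (snr_db / 10)    # transmit power of A
p_b = noise_var * 10 ** (si_nr_db / 10)  # SI power reaching B's receiver

def qpsk(bits: np.ndarray) -> np.ndarray:
    """Map bit pairs to unit-energy QPSK symbols."""
    b = bits.reshape(-1, 2)
    return ((1 - 2 * b[:, 0]) + 1j * (1 - 2 * b[:, 1])) / np.sqrt(2)

def rayleigh(size=None):
    """Flat Rayleigh fading coefficient(s) ~ CN(0, 1)."""
    return (rng.normal(size=size) + 1j * rng.normal(size=size)) / np.sqrt(2)

x_a = qpsk(rng.integers(0, 2, 2 * N))   # intended signal from A
x_b = qpsk(rng.integers(0, 2, 2 * N))   # B's own transmission (known at B)
h_ab, h_si = rayleigh(), rayleigh()     # intended and residual SI channels
noise = np.sqrt(noise_var / 2) * (rng.normal(size=N) + 1j * rng.normal(size=N))

# Received samples at B: intended signal + self-interference + noise
y_b = np.sqrt(p_a) * h_ab * x_a + np.sqrt(p_b) * h_si * x_b + noise
```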
At the receiver of B, the signal y_B(t) goes through the ADC process to convert back to a discrete-time signal y_B[n]. The residual quantization noise can be neglected by assuming that the DAC/ADC processes at the transceiver have sufficient bit resolution and voltage dynamic range, as studied in [22]. Other hardware impairments and synchronization problems are also not considered in this paper. An adaptive filter using the recursive least squares (RLS) algorithm with a forgetting factor of 0.999 is then applied to estimate the SI channel, ĥ_SI; we call this the digital self-interference cancellation (DSIC) process. Since B knows its own transmitted signal x_B[n], this signal can be used to eliminate the SI component and obtain the SI-cancelled observation ỹ[n] = y_B[n] − ĥ_SI x_B[n]. Then, at the equalizer, the recursive least squares constant modulus algorithm (RLS-CMA) blind method [23] is used to estimate the intended channel ĥ_AB and obtain the equalized signal. After that, this signal goes to the demodulation and decoding processes and is converted from GF(q) back to GF(2) to obtain the binary output. In the decoding process, the sum-product algorithm (SPA) in the log domain is performed on the LLR belief sequence received from the M-PSK demodulation process, as studied in [24], [25]. We refer to this model as "NB-LDPC without feedback".

Proposed NB-LDPC blind feedback model
Although NB-LDPC codes give robust performance compared with conventional turbo codes and binary LDPC codes, especially for high-order modulation [26], their decoders still have high complexity. Moreover, the scheme without feedback is not a suitable option for short-packet FD transmission, as it fails to reach the required accuracy in estimating the SI channel. As a result, an NB-LDPC iterative process that simultaneously suppresses the SI component, estimates the intended channel, and decodes the messages is proposed, called the "NB-LDPC blind feedback" method. The flowchart of this method at the receiver of B is shown in Figure 2, and it contains three basic stages: (i) estimation and cancellation of the SI channel; (ii) blind equalization to estimate the intended channel; and (iii) demodulation and SPA decoding, after which the decoded symbols are re-encoded, re-modulated, and filtered with the estimated intended channel to refine the cancellation in the next iteration.
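A simplified view of stage (i), the DSIC step, is shown below: a single-tap RLS estimate of the SI channel from B's known transmit samples, followed by subtraction of the reconstructed SI. The real receiver uses a multi-tap RLS filter (forgetting factor 0.999) and feeds the result into RLS-CMA equalization and SPA decoding; this sketch only illustrates the estimation-and-cancellation idea on synthetic data.

```python
import numpy as np

def rls_single_tap(x, d, lam=0.999, delta=100.0):
    """Single-tap complex RLS estimate of a channel w such that d ≈ w * x.
    A simplified stand-in for the multi-tap DSIC filter described in the text."""
    w, p = 0.0 + 0.0j, delta
    for xk, dk in zip(x, d):
        k = p * np.conj(xk) / (lam + p * abs(xk) ** 2)   # gain
        e = dk - w * xk                                    # a priori error
        w = w + k * e                                      # tap update
        p = (p - k * xk * p) / lam                         # inverse correlation
    return w

# Toy demonstration with a synthetic single-tap SI channel
rng = np.random.default_rng(1)
x_known = (rng.normal(size=500) + 1j * rng.normal(size=500)) / np.sqrt(2)
h_si_true = 0.7 - 0.2j
d = h_si_true * x_known + 0.01 * (rng.normal(size=500) + 1j * rng.normal(size=500))

h_si_hat = rls_single_tap(x_known, d)
y_after_dsic = d - h_si_hat * x_known       # residual after SI cancellation
print(abs(h_si_hat - h_si_true))             # estimation error magnitude
```

In the proposed feedback loop, the decoded message from stage (iii) is re-encoded and re-modulated, so later iterations can also subtract an estimate of the intended signal and re-run the estimation steps on a cleaner observation.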
RESULTS AND DISCUSSION
In this section, the mean square error (MSE) and bit-error-rate (BER) performances for different orders of the M-PSK modulator are evaluated using Monte Carlo simulations in MATLAB. A comparison with the LDPC blind feedback [27] and the LDPC semi-blind feedback [28] methods is also presented. Based on the Rayleigh distribution, the SI and intended channels are generated independently in each transmission packet and are fixed with three taps and four taps, respectively, and the fading coefficients follow the ITU Radiocommunication Sector (ITU-R) channel model [29]. NB-LDPC codes with a code rate of 1/2 are used as a particular example to illustrate the performance of the proposed algorithm, and 10^6 transmission frames are simulated in total. Moreover, it is necessary to determine a lower bound as a benchmark for assessing the robustness of the NB-LDPC feedback method in terms of the MSE and BER performances. In particular, for the MSE performance, we assume that all of the transmitted message symbols from A are known at B; the system then performs the SI channel estimation at stage 1 and only the equalization process at stage 2 to obtain the estimated intended channel. At stage 3, the known symbols from A are re-encoded and re-modulated, filtered with the estimated intended channel to obtain the reconstructed intended signal, and finally subtracted in the following iterations. For the BER performance, the previously obtained ideal estimates of the SI channel and the intended channel are used to perform SI cancellation and SPA decoding with one decoding iteration. We refer to this assumption as the "Ideal Case".

MSE performances
The MSE of the SI channel and of the intended channel are given by (3) and (4), respectively [30], i.e., the average squared error between each true channel and its estimate. Figures 3 and 4 show the MSE performances of the SI channel and the intended channel versus SNR (dB) for different orders of the M-PSK modulator. It can be clearly observed that as the SNR increases, the MSE decreases for both the SI channel and the intended channel. The results illustrate that the NB-LDPC blind feedback method approximately reaches the lower bound (the Ideal Case) over all SNR regions, regardless of the M-PSK modulation order. It can also be seen that the proposed NB-LDPC blind feedback method performs better than the LDPC blind feedback scheme studied in [27], especially for high orders of modulation. In particular, for 8-PSK and 16-PSK, the NB-LDPC feedback curves converge quickly to a saturation error floor, i.e., about 10^-5 (for 8-PSK) or 10^-3 (for 16-PSK) at an SNR of 30 dB, while the LDPC blind feedback curves require more SNR to reach those values.

BER performances
First of all, Figure 5 illustrates the BER performance versus SNR (dB) for different numbers of iterations. It can be clearly observed that as the SNR increases, the BER decreases significantly. It can also be seen that the BER of the NB-LDPC blind feedback method with 4 feedback (joint) iterations and 1 decoding iteration is quite close to that with 10 feedback iterations, and it also nearly achieves the BER of the Ideal Case, which uses the ideal channel estimates of the SI and intended channels as in Figure 3. Furthermore, it is unnecessary to increase the number of iterations, which also keeps the latency of the SPA decoding process low. Therefore, a suitable operating point is 4 feedback iterations with 1 decoding iteration, which saves computation and power consumption. Moreover, the suggested NB-LDPC blind feedback method also performs better than the traditional NB-LDPC without feedback. For example, at a BER of 10^-5, the gap is about 3 dB with 2 feedback iterations (1 decoding iteration) and about 8 dB with 4 feedback iterations (1 decoding iteration), even though the traditional NB-LDPC without feedback uses 20 iterations to perform the decoding process. This result shows that the NB-LDPC blind feedback method significantly improves reliability while also reducing complexity and latency (mainly by reducing the number of decoding iterations, which takes much of the time in the whole transmission process) for practical use in 5G transmissions and IoT applications.
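The performance metrics used throughout this section can be computed as shown below. The MSE definition follows the common form (average squared error between a channel and its estimate); the exact normalization of equations (3) and (4) in [30] may differ, so treat this as an assumed form.

```python
import numpy as np

def empirical_mse(h_true, h_hat):
    """Average squared channel-estimation error over Monte Carlo trials
    (an assumed, common form of the MSE in equations (3) and (4))."""
    h_true, h_hat = np.asarray(h_true), np.asarray(h_hat)
    return float(np.mean(np.abs(h_true - h_hat) ** 2))

def empirical_ber(tx_bits, rx_bits):
    """Bit-error rate accumulated over all simulated frames."""
    tx_bits, rx_bits = np.asarray(tx_bits), np.asarray(rx_bits)
    return np.count_nonzero(tx_bits != rx_bits) / tx_bits.size

# Toy usage
print(empirical_mse([1 + 1j, 0.5], [0.9 + 1.1j, 0.45]))
print(empirical_ber([0, 1, 1, 0], [0, 1, 0, 0]))
```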
In [27], the LDPC blind feedback method performs worst in the low-SNR region (≤ 0 dB) because the high decoding error at the first iteration propagates into higher errors in subsequent iterations. Therefore, the authors subsequently proposed a second method, the LDPC semi-blind feedback method, in which at least four pilot symbols are added to the transmitted message at the transmitter [28]. These pilot symbols are then used for the SI and intended channel estimation as well as in the feedback loops. Figure 6 compares the proposed NB-LDPC blind feedback method with these two methods, the LDPC blind feedback [27] and the LDPC semi-blind feedback (using four pilot symbols) [28]. It can be observed that the suggested NB-LDPC blind feedback method performs very well not only in the high-SNR region but also in the low-SNR region, even though no pilot symbols are needed, unlike the LDPC blind feedback [27] and semi-blind feedback [28] methods. Last but not least, the BER performance of the proposed NB-LDPC blind feedback method and of the LDPC blind and semi-blind feedback methods versus SNR (dB) for different M-PSK modulation orders is compared in Figure 7. It indicates that with a small modulation order, i.e., QPSK, all methods perform similarly and are close to the Ideal Case. However, when the order of modulation increases, i.e., 8-PSK and 16-PSK, the gaps between the proposed NB-LDPC blind feedback curve and the LDPC blind and semi-blind feedback curves widen as the SNR increases. Indeed, the proposed NB-LDPC blind feedback remains close to the Ideal Case; for example, at a BER of 10^-3 with 16-PSK, the proposed NB-LDPC feedback needs only 23 dB of SNR, while the two LDPC methods require about 27 to 30 dB to obtain that result. Therefore, the proposed NB-LDPC blind feedback method is attractive for practical high-order-modulation communications, since it is less sensitive to increases in the modulation order.

Processing time and computation complexity
In this section, we compare the processing time and the computational complexity, two important factors for quantifying the effectiveness of the proposed method. For the processing time, MATLAB version 2023a was run on a computer with a 12th Gen Intel(R) Core(TM) i7-12700 CPU at 2.10 GHz and 16 GB of RAM. For the simulation specification, we use an SI-to-noise ratio of 30 dB, 10^6 transmission frames, packets of 132 symbols, and 8-PSK modulation. We also fix the maximum number of joint iterations at 4 and the number of decoding iterations at 1 for all methods. Since the processing time is nearly the same at all SNR levels, this setup is used to calculate the processing time needed to achieve the BER performance at a specified SNR level of 20 dB. Furthermore, the computational complexity is calculated as the sum of the numbers of operations of the encoding/decoding, modulation/demodulation, and channel estimation processes, as illustrated in [27], [28].
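The paper's timing figures come from MATLAB on the hardware described above; purely as an illustration of the measurement procedure, the sketch below times a receiver routine over a batch of frames. The receiver function here is a placeholder, not any of the compared methods.

```python
import time

def time_receiver(receiver_fn, frames, repeats=3):
    """Best-of-N wall-clock time to process a batch of frames with a given
    receiver implementation (placeholder for the compared methods)."""
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        for frame in frames:
            receiver_fn(frame)
        best = min(best, time.perf_counter() - start)
    return best

# Dummy usage: a "receiver" that just sums each frame of 132 samples
frames = [list(range(132))] * 1000
print(f"elapsed: {time_receiver(sum, frames):.4f} s")
```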
Based on the results in Table 1, the LDPC semi-blind method in [28] is the fastest and requires the fewest operations, because it only uses pilot symbols rather than temporary decoding and re-encoding to form the feedback loop, as our NB-LDPC blind feedback method and the LDPC blind feedback method in [27] do. The NB-LDPC blind feedback method is slightly more costly than the LDPC blind feedback method because of the higher complexity of the decoding process over GF(q). However, the NB-LDPC blind feedback method does not need pilot symbols, and it also shows better performance at high orders of modulation as well as in the low-SNR region. The advantages and disadvantages of all methods are summarized in Table 2. Therefore, depending on the application and purpose, the choice among the three methods should be considered carefully to achieve an optimal solution.

CONCLUSION
This paper proposed a blind feedback process that combines channel estimation and a decoding algorithm by adopting NB-LDPC codes over a higher-order Galois field. Although its processing time and computational complexity are slightly higher than those of the two LDPC feedback methods, the NB-LDPC approach shows robustness in many respects. Indeed, the results show that the proposed technique provides better MSE and BER performance than both the conventional NB-LDPC without feedback and traditional LDPC codes, especially at high orders of modulation and in the low-SNR region, without using pilot symbols. Consequently, depending on the application and purpose, NB-LDPC codes are a promising technique that should be carefully considered to achieve an optimal solution in short-packet FD transmissions and high-order modulation communications.

Figure 1. Traditional NB-LDPC codes FD transmission without feedback model
Figure 2. Proposed NB-LDPC blind feedback model
Table 1. Processing time and computation complexity
3,995.6
2024-04-01T00:00:00.000
[ "Engineering", "Computer Science" ]